Cathode ray tube
Based on Wikipedia: Cathode ray tube
The last major manufacturer of cathode-ray tubes shut down in 2015, marking the end of a technology that had defined visual media for nearly a century. Yet something strange happened after the CRT disappeared from store shelves: people started hunting for them. Retro gamers, vintage computing enthusiasts, and professional archivists began scouring thrift stores and warehouses, because it turned out that some experiences simply cannot be replicated on modern screens.
This is the story of how scientists accidentally discovered a new form of matter while playing with glowing tubes, how that discovery became the window through which humanity watched the twentieth century unfold, and why some people still swear nothing else compares.
The Mysterious Rays
In the late nineteenth century, physicists were obsessed with vacuum tubes. These were glass containers from which most of the air had been pumped out, leaving an environment where strange things happened when you applied electricity. Julius Plücker and Johann Wilhelm Hittorf noticed something peculiar: when they sent electrical current through one of these tubes, the glass itself began to glow. Something was traveling from the negative electrode—the cathode—to the positive electrode, and whatever it was could cast shadows.
They called them cathode rays, though nobody knew what they actually were.
The mystery deepened when Arthur Schuster showed in 1890 that these rays could be bent by electric fields, and William Crookes demonstrated they could be deflected by magnets. Whatever was streaming through these tubes responded to electromagnetic forces, which meant it had to be some kind of charged particle.
The breakthrough came in 1897 when J.J. Thomson managed to measure the ratio of mass to electric charge in cathode rays. His calculations revealed something astonishing: these particles were far smaller than any atom. Thomson had discovered the electron—the first subatomic particle ever identified, though the Irish physicist George Johnstone Stoney had actually coined the name "electron" six years earlier for the hypothetical unit of electric charge.
This was revolutionary. For centuries, atoms were thought to be indivisible—the word "atom" comes from the Greek for "uncuttable." Thomson had just proven that atoms contained even smaller components. The cathode ray tube, originally just a laboratory curiosity, had cracked open the structure of matter itself.
From Scientific Instrument to Display Device
While Thomson was rewriting physics, a German physicist named Ferdinand Braun was thinking about practical applications. In 1897—the same year Thomson identified the electron—Braun modified a cathode ray tube by coating the inside of the glass with phosphor, a substance that glows when struck by electrons. He had created the first oscilloscope, a device that could draw electrical waveforms as visible lines on a screen.
The Braun tube, as it became known, was designed for measuring electrical signals. Braun himself had no idea he had just invented the foundation of twentieth-century television.
The conceptual leap came from Alan Archibald Campbell-Swinton, a Scottish engineer and fellow of the Royal Society. In 1908, he published a letter in the journal Nature describing how "distant electric vision" could work. His idea was elegant: use one cathode ray tube at the transmitting end to scan an image line by line, converting light into electrical signals, then use another tube at the receiving end to reverse the process, painting the image back onto a phosphor screen.
Campbell-Swinton never built this system himself. He was describing a vision—literally—of what would become possible.
The Race to Television
The early cathode ray tubes had a significant limitation: they used "cold cathodes," meaning they relied on extremely high voltages to rip electrons directly from metal surfaces. This required dangerous amounts of electricity and produced weak, unreliable beams.
The solution came in 1922 when John Bertrand Johnson and Harry Weiner Weinhart of Western Electric developed the hot cathode. Instead of forcing electrons out with brute electrical force, they heated a metal filament until electrons literally boiled off its surface—a phenomenon called thermionic emission. Hot cathodes required much lower voltages and produced much stronger electron beams.
Johnson, incidentally, would later give his name to "Johnson noise," the random electrical fluctuations caused by the thermal motion of electrons in any conductor. The man who helped make television possible also discovered one of its fundamental limitations.
With practical tubes now available, inventors around the world began racing to create working television systems. In Japan, Kenjiro Takayanagi demonstrated a CRT television receiver in 1926, initially displaying images with just 40 lines of resolution—barely enough to make out rough shapes. By 1927, he had improved this to 100 lines, and by 1928, he became the first person to transmit recognizable human faces electronically.
Meanwhile, in the United States, a twenty-one-year-old named Philo Farnsworth was working on his own electronic television system. His first successful demonstration came in 1927, though the legal battles over television patents would consume much of his life.
Vladimir Zworykin, a Russian-born engineer working for RCA, coined a formal name for the television picture tube in 1929: the "Kinescope." RCA trademarked the term—and eventually released it to the public domain in 1950, recognizing that the technology had become too universal to monopolize.
How a CRT Actually Works
Understanding a cathode ray tube requires thinking about electrons as tiny bullets that can be aimed, accelerated, and made to paint with light.
At the back of the tube sits the electron gun. Despite its dramatic name, this is simply a heated cathode—typically a tungsten coil that gets hot enough to emit electrons—surrounded by a series of electrodes that shape and focus the electron stream into a tight beam. Think of it like a flashlight that produces electrons instead of photons.
Once the electrons leave the gun, they need to be steered. Television and computer monitor CRTs use magnetic deflection: coils of wire wrapped around the tube's neck create magnetic fields capable of bending the electron beam horizontally and vertically. Oscilloscopes typically use electrostatic deflection instead, with charged metal plates that push and pull the beam; plates respond faster to rapidly changing signals, which is exactly what an oscilloscope needs. Magnetic deflection, for its part, can bend the beam through much wider angles, allowing shorter tubes with larger screens—which matters when you're trying to fit a television into a living room.
The beam then accelerates toward the screen, drawn by a high-voltage anode. The electrons strike the phosphor coating on the inside of the screen, and the phosphor converts their kinetic energy into visible light. Different phosphor compounds glow in different colors and fade at different rates, which is why oscilloscope screens (which need fast response) use different phosphors than television screens (which need colors to persist long enough for the human eye to perceive them as continuous images).
In a color CRT, there are actually three electron guns, each responsible for one primary color: red, green, and blue. The screen is coated with tiny dots or stripes of phosphor in these three colors, and a perforated metal sheet called a shadow mask (or in Sony's Trinitron design, an aperture grille) ensures that each electron beam only hits its corresponding color of phosphor.
The entire front of the screen is scanned in a pattern called a raster—the electron beam sweeps from left to right, top to bottom, typically sixty times per second in North America or fifty times in Europe, tracing out hundreds of horizontal lines. The persistence of the phosphor glow, combined with the persistence of human vision, creates the illusion of a stable, continuous image.
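To get a feel for how fast that raster sweep really is, here is a back-of-the-envelope sketch using NTSC-like round numbers (525 total lines, roughly 30 full frames per second, interlaced into 60 fields); the constants are illustrative assumptions, not exact broadcast timings:

```python
# Rough raster-scan timing under NTSC-like assumptions. The constants
# are round illustrative values, not the exact broadcast standard.
LINES_PER_FRAME = 525   # total scan lines, including those never displayed
FRAMES_PER_SEC = 30     # two interlaced fields make one full frame

lines_per_sec = LINES_PER_FRAME * FRAMES_PER_SEC   # horizontal scan rate
line_time_us = 1e6 / lines_per_sec                 # microseconds per line

print(f"Horizontal scan rate: {lines_per_sec} lines per second")
print(f"Time to paint one line: {line_time_us:.1f} microseconds")
```

Under these assumptions the beam traces about 15,750 lines every second, spending only around 63 microseconds on each one—which is why the deflection coils must slew the beam so quickly.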
The Engineering of Glass
A CRT is essentially a glass bottle with most of the air pumped out. This sounds simple until you realize what it implies.
At sea level, air pressure pushes on every surface with a force of about fourteen pounds per square inch. A television screen might be twenty inches wide and fifteen inches tall, giving it an area of three hundred square inches—which means the atmosphere is pressing on it with a total force of more than four thousand pounds. The only thing preventing the tube from collapsing is the strength of the glass itself.
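The arithmetic behind that load is worth seeing spelled out. A minimal sketch, assuming a 4:3 screen face twenty inches wide (the dimensions are illustrative):

```python
# Back-of-the-envelope check of the atmospheric load on a CRT face.
# Screen dimensions are illustrative assumptions (a 4:3 set, 20 in wide).
ATM_PSI = 14.7                      # standard atmospheric pressure, psi

width_in, height_in = 20.0, 15.0    # assumed 4:3 screen face
area_sq_in = width_in * height_in   # 300 square inches

force_lbs = ATM_PSI * area_sq_in    # total load the glass must resist
print(f"Face area: {area_sq_in:.0f} sq in")
print(f"Atmospheric load on the face: {force_lbs:.0f} lbs")
```

That works out to roughly 4,400 pounds pressing inward on the screen alone—before counting the funnel and neck.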
If a CRT does fail, it fails catastrophically. The implosion—air rushing in to fill the vacuum—can hurl glass shards with tremendous force. Early CRTs were genuinely dangerous objects.
The glass also had to solve a more subtle problem: X-rays. When high-speed electrons slam into matter, they can produce X-radiation. The voltages inside a color CRT are high enough to generate soft X-rays, which meant the screen glass had to contain lead or barium-strontium compounds to absorb this radiation before it could escape into living rooms.
Manufacturing CRT glass was its own specialized industry. The glass arrived at CRT factories as separate components—the screen, the funnel (the cone-shaped body), and the neck—that were then fused together with precision flames. The glass had to be virtually free of contaminants and defects, since any flaw could become a catastrophic weak point under the constant stress of atmospheric pressure.
Screen glass was particularly finicky. Its optical properties affected color reproduction: transmittance (how much light passes through) had to be carefully balanced. More transparent glass made for brighter images but lower contrast. Darker glass improved contrast but required the electron guns to work harder to produce visible images. Standard transmittance values ranged from 86 percent down to 30 percent, with computer monitors typically using darker glass to improve text readability.
The curvature of the screen also evolved over time. Early CRTs had noticeably curved faces—partly for structural strength, partly because flat glass was harder to manufacture precisely. The radius of curvature increased from 30 inches in early sets to 68 inches by the 1980s, eventually becoming completely flat. Flat screens reduced annoying reflections but required thicker glass at the edges to maintain structural integrity, which is why flat-screen CRTs were significantly heavier than their curved predecessors.
The Rise and Fall of an Industry
In the mid-1990s, factories around the world were producing 160 million cathode ray tubes per year. The CRT had become one of the most manufactured objects in human history, a piece of precision engineering mass-produced on a staggering scale.
The technology kept improving. In 1968, Sony introduced the Trinitron, which used a single electron gun with three cathodes instead of three separate guns. This allowed for brighter images and the distinctive slightly cylindrical screen that became a Sony trademark for decades. In 1987, Zenith developed truly flat-screen CRTs for computer monitors—though these remained expensive and were never widely adopted for televisions. In 1990, Sony released the first high-definition CRT, a product now considered a historical artifact by Japan's national museum.
CRTs grew larger over the decades. Screen sizes increased from 20 inches in 1938 to 21 inches by 1955, then to 25, 30, and 35 inches over the following decades. The world's largest CRT television was Sony's 45-inch model from 1989—and only one working example is known to still exist.
But size proved to be the CRT's Achilles heel. The physics of the technology meant that making screens larger required making tubes deeper and heavier. A 32-inch CRT television could weigh over 150 pounds and protrude two feet from the wall. Larger screens became impractical for most homes.
Liquid crystal displays, plasma screens, and eventually OLED panels faced no such constraints. They could be made thin, light, and essentially any size. When flat-panel prices dropped in the early 2000s, the CRT's fate was sealed.
The transition happened with startling speed. CRT computer monitor sales peaked in 2000 at 90 million units; LCD monitors outsold CRTs by 2003. CRT television sales peaked in 2005 at 130 million units, but within a few years, major manufacturers had abandoned the technology entirely. Hitachi stopped making CRTs in 2001, Sony followed in 2004, and the last major producer—an Indian company called Videocon that recycled old tubes—shut down in 2015.
In 2012, the European Commission fined Samsung and several other companies for price-fixing in the CRT market. Similar penalties followed in the United States and Canada. The irony was inescapable: companies were being punished for colluding to raise prices on a technology that was already dying.
The Roads Not Taken
Before flat panels conquered the market, engineers explored some fascinating alternatives that never reached mass production.
The Aiken tube, invented in 1960, was essentially a CRT reimagined as a flat panel. Instead of a deep funnel with the electron gun at the back, it used a complex system of electrostatic and magnetic deflection to bend the electron beam through a shallow, flat enclosure. It was even envisioned for aircraft head-up displays. But patent disputes delayed its development, and by the time the legal issues were resolved, RCA had invested too heavily in conventional CRTs to change course.
In the mid-2000s, Canon and Sony both developed flat-panel displays that used electron emission—the same basic physics as CRTs, but reimagined. Canon's surface-conduction electron-emitter display and Sony's field-emission display both placed tiny electron sources directly behind the phosphor screen, eliminating the need for a deep tube with a distant electron gun. Each subpixel had its own electron emitter, essentially making every pixel a miniature CRT.
These technologies produced beautiful images. They combined the rich colors and deep blacks of CRT phosphors with the thin profile of flat panels. But LCD technology improved faster than anyone expected, and prices dropped to levels that made exotic alternatives uneconomical. Neither technology ever reached mass production.
Why Some Things Only Work on CRTs
Here's something that surprises most people: certain video games literally cannot be played on modern displays. Not "don't look as good"—actually cannot function at all.
Light guns are the most famous example. The Nintendo Zapper, the Sega Menacer, and countless arcade shooters all depended on a fundamental property of how CRTs draw images. When you pull the trigger, the gun's sensor looks at the screen during a precise moment when the electron beam is painting the target area. The game knows exactly when each part of the screen is being illuminated, so it can determine whether you're pointing at a valid target based on when the sensor sees light.
Modern displays don't work this way. LCD and OLED screens illuminate their entire surface simultaneously, and they introduce processing delays that break the timing relationship between the controller and the display. A light gun pointed at a modern TV sees only a blur of light with no way to determine precise position.
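The timing trick light guns relied on can be sketched in a few lines. This is a simplified illustration, not any particular console's implementation: all the constants (visible line count, line time, horizontal resolution) are assumed round numbers, and real hardware also had to account for blanking intervals and interlacing.

```python
# Illustrative sketch of the light-gun timing trick: because the beam
# paints the screen in a fixed order, the instant the gun's photosensor
# sees light pinpoints where the gun is aimed. All constants are
# simplified assumptions (no blanking intervals, no interlacing).
LINES = 240            # assumed visible scan lines
LINE_TIME_US = 63.5    # assumed microseconds to paint one line
WIDTH = 320            # assumed horizontal resolution in pixels

def beam_position(t_us: float) -> tuple[int, int]:
    """Map a sensor timestamp (microseconds since top of frame) to (row, col)."""
    row = int(t_us // LINE_TIME_US)               # which line is being drawn
    frac = (t_us % LINE_TIME_US) / LINE_TIME_US   # how far along that line
    col = int(frac * WIDTH)
    return row, col

# A sensor flash 3,200 microseconds into the frame lands on scan line 50:
print(beam_position(3200.0))
```

On an LCD or OLED, every pixel lights at effectively the same moment, so there is no timestamp-to-position mapping to exploit—which is exactly why these guns go blind on modern panels.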
Beyond light guns, there's the question of visual aesthetics. Many games from the 1980s and 1990s were designed with CRT characteristics in mind. The soft edges between pixels, the slight color bleeding, the way phosphors created a subtle glow around bright objects—these weren't flaws to be corrected but features that artists deliberately exploited. Pixel art that looks jagged and harsh on modern displays was designed to be softened by CRT physics.
Perhaps most importantly for competitive gamers, CRTs have essentially zero input lag. When you press a button, the electron beam can paint the result on screen within a single frame—roughly 16 milliseconds at 60 frames per second. Modern displays must process the incoming signal, store it in memory, apply any scaling or enhancement, and then illuminate pixels that take time to change state. This chain of delays might add up to 50 milliseconds or more—imperceptible for watching movies, but the difference between victory and defeat in fast-paced games.
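The latency comparison above can be made concrete with a toy model. The per-stage delays for the modern panel below are hypothetical round numbers chosen to illustrate how small delays compound, not measurements of any real display:

```python
# Toy comparison of display latency chains. The modern-panel stage
# delays are hypothetical illustrative values, not measurements.
crt_delay_ms = 1000 / 60   # CRT worst case: wait one full refresh (~16.7 ms)

modern_stages_ms = {
    "signal processing / deinterlacing": 16.7,
    "frame buffering": 16.7,
    "scaling and enhancement": 8.0,
    "pixel response time": 8.0,
}
modern_delay_ms = sum(modern_stages_ms.values())

print(f"CRT worst-case lag:        {crt_delay_ms:.1f} ms")
print(f"Modern chain (toy model):  {modern_delay_ms:.1f} ms")
```

Each stage on its own sounds negligible, but the chain sums to roughly three frames of delay—enough to matter in a fighting game, invisible in a movie.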
The Stubborn Persistence of Old Technology
Walk into the cockpit of a Boeing 747-400 or an Airbus A320, and you might be startled to find cathode ray tubes staring back at you. These aircraft, many still in active service, use CRT instruments in their "glass cockpits"—so called because they replaced traditional mechanical gauges with electronic displays.
Airlines like Lufthansa still operate these systems. The navigation data comes from floppy disks.
This isn't technological ignorance or stubborn nostalgia. Replacing cockpit instruments requires extensive recertification, pilot retraining, and astronomical costs. If the existing equipment works safely—and CRTs are remarkably reliable—there's no compelling reason to undertake such a massive upgrade. At least one company still manufactures new CRTs specifically for these aviation and military applications where nothing else will do.
The military faces similar calculations. Equipment designed for CRT displays may have decades of service life remaining, and the displays themselves are proven reliable in harsh conditions. Modern flat panels might be lighter and more power-efficient, but "better" in some abstract sense doesn't justify the cost of replacing equipment that works.
The Vacuum at the Heart of It All
Every cathode ray tube contains a small piece of nothingness—a vacuum less than a millionth of atmospheric pressure. This emptiness is essential. If there were air molecules inside the tube, electrons would scatter off them like billiard balls, never reaching the screen in coherent beams. The vacuum lets electrons fly freely, steered only by the electromagnetic fields that guide them to their destinations.
Maintaining this vacuum was a constant engineering challenge. Glass is never perfectly impermeable; over years, tiny amounts of air leak through. CRT manufacturers developed "getters"—reactive metals that absorb stray gas molecules—to maintain tube vacuum throughout the product's lifetime. The silver-gray spot visible inside many CRTs is the getter, still doing its job decades after manufacture.
Early CRTs often failed because they couldn't maintain adequate vacuum. Allen B. DuMont's achievement in the 1930s—building the first CRTs that lasted a thousand hours of operation—was primarily an achievement in vacuum engineering. His tubes held their emptiness long enough to make commercial television practical.
The Weight of the Past
A technology that shaped the twentieth century doesn't simply disappear. The world manufactured billions of CRTs over the decades, and most of that glass—heavy, leaded, difficult to recycle—ended up in landfills or warehouses. The environmental cost of the CRT era is still being counted.
But something unexpected happened after the industry collapsed: a secondary market emerged. Retro gaming enthusiasts, professional video archivists, and broadcast engineers began searching for the best surviving CRTs. A Sony Trinitron monitor that might have sold for fifty dollars in 2005 now commands hundreds. The truly exceptional models—professional broadcast monitors, rare large-screen sets—have become genuinely valuable artifacts.
This isn't mere nostalgia. For applications that depend on specific CRT characteristics, no modern display can substitute. When a film restoration team needs to see exactly what audiences saw when a movie premiered in 1985, they need a 1985-era display. When a competitive Street Fighter player needs zero-lag response, only a CRT will do.
The cathode ray tube began as a scientific instrument for studying the fundamental nature of matter. It became the window through which humanity watched the moon landing, the fall of the Berlin Wall, the birth of the digital age. It was replaced by better technologies—thinner, lighter, larger, more efficient in every measurable way.
And yet.
In basements and gaming cafes, in aircraft cockpits and military installations, in the archives of museums and the collections of enthusiasts, cathode ray tubes continue to glow. The technology may be obsolete, but it turns out that obsolete is not the same as unnecessary. Some things can only be seen through the phosphorescent glow of electrons striking glass—and for those things, we keep the old tubes running.