Wikipedia Deep Dive

Transistor


Based on Wikipedia: Transistor

There are more transistors on Earth than grains of sand on every beach, every desert, every riverbed combined. By 2018, humanity had manufactured more than thirteen sextillion of them—that's a thirteen followed by twenty-one zeros. By some estimates, that rivals the number of stars in the observable universe. And yet most people have never seen one, never held one, and couldn't tell you what one does.

This tiny device, smaller than a red blood cell in modern chips, is the atom of the digital age.

What a Transistor Actually Does

At its core, a transistor is an electrical switch with no moving parts. Think of a light switch on your wall—flip it up, electricity flows to the bulb, light appears. Flip it down, the circuit breaks, darkness returns. A transistor does essentially the same thing, but instead of your finger doing the flipping, electricity itself does the job.

Here's where it gets interesting: a transistor uses a small amount of electrical current to control a much larger current. Imagine you're trying to open a massive dam gate. You couldn't possibly push it open yourself, but you could flip a small switch that activates a motor that opens the gate. A trickle of effort releases a flood of water. That relationship—small input controlling large output—is called gain, and it's the property that makes transistors revolutionary.

This gain allows transistors to do two essential things. First, they can amplify signals. The faint electrical whisper picked up by a microphone can be boosted into a sound loud enough to fill a stadium. Second, they can switch on and off billions of times per second, representing the ones and zeros that form the language of every computer ever built.
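Both jobs can be captured in a toy numerical sketch. The numbers here (a gain of 100, a 50 mA supply limit) are illustrative assumptions, not real device physics:

```python
# Toy model of a transistor's two jobs: amplifier and switch.
# The gain and supply limit are made-up illustrative numbers.

def transistor(input_mA, gain=100, supply_limit_mA=50):
    """A small input current controls a gain-times-larger output
    current, clipped at what the power supply can deliver."""
    return min(input_mA * gain, supply_limit_mA)

# As an amplifier: a faint 0.25 mA signal becomes a 25 mA one.
print(transistor(0.25))   # 25.0

# As a switch: drive the input hard and the output saturates
# (fully "on"); remove the input and the output disappears ("off").
print(transistor(1.0))    # 50
print(transistor(0.0))    # 0.0
```

The clipping is the "switch" behavior: past a certain input, the output can't grow any further, so the device is simply on or off.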

Before the Transistor: The Age of Glowing Glass

To understand why the transistor matters, you need to understand what it replaced.

In 1907, an American inventor named Lee de Forest created the thermionic triode—a vacuum tube that could amplify electrical signals. It looked like a small light bulb, because that's essentially what it was: a glass envelope with the air pumped out, containing metal elements that glowed when heated. The triode made radio broadcasts possible. It made long-distance telephone calls possible. It made the first computers possible.

It was also terrible.

Vacuum tubes ran hot. They consumed enormous amounts of power. They burned out constantly, like light bulbs do, requiring frequent replacement. They were fragile, shattering if you looked at them wrong. And they were big—a single tube was about the size of your thumb.

The ENIAC, one of the first general-purpose electronic computers, completed in 1945, used 17,468 vacuum tubes. It filled an entire room, weighed thirty tons, and consumed 150 kilowatts of power—enough to run about fifty modern homes. The tubes failed so frequently that the machine was only operational about half the time. Engineers kept a warehouse of spare tubes on hand.

Everyone knew there had to be a better way.

The Dream That Wouldn't Work

The idea for a solid-state replacement—something that could amplify signals without vacuum tubes—appeared remarkably early. In 1925, a physicist named Julius Edgar Lilienfeld filed a patent in Canada for what we would now call a field-effect transistor. He filed similar patents in the United States in 1926 and 1928.

Lilienfeld's concept was elegant: use an electric field to control the flow of current through a semiconductor material. No vacuum required. No heating element. No glass bulb to shatter.

There was just one problem. It didn't work.

Lilienfeld never published any research demonstrating a functioning device. He never showed a working prototype. The idea was sound—we know that now—but the materials science of the 1920s simply couldn't deliver semiconductors pure enough to make the concept practical. His patents gathered dust, largely forgotten.

A German inventor named Oskar Heil patented a similar device in Europe in 1934. Same problem. The materials didn't exist to build it.

For two decades, the field-effect transistor remained a beautiful theory that stubbornly refused to become reality.

Murray Hill, New Jersey, December 1947

The breakthrough, when it finally came, happened almost by accident—and it wasn't the device anyone had been trying to build.

At Bell Labs, the research arm of AT&T, a team led by physicist William Shockley had been attempting to create a working field-effect transistor. They tried and tried. Nothing worked. The team grew frustrated. The electric field simply wouldn't penetrate the semiconductor material the way theory predicted it should.

Two of Shockley's researchers, John Bardeen and Walter Brattain, decided to investigate why. In November and December of 1947, they conducted a series of experiments on germanium crystals, trying to understand the mysterious "surface states" that seemed to be blocking the field effect.

On December 23, 1947, they pressed two gold point contacts into a crystal of germanium—and something remarkable happened. A signal came out stronger than the signal that went in.

They had created amplification without a vacuum tube.

It wasn't a field-effect transistor. It wasn't what they'd been trying to build. It was something new: a point-contact transistor. The gold contacts were positioned close together on the germanium surface, and somehow, current flowing through one contact controlled the current flowing through the other.

The device was fragile. It was inconsistent. It was difficult to manufacture. But it worked.

Naming the Revolution

What do you call a device that transfers a signal across a resistor? John R. Pierce, a colleague at Bell Labs, coined the name by blending "transfer" and "resistor" into a portmanteau: transistor.

When Bell Labs' lawyers began preparing patent applications, Shockley proposed that the patent be based on the field-effect principle he'd been pursuing—with himself named as inventor. The lawyers did their homework. They dug up Lilienfeld's decades-old patents and advised against it. The field-effect concept wasn't new. What Bardeen and Brattain had created was.

The three men would share the 1956 Nobel Prize in Physics "for their researches on semiconductors and their discovery of the transistor effect." It was a recognition that would launch one of the most contentious priority disputes in the history of technology—Shockley's ego would never fully accept sharing credit—but that's a story for another time.

The Same Discovery, Six Months Later, Four Thousand Miles Away

Here's a fact that rarely gets mentioned: the transistor was invented twice, independently, within six months.

In Paris, at a Westinghouse subsidiary called Compagnie des Freins et Signaux, two German physicists named Herbert Mataré and Heinrich Welker were working on the same problem. Mataré had spent World War II developing crystal rectifiers for German radar systems. He knew semiconductors intimately.

By June 1948, Mataré and Welker had produced a working amplifying device using germanium—strikingly similar to what Bardeen and Brattain had achieved in December 1947. When they learned that Bell Labs had beaten them to publication, they rushed their "transistron" into production for France's telephone network, filing a patent application in August 1948.

This pattern of simultaneous discovery appears throughout the history of science. When the right ideas are in the air and the materials become available, multiple teams often converge on the same breakthrough independently. It suggests that the transistor wasn't just a stroke of genius by a few individuals—it was an invention whose time had come.

The Sandwich That Changed Everything

The point-contact transistor was a scientific triumph and a manufacturing nightmare. Those delicate gold contacts had to be positioned with extraordinary precision. The devices were inconsistent. They were expensive to produce. They were, in a word, impractical.

Shockley, driven by his relentless ambition and genuine brilliance, pushed forward. If the field-effect approach wouldn't work, and the point-contact approach was too finicky, maybe there was a third path.

In June 1948, he filed a patent for something called a bipolar junction transistor—a semiconductor "sandwich" with three layers. Imagine three pieces of bread stacked together, where the bread on the ends is made of one type of semiconductor material (let's call it N-type, for "negative," because it has extra electrons) and the bread in the middle is made of a different type (P-type, for "positive," because it has "holes" where electrons could be).

When you apply a small current to the middle layer, it controls a much larger current flowing between the outer layers. No fragile point contacts required. The device could be manufactured as a single solid piece of material.

On April 12, 1950, Bell Labs chemists Gordon Teal and Morgan Sparks successfully created a working bipolar junction transistor. They called it the "sandwich transistor," and on July 4, 1951—American Independence Day—Bell Labs announced it to the world.

Into Your Pocket

The first transistor radios appeared with stunning speed.

In August 1953, at the Düsseldorf International Radio Fair, a small German company called INTERMETALL—founded by Herbert Mataré, one of the independent inventors of the transistor—displayed the first prototype pocket transistor radio. The following year, in October 1954, the Regency TR-1 went on sale in the United States. It was produced by a partnership between Texas Instruments and a company called Industrial Development Engineering Associates.

The TR-1 was about the size of a pack of cards. It used four transistors and came in six colors: black, ivory, mandarin red, cloud gray, mahogany, and olive green. It cost $49.95—roughly $550 in today's money—and it changed how people related to technology.

Before the transistor radio, electronics were furniture. Radios were large wooden cabinets that sat in the living room. The television was a piece of furniture you arranged your couch around. Electronics belonged to the home.

The transistor radio belonged to you. You could carry it. Take it to the beach. Listen on the bus. Hear rock and roll without your parents controlling the dial.

The Sony TR-63, released in 1957, perfected the formula. Seven million units sold worldwide by the mid-1960s. Sony's success with the TR-63 essentially ended the vacuum tube era. By the late 1950s, transistors had become the dominant electronic technology.

The Switch to Silicon

Early transistors were made from germanium, the element that Bardeen and Brattain used in their 1947 breakthrough. Germanium worked, but it had problems. It was expensive. It didn't handle heat well. Its electrical properties drifted unpredictably.

Silicon, the second most abundant element in Earth's crust after oxygen, was the obvious alternative. Sand is mostly silicon dioxide. The raw material was essentially free. Silicon also handles heat better and offers more stable electrical performance.

But silicon was much harder to purify to the extreme levels transistors required. Even tiny impurities—a few atoms per billion—could ruin a transistor's performance.

On January 26, 1954, Morris Tanenbaum at Bell Labs produced the first working silicon transistor. A few months later, Texas Instruments announced the first commercial silicon transistor. The key figure was Gordon Teal, who had previously worked at Bell Labs and specialized in growing crystals of extraordinary purity.

Silicon would become synonymous with the entire electronics industry. We don't call it "Germanium Valley."

The Transistor That Powers Everything

Remember Lilienfeld's field-effect transistor, the one that worked beautifully in theory but couldn't be built in the 1920s? Researchers never stopped trying.

The problem was those stubborn surface states—electrons trapped at the interface between the semiconductor and the air that blocked the electric field from penetrating the material. In 1955, two Bell Labs researchers named Carl Frosch and Lincoln Derick stumbled onto a solution by accident.

While growing silicon crystals, they accidentally grew a thin layer of silicon dioxide—essentially glass—over the surface. This layer, it turned out, protected the silicon underneath. It "passivated" the surface, eliminating those troublesome surface states.

Building on this discovery, Mohamed Atalla and Dawon Kahng at Bell Labs proposed a new transistor structure in 1959. They would use a thin layer of silicon dioxide as an insulator between the control terminal and the semiconductor. This metal-oxide-semiconductor structure—the "MOS" in MOSFET—finally made the field-effect transistor practical.

In 1960, Atalla, Kahng, and their team successfully demonstrated a working MOSFET (metal-oxide-semiconductor field-effect transistor). Thirty-five years after Lilienfeld's patent, the field-effect transistor had finally arrived.

Why the MOSFET Won

The MOSFET didn't just work. It scaled.

Bipolar junction transistors were effective, but they had limitations. They consumed relatively significant power. They generated heat. As you made them smaller, these problems got worse. There was a floor below which you couldn't shrink them.

MOSFETs were different. Making them smaller actually made them better—faster, more power-efficient, cheaper to produce. This scalability would prove to be the most important property in all of electronics.

The MOSFET could also be manufactured with astonishing simplicity. The fabrication process was highly automatable. Raw materials were cheap (silicon from sand, oxygen from air). Yields improved as the manufacturing processes matured. The cost per transistor fell exponentially.

Today, the MOSFET is the most widely manufactured device in human history. More than thirteen sextillion by 2018, and the number grows every second.

Building Computers from Sand

A single transistor is a switch. But combine transistors in specific arrangements, and you can build logic gates—simple circuits that perform basic logical operations. An AND gate outputs "on" only if both its inputs are "on." An OR gate outputs "on" if either input is "on." A NOT gate inverts its input.

Stack logic gates together, and you can build circuits that add numbers. Stack those together, and you can build circuits that multiply. Add memory circuits, and you can store the results. Add control circuits, and you can follow a sequence of instructions.

You have built a computer.
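The first few rungs of that ladder can be sketched in a few lines. Treating each gate as a black box built from transistor switches, here are the three basic gates from the text, stacked into a one-bit "half adder" (the gate arrangement shown is one standard construction, used here for illustration):

```python
# The three basic gates, modeled on 0/1 values.
def NOT(a):    return 1 - a
def AND(a, b): return a * b
def OR(a, b):  return min(a + b, 1)

# Stack gates: an XOR built from AND, OR, and NOT gives the sum bit
# of a one-bit addition, and a single AND gives the carry bit.
def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    return XOR(a, b), AND(a, b)   # (sum, carry)

for a in (0, 1):
    for b in (0, 1):
        s, carry = half_adder(a, b)
        print(f"{a} + {b} -> carry={carry} sum={s}")
```

Chain half adders together and you can add numbers of any width; everything else in a processor is built up the same way, layer by layer, from switches.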

A modern microprocessor contains billions of transistors—as of 2023, advanced chips contain as many as 92 billion transistors on a single piece of silicon smaller than a fingernail. Some specialized chips designed for artificial intelligence contain over two trillion transistors. Each one is a switch, turning on and off billions of times per second, collectively executing the software that runs our world.

The Transistor's Children

In 1963, at Fairchild Semiconductor, Chih-Tang Sah and Frank Wanlass invented CMOS—complementary metal-oxide-semiconductor technology. CMOS circuits use pairs of transistors, one that turns on when given a positive signal and one that turns on when given a negative signal. This arrangement dramatically reduces power consumption because one transistor in each pair is always off, preventing current from flowing through to ground.
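That complementary pairing can be sketched as a toy model of a CMOS inverter, the simplest CMOS circuit. The names and the 0/1 voltage levels are illustrative assumptions, not circuit-level detail:

```python
# Toy model of a CMOS inverter: a PMOS transistor pulls the output
# up toward the supply, an NMOS pulls it down toward ground, and a
# given input switches exactly one of them on.

def cmos_inverter(vin):
    pmos_conducts = (vin == 0)   # PMOS turns on for a low input
    nmos_conducts = (vin == 1)   # NMOS turns on for a high input
    # Exactly one of the pair conducts, so there is never a direct
    # path from supply to ground, hence almost no idle power draw.
    assert pmos_conducts != nmos_conducts
    return 1 if pmos_conducts else 0

print(cmos_inverter(0))   # 1
print(cmos_inverter(1))   # 0
```

Because current only flows during the brief moment the output switches, a CMOS circuit sitting idle consumes almost nothing, which is why the technology took over.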

CMOS became the standard technology for almost all digital circuits. Your computer, your phone, your television, your car's engine management system, your refrigerator's temperature controller—all CMOS.

In 1967, Dawon Kahng and Simon Sze created the floating-gate MOSFET, which could trap electrical charge and retain it without power. This became the basis for flash memory—the technology in your USB drives, your solid-state hard drives, and your phone's storage.

In 1989, Digh Hisamoto at Hitachi developed the FinFET, a three-dimensional transistor structure that allowed scaling to continue even as conventional planar transistors reached their limits. FinFETs are what power the chips in your phone today.

The Invention of the Future

The Institute of Electrical and Electronics Engineers, known as IEEE (pronounced "eye-triple-E"), maintains a list of "Milestones"—achievements it considers transformative to human civilization. The 1947 invention of the point-contact transistor at Bell Labs was named an IEEE Milestone in 2009. The junction transistor (1948) and the MOSFET (1959) are also recognized.

The United States Patent and Trademark Office calls the MOSFET "a groundbreaking invention that transformed life and culture around the world."

These are not exaggerations. The transistor made the digital revolution possible. Before the transistor, computers were room-sized machines owned by governments and large corporations. After the transistor, computing power spread to businesses, then homes, then pockets, then wrists. The smartphone in your pocket has more computing power than all the computers that existed in 1960 combined.

The transistor enabled the internet, GPS navigation, video calls, streaming music, digital cameras, modern medicine, space exploration, and artificial intelligence. It enabled you to read these words.

Thirteen Sextillion and Counting

Every year, the semiconductor industry produces more transistors than all the grains of rice harvested worldwide. The cost per transistor has fallen from several dollars in the 1950s to a fraction of a billionth of a dollar today. This is the most dramatic price reduction of any manufactured product in human history.

And still it continues. As I write this, engineers are developing new transistor architectures that will allow the scaling to continue for at least another decade. Gate-all-around transistors. Two-dimensional semiconductor channels. New materials beyond silicon.

In 1947, three physicists in New Jersey pressed two gold wires into a germanium crystal and changed the world. The device they created—fragile, expensive, inconsistent—would evolve into the most abundant manufactured object in human history, the foundation of a global industry worth hundreds of billions of dollars, the basis of a digital civilization that touches every human life.

All from a simple idea: use a small electrical signal to control a larger one. A switch with no moving parts.

That's a transistor. And there are more of them on Earth right now than any other thing humans have ever made.


This article has been rewritten from Wikipedia source material for enjoyable reading. Content may have been condensed, restructured, or simplified.