Wikipedia Deep Dive

Moore's law

Based on Wikipedia: Moore's law

The Prophecy That Became Self-Fulfilling

In 1965, a semiconductor engineer made a casual prediction in a trade magazine. He guessed that computer chips would double in complexity every year for the next decade. He was wrong about the timing—it turned out to be every two years—but that prediction would go on to shape the entire trajectory of human civilization for the next half-century.

That engineer was Gordon Moore, and his observation became known as Moore's law.

Here's the remarkable thing: Moore's law isn't actually a law. It's not like gravity or thermodynamics—it's not something baked into the fabric of reality. It's an observation about what humans have managed to achieve through relentless engineering effort. And yet, for sixty years, the semiconductor industry has treated it as a target, a deadline, a mandate. The prophecy fulfilled itself because everyone believed it would.

What Moore Actually Said

Gordon Moore was working as director of research and development at Fairchild Semiconductor when Electronics magazine asked him to write about the future of the industry. His 1965 article, with the wonderfully unglamorous title "Cramming more components onto integrated circuits," made a bold claim: by 1975, engineers would be able to fit 65,000 components onto a single chip about a quarter of a square inch in size.

A transistor is essentially a tiny electronic switch. It can be on or off, representing a one or a zero. String enough of them together in the right patterns, and you can do math. String together millions or billions, and you can run software, play games, simulate the weather, or train an artificial intelligence. The number of transistors you can fit on a chip determines, in large part, how powerful that chip can be.
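If you want to see the "switches doing math" idea in action, here is a tiny sketch in Python (the example and its function names are illustrative, not something from the original article). It treats each switch as a true/false value and wires a few of them into a half-adder, the simplest circuit that can add two one-bit numbers.

```python
# A transistor is modeled here as a Boolean switch: True = on (1), False = off (0).
# Real chips build logic gates out of transistor pairs; we model the gates directly.

def AND(a: bool, b: bool) -> bool:
    return a and b

def XOR(a: bool, b: bool) -> bool:
    return a != b

def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
    """Add two one-bit numbers: returns (sum_bit, carry_bit)."""
    return XOR(a, b), AND(a, b)

# 1 + 1 = 10 in binary: sum bit 0, carry bit 1.
for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(f"{int(a)} + {int(b)} = carry {int(c)}, sum {int(s)}")
```

Chain half-adders into full adders, full adders into multipliers, and keep going: eventually you have the arithmetic heart of a processor.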

In 1965, the most advanced chips contained about sixty transistors. Moore was predicting a thousand-fold increase in ten years.
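The arithmetic behind that forecast is simple enough to check. Assuming a starting point of 64 components (a round-number stand-in for the "about sixty" above) and one doubling per year, a quick calculation lands almost exactly on Moore's 65,000 figure:

```python
# Moore's 1965 extrapolation, as back-of-the-envelope arithmetic.
components_1965 = 64   # assumed round-number starting point; the article says "about sixty"
doublings = 10         # one doubling per year, 1965 to 1975
print(components_1965 * 2**doublings)   # 65536 -- roughly the 65,000 Moore predicted
```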

Looking back on that prediction decades later, Moore was characteristically modest. "I just did a wild extrapolation," he said in a 2015 interview. But his "wild extrapolation" was grounded in something real. He had observed that engineers kept finding new ways to shrink transistors and pack more of them together. The trend had been remarkably consistent.

The Revision That Stuck

By 1975, Moore could see that the pace was about to slow. At an engineering conference that year, he revised his forecast: complexity would keep doubling annually until about 1980, then settle into a doubling every two years. That revised timeline—a doubling every two years—is what became canonized as "Moore's law."

Carver Mead, a professor at the California Institute of Technology, coined the actual term shortly after. The name stuck, and suddenly the semiconductor industry had something like a constitution. Not a government mandate. Not a physical constraint. Just a shared expectation that became a shared commitment.

Moore himself found this strange. "Moore's law is a violation of Murphy's law," he quipped. "Everything gets better and better."

The Eighteen-Month Myth

You've probably heard that computer chips double in power every eighteen months. This is one of the most persistent misquotes in tech history, and it comes from a colleague of Moore's named David House.

House was an Intel executive who, in 1975, connected Moore's transistor observation to something else: a phenomenon called Dennard scaling, named after IBM engineer Robert Dennard.

Dennard had noticed something wonderful about shrinking transistors. When you make a transistor smaller, it uses less power and switches faster. Specifically, the power per unit of chip area stays roughly constant as transistors shrink, even as each transistor speeds up. This meant that as chips got more complex and faster, they didn't necessarily get hotter or hungrier for electricity.

House combined these two observations. If transistor counts double every two years (Moore's law) and each generation of smaller transistors also runs faster at no extra power cost (Dennard scaling), then actual chip performance—the thing users care about—would double roughly every eighteen months.
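One way to see House's arithmetic (a simplified reconstruction, not his actual calculation): a doubling every eighteen months works out to roughly a 2.5-fold gain every two years. Moore's law supplies a 2-fold gain in transistor count over that period; the remaining factor of about 1.26 has to come from each transistor getting faster, which is exactly what Dennard scaling promised.

```python
# Illustrative arithmetic only: split an 18-month performance doubling into
# a transistor-count part (Moore) and a per-transistor speed part (Dennard).
perf_gain_per_2_years = 2 ** (24 / 18)           # ~2.52x if performance doubles every 18 months
count_gain = 2.0                                 # Moore's law: transistor count doubles every 2 years
speed_gain = perf_gain_per_2_years / count_gain  # ~1.26x must come from faster transistors
print(f"{perf_gain_per_2_years:.2f}x total, {speed_gain:.2f}x from speed")
```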

It was an elegant synthesis, and it held true for decades. Until it didn't.

When the Laws Broke Down

Around the mid-2000s, Dennard scaling hit a wall.

The problem was heat. As transistors shrank to truly microscopic sizes, they started leaking current even when switched off. This leakage generated heat. The smaller the transistors got, the worse the leakage became. Suddenly, making transistors smaller didn't automatically make them more power-efficient. Chips started running hot.

This is why, if you've bought computers over the past two decades, you've noticed that clock speeds—the gigahertz numbers that manufacturers used to trumpet—stopped climbing so dramatically. In 2004, a top desktop processor might run at 3 gigahertz. In 2024, a top processor still runs at roughly 3 to 5 gigahertz. The numbers barely moved.

So engineers adapted. Instead of making individual transistors faster, they started putting more processor cores on each chip. Instead of one really fast engine, your computer got four or eight or sixteen engines working in parallel. This is why modern software development focuses so heavily on parallel computing—it's the only way to use all those cores effectively.
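Here is a minimal sketch of what that shift looks like in practice, written in Python purely for illustration (the article itself names no language or library). Rather than waiting for one faster core, the program splits a CPU-heavy job, counting primes by trial division, across every core the machine reports.

```python
from concurrent.futures import ProcessPoolExecutor
import os

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division; deliberately CPU-heavy."""
    lo, hi = bounds
    total = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            total += 1
    return total

if __name__ == "__main__":
    cores = os.cpu_count() or 1
    chunk = 200_000 // cores                         # split one big range into one piece per core
    pieces = [(i * chunk, (i + 1) * chunk) for i in range(cores)]
    with ProcessPoolExecutor(max_workers=cores) as pool:
        found = sum(pool.map(count_primes, pieces))  # the pieces run in parallel across cores
    print(f"{found} primes found using {cores} cores")
```

On a four-core machine the pieces run four at a time; the speedup comes from the extra cores, not from any single core getting faster.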

The Economics of Defying Physics

There's a dark twin to Moore's law, sometimes called Moore's second law or Rock's law (after venture capitalist Arthur Rock). It states that the cost of building a semiconductor fabrication plant doubles approximately every four years.

Think about that for a moment. The machines that make computer chips are among the most complex devices ever created by humans. The most advanced ones use a technique called extreme ultraviolet lithography, or EUVL. These machines cost hundreds of millions of dollars each. They focus light with a wavelength of about 13.5 nanometers—dozens of times shorter than visible light—to etch patterns onto silicon wafers with mind-boggling precision.

A single EUVL machine weighs about 180 tons and requires its own dedicated power supply. Building a modern chip fabrication plant costs somewhere between ten and twenty billion dollars. Only a handful of companies on Earth can afford to play this game: Taiwan Semiconductor Manufacturing Company (TSMC), Samsung, and Intel chief among them.

This is the hidden story of Moore's law. The trend didn't just happen because physics allowed it. It required a massive, coordinated investment by the entire semiconductor industry, year after year, decade after decade. The industry used Moore's law as a planning document, a shared roadmap. Everyone knew what they needed to achieve and by when.

A Catalog of Miracles

The innovations that kept Moore's law alive read like a history of human ingenuity at its most concentrated.

The integrated circuit itself came first, invented almost simultaneously by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor in 1958 and 1959. An integrated circuit combines multiple electronic components—transistors, resistors, capacitors—onto a single piece of semiconductor material. Before this, computers were built from discrete components wired together by hand. The integrated circuit made electronics compact and manufacturable.

Then came CMOS technology—that's Complementary Metal-Oxide-Semiconductor—invented at Fairchild in 1963. CMOS circuits use pairs of transistors that work together to minimize power consumption. Nearly every digital chip made today uses CMOS technology.

Dynamic random-access memory, or DRAM, arrived in 1967, developed by Dennard at IBM. DRAM stores each bit of data as a tiny electrical charge in a capacitor, allowing vast amounts of memory to fit in small spaces. When you talk about how many gigabytes of memory your computer has, you're talking about DRAM.

In the 1980s, IBM researchers invented chemically amplified photoresists and deep ultraviolet laser lithography. These may sound like word salad, but they were revolutionary. Photoresists are light-sensitive materials used to create the intricate patterns on chips. Chemically amplified resists were five to ten times more sensitive than their predecessors, enabling finer detail. Deep ultraviolet lithography used lasers with shorter wavelengths than visible light, allowing smaller features to be etched.

The list goes on: chemical-mechanical polishing to flatten wafer surfaces, copper interconnects to replace slower aluminum wiring, new transistor architectures with exotic names like FinFET and gate-all-around.

The Shrinking Gate

Modern transistors face a fundamental challenge: the gate.

A transistor works by controlling the flow of electrons through a channel. The gate is what does the controlling—it's like a valve that can open or close the channel. As transistors shrink, that channel becomes thinner and shorter. Controlling the flow of electrons through an ever-smaller space becomes increasingly difficult. Electrons start tunneling through barriers they should be stopped by. Quantum mechanics, which governs behavior at atomic scales, starts playing tricks.

The solution has been to redesign the transistor entirely. Traditional transistors were flat, with the gate sitting on top of the channel like a weight on a table. Modern transistors are three-dimensional. The FinFET design, which became standard around 2012, raises the channel into a thin fin with the gate wrapped around three sides. Imagine the difference between pressing down on a piece of clay versus grabbing it from three directions—you get much better control.

The next generation, already in production, wraps the gate around all four sides. These gate-all-around transistors were first demonstrated by Toshiba researchers back in 1988, but it took decades before manufacturing technology could reliably produce them.

The Atomic Limit

How small can transistors get? We're approaching fundamental limits.

In 2012, researchers at the University of New South Wales created a transistor from a single atom of phosphorus placed precisely in a silicon crystal. This wasn't a practical device—it was a scientific demonstration—but it marked something like the theoretical floor. You can't get smaller than one atom.

In 2021, IBM announced a chip built with two-nanometer technology. Two nanometers is roughly the width of ten silicon atoms. At this scale, you're not just engineering materials anymore; you're arranging individual atoms.

But here's where things get confusing: the numbers in chip names—"7 nanometer," "5 nanometer," "3 nanometer"—don't actually correspond to any physical measurement on the chip. They're marketing names that roughly indicate generational improvements. A "3 nanometer" chip doesn't necessarily have any features that are three nanometers in size. The naming convention has become almost metaphorical, a way of signaling where you are on the roadmap rather than describing physical reality.

The Third Dimension

If you can't shrink transistors much further in two dimensions, you can stack them in the third.

This idea, too, has been around for decades. Toshiba researchers demonstrated three-dimensional chip packaging in 2001. By 2007, the company was selling flash memory chips with eight layers of memory stacked on top of each other. Hynix topped that the same year with a 24-layer stack.

Flash memory has been particularly amenable to this approach. A technology called V-NAND—the V stands for vertical—allows memory cells to be stacked dozens of layers high. Samsung's most advanced flash chips stack 96 layers or more, packing two trillion transistors into a single chip. That's more transistors than there are stars in the Milky Way galaxy.

The challenge with three-dimensional stacking is heat. When you stack chips on top of each other, the ones in the middle can't dissipate their heat effectively. You're essentially building a multi-story oven. Engineers have developed elaborate cooling solutions, including microscopic channels for liquid coolant running through the chip stack.

Is Moore's Law Dead?

This question has been asked, with increasing frequency, for at least two decades. The answer depends on how you define the law.

If Moore's law means transistor counts doubling every two years with corresponding cost reductions, then the law started faltering around 2010. The pace of improvement slowed. The costs of staying on the curve exploded.

In September 2022, Jensen Huang, the CEO of Nvidia, declared Moore's law dead. His reasoning: the cost improvements that once accompanied shrinking transistors had disappeared. You could still pack more transistors onto a chip, but you couldn't do it cheaply anymore.

Pat Gelsinger, who was Intel's CEO at the time, disagreed. He pointed to continued advances in transistor technology and argued that the industry would find new ways to keep the trend alive.

Both perspectives contain truth. The exponential curve that held for decades has definitely bent. But innovation continues, just in different directions. Instead of simply shrinking transistors, companies are finding new ways to arrange them: stacking chips vertically, using advanced packaging to combine different types of chips into a single module, developing specialized processors for specific tasks like machine learning.

Why It Mattered

The economic impact of Moore's law is almost impossible to overstate. Every two years, the price of computing power—measured per transistor, per calculation, per bit of memory—fell by half. This exponential decline drove the entire digital revolution.
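A quick back-of-the-envelope calculation, using only the halving-every-two-years figure above, shows why "almost impossible to overstate" is not hyperbole: sixty years of halvings compound into roughly a billion-fold drop in cost.

```python
# Illustrative compounding only: cost per unit of computing halves every two years.
years = 60
halvings = years / 2
reduction = 2 ** halvings
print(f"about {reduction:,.0f}x cheaper after {years} years")   # 1,073,741,824 -- a billion-fold
```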

Consider: the smartphone in your pocket contains more computing power than all of NASA had when it landed humans on the Moon. It costs a few hundred dollars. The computing power that once required a building now fits in a device you can lose in your couch cushions.

This created entirely new industries. Social media, cloud computing, artificial intelligence, streaming video, cryptocurrency—none of these would exist without sixty years of exponential improvement in computing hardware. The software that runs our world was written on the assumption that next year's computers would be twice as powerful as this year's.

Moore's law also created something like a shared industrial policy, but one that emerged organically rather than being imposed by any government. The entire semiconductor industry coordinated around a common timeline. Companies that might otherwise be competitors shared research, established standards, and invested together in next-generation manufacturing equipment. They all knew where the goal line was because Gordon Moore had drawn it decades earlier.

What Comes Next

The end of Moore's law—or at least its slowdown—coincides with the rise of a technology that may prove even more transformative: artificial intelligence.

This is not a coincidence. Modern AI requires enormous computing power. Training a large language model can require thousands of specialized processors running for months. The appetite for computation in AI is essentially unlimited; the systems get better the more compute you throw at them.

This voracious demand is driving new kinds of chip innovation. Graphics processing units, or GPUs, originally designed to render video game graphics, turned out to be excellent at the parallel computations that neural networks require. Companies like Nvidia have become among the most valuable on Earth by supplying AI chips.

More exotic technologies wait in the wings. Quantum computers manipulate information using the strange rules of quantum mechanics, potentially solving certain problems exponentially faster than classical computers. Neuromorphic chips are designed to mimic the structure of biological brains, using far less energy than traditional processors. Optical computing uses light instead of electricity to perform calculations.

None of these is a direct continuation of Moore's law. They represent different paradigms entirely. But they share something with Moore's original observation: they're driven by a belief that next year's technology will be better than this year's, and the year after that will be better still.

That may be the true legacy of Moore's law. Not the specific prediction about transistor counts, but the expectation of relentless progress. The semiconductor industry proved that exponential improvement was possible over long timescales. Now the question is which technologies—and which aspects of human life—will ride the next exponential curve.

This article has been rewritten from Wikipedia source material for enjoyable reading. Content may have been condensed, restructured, or simplified.