Exascale computing
Based on Wikipedia: Exascale computing
In 2022, a machine at Oak Ridge National Laboratory in Tennessee became the fastest computer ever built. Called Frontier, it could perform more than one quintillion calculations per second. That's a one followed by eighteen zeros. To put it another way: if every person on Earth did one calculation per second, it would take us about four years to match what Frontier does in a single second.
This milestone has a name: exascale computing.
The prefix "exa" comes from the Greek word for "six," referring to the sixth power of a thousand (10^18). When we talk about exascale computers, we're talking about machines that can perform at least one exaFLOPS—one quintillion floating-point operations per second. A floating-point operation is essentially a mathematical calculation involving decimal numbers, the kind of math that underlies everything from weather simulations to protein folding predictions.
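The arithmetic behind that scale is easy to check for yourself. Here is a quick back-of-the-envelope sketch in Python; the population figure is a rough assumption, not a statistic from the source.

```python
# Back-of-the-envelope check of the "four years" comparison above.
# The world population figure is a rough assumption, not an official statistic.

EXAFLOPS = 10**18                       # floating-point operations per second
WORLD_POPULATION = 8e9                  # roughly eight billion people
SECONDS_PER_YEAR = 365.25 * 24 * 3600

# If everyone on Earth did one calculation per second, how long would it
# take to match a single second of work by a one-exaFLOPS machine?
seconds_needed = EXAFLOPS / WORLD_POPULATION
print(f"{seconds_needed / SECONDS_PER_YEAR:.1f} years")   # prints about 4.0
```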
Why This Matters
You might wonder why anyone needs a computer this powerful. The answer lies in problems we currently cannot solve.
Consider weather forecasting. Today's models can predict weather reasonably well for about a week out. Beyond that, the chaotic nature of atmospheric systems makes predictions increasingly unreliable. With exascale computing, we can run simulations at much finer resolutions—modeling smaller pockets of air, tracking more particles, accounting for more variables. The result isn't just better seven-day forecasts; it's the possibility of accurate two-week or even three-week predictions.
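A rough sense of why finer resolution costs so much: refining a three-dimensional grid multiplies the work in every spatial direction, and numerical stability usually forces a proportionally smaller timestep as well. The sketch below assumes the resulting fourth-power rule of thumb; real weather models scale somewhat differently, so treat the numbers as illustrative.

```python
# Rough cost scaling when a 3-D simulation grid is refined.
# Assumes cost grows with the fourth power of the refinement factor
# (three spatial dimensions plus a proportionally smaller timestep).
# This is a common rule of thumb, not an exact law for any real model.

def relative_cost(refinement: float) -> float:
    """Compute cost relative to the original resolution."""
    return refinement ** 4

for factor in (2, 4, 10):
    print(f"{factor}x finer grid -> about {relative_cost(factor):,.0f}x the compute")
```

Doubling the resolution costs roughly sixteen times as much computation, which is why each jump in forecast quality has historically required a new generation of machines.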
Climate modeling faces similar constraints. To truly understand how carbon emissions will affect global temperatures over decades, scientists need to simulate oceans, atmospheres, ice sheets, and ecosystems all interacting together. Each additional layer of complexity multiplies the computing power required.
Then there's personalized medicine. Imagine simulating how a specific drug will interact with your particular genetic makeup before you ever swallow a pill. Or modeling how a cancer will respond to different treatment combinations. These simulations require understanding molecular interactions at scales that demand exascale power.
Perhaps most intriguingly, exascale computing reaches roughly the same processing power as the human brain—at least at the neural level. This was the target of the Human Brain Project, an ambitious European initiative that aimed to simulate an entire human brain in a computer. The project has since been restructured, but the computational milestone it aimed for has been achieved.
The Race to Exascale
Getting here wasn't easy. For years, there was a genuine race among nations to build the first exascale computer, and progress was slower than many predicted.
In 2009, at a supercomputing conference, experts projected that exascale systems would arrive by 2018. They were optimistic. By 2014, the rate of improvement in the world's fastest supercomputers had slowed noticeably, leading some observers to question whether exascale was achievable by 2020.
The challenge wasn't just raw speed. It was power consumption.
Computers generate heat. The faster they run, the more heat they produce. At exascale levels, cooling becomes a massive engineering challenge. Early projections suggested an exascale computer might need hundreds of megawatts of electricity—the output of a small power plant—just to operate. Making these machines practical required fundamental advances in processor efficiency.
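One way to state the efficiency problem is in operations per watt. The sketch below uses a 20-megawatt power budget, a figure often cited as a design target during the exascale effort; it is an illustrative assumption here, not a specification of any particular machine.

```python
# Energy efficiency needed to fit one exaFLOPS into a fixed power budget.
# The 20-megawatt budget is an often-cited design target, assumed here
# purely for illustration.

EXAFLOPS = 10**18                 # operations per second
POWER_BUDGET_WATTS = 20e6         # 20 megawatts

flops_per_watt = EXAFLOPS / POWER_BUDGET_WATTS
print(f"required efficiency: {flops_per_watt / 1e9:.0f} gigaFLOPS per watt")
```

Fifty billion operations per watt, sustained across an entire machine, was far beyond what processors of the early 2010s could deliver.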
The first petascale computer (a thousand times less powerful than exascale) entered operation in 2008. It took fourteen more years to achieve the next thousandfold improvement.
A Technical Asterisk
Before Frontier claimed the title, other systems had technically crossed the exaFLOPS barrier—but with caveats.
In March 2020, the distributed computing project Folding@home became the first to break the barrier. But Folding@home isn't a single supercomputer. It's a network of hundreds of thousands of ordinary computers around the world, all donating their spare processing power to simulate protein folding. While impressive, this distributed approach doesn't count in the traditional supercomputer rankings.
The Japanese supercomputer Fugaku—named after Mount Fuji—achieved 1.42 exaFLOPS in June 2020, but using a different benchmark called HPL-AI that allows lower-precision calculations. The gold standard for supercomputer rankings is the High Performance LINPACK benchmark, which requires 64-bit double-precision calculations. It's like the difference between measuring a car's speed on a straight track versus through a slalom course. Both are valid measures, but the community has agreed on one standard for comparison.
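A toy example makes the precision distinction concrete. The snippet below is a generic NumPy illustration, not the HPL benchmark itself: it sums the same numbers in 64-bit and 16-bit arithmetic, and the low-precision running total stalls far short of the right answer.

```python
import numpy as np

# Sum the same 100,000 numbers in double (64-bit) and half (16-bit) precision.
# A demonstration of rounding error only, not the HPL benchmark itself.
values = np.full(100_000, 0.1)

sum64 = values.astype(np.float64).sum()       # double precision, as HPL requires
sum16 = np.float16(0.0)
for v in values.astype(np.float16):           # naive 16-bit running total
    sum16 = np.float16(sum16 + v)

print("exact answer :  10000.0")
print(f"float64 sum  :  {sum64:.4f}")         # essentially exact
print(f"float16 sum  :  {float(sum16):.1f}")  # stalls at a few hundred
```

Mixed-precision benchmarks like HPL-AI are designed around workloads, such as machine learning, that tolerate this kind of error; the traditional ranking deliberately is not.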
Frontier was the first to cross the exaFLOPS threshold using that standard metric. As of late 2024, El Capitan at Lawrence Livermore National Laboratory in California has taken the crown, running at 1.742 exaFLOPS—about sixty percent faster than Frontier.
The American Dominance
As of November 2024, the United States remains the only country with publicly acknowledged exascale supercomputers. There are three of them: Frontier at Oak Ridge, Aurora at Argonne National Laboratory, and El Capitan at Lawrence Livermore.
This didn't happen by accident. It was the result of sustained government investment over more than fifteen years.
In 2008, two branches of the Department of Energy—the Office of Science and the National Nuclear Security Administration—began funding exascale development. Sandia National Laboratories and Oak Ridge National Laboratory were tasked with designing the architecture. In 2015, President Barack Obama signed an executive order creating the National Strategic Computing Initiative specifically to accelerate exascale development.
The investments were substantial. By 2012, the United States had allocated $126 million just for exascale research. Individual machines cost even more: Aurora cost $600 million, as did El Capitan.
El Capitan's primary purpose—though not its only one—is nuclear weapons modeling. The United States stopped testing nuclear weapons in 1992, but it still needs to ensure its existing stockpile remains reliable and safe. Computer simulations have replaced underground tests, and exascale computing makes those simulations far more accurate.
What About Everyone Else?
China presents a fascinating puzzle. According to reports, China actually has two operational exascale computers: Tianhe-3 (also called Xingyi) and Sunway OceanLight, with a third under construction. But neither appears on the TOP500, the official ranking of the world's fastest supercomputers.
Why? China stopped submitting its most powerful machines to the ranking in 2019, likely due to escalating technology tensions with the United States. The TOP500 relies on voluntary submissions, so China's absence says nothing definitive about its capabilities. What we know comes from academic papers and government announcements, which suggest China may have quietly achieved exascale computing around the same time as the United States, if not before.
Japan's Fugaku, while not technically exascale by the standard metric, remains among the world's most powerful machines and was explicitly designed with energy efficiency in mind—consuming less than 30 megawatts, a fraction of what early exascale projections feared.
Europe has been working toward exascale through the European High-Performance Computing Joint Undertaking, established in 2018 with a budget of around one billion euros. Their goal was an exascale machine by 2022 or 2023. In 2025, Germany inaugurated JUPITER, which ranks fourth globally but claims the number-one position on the Green500—a ranking that measures energy efficiency rather than raw speed. JUPITER runs entirely on renewable energy and features advanced cooling and energy reuse systems.
The United Kingdom announced a £900 million investment in exascale computing in March 2023. The project was cancelled in August 2024, less than a year and a half later, a reminder that these efforts require not just money but sustained political will.
India has announced plans for an exascale computer called Param Shankh, to be powered by an indigenous 96-core processor based on the ARM architecture, nicknamed AUM (ॐ). Taiwan has been working toward exascale with technology transferred from Fujitsu of Japan. These efforts remain works in progress.
The Programming Problem
Building an exascale computer is only half the challenge. Using it effectively is the other half.
Software that runs well on today's supercomputers doesn't automatically run well on exascale machines. The architecture is fundamentally different. An exascale computer might contain millions of processor cores, all needing to work together on the same problem. Coordinating that many components requires new programming approaches that many scientists haven't learned.
Think of it like an orchestra. A string quartet can play without a conductor—four musicians can watch each other and stay synchronized. But an orchestra of a thousand musicians needs entirely different coordination. You can't just add more musicians to a quartet and expect it to work.
Researchers have recognized that developing applications for exascale platforms requires new programming models and runtime systems. The Exascale Computing Project, funded by the Department of Energy, has been working on software alongside hardware, training scientists to write code that can actually harness these machines' full power.
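To give a flavor of what that coordination looks like in practice, the dominant model on large machines is message passing: each process owns one slice of the data and explicitly exchanges results with the others. The sketch below uses the mpi4py library, which the source does not mention and which stands in here for any MPI binding; it splits a simple sum across however many processes are launched.

```python
from mpi4py import MPI   # Python bindings for the Message Passing Interface
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's identity (0, 1, 2, ...)
size = comm.Get_size()   # total number of processes launched

# Each process owns only its own slice of the problem: every size-th
# integer below one million, starting at its rank.
n_total = 1_000_000
chunk = np.arange(rank, n_total, size, dtype=np.float64)
local_sum = chunk.sum()

# A single collective call combines every process's partial result.
global_sum = comm.allreduce(local_sum, op=MPI.SUM)

if rank == 0:
    print(f"{size} processes computed sum = {global_sum:.0f}")
```

Launched with mpiexec -n 4, the same script runs on four processes; on an exascale system the core counts reach into the millions, and keeping that many workers synchronized, fed with data, and resilient to failures is where the orchestra problem begins in earnest.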
Exotic Approaches
Not everyone is pursuing exascale through conventional means.
In 2013, the Intelligence Advanced Research Projects Activity—IARPA, essentially the research arm of the American intelligence community—launched something called the Cryogenic Computing Complexity program. The idea: build supercomputers using superconducting circuits that operate at temperatures near absolute zero.
Superconducting electronics have almost no electrical resistance, meaning they waste almost no energy as heat. This addresses one of exascale computing's biggest challenges. The drawback is that keeping circuits just a few degrees above absolute zero requires elaborate cryogenic cooling systems. IBM, Raytheon, and Northrop Grumman have all received contracts to develop this technology.
Meanwhile, the field of neuromorphic computing takes inspiration from the brain itself. Rather than simulating neurons on conventional hardware, neuromorphic chips are designed to mimic how neurons actually work—processing information through patterns of electrical spikes rather than traditional binary calculations. These chips excel at pattern recognition and learning tasks, potentially offering exascale-level performance for specific applications at a fraction of the power consumption.
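To make the idea of computing with spikes concrete, here is a minimal leaky integrate-and-fire neuron, the simplest textbook spiking model. It is a generic illustration with arbitrary parameters, not the design of any particular neuromorphic chip.

```python
# Minimal leaky integrate-and-fire neuron, the simplest textbook spiking model.
# Parameters and inputs are arbitrary illustrative values.

membrane = 0.0      # membrane potential (accumulated charge)
threshold = 1.0     # fire a spike when the potential crosses this level
leak = 0.9          # fraction of potential retained each time step

inputs = [0.0, 0.3, 0.4, 0.5, 0.0, 0.6, 0.7, 0.0, 0.2, 0.9]
spike_times = []

for t, current in enumerate(inputs):
    membrane = membrane * leak + current   # integrate the input, leak charge
    if membrane >= threshold:              # threshold crossed: emit a spike
        spike_times.append(t)
        membrane = 0.0                     # reset after firing

print("spike times:", spike_times)         # [3, 6, 9] for these inputs
```

Information lives in when the spikes occur rather than in a stream of binary arithmetic, and because nothing happens between spikes, the hardware can sit idle most of the time; that is where the potential power savings come from.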
Beyond Exascale
Even as exascale computing matures, researchers are already thinking about the next milestone: zettascale computing, a thousand times more powerful still.
To achieve this, we'll likely need to move beyond traditional silicon-based processors entirely. Quantum computing offers one path, using the strange properties of quantum mechanics to perform certain calculations exponentially faster than classical computers. Optical computing—using light instead of electricity—offers another, potentially eliminating the heat problems that plague current designs.
The history of supercomputing suggests that zettascale will arrive, probably within the next decade or two. Each milestone seemed impossibly distant until it suddenly wasn't. The first megaFLOPS computer arrived in the 1960s. Gigascale came in the 1980s. Terascale in the 1990s. Petascale in 2008. Exascale in 2022.
The pattern holds: roughly a thousandfold improvement every decade or so.
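Laid out as arithmetic, the trend looks like this. The specific years below are the commonly cited first systems at each scale (the text above gives only decades for the earliest ones), and the final line is an extrapolation of the pattern, not a prediction from any roadmap.

```python
# The roughly thousandfold-per-decade pattern in peak supercomputer performance.
# Years are commonly cited first demonstrations at each scale; the final
# print is only an extrapolation of the pattern.

milestones = [
    ("megaFLOPS", 1964),
    ("gigaFLOPS", 1985),
    ("teraFLOPS", 1997),
    ("petaFLOPS", 2008),
    ("exaFLOPS",  2022),
]

for (scale_a, year_a), (scale_b, year_b) in zip(milestones, milestones[1:]):
    print(f"{scale_a} ({year_a}) -> {scale_b} ({year_b}): 1000x in {year_b - year_a} years")

print("zettaFLOPS: another 1000x, plausibly sometime in the 2030s or 2040s")
```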
What Exascale Enables
The most exciting applications of exascale computing may be ones we haven't yet imagined. But several are already taking shape.
In 2018, before true exascale arrived, a team using the Summit supercomputer at Oak Ridge won the Gordon Bell Prize—one of computing's highest honors—for analyzing genomic data at unprecedented scale. They processed the equivalent of a quintillion calculations per second while searching for patterns in human DNA that might explain disease susceptibility. Their work hints at a future where personalized medicine becomes routine rather than exceptional.
Climate scientists are using exascale machines to model global warming scenarios at resolutions previously impossible. Rather than treating the Pacific Ocean as a single entity, they can now simulate individual currents, eddies, and temperature gradients. This granularity reveals dynamics that coarser models miss entirely.
Materials scientists are designing new substances by simulating atomic interactions. Want a battery that holds more charge? A solar panel that converts more sunlight? A plastic that biodegrades safely? These questions can now be explored computationally before anyone synthesizes a molecule in a lab.
And then there's artificial intelligence. Training large language models and neural networks requires enormous computational resources. While AI training typically happens on clusters of graphics processing units rather than traditional supercomputers, the line between these approaches is blurring. El Capitan, for instance, uses AMD graphics processors specifically chosen to accelerate artificial intelligence tasks.
The Efficiency Question
For all their power, exascale computers face a fundamental constraint: energy.
El Capitan consumes roughly 30 megawatts of electricity. That's enough to power tens of thousands of American homes. The electricity bill alone runs into tens of millions of dollars annually. Add cooling costs, maintenance, and staffing, and operating an exascale machine becomes a significant budget line even for governments.
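The "tens of millions" figure follows directly from the power draw. The sketch below assumes an industrial electricity rate of eight cents per kilowatt-hour; that rate, like the round power figure, is an illustrative assumption rather than the laboratory's actual bill.

```python
# Rough annual electricity cost for a machine drawing about 30 megawatts
# around the clock. The price per kilowatt-hour is an assumed industrial
# rate, not a quoted figure.

power_mw = 30
hours_per_year = 24 * 365
price_per_kwh = 0.08              # dollars, assumed industrial rate

energy_kwh = power_mw * 1000 * hours_per_year
annual_cost = energy_kwh * price_per_kwh
print(f"about {energy_kwh / 1e6:.0f} GWh per year, roughly ${annual_cost / 1e6:.0f} million")
```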
This is why the Green500 ranking exists alongside the TOP500. It's no longer enough to be the fastest; efficiency matters too. JUPITER's claim to the top Green500 spot while running entirely on renewable energy represents a genuine achievement—not just in raw performance, but in sustainable computing.
The pressure toward efficiency is driving innovation. New cooling techniques circulate liquid directly through computer chips rather than relying on air conditioning. Some facilities are experimenting with capturing waste heat and using it to warm nearby buildings. The energy crisis of exascale computing is becoming a catalyst for broader advances in sustainable technology.
A Measure of Progress
There's something humbling about exascale computing. These machines represent the accumulated ingenuity of thousands of engineers, physicists, and computer scientists working across decades. They embody breakthroughs in chip fabrication, cooling technology, network design, software engineering, and materials science.
And yet, for all their power, they still fall short of nature's solutions. The human brain—a three-pound organ operating on about 20 watts of power, roughly what a dim light bulb uses—performs functions that exascale computers cannot replicate. Consciousness, creativity, the ability to learn from a handful of examples rather than millions: these remain beyond our computational reach.
Exascale computing is a milestone, not a destination. It expands what's possible in science and engineering. It enables simulations and analyses that were previously unthinkable. But it also reminds us how far we have yet to go.
The race continues.