Technological singularity
Based on Wikipedia: Technological singularity
The Last Invention Humanity Would Ever Need to Make
In 1965, a British mathematician named Irving John Good made a prediction that still haunts researchers today. He imagined a machine smart enough to improve its own design. That machine would then create an even smarter version of itself. And that version would create something smarter still. The process would repeat, faster and faster, until intelligence exploded beyond anything humans could comprehend.
Good called this hypothetical machine "the last invention that man need ever make."
He added a chilling caveat: this would only work out well for us "provided that the machine is docile enough to tell us how to keep it under control."
This idea—that technological progress might one day accelerate beyond our ability to understand or manage it—has a name. Researchers call it the technological singularity, or simply the singularity. The term borrows from physics, where a singularity refers to a point where normal rules break down entirely, like the center of a black hole where space and time twist into something incomprehensible.
Where the Idea Came From
The first person to discuss a coming singularity in technology wasn't a science fiction writer or a Silicon Valley entrepreneur. It was John von Neumann, one of the twentieth century's greatest mathematicians. Von Neumann helped design the atomic bomb, invented game theory, and laid the foundations for modern computing. He was, by any measure, among the most brilliant minds who ever lived.
Shortly before his death in 1957, von Neumann had a conversation with his colleague Stanislaw Ulam about where technology seemed to be heading. Ulam later recalled that their discussion "centered on the accelerating progress of technology and changes in human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."
Von Neumann wasn't making a precise prediction. He was expressing unease. Progress seemed to be speeding up in a way that felt fundamentally different from anything in human history.
The concept gained wider attention through Vernor Vinge, a mathematician and science fiction author. In 1983, Vinge compared the coming technological transition to "the knotted space-time at the center of a black hole." A decade later, in a famous 1993 essay called "The Coming Technological Singularity," he wrote that once we create intelligences greater than our own, we will have triggered the end of the human era. The new superintelligence would upgrade itself at rates we couldn't begin to follow.
Vinge predicted this would happen sometime between 2005 and 2030.
The Intelligence Explosion
The core argument is deceptively simple.
Designing better machines is an intellectual activity. If you build a machine that's better at intellectual activities than humans are, that machine will be better at designing machines than humans are. So it will design something even smarter. That smarter thing will design something smarter still.
Each generation emerges faster than the last because each one is more capable than its predecessor. What might take humans a decade to develop, the first superhuman intelligence might accomplish in a year. Its successor might need only a month. Then a week. Then a day.
This is what researchers mean by an intelligence explosion.
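A toy calculation makes the logic concrete. The numbers below are purely illustrative assumptions, not a forecast: each generation is taken to be twice as capable as the last, and design time is assumed to be inversely proportional to the designer's capability. The point is only how the arithmetic behaves under those assumptions.

```python
# A toy numeric sketch of the intelligence-explosion argument, not a model of
# real AI progress. Assumptions: each generation is twice as capable as the
# last, and design time is inversely proportional to the designer's capability.

def intelligence_explosion(generations=10, first_design_years=10.0, gain=2.0):
    """Yield (generation, capability, years_for_step, total_elapsed_years)."""
    capability, elapsed = 1.0, 0.0   # capability 1.0 = human-level baseline
    for g in range(1, generations + 1):
        step_years = first_design_years / capability  # smarter designer, faster redesign
        elapsed += step_years
        capability *= gain                            # next generation is `gain` times smarter
        yield g, capability, step_years, elapsed

for g, cap, step, total in intelligence_explosion():
    print(f"gen {g:2d}: capability x{cap:6.0f}, step {step:6.3f} yr, elapsed {total:6.3f} yr")

# The step times form a geometric series (10 + 5 + 2.5 + ...), so the total
# elapsed time approaches a finite limit (20 years here): the runaway the
# argument describes. If the gains shrink each round instead, there is no
# limit point and no explosion.
```

Whether anything like these assumptions holds in reality is, of course, exactly what the rest of the debate is about.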
The result would be a superintelligence—an entity whose cognitive abilities dwarf those of the smartest humans the way ours dwarf those of insects. We might not even be able to understand what it's thinking or why it does what it does, just as a mouse cannot understand calculus no matter how patiently you try to explain it.
Why Some People Worry
Stephen Hawking, the theoretical physicist famous for his work on black holes and the nature of time, was among those who took the threat seriously. He expressed concern that artificial superintelligence could lead to human extinction. Not because a superintelligent machine would necessarily be malicious, but because its goals might simply not include keeping humans around.
Consider an analogy. When humans build a highway, we don't hate the anthills we pave over. We're not even thinking about them. They're just not relevant to our goals. A superintelligence might view humanity the same way—not as enemies to be destroyed, but as obstacles or resources or simply irrelevant details in whatever vast project it has decided to pursue.
The philosopher Nick Bostrom has written extensively about these risks. He points out that we would only get one chance to get the initial design right. An intelligence explosion, once begun, would move too fast for course corrections. By the time we realized we had made a mistake, it would already be too late.
Why Some People Are Skeptical
Not everyone finds these arguments convincing.
A remarkable number of prominent technologists and scientists have expressed doubt that a singularity will ever occur. Paul Allen, the co-founder of Microsoft, argued against it, as has Jeff Hawkins, who created the PalmPilot and now researches how the brain works. Steven Pinker, the Harvard psychologist, remains skeptical. So does Roger Penrose, the Nobel Prize-winning physicist.
Their objections vary, but several themes emerge.
One is the problem of diminishing returns. Throughout history, improvement in any particular technology tends to follow what's called an S-curve. Progress starts slowly, then accelerates dramatically, then levels off as you approach fundamental limits. The internal combustion engine improved rapidly for decades, but modern car engines aren't dramatically better than those from thirty years ago. We've squeezed out most of the easy gains.
Critics argue that artificial intelligence will likely follow the same pattern. The early improvements are astonishing precisely because we're harvesting the low-hanging fruit. The closer we get to human-level intelligence, the harder each additional step becomes.
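To see the contrast with the explosion scenario, here is a minimal sketch of the S-curve pattern the critics describe, using a standard logistic function with made-up numbers: growth looks exponential at first, then the per-step gains shrink as the curve approaches its ceiling.

```python
import math

def logistic(t, ceiling=100.0, rate=1.0, midpoint=10.0):
    """Capability at time t under logistic (S-curve) growth toward a hard ceiling."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

for t in range(0, 21, 2):
    now, nxt = logistic(t), logistic(t + 2)
    print(f"t={t:2d}: capability {now:6.2f}, gain over next step {nxt - now:5.2f}")

# Gains per step rise, peak near the midpoint, then fall toward zero as the
# ceiling is approached: rapid early progress, then diminishing returns.
```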
Another objection concerns the gap between narrow and general intelligence. Today's AI systems are extraordinarily good at specific tasks. A chess program can defeat any human grandmaster. A language model can write passable poetry. An image recognition system can identify faces with superhuman accuracy.
But none of these systems understand what they're doing in the way humans do. A chess program has no concept of what chess is, or why anyone would want to play it, or what it would mean to be proud of winning. The jump from narrow competence to genuine understanding may be far larger than singularity proponents assume.
Computer scientist Grady Booch put the critique memorably, arguing that the idea of the singularity is "sufficiently imprecise, filled with emotional and historic baggage, and touches some of humanity's deepest hopes and fears that it's hard to have a rational discussion therein."
The Race Between Hardware and Understanding
Much of the singularity debate comes down to a question about how intelligence works.
One camp believes that intelligence is fundamentally about computation. If you have enough computing power, running the right algorithms, you can replicate anything a human brain does. On this view, the main obstacle to superhuman AI is simply building fast enough computers.
The other camp believes intelligence involves something more mysterious—perhaps something we don't yet understand about the brain, or about consciousness, or about the nature of understanding itself. On this view, faster computers alone won't be enough.
Moore's Law, named after Intel co-founder Gordon Moore, states that the number of transistors on a microchip doubles approximately every two years. This exponential growth in computing power has held remarkably steady since the 1960s. If it continued indefinitely, we would eventually have computers powerful enough to simulate the human brain in detail.
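As a back-of-the-envelope illustration (with an arbitrary starting count, since the point is the growth factor rather than any real chip), doubling every two years compounds quickly:

```python
def transistors_after(years, start=1_000, doubling_period=2):
    """Transistor count after `years` of doubling every `doubling_period` years."""
    return start * 2 ** (years / doubling_period)

for years in (10, 20, 40, 60):
    factor = 2 ** (years / 2)
    print(f"after {years:2d} years: ~{transistors_after(years):>16,.0f}  (growth factor x{factor:,.0f})")

# Sixty years of doubling every two years is 2**30, roughly a billionfold increase.
```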
But Moore's Law appears to be reaching its physical limits. Transistors are now so small that they're approaching the size of individual atoms. At that scale, quantum effects make components unreliable. Most experts believe that traditional silicon-based computing cannot continue its exponential improvement much longer.
Some researchers point to quantum computing as a potential successor technology that might continue the exponential trend. Others argue that even quantum computers face fundamental limits. The debate remains unsettled.
Predictions and Their Track Record
People have been predicting artificial general intelligence—machines with human-level reasoning across all domains—for as long as the field of AI has existed.
In 1965, I.J. Good thought it was "more probable than not" that an ultra-intelligent machine would be built before the year 2000.
In 1988, the roboticist Hans Moravec predicted that supercomputers would match human-level intelligence by 2010.
In 1993, Vernor Vinge predicted the singularity would occur between 2005 and 2030.
Ray Kurzweil, perhaps the singularity's most famous proponent, predicted in his 2005 book "The Singularity Is Near" that we would achieve human-level AI around 2029 and reach the singularity by 2045. He reaffirmed these predictions in 2024.
The track record of such predictions is not encouraging. Researchers have consistently underestimated how difficult it would be to create genuine machine intelligence. Each decade brings impressive advances, but the goal of human-level AI remains elusive.
A 2025 survey of scientists and industry experts found that most expected artificial general intelligence to arrive by 2100—still a remarkably confident prediction, but a far cry from the imminent transformations promised by earlier forecasters.
What Would a Singularity Actually Mean?
If a technological singularity did occur, what would happen next?
The honest answer is: we don't know. That's almost the definition of a singularity—a point beyond which we cannot make reliable predictions.
Kurzweil, ever the optimist, imagines a kind of merger between human and machine intelligence. He writes that "The Singularity will allow us to transcend the limitations of our biological bodies and brains." He envisions a future where "there will be no distinction, post-Singularity, between human and machine."
Others envision darker possibilities. A superintelligence might have goals that conflict with human flourishing. It might view humans as threats to be neutralized, or as resources to be exploited, or simply as irrelevant to its purposes.
Some researchers have proposed more gradual scenarios. Robin Hanson, an economist at George Mason University, has written about a future where human minds are scanned and uploaded into computers, creating digital versions of human consciousness. These "uploads" might represent a stepping stone toward more radical forms of machine intelligence—or they might replace biological humans entirely.
The Argument About Timing
Even among those who believe a singularity is possible, there's fierce disagreement about when it might occur.
Some point to the rapid recent progress in large language models and argue we're closer than ever. Systems like GPT-4 and Claude demonstrate capabilities that seemed like science fiction just a few years ago. Perhaps the remaining distance to human-level AI is shorter than skeptics believe.
Others argue that these impressive demonstrations mask fundamental limitations. Current AI systems still make bizarre errors that no human would make. They lack common sense understanding of the physical world. They cannot truly reason or plan or learn from experience the way humans do. The gap between fluent language generation and genuine understanding may be vast.
A 2017 survey of machine learning researchers found them almost evenly split. About 29% thought an intelligence explosion was quite likely or likely. About 50% thought it was unlikely or quite unlikely. The remaining 21% said the odds were about even.
What This Means for You
Should you worry about the singularity?
The question might be less about whether it will happen than about what we do in the meantime. If there's even a small chance that superintelligent AI could pose existential risks, the argument goes, we should invest heavily in making sure any such systems are designed safely.
This has led to a growing field called AI safety, or AI alignment. Researchers in this area work on technical problems like: How do you specify what you want an AI system to do in a way that doesn't have unintended consequences? How do you verify that a system is actually pursuing the goals you intended? How do you build systems that remain under human control even as they become more capable?
These seem like prudent questions to ask regardless of whether you think a singularity is imminent or even possible.
Meanwhile, even without a singularity, artificial intelligence is already transforming the world. Automation is reshaping labor markets. Algorithms influence what we see, read, and buy. AI systems make decisions about loans, medical diagnoses, criminal sentencing, and military targeting. These changes are happening now, not in some speculative future.
Perhaps the most important insight from the singularity debate is simply this: technology shapes human destiny in ways we don't fully understand or control. Whether or not we ever create a superintelligence, we have already created technologies that profoundly affect how we live, think, and relate to one another. The question of how to guide technological change for human benefit is urgent regardless of what happens at the far end of the curve.
The Last Word (For Now)
In 1950, Alan Turing—often called the father of computer science—wrote a paper called "Computing Machinery and Intelligence." He asked a simple question: Can machines think?
Turing proposed a test. If a machine could converse with a human in such a way that the human couldn't tell whether they were talking to a person or a computer, the machine should be considered intelligent.
He didn't claim to know whether such machines would ever exist. He simply pointed out that we had no good argument for why they couldn't.
Seventy-five years later, we still don't have that argument. Nor do we have machines that genuinely think. We have something in between: systems that are frighteningly capable in some ways and embarrassingly limited in others.
The singularity remains what it has always been—a hypothesis, not a prophecy. It might happen. It might not. It might happen sooner than we expect, or later, or never.
What we can say with confidence is that the questions it raises are not going away. How do we build intelligent systems that remain aligned with human values? How do we ensure that the benefits of artificial intelligence are widely shared? How do we prepare for a future we cannot predict?
These are the questions that matter, whether or not the singularity ever arrives.