Wikipedia Deep Dive

Artificial general intelligence


Based on Wikipedia: Artificial general intelligence

In 1965, the artificial intelligence pioneer Herbert Simon made a prediction that would haunt the field for decades: within twenty years, machines would be capable of doing any work a human could do. That deadline passed. Another generation of researchers made similar predictions in the 1980s. Those deadlines passed too. Now, in the 2020s, we find ourselves asking the same question that has animated and frustrated computer scientists for seventy years: when will we build a machine that can think as well as we can?

The answer might be "we already have." Or it might be "never." The fascinating thing about artificial general intelligence—commonly abbreviated as AGI—is that after all these decades, we still can't agree on what it would mean to achieve it, let alone whether we're close.

What AGI Actually Means

Before we go further, let's be precise about terminology, because the field is littered with confusing jargon.

Most AI systems today are what researchers call "narrow AI" or "weak AI." These systems excel at specific tasks: recognizing faces in photographs, translating between languages, recommending videos you might like, or playing chess better than any human who has ever lived. But a system that can beat a grandmaster at chess cannot order a pizza, write a poem about heartbreak, or figure out that the weird noise in your car might be a loose heat shield. Narrow AI is brilliant within its lane and helpless outside it.

AGI is different. An artificial general intelligence would match or exceed human capability across virtually all cognitive tasks—not just one narrow domain, but everything. It could switch from debugging code to analyzing poetry to planning a dinner party to negotiating a business deal. It would possess what we might call common sense: the vast, mostly unconscious knowledge about how the world works that humans absorb over years of living.

Beyond AGI lies an even more speculative concept: artificial superintelligence, or ASI. This hypothetical system wouldn't just match human cognition—it would surpass the best human minds in every domain by a wide margin. If AGI is a machine that thinks like a person, ASI is a machine that thinks like a god.

One common misconception: AGI doesn't necessarily mean a robot or an autonomous agent that walks around making its own decisions. A large language model sitting on a server, capable of matching human-level performance across the full breadth of cognitive tasks, would qualify as AGI even if it only responded to prompts. The defining feature is capability, not autonomy.

The Seventy-Year Quest

The dream of thinking machines is as old as computing itself. When researchers first began exploring artificial intelligence in the 1950s, they were brimming with optimism. Many believed AGI was just around the corner—a matter of a decade or two of hard work.

That optimism had a cultural impact far beyond academic papers. When Stanley Kubrick and Arthur C. Clarke created the character HAL 9000 for their 1968 film "2001: A Space Odyssey," they weren't writing fantasy. They were extrapolating from what AI researchers genuinely believed would be possible by 2001. Marvin Minsky, one of the field's founders, served as a consultant to make HAL as realistic as possible given the scientific consensus of the time.

"Within a generation," Minsky said in 1967, "the problem of creating artificial intelligence will substantially be solved."

He was spectacularly wrong.

By the early 1970s, researchers had run headlong into problems far more difficult than they'd anticipated. Teaching a computer to recognize objects in photographs, understand spoken language, or navigate a room—tasks a toddler masters effortlessly—proved fiendishly hard. Funding dried up. The field entered what historians now call the first "AI winter."

A brief thaw came in the 1980s when Japan launched its ambitious Fifth Generation Computer Project, setting a ten-year goal that included enabling computers to "carry on a casual conversation." Governments and corporations poured money into AI research. By the late 1980s, confidence had again collapsed. The Fifth Generation Project failed to achieve its goals. A second AI winter set in.

Having twice promised imminent AGI and twice failed to deliver, researchers acquired a reputation for overpromising. By the 1990s, speaking seriously about human-level artificial intelligence was a career risk. Anyone who did might be dismissed as, in the unkind phrase of the time, a "wild-eyed dreamer."

The Pivot to Narrow AI

Chastened by these failures, AI research took a pragmatic turn. Instead of pursuing the grand vision of general intelligence, researchers focused on specific, tractable problems where they could demonstrate verifiable results. Speech recognition. Recommendation engines. Image classification. Spam filtering.

This strategy worked brilliantly. The "applied AI" approach yielded products that millions of people use every day. When you ask your phone for directions, when Netflix suggests a show, when your email filters out junk—that's narrow AI doing its job.

Some researchers hoped that eventually these narrow capabilities might be combined into something greater. Hans Moravec, writing in 1988, imagined a "golden spike" moment when bottom-up, practical AI would meet top-down, reasoning-based approaches in the middle, producing true intelligence.

Others were skeptical. Stevan Harnad argued that you can't build general intelligence by bolting together specialized modules. The symbolic manipulation that computers do, he suggested, is fundamentally disconnected from meaning. You can't reach genuine understanding by simply adding more symbols.

The Current Moment

Something changed around 2020.

Large language models—systems like GPT-4, Claude, and LLaMA—demonstrated capabilities that seemed to transcend narrow AI. They could write essays, debug code, explain scientific concepts, compose poetry, analyze arguments, and engage in what felt like genuine conversation. They could do this without being specifically programmed for any one of these tasks.

This sparked a fierce debate that continues today: have we already achieved some form of AGI?

In 2023, researchers at Microsoft published a detailed evaluation of GPT-4 that concluded it "could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence system." Another study found that GPT-4 outperformed 99% of humans on standard tests of creative thinking.

Blaise Agüera y Arcas and Peter Norvig—both heavyweight figures in the field—published an article bluntly titled "Artificial General Intelligence Is Already Here." They argued that reluctance to accept this conclusion stems from four sources: healthy skepticism about how we measure intelligence, ideological commitment to alternative approaches, a devotion to human exceptionalism, or concerns about the economic implications of admitting we've crossed this threshold.

Not everyone is convinced. Paul Allen, the late Microsoft co-founder, believed true AGI would require "unforeseeable and fundamentally unpredictable breakthroughs" in our scientific understanding of cognition. Alan Winfield, a roboticist, compared the gap between current AI and human-level intelligence to the gap between current space travel and faster-than-light propulsion—a chasm so vast it may never be crossed.

The Measurement Problem

Part of what makes this debate so contentious is that we don't actually agree on what we're arguing about.

What does it mean to be intelligent? The computer scientist John McCarthy—who coined the term "artificial intelligence" in the first place—wrote in 2007 that "we cannot yet characterize in general what kinds of computational procedures we want to call intelligent."

Various researchers have proposed lists of capabilities that an AGI should possess. A common baseline includes the ability to reason and make judgments under uncertainty, represent knowledge including common sense knowledge, make plans, learn from experience, and communicate in natural language. Some add imagination, autonomy, and the ability to sense and act in the physical world.

But even this list raises questions. Does intelligence require consciousness? Must an intelligent system have goals of its own, or is it enough to competently pursue goals given by others? Is intelligence simply a matter of scale—will making models big enough inevitably produce true understanding? Does genuine intelligence require emotions?

In 2023, researchers at Google DeepMind proposed a framework for classifying AGI into five levels: emerging, competent, expert, virtuoso, and superhuman. Under their definitions, a "competent" AGI would outperform 50% of skilled adults across a wide range of non-physical tasks. They classified current large language models as "emerging" AGI—comparable to unskilled humans rather than experts, but already exhibiting some degree of general capability.

They also proposed a separate scale for autonomy, ranging from "tool" (fully controlled by humans) through "consultant," "collaborator," and "expert" to fully autonomous "agent." This distinction matters because a highly capable AGI that remains a tool poses different questions than one that acts independently.
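
To make the two axes concrete, here is a minimal Python sketch of how the framework's categories could be represented. The class and field names are invented for illustration, and the only threshold taken from the framework as described above is the 50% figure for "competent"; everything else is just the level names.

```python
from dataclasses import dataclass
from enum import IntEnum

class Capability(IntEnum):
    # Capability levels named by the DeepMind framework described above.
    EMERGING = 1      # comparable to an unskilled human; where current LLMs were placed
    COMPETENT = 2     # outperforms 50% of skilled adults across a wide range of non-physical tasks
    EXPERT = 3
    VIRTUOSO = 4
    SUPERHUMAN = 5

class Autonomy(IntEnum):
    # The separate autonomy axis: how independently the system acts.
    TOOL = 0          # fully controlled by humans
    CONSULTANT = 1
    COLLABORATOR = 2
    EXPERT = 3
    AGENT = 4         # fully autonomous

@dataclass
class Assessment:
    """A hypothetical record pairing a system with a point on each axis."""
    system: str
    capability: Capability
    autonomy: Autonomy

# The framework's own example: 2023-era large language models as "emerging" AGI used as tools.
llm = Assessment("2023-era large language model", Capability.EMERGING, Autonomy.TOOL)
print(f"{llm.system}: capability={llm.capability.name}, autonomy={llm.autonomy.name}")
```

The point of keeping the two enums separate is the one the authors make: how capable a system is and how freely it is allowed to act are different questions.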

The Prediction Problem

If you want to know when AGI will arrive, the honest answer is: no one knows, and predictions have a dismal track record.

Researchers at the Machine Intelligence Research Institute analyzed 95 predictions made between 1950 and 2012 about when human-level AI would emerge. They found a striking pattern: over a 60-year period, predictions consistently placed AGI 15 to 25 years in the future from whenever the prediction was made. The goalpost, it seems, moves with the predictor.

Polls of AI experts in 2012 and 2013 found that the median estimate for when there would be a 50% chance of AGI ranged from 2040 to 2050, depending on which poll you consulted. About 16.5% of experts, when asked when they would be 90% confident AGI had arrived, answered: never.

The field has always been prone to alternating waves of hype and disappointment. Progress comes in bursts separated by plateaus. Each burst is enabled by some fundamental advance—new hardware, new algorithms, or both—that opens space for growth until the next barrier is encountered. Deep learning, the technique behind modern large language models, couldn't have been implemented on twentieth-century hardware; it requires vast numbers of specialized processors working in parallel.

Will the current surge continue, or will it hit a wall as previous advances did? Optimists point to rapid, ongoing improvements. Skeptics note that we've been fooled before.

The Stakes

Why does any of this matter? Because the consequences of achieving AGI—or failing to achieve it—are potentially enormous.

Some researchers and industry figures believe that poorly controlled AGI poses an existential risk to humanity. The argument runs roughly like this: a system smart enough to match humans in all cognitive domains would be smart enough to improve itself, potentially triggering a rapid cascade of self-enhancement. A superintelligent system with goals misaligned with human values—even subtly misaligned—could cause catastrophic harm, not out of malice but simply because it would be pursuing its objectives with capabilities we couldn't match.

Not everyone finds this concern pressing. Some argue that AGI remains too distant a prospect to warrant treating it as an imminent threat. Others believe the scenario involves too many speculative leaps to take seriously.

What's undeniable is that the world's largest technology companies are now racing to build AGI. OpenAI, Google, Meta, and xAI have all declared this as an explicit goal. A 2020 survey identified 72 active AGI research projects across 37 countries. Whatever AGI turns out to be, and whenever it arrives—if it arrives—the effort to create it is already reshaping the technology industry and, increasingly, the broader economy.

The Deep Questions

Ultimately, the debate about AGI is also a debate about ourselves.

If we build a machine that can do everything a human mind can do, what does that tell us about the nature of human thought? Are we, at bottom, just very sophisticated information processors—biological computers that happened to evolve rather than be designed? Or is there something about consciousness, understanding, and meaning that can never be captured in silicon?

The philosopher John Searle proposed a famous thought experiment called the Chinese Room. Imagine a person who doesn't speak Chinese locked in a room with a rulebook. Chinese characters are passed in through a slot; the person looks up each character in the rulebook, follows the instructions to produce appropriate responses, and passes the responses back out. To someone outside the room, it looks like the room understands Chinese. But the person inside—and therefore the system as a whole—doesn't understand anything. They're just manipulating symbols according to rules.

Searle's argument was that computers, no matter how sophisticated, are just Chinese Rooms. They can manipulate symbols in ways that produce intelligent-seeming output without any genuine understanding occurring.
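
To see how little machinery Searle's scenario actually requires, here is a minimal Python sketch of that kind of pure rule-following. The "rulebook" is a tiny, hypothetical lookup table invented for illustration, standing in for Searle's book of instructions.

```python
# A toy "Chinese Room": responses come from a lookup table of symbols.
# Nothing in this program represents the meaning of any of the characters.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def operator(symbols_in: str) -> str:
    """Mechanically follow the rulebook, like the person inside the room."""
    return RULEBOOK.get(symbols_in, "对不起，我不明白。")  # default: "Sorry, I don't understand."

print(operator("你好吗？"))  # from outside the room, this looks like understanding
```

Everything the function does is string matching. Whether anything like this, scaled up enormously, could ever amount to genuine understanding is exactly the question Searle raised.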

Critics have offered various responses, but the core question remains unresolved. When GPT-4 writes a coherent essay or solves a physics problem, is there understanding happening, or just very elaborate symbol manipulation? Does the distinction even make sense?

Roger Penrose and Hubert Dreyfus have argued, on different grounds, that genuine AI may be impossible. Penrose believes that human cognition involves quantum processes that cannot be simulated computationally. Dreyfus argued that human intelligence is fundamentally embodied—it depends on having a body that moves through and acts on the world in ways that disembodied computer programs cannot replicate.

Most AI researchers disagree with these pessimistic assessments, but they can't definitively refute them either. We don't understand consciousness well enough to know whether machines could ever possess it, or whether possessing it matters for intelligence.

Where We Stand

After seventy years of trying to build thinking machines, we've made undeniable progress and accumulated humbling lessons about how hard the problem is. The systems we've built today would have seemed like science fiction to the researchers of 1965, and yet they still fall short of the vision those researchers held.

The question of whether current AI systems constitute early AGI or remain fundamentally narrow depends partly on how you define terms and partly on philosophical commitments about the nature of mind. Reasonable people disagree.

What seems clear is that we're in a period of rapid change. The capabilities of AI systems are advancing faster than at any previous point in the field's history. Whether this leads to genuine AGI in years, decades, or never—or whether we'll even recognize it when it arrives—remains the great open question of our technological moment.

The researchers of the 1960s thought they were twenty years away from AGI. Their successors in the 1980s thought the same. We might be making the same mistake now, or we might be closer than anyone realizes. The only thing the history of the field teaches with certainty is that confident predictions about artificial general intelligence have a poor track record.

We are, as we have always been, somewhere between the beginning and whatever comes next.

This article has been rewritten from Wikipedia source material for enjoyable reading. Content may have been condensed, restructured, or simplified.