
Moravec's paradox

Based on Wikipedia: Moravec's paradox

Here's something that seemed impossible to almost everyone in the 1980s: a computer that could beat the world chess champion would arrive decades before a computer that could reliably fold laundry.

Yet that's exactly what happened.

In 1997, IBM's Deep Blue defeated Garry Kasparov at chess—a game requiring what we think of as high-level strategic reasoning. But even today, getting a robot to sort socks or walk across an uneven sidewalk remains genuinely difficult. This counterintuitive pattern has a name: Moravec's paradox.

The observation is simple but profound. As roboticist Hans Moravec wrote in 1988, "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility."

Why Hard Things Are Easy and Easy Things Are Hard

The paradox forces us to reconsider what "intelligence" actually means. We tend to think that things we find difficult—like calculus, or chess, or proving mathematical theorems—represent the pinnacle of intelligence. Things that seem effortless to us—like recognizing your friend's face in a crowd, or catching a ball, or walking without falling over—we dismiss as trivial.

But from a computational perspective, we have it exactly backward.

Moravec's explanation centers on evolution. Every human capability is implemented in biology, shaped by millions of years of natural selection. The longer a skill has been evolving, the more time evolution has had to optimize and refine it. Vision has been evolving for roughly 540 million years. Balance and movement for even longer. These systems in our brain are extraordinarily sophisticated, running on massively parallel wetware that we barely understand.

Abstract reasoning, by contrast, is brand new in evolutionary terms—perhaps less than 100,000 years old. We haven't mastered it yet. It's not intrinsically difficult; it just seems that way when we do it, because we're still learning.

As Moravec put it: "Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it." Our conscious reasoning, he argued, is just "the thinnest veneer of human thought," only effective because it rests on top of this much older, much more powerful sensorimotor foundation.

What Our Brains Do Best, We Notice Least

Marvin Minsky, one of the founders of artificial intelligence research, emphasized a related point: the most difficult human skills to reverse-engineer are precisely those that happen below conscious awareness.

"In general, we're least aware of what our minds do best," Minsky wrote. "We're more aware of simple processes that don't work well than of complex ones that work flawlessly."

Think about walking. You don't consciously plan each muscle contraction or calculate the physics of momentum and balance with each step. Your brain handles all of this automatically, processing input from your inner ear, your proprioceptive sensors, and your vision, integrating it all in real time to keep you upright. It's a computational miracle, and it happens completely beneath your notice.

Compare that to long division. You have to think about it deliberately, step by step. You can feel yourself working. And because you're aware of the effort, you assume long division is harder than walking.

For a computer, the opposite is true. Long division is trivial. Walking is extraordinarily hard.
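Just how trivial? Here's a minimal Python sketch (illustrative only; it assumes a non-negative dividend and a positive divisor) of the same digit-by-digit procedure students labor over. For a machine, it collapses into a few lines of integer arithmetic.

```python
def long_division(dividend: int, divisor: int) -> tuple[str, int]:
    """Schoolbook long division: build the quotient one digit at a time."""
    if divisor <= 0 or dividend < 0:
        raise ValueError("sketch assumes dividend >= 0 and divisor > 0")
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)   # "bring down" the next digit
        quotient_digits.append(str(remainder // divisor))
        remainder %= divisor                      # carry the remainder forward
    quotient = "".join(quotient_digits).lstrip("0") or "0"
    return quotient, remainder

print(long_division(7319, 4))  # ('1829', 3)
```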

The Confidence of Early AI Researchers

In the 1950s and 1960s, when artificial intelligence was just beginning, leading researchers predicted that thinking machines would arrive within a few decades. Their optimism wasn't baseless. They'd successfully written programs that could prove mathematical theorems, solve algebra problems, play checkers and chess. These were things that required years of education for humans—clear markers of intelligence.

The assumption was natural: if we've solved the hard problems, the easy problems will fall into place soon enough. Vision, common sense, basic motor control—how hard could those be?

As it turned out, incredibly hard.

Rodney Brooks, a roboticist who would later co-found iRobot, reflected on this miscalculation. Early AI research, he noted, characterized intelligence as "the things that highly educated male scientists found challenging": chess, symbolic integration, proving theorems, solving complicated algebra problems.

"The things that children of four or five years could do effortlessly," Brooks wrote, "such as visually distinguishing between a coffee cup and a chair, or walking around on two legs, or finding their way from their bedroom to the living room were not thought of as activities requiring intelligence."

This blind spot led to what became known as the "AI winter"—periods in the 1970s and 1980s when progress stalled and funding dried up, because the problems researchers thought would be easy turned out to be extraordinarily difficult.

A New Direction: Intelligence Without Reasoning

The realization that perception and action were the real challenges led Brooks to pursue a radically different approach in the 1980s. He decided to build intelligent machines with "no cognition. Just sensing and action."

He called this "Nouvelle AI"—new AI. Instead of trying to build systems that reasoned symbolically about the world, he built robots that reacted to their environment directly, using layered behaviors that didn't require central planning or world models. The architecture was called subsumption architecture, and it represented a fundamental shift: intelligence emerging from the bottom up, from interaction with the physical world, rather than from the top down through abstract reasoning.

This approach proved remarkably effective for certain tasks. Brooks' robots could navigate cluttered environments, avoid obstacles, and accomplish goals—all without the kind of symbolic reasoning that earlier AI researchers had assumed was necessary.
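The flavor of that bottom-up layering is easy to sketch. The toy Python below is not Brooks' code, just an illustration of the idea: each behavior reacts directly to sensor readings, and a higher-priority layer suppresses, or subsumes, the layers beneath it, with no world model or planner anywhere. The sensor names and thresholds are made up for the example.

```python
# Toy illustration of a subsumption-style controller: layered reactive
# behaviors, highest priority first, no central world model or planner.

def avoid_obstacle(sensors):
    """Highest layer: react to anything too close in front."""
    if sensors["front_distance"] < 0.3:
        return "turn_left"
    return None  # nothing to do, defer to lower layers

def wander(sensors):
    """Middle layer: veer toward open space when one side is clearly clearer."""
    if sensors["left_distance"] > sensors["right_distance"] + 0.5:
        return "veer_left"
    if sensors["right_distance"] > sensors["left_distance"] + 0.5:
        return "veer_right"
    return None

def cruise(sensors):
    """Lowest layer: default behavior, just keep moving."""
    return "forward"

LAYERS = [avoid_obstacle, wander, cruise]  # priority order

def control_step(sensors):
    # The first layer that produces a command subsumes everything below it.
    for behavior in LAYERS:
        command = behavior(sensors)
        if command is not None:
            return command

print(control_step({"front_distance": 0.2,
                    "left_distance": 1.0,
                    "right_distance": 0.5}))
# -> "turn_left": the obstacle layer overrides wandering and cruising
```

A real subsumption controller runs its layers concurrently as simple state machines wired together with suppression links; the priority loop above is just the most compact way to show the "higher layer overrides lower layer" relationship.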

When Computers Finally Got Fast Enough

Moravec had predicted something important back in 1976: that given enough computing power, machines would eventually crack perception and sensory skills. He understood that the problem wasn't that these tasks were impossible to compute—it was that they required enormous amounts of computation.

By the 2020s, in accordance with Moore's law, computers had become hundreds of millions of times faster than they were in the 1970s. That raw computational power, combined with advances in machine learning and neural networks, finally made it possible to tackle perception at scale.
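That "hundreds of millions" figure is easy to sanity-check with back-of-the-envelope arithmetic, a rough sketch assuming performance doubles every 18 to 24 months, the usual Moore's-law framing:

```python
# Rough check on the speedup claim, assuming performance doubles
# every 18-24 months between 1975 and 2025.
years = 2025 - 1975

for doubling_months in (24, 18):
    doublings = years * 12 / doubling_months
    speedup = 2 ** doublings
    print(f"doubling every {doubling_months} months -> ~{speedup:.1e}x faster")

# doubling every 24 months -> ~3.4e+07x faster
# doubling every 18 months -> ~1.1e+10x faster
# "hundreds of millions of times" sits between these two extremes
```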

In 2017, Andrew Ng—a leading machine learning researcher—offered what he called a "highly imperfect rule of thumb": "Almost anything a typical human can do with less than one second of mental thought, we can probably now or in the near future automate using AI."

That one-second threshold is revealing. It captures exactly those evolved, automatic skills that Moravec's paradox describes: recognizing a face, reading an emotion, identifying an object, understanding a spoken word. These are things humans do instantly, without conscious effort—and they're increasingly things machines can do too.

Examples of Old Skills Versus New

To make the paradox concrete, consider the age of different human capabilities.

Skills that have been evolving for millions of years include: recognizing a face, moving through space, judging people's motivations, catching a ball, recognizing a voice, setting appropriate goals, paying attention to interesting things. Essentially, anything involving perception, attention, visualization, motor coordination, or social intuition.

Skills that appeared recently—in the last few thousand years—include: mathematics, engineering, playing formal games, logic, scientific reasoning. These are hard for us precisely because they're not what our bodies and brains evolved to do. They were acquired through cultural evolution, not biological evolution, and they've had minimal time to be optimized.

The implication is striking: the difficulty of reverse-engineering a human skill is roughly proportional to how long that skill has been evolving. And because the oldest skills are unconscious, they appear effortless to us—even though they're computationally the most complex.

The Language Instinct Connection

Steven Pinker, the cognitive scientist and linguist, considered Moravec's paradox the central lesson of thirty-five years of AI research. In his 1994 book The Language Instinct, he wrote: "The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard."

Language is a perfect example. A four-year-old can effortlessly produce grammatically correct sentences they've never heard before, understand metaphors, pick up on tone and context. They do this without formal instruction, without conscious thought about syntax or semantics.

For decades, getting a computer to understand natural language at even a child's level seemed impossibly difficult. The statistical and contextual complexity of language—all the implicit knowledge about how the world works that's embedded in how we use words—was far beyond what symbolic AI systems could handle.

It took massive neural networks, trained on enormous datasets, to begin approaching human-level language understanding. And even now, in the era of large language models, there are aspects of human linguistic ability—the effortless integration of language with perception, action, and social context—that remain frontiers of AI research.

What Moravec's Paradox Tells Us About Intelligence

The paradox reveals something fundamental about the nature of intelligence itself. What we subjectively experience as intelligence—our ability to consciously reason, to solve puzzles, to do mathematics—is not the foundation of cognition. It's a recent addition, a thin layer on top of vastly more complex and ancient systems.

True intelligence, in the sense of adaptive behavior in a complex world, is rooted in perception and action. It's about making sense of ambiguous sensory input in real-time, navigating uncertainty, coordinating movement, understanding social dynamics.

These are the capabilities that took hundreds of millions of years to evolve. These are what make a one-year-old child, in some ways, more sophisticated than the most powerful computer.

And these are the capabilities that AI is only now, with staggering amounts of computation and data, beginning to approach.

The AI Effect

There's a related phenomenon worth noting, sometimes called the "AI effect." It goes like this: whenever AI successfully solves a problem, people stop considering that problem to be a marker of intelligence.

When computers could calculate, we said calculation wasn't really intelligence. When they could play chess, we said chess was just brute-force search, not real thinking. When they could recognize faces, we said that was just pattern matching.

Moravec's paradox helps explain why this happens. The things we consider markers of intelligence are the things we find difficult—abstract reasoning, conscious problem-solving. When machines do those things, we move the goalposts, because deep down we know that true intelligence is something richer and stranger: the effortless, unconscious competence of a child navigating the world.

That's the real test. And it's a test that, paradoxically, turns out to be far harder than chess.

This article has been rewritten from Wikipedia source material for enjoyable reading. Content may have been condensed, restructured, or simplified.