Language acquisition
Based on Wikipedia: Language acquisition
A chimpanzee named Nim Chimpsky spent years learning American Sign Language. He could make signs for words. He could ask for bananas. But something was missing.
Nim never asked a question.
Not once, in all those years of training, did he sign anything equivalent to "What's that?" or "Why?" He learned to request things he wanted, but he never showed curiosity about the world through language. He never combined his signs in novel ways to express new ideas. He never told a story or made a joke.
This failure—and it was a failure, according to the researchers who worked with him—reveals something profound about what makes human language unique. It's not just about learning words. It's about something far stranger and more powerful: the ability to take a finite set of sounds and combine them into an infinite number of meanings, and more importantly, to want to.
The Mystery of the First Five Years
Consider what a human infant accomplishes by age five.
A child born anywhere on Earth, into any culture, exposed to any of the roughly seven thousand languages humans speak, will master the grammatical rules of that language with almost no explicit instruction. Nobody sits a toddler down and explains the subjunctive mood. No one teaches a three-year-old the difference between "who" and "whom." Yet by kindergarten, children routinely produce sentences they've never heard before, understand subtle grammatical distinctions, and do all of this while also learning to walk, recognize faces, and not fall down stairs.
This is, frankly, bizarre.
The rules of grammar are incredibly complex. Linguists spend entire careers trying to describe them formally, and they still argue about the basics. Yet every neurologically typical five-year-old has already internalized these rules well enough to use them automatically, without conscious thought, while simultaneously eating goldfish crackers and asking why the sky is blue.
How does this happen?
Two Ancient Ideas
Philosophers have wondered about this for millennia. Plato suspected that the connection between words and their meanings must be somehow innate—built into us from birth. He couldn't explain exactly how a child knows that the sound "dog" refers to that furry thing over there, but he intuited that simply hearing the word spoken couldn't be the whole story.
Sanskrit grammarians debated this question for over twelve hundred years. Their argument came down to a deceptively simple question: Is our ability to understand words a gift from the gods, or something we learn from watching our elders? When a child learns the word for "cow," is she accessing some deep innate knowledge, or is she just observing that trusted adults make certain sounds while pointing at large bovines?
This ancient debate never really went away. It just put on different clothes.
The Behaviorist Gamble
In 1957, the psychologist B.F. Skinner proposed an elegant solution. Language, he argued, is learned the same way pigeons learn to peck keys for food pellets: through reward and reinforcement.
Here's how it would work. A baby makes random sounds. Eventually, by chance, some of those sounds approximate a word—"mama," say. The mother responds with delight, picks up the baby, provides food or comfort. The baby's brain registers: that sound led to good outcomes. Over time, the sounds get refined. The baby learns that certain combinations of sounds, in certain contexts, produce rewards. Language emerges from millions of these tiny reinforcement loops.
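If you wanted to make that loop concrete, a few lines of Python would do it. This is only a toy sketch of reinforcement in general, not Skinner's own formalism; the sounds, reward size, and trial count are invented for illustration.

```python
import random

# Toy sketch of a reinforcement loop (sounds, reward size, and trial
# count are invented for illustration): sounds that earn a reward
# become more likely to be produced again.
sounds = ["ma", "ba", "ga", "da"]
weights = {s: 1.0 for s in sounds}  # start with no preference

def babble():
    """Emit a sound in proportion to its current weight."""
    return random.choices(sounds, weights=[weights[s] for s in sounds])[0]

for trial in range(1000):
    sound = babble()
    if sound == "ma":          # caregiver responds with delight
        weights[sound] += 0.1  # reinforcement strengthens the behavior

print(weights)  # "ma" now far outweighs the other sounds
```

Run it and "ma" comes to dominate the babbler's repertoire. As far as it goes, the loop works.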
It's a tidy theory. It fits with everything psychologists knew about how animals learn. It requires no mysterious innate knowledge, no special language-learning machinery in the brain. Just stimulus, response, reinforcement. Simple.
There was just one problem.
The Child Who Said "Gived"
Children don't actually learn language the way Skinner predicted.
Consider a common pattern that any parent has witnessed. A two-year-old correctly uses the past tense of irregular verbs: "I went to the store." "Daddy gave me a cookie." These are words the child has heard and memorized. So far, Skinner's theory holds.
But then something strange happens. The child starts making errors she never made before. Suddenly it's "I goed to the store" and "Daddy gived me a cookie." Where did these words come from? No adult says "goed" or "gived." The child certainly hasn't been rewarded for using them. If anything, parents gently correct these mistakes.
But the corrections don't work. Children ignore them. They keep saying "goed" and "gived" for months or even years, until one day, without any obvious triggering event, they switch back to the correct forms.
What's happening here is that the child has figured out a rule. In English, you typically form the past tense by adding "-ed" to a verb. The child has unconsciously extracted this pattern and is applying it systematically—even to verbs where it doesn't work. She's not mimicking. She's not being rewarded. She's doing something far more sophisticated: she's reasoning about the underlying structure of language.
This is exactly what Skinner's theory cannot explain.
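To see the shift in miniature, here is a hypothetical sketch (the verb list is invented): an early stage that retrieves irregular pasts from rote memory, and a later stage that applies the extracted rule to everything.

```python
# Hypothetical sketch of the two stages (verb list invented): rote
# retrieval gets irregulars right; the extracted rule gets them "wrong"
# in exactly the way children do.
IRREGULAR_PASTS = {"go": "went", "give": "gave"}

def stage1(verb):
    """Early stage: irregular pasts retrieved from rote memory."""
    return IRREGULAR_PASTS.get(verb, verb + "ed")

def stage2(verb):
    """Later stage: the extracted '-ed' rule applied to every verb."""
    return verb + "ed"

for verb in ["walk", "go", "give"]:
    print(f"{verb}: {stage1(verb)} -> {stage2(verb)}")
# walk: walked -> walked
# go: went -> goed    (overregularization)
# give: gave -> gived (overregularization)
```

The familiar U-shaped curve, correct then wrong then correct again, is just the move from the first function to the second, and eventually to a system that checks the irregular list before falling back on the rule.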
Enter Chomsky
In 1959, a young linguist named Noam Chomsky published a review of Skinner's book that essentially ended the behaviorist approach to language acquisition. He called Skinner's ideas "largely mythology" and "a serious delusion."
Chomsky proposed something radically different. What if humans are born with specialized mental machinery for acquiring language? Not the ability to speak any particular language—that clearly has to be learned—but a kind of universal grammar, a set of built-in expectations about what human languages can and cannot do.
Think of it like this. A child isn't a blank slate waiting to be written on by experience. She's more like a computer that comes pre-loaded with a language acquisition device—a piece of mental software that already knows the general shape of human languages. When she hears people speaking around her, this software switches on and starts figuring out which specific settings apply to her particular language.
Is the language she's learning one that puts verbs before objects, like English? Or one that puts objects before verbs, like Japanese? Is it a language with strict word order, or one that relies on word endings to show relationships? The child doesn't have to discover these possibilities from scratch. She already knows they're the options. She just has to figure out which ones her language uses.
This would explain why children learn language so fast, with so little explicit instruction, despite the overwhelming complexity of the task. They're not building grammatical knowledge from zero. They're activating knowledge that was there all along, just waiting for the right input to configure it.
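As a loose sketch of this switch-setting picture, you might imagine something like the following. The parameter names are illustrative inventions, not a real linguistic inventory.

```python
from dataclasses import dataclass

# Loose sketch of the switch-setting picture (these parameter names are
# illustrative inventions, not a real linguistic inventory): the options
# come built in; hearing a language only sets them.
@dataclass
class GrammarSettings:
    verb_before_object: bool     # English: True; Japanese: False
    strict_word_order: bool      # English: True; Japanese: less so
    drop_subject_pronouns: bool  # Spanish: True; English: False

english = GrammarSettings(True, True, False)
japanese = GrammarSettings(False, False, True)
```

On this view, the child's learning problem shrinks from discovering grammar to inferring a handful of settings from the speech around her.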
The Poverty of the Stimulus
Chomsky made another crucial observation. The language that children hear—the "input"—is remarkably impoverished compared to what they end up knowing.
Adults speak to children in fragments and incomplete sentences. They make mistakes. They trail off. They interrupt themselves. And yet children don't end up speaking in fragments. They learn the complete grammatical system of their language, including rules they've never heard explicitly stated and probably couldn't articulate if you asked them.
More strikingly, children know what's not allowed. A five-year-old English speaker knows instinctively that "What did you see the man that bought?" is ungrammatical, even though she's never heard anyone say it (no one does, precisely because it's ungrammatical). How does she know? No one told her. She never heard a speaker produce that sentence and get corrected. The knowledge seems to come from nowhere.
Unless, Chomsky argued, it comes from within.
The Opposition
Not everyone was convinced.
A group of linguists and psychologists argued that Chomsky was making language more mysterious than it needed to be. Sure, language acquisition is impressive. But so is learning to ride a bicycle. So is learning to recognize faces. So is learning to throw a ball and hit a target. These are all complex skills that children master without explicit instruction. Do we need to posit an innate "bicycle acquisition device"? A "face recognition device"? A "ball-throwing device"?
Maybe, these critics suggested, language is acquired through the same general-purpose learning mechanisms that handle everything else. Maybe children are just very, very good learners, and the apparent "poverty of the stimulus" isn't as impoverished as Chomsky claims. Maybe adults provide more useful information than we realize, through their intonation, their gestures, the contexts in which they use words.
This debate—nature versus nurture, innate knowledge versus learned knowledge—has been running in some form for over two thousand years. It shows no signs of being resolved soon.
What Animals Can't Do
One way to understand what's special about human language is to look at what other animals can and can't accomplish.
Many animals communicate. Bees dance to indicate the location of flowers. Vervet monkeys have different alarm calls for different predators—one sound for eagles, another for leopards, another for snakes. Dolphins have complex vocalizations that researchers are still trying to decode.
But none of these systems have what linguists call "productivity"—the ability to combine elements in new ways to create messages that have never been produced before. A vervet monkey can't combine its eagle call with its leopard call to warn about a flying leopard. A bee can't modify its dance to describe a flower garden it merely imagines. These animals are limited to a fixed repertoire of signals.
Humans aren't. You understand this sentence perfectly well, even though you've almost certainly never read it before: "The anxious penguin practiced the trombone while contemplating the meaning of purple." That sentence had probably never been written in the history of the world until just now. Yet you processed it effortlessly. This is what human language does. It takes a finite number of sounds and rules and generates an infinite space of possible meanings.
The Experiments
Could we teach this ability to other species?
The most famous attempts involved chimpanzees, our closest genetic relatives. In the 1960s and 70s, researchers raised chimps in human environments, surrounded by language, and tried to teach them to communicate.
The chimp Washoe reportedly learned over three hundred signs in American Sign Language. She seemed to combine them meaningfully. When she saw a swan for the first time, she signed "water bird." This looked like the kind of creative combination that defines human language.
But then came Nim Chimpsky. (The name was a pun on Noam Chomsky—researchers have senses of humor too.) Nim was raised from infancy in a human family and received intensive language training. He learned many signs. He could request food and toys. He could respond to questions.
But when researcher Herbert Terrace analyzed the data carefully, he found something troubling. Nim's multi-sign utterances weren't really combinations in the grammatical sense. They were mostly repetitions and imitations of his trainers' signs. When he seemed to produce a novel combination, it was usually cued—consciously or unconsciously—by his teachers.
More damning: Nim's utterances never got more complex. Human children's sentences grow in sophistication over time. They add subordinate clauses, relative clauses, embedded questions. Nim's "sentences" stayed at roughly the same level of complexity no matter how long he trained.
And he never asked a question. Not once.
Terrace went back and reanalyzed the Washoe data. He found similar patterns. What had looked like creative language use was, he concluded, sophisticated mimicry. The chimps had learned that producing certain signs in certain contexts led to rewards. But they hadn't acquired the underlying system that allows humans to generate new meanings from old elements.
The Critical Period
If there is something special about human language, is there a deadline for acquiring it?
The most dramatic evidence comes from tragic natural experiments—children raised without exposure to language during their early years.
In early 19th-century France, a boy was discovered living wild in the forests of Aveyron. Victor, as he came to be called, was probably around twelve years old and showed no evidence of ever having learned to speak. A physician named Jean-Marc-Gaspard Itard spent years trying to teach him. Victor learned to recognize some written words and could comprehend simple commands. But he never acquired productive language. He never learned to speak in sentences or to communicate novel thoughts.
A more recent case was Genie, a girl discovered in Los Angeles in 1970. Her father, who was severely disturbed, had kept her isolated in a small room from infancy, strapped to a potty chair during the day and caged in a crib at night. He forbade anyone to speak to her or around her. When she was discovered at age thirteen, she had essentially no language.
Researchers worked intensively with Genie for years. She made progress. She acquired a substantial vocabulary and could understand much of what was said to her. But her grammar never developed normally. Her sentences remained telegraphic and simple. She never mastered the complex syntactic rules that five-year-olds acquire without apparent effort.
These cases suggest that there may be a critical period for language acquisition—a window during childhood when the brain is optimally configured to learn language. Miss that window, and full acquisition may be impossible.
But the evidence is complicated. Genie had suffered severe trauma and neglect that affected many aspects of her development, not just language. It's hard to know whether her language difficulties stemmed from missing the critical period or from her broader cognitive and emotional damage.
The Emerging Consensus
After decades of debate, most researchers now accept a middle position. Language acquisition involves both nature and nurture, both innate capacities and learning from experience.
The human brain clearly has features that support language. No other species learns language the way humans do, regardless of training. This suggests something biological is going on—some aspect of how our brains are wired that makes language acquisition possible.
But the specific language we learn is determined entirely by experience. A child born to Korean parents but raised from birth by English speakers will grow up speaking English, not Korean. The capacity for language may be innate, but the content must be learned.
The remaining argument is about the details. How much of the capacity is specific to language, versus being general-purpose learning machinery? How much work does the innate component do, versus the learning component? These questions remain genuinely open.
The Fundamental Process
Whatever the theoretical debates, we know something about how the practical process unfolds.
It begins before birth. Fetuses can hear sounds from outside the womb, and newborns show preferences for their mother's voice and their native language within days of birth. The process of tuning into language starts remarkably early.
There are two guiding principles that seem universal across children and languages. First, perception always comes before production. Children understand words before they can say them, understand sentences before they can produce them, understand grammar before they can deploy it correctly.
Second, the system develops one step at a time. Children don't suddenly wake up speaking in complete sentences. They move through predictable stages: babbling, then single words, then two-word combinations, then increasingly complex structures. Each stage builds on the previous one.
The very first task is learning to distinguish the sounds that matter in your language. Every language uses a different subset of the sounds that human vocal tracts can produce. Japanese doesn't distinguish between "r" and "l" sounds the way English does. French nasalizes vowels in ways English doesn't. English makes distinctions that don't exist in other languages.
Newborns can perceive all these distinctions. They're sensitive to differences between sounds regardless of which language they'll eventually learn. But within the first year, this ability narrows. The brain tunes itself to the sounds that occur in the language being heard and becomes less sensitive to distinctions that don't matter in that language.
This is why native Japanese speakers often struggle to hear the difference between "r" and "l" in English. It's not that they can't produce the sounds—with practice, they can. But their perceptual system was tuned during infancy to treat those sounds as equivalent, and retuning it is difficult.
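One toy way to picture that tuning, with invented numbers: treat sounds as points on an acoustic axis, and let the listener label each sound with the nearest prototype laid down in infancy.

```python
# Toy sketch of perceptual tuning (numbers invented): sounds are points
# on a one-dimensional acoustic axis, and a listener labels each sound
# with the nearest prototype learned in infancy.
def categorize(sound, prototypes):
    """Return the label of the prototype closest to the sound."""
    return min(prototypes, key=lambda label: abs(sound - prototypes[label]))

english_tuned = {"r": 0.2, "l": 0.8}  # two prototypes kept distinct
japanese_tuned = {"r/l": 0.5}         # a single merged prototype

print(categorize(0.75, english_tuned))   # "l": the distinction survives
print(categorize(0.75, japanese_tuned))  # "r/l": the distinction is gone
```

The Japanese-tuned listener isn't failing to hear the sound; her system simply has only one category to assign it to.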
Recursion: The Engine of Infinity
Here is perhaps the strangest thing about human language.
Our brains are finite. They contain a large but limited number of neurons. We can remember only so many words. We can process only so much information at a time.
And yet we can produce and understand an infinite number of sentences.
This seems impossible. How can a finite system generate infinite outputs?
The answer is a property called recursion. Human language allows structures to be embedded inside structures of the same type, and this embedding can repeat indefinitely.
Consider: "The dog barked." Simple sentence. Now embed it: "The dog that the cat chased barked." Still grammatical. Embed again: "The dog that the cat that the rat bit chased barked." Getting harder to process, but still grammatical. You can keep going: "The dog that the cat that the rat that the flea annoyed bit chased barked."
At some point, the sentence becomes too complex for human memory to track. But there's no point at which it becomes ungrammatical. The rules of English allow this kind of embedding to continue without limit. In principle, you could embed forever. The infinity is there, even if our finite brains can't always follow it.
Researchers have identified three recursive mechanisms that appear in all human languages: relativization (embedding a clause that modifies a noun, like "the dog that barked"), complementation (embedding a clause as the object of a verb, like "I think that you're right"), and coordination (joining elements with "and" or "or," which can chain indefinitely: "A and B and C and D...").
These mechanisms give human language its power. With a finite set of words and rules, we can express anything we can think—and many things we haven't thought yet.
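To see how little machinery the infinity requires, here is a short Python sketch of relativization, built around the dog-cat-rat example above; the tiny lexicon is the only invention.

```python
# Toy sketch of recursive relativization (lexicon invented): a noun
# phrase may contain a relative clause whose subject is another noun
# phrase, with no built-in limit on the embedding.
NOUNS = ["dog", "cat", "rat", "flea"]
VERBS = ["chased", "bit", "annoyed"]

def noun_phrase(level, max_depth):
    """Build 'the NOUN' with relative clauses embedded down to max_depth."""
    head = f"the {NOUNS[level]}"
    if level == max_depth:
        return head
    # The relative clause's subject is itself a noun phrase: recursion.
    return f"{head} that {noun_phrase(level + 1, max_depth)} {VERBS[level]}"

for depth in range(4):
    print(noun_phrase(0, depth).capitalize() + " barked.")
# The dog barked.
# The dog that the cat chased barked.
# The dog that the cat that the rat bit chased barked.
# The dog that the cat that the rat that the flea annoyed bit chased barked.
```

The depth argument exists only so the program halts. The grammar itself, one recursive rule, imposes no ceiling.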
What We're Still Learning
The study of language acquisition continues to evolve.
In the 1990s and 2000s, the debate shifted from whether there was innate knowledge to what kind of innate knowledge there might be. Some researchers proposed that the inborn capacity is specifically linguistic—a language acquisition device of some sort. Others argued that it's more general: infants have powerful abilities to track statistical patterns, to interpret the intentions of others, to categorize objects and events. Maybe these general capacities, applied to linguistic input, are enough to explain language acquisition without positing any language-specific innate knowledge.
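One concrete instance of such a general-purpose mechanism is statistical segmentation: infants can track how often one syllable follows another, and syllable pairs that co-occur reliably tend to lie inside words rather than across word boundaries. A minimal sketch, with an invented syllable stream:

```python
from collections import Counter

# Toy sketch of statistical segmentation (the syllable stream is
# invented): transitional probability P(next | current) is high inside
# a word ("ba" -> "by") and lower across word boundaries ("by" -> "ki").
stream = ("ba by ki tty " "ki tty do ggy " "do ggy ba by ").split() * 30

pairs = Counter(zip(stream, stream[1:]))
firsts = Counter(stream[:-1])

def transitional_prob(a, b):
    """Relative frequency with which syllable a is followed by syllable b."""
    return pairs[(a, b)] / firsts[a]

print(transitional_prob("ba", "by"))  # 1.0: word-internal, fully reliable
print(transitional_prob("by", "ki"))  # ~0.5: crosses a word boundary
```

Posit a word boundary wherever the transitional probability dips, and word-like units fall out of pure co-occurrence statistics, no grammar-specific machinery required.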
New theories emerged emphasizing the social context of language learning. Children don't just hear words floating in a void. They hear them in situations, directed at objects, accompanied by gestures and facial expressions and actions. Maybe this rich contextual information provides enough scaffolding that general learning mechanisms can do the job.
The debate continues. But one thing has become clear. The question isn't simply "nature or nurture"—it's how nature and nurture interact. Human infants clearly come pre-equipped with something that allows them to learn language in ways no other species can match. And they clearly need to be exposed to language during their early years for this capacity to develop fully. The interesting questions are about the details: exactly what capacities are innate, exactly what input is required, and exactly how the two come together.
Why It Matters
You're reading this essay right now—which means you successfully acquired language as a child. You probably don't remember learning it. Unlike learning to ride a bicycle or learning long division, language acquisition happens so early and so automatically that it leaves no explicit memories.
But it was, in a sense, the most important thing you ever learned. Language doesn't just allow you to communicate. It shapes how you think. It lets you hold abstract ideas in your mind, manipulate them, combine them. It lets you learn from others' experiences without having to undergo them yourself. It lets you plan for the future and reflect on the past. It lets you ask questions—which, as Nim Chimpsky's silence reminds us, is not something any other species seems moved to do.
The ancient Sanskrit grammarians who debated whether language was innate or learned were asking the right question, even if they couldn't have imagined the tools we now have to investigate it. The behaviorists who thought they could explain language through reward and punishment were wrong about the mechanism but right that learning matters. Chomsky was right that something innate must be involved, even if the exact nature of that something remains controversial.
What we do know is this: sometime in the first few years of your life, your brain performed an astonishing feat. It took the messy, fragmentary, error-filled speech that surrounded you and extracted from it a complete grammatical system—a system that allows you to understand and produce sentences you've never heard before, to express thoughts that have never been thought before, to ask questions about things that have never been questioned before.
How exactly this happens remains one of the great puzzles of human nature. But that it happens at all is one of the great miracles.