Alan Kay
Based on Wikipedia: Alan Kay
The Man Who Thinks the Computer Revolution Hasn't Happened Yet
In 1968, Alan Kay dragged himself to a conference in San Francisco with a raging fever. He could barely stand. What he witnessed that day—Douglas Engelbart's legendary "Mother of All Demos"—would haunt him for the rest of his career. Not because it scared him, but because it showed him what computers could become. And six decades later, he's still frustrated that we haven't fully gotten there.
Kay didn't just complain about it. He went out and invented much of what we now take for granted: the overlapping windows on your screen, the programming paradigm that powers most of the software you use, and the very concept of the laptop computer. Then he spent the rest of his life arguing that we stopped too soon.
A Three-Year-Old Who Knew the Teachers Were Lying
Kay learned to read at three. By first grade, he'd devoured 150 books. This created an immediate problem.
I had the misfortune or the fortune to learn how to read fluently starting about the age of three, so I had read maybe 150 books by the time I hit first grade, and I already knew the teachers were lying to me.
That tension—between what institutions claim to know and what's actually true—would define his entire approach to computing. Kay never trusted the conventional wisdom. He questioned everything, including his own ideas.
His early life was peripatetic. Born in Springfield, Massachusetts, in 1940, Kay moved around with his family as his father's career in physiology took them from city to city before they landed in the New York area. He attended Brooklyn Technical High School, one of those legendary specialized schools that produced generation after generation of scientists and engineers. Then came Bethany College in West Virginia, where he studied biology and mathematics, not computer science, which barely existed as a discipline.
After college, Kay taught guitar in Denver for a year, earning his living as a professional musician. This matters more than it might seem.
The Musician Who Became a Programmer
The United States Army drafted Kay, but he qualified for officer training in the Air Force instead. There, something unexpected happened: he passed an aptitude test and became a computer programmer. The military, for all its bureaucracy, had accidentally placed a creative polymath in front of a machine that would consume his imagination.
After his discharge, Kay returned to school with fresh eyes. He earned a bachelor's degree in mathematics and molecular biology from the University of Colorado Boulder in 1966. That same fall, he started graduate school at the University of Utah—a place that would prove pivotal.
Utah's computer science department was new, founded by David Evans, who'd been recruited from Berkeley. Working alongside Evans was Ivan Sutherland, whose 1963 doctoral thesis on Sketchpad had essentially invented computer graphics. Sketchpad let users draw directly on a screen with a light pen, manipulating graphical objects in ways that seem obvious now but were revolutionary then.
Kay credits Sutherland's thesis with shaping his entire understanding of objects and programming. The idea that you could have independent entities on a screen, each with its own behavior, that you could manipulate directly—this became the seed of something much larger.
His own dissertation, completed in 1969, described a language called FLEX: "A Flexible Extendable Language." The name captured his philosophy. Computers shouldn't force users into rigid structures. They should bend to human intention.
The Meeting That Changed Everything
In 1968, Kay met Seymour Papert and learned about Logo, a programming language designed to teach children. Logo was a dialect of Lisp—a family of languages beloved by artificial intelligence researchers—but simplified and made approachable. You could tell a little on-screen turtle to move and turn, drawing pictures as it went. Children loved it.
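Python's standard turtle module descends directly from Logo's turtle graphics, so a minimal sketch of the idea still looks much the same today (the square being drawn is just an arbitrary example):

```python
# A Logo-style turtle sketch using Python's standard "turtle" module,
# a direct descendant of Papert's design. Draws a square.
import turtle

t = turtle.Turtle()
for _ in range(4):
    t.forward(100)  # walk forward 100 units, drawing a line behind
    t.left(90)      # turn 90 degrees to the left

turtle.done()  # keep the drawing window open until it is closed
```

Move, turn, repeat: from those few ingredients children could build surprisingly sophisticated pictures and absorb geometry along the way without noticing.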
Through Logo, Kay discovered the educational theories of Jean Piaget, Jerome Bruner, and Lev Vygotsky. These psychologists had spent decades studying how children actually learn: not through passive absorption of facts, but through active construction of understanding, a view known as constructivism. Papert built on it with what he called constructionism. Learning happens when you build things, when you experiment, when you make mistakes and figure out why.
This wasn't just academic theory for Kay. It became a mission. If children learned by building, and if computers could be tools for building almost anything, then computers could be the most powerful educational technology ever invented. But only if we designed them right.
Then came December 9, 1968. That fevered day in San Francisco.
Douglas Engelbart, working at the Stanford Research Institute, demonstrated a system that included the mouse, hypertext links, video conferencing, collaborative document editing, and windowed interfaces. In 1968. Most of these features wouldn't reach ordinary users for another twenty or thirty years. Some still haven't been fully realized.
It was one of the greatest experiences in my life.
Kay saw the future that day. He also saw how far away it remained.
The Xerox Years: Inventing the Modern Computer
In 1970, Kay joined Xerox PARC—the Palo Alto Research Center—in California. Xerox, the copier company, had established this laboratory to invent the future of the office. They succeeded beyond anyone's expectations, though Xerox itself famously failed to capitalize on most of what PARC created.
Over the next decade, Kay and his colleagues built the Alto, the first computer that would be recognizable to a modern user. It had a graphical display with windows that could overlap each other. You could point at things with a mouse and click to select them. Documents looked on screen like they would look when printed. This was 1973.
The Alto was never a commercial product. It was a research prototype, and each unit cost more than a car. But it proved that personal computing could work the way Kay had imagined.
To program the Alto and its successors, Kay developed Smalltalk. This wasn't just another programming language. It was an entirely new way of thinking about software.
Object-Oriented Programming: The Big Idea That Got Misunderstood
Kay coined the term "object-oriented programming," and he's spent decades regretting it.
I'm sorry that I long ago coined the term "objects" for this topic because it gets many people to focus on the lesser idea. The big idea is "messaging."
To understand what he means, think about how traditional programs work. You write step-by-step instructions: do this, then do that, then do the other thing. The program is a recipe, and the computer follows it blindly.
Object-oriented programming flips this around. Instead of instructions, you have objects—little autonomous entities that know things and can do things. These objects communicate by sending messages to each other. "Hey, bank account, what's your balance?" "Hey, window, please close yourself." Each object decides how to respond to the messages it receives.
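A minimal sketch in Python (not Smalltalk, and with a hypothetical BankAccount object and made-up message names) shows the flavor: the sender never touches the object's state, it just sends a message and lets the receiver decide what to do.

```python
# Message-passing style, sketched in Python. BankAccount and its
# message vocabulary are invented for illustration only.

class BankAccount:
    def __init__(self, balance):
        self._balance = balance  # state lives inside the object

    def receive(self, message, *args):
        # The object itself interprets each incoming message.
        if message == "balance?":
            return self._balance
        if message == "deposit":
            self._balance += args[0]
            return self._balance
        # Loosely analogous to Smalltalk's doesNotUnderstand: hook.
        return f"don't understand {message!r}"

account = BankAccount(100)
print(account.receive("balance?"))     # -> 100
print(account.receive("deposit", 50))  # -> 150
print(account.receive("close"))        # -> don't understand 'close'
```

The caller knows only the message vocabulary, never the internals. That is the part Kay considers the big idea.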
Some of these concepts had appeared earlier, in a language called Simula 67 developed at the Norwegian Computing Center. That's where the words "object" and "class" first appeared in this context. But Kay and his team at PARC took the idea much further. In Smalltalk, everything was an object. Numbers, letters, windows, buttons, even the system itself—all objects, all communicating through messages.
The reason Kay regrets focusing on "objects" is that people latched onto the wrong part. They focused on the nouns—the things—rather than the verbs—the communication. A well-designed system isn't really about having a bunch of separate objects. It's about how those objects talk to each other, how they coordinate, how they form something larger than themselves.
This distinction matters enormously in practice. Object-oriented programming today often produces tangled messes of interdependent code. Kay's original vision was the opposite: independent modules that interact through clean, simple messages. Think of biological cells, each one self-contained, communicating through chemical signals. Or think of the internet, where computers send packets to each other without needing to know the internal details of how each machine works.
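One way to picture that original vision is a tiny message broker: components subscribe to topics and react to whatever arrives, without ever holding references to one another. This is a sketch under assumptions, not anything from Smalltalk or PARC; the Broker class and the topic name are invented for illustration.

```python
# Loose coupling through messages: components interact only via a
# broker, never by reaching into each other. All names here are
# hypothetical, not an API from Smalltalk or PARC.

class Broker:
    def __init__(self):
        self._handlers = {}

    def subscribe(self, topic, handler):
        self._handlers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        # The sender doesn't know or care who is listening.
        for handler in self._handlers.get(topic, []):
            handler(payload)

bus = Broker()
# A "cell" reacts to a chemical signal without knowing its source.
bus.subscribe("glucose", lambda amount: print(f"cell absorbs {amount} units"))
bus.publish("glucose", 5)  # prints: cell absorbs 5 units
```

Swap any component for another that understands the same messages and the rest of the system never notices. That property, not the objects themselves, is what Kay meant.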
The Dynabook: A Computer for Children
While developing Smalltalk and the graphical interface, Kay was pursuing a more audacious goal. He wanted to design a computer for children.
Not a simplified computer, not a toy. A powerful machine that children could use to learn, to create, to explore ideas that would otherwise be inaccessible. He called this imaginary device the Dynabook.
The Dynabook would be portable—small enough to carry around. It would have a screen you could read anywhere, like a book. It would be networked, connected to vast libraries of information. And it would be programmable, so children could build their own simulations and tools.
In 1972, this was science fiction. There was no technology that could make it real. But Kay wasn't designing for 1972. He was designing for whenever the technology caught up.
Decades later, we have laptops and tablets that match or exceed the Dynabook's physical specifications. But Kay would argue we've failed to realize the vision in the ways that matter most. Our devices are largely consumption machines. You can watch videos, scroll social media, play games designed by others. But the programming, the creation, the construction of understanding—that remains inaccessible to most users.
The One Laptop Per Child project, founded by Kay's friend Nicholas Negroponte, tried to address this gap. Kay was deeply involved, focusing especially on educational software using Squeak and Etoys, descendants of his Smalltalk work. The goal was to give children in developing countries not just access to information, but tools to build with.
The Wandering Years
After Xerox PARC, Kay moved through a series of institutions, always pushing against the same walls.
From 1981 to 1984, he was Chief Scientist at Atari, the video game company. In 1984, he became an Apple Fellow, joining the company that had famously commercialized many of PARC's ideas in the Macintosh. He stayed at Apple for over a decade, until the company closed its Advanced Technology Group in 1997.
Then came a stint at Walt Disney Imagineering as a Disney Fellow, recruited by his friend Bran Ferren. When Ferren left to start Applied Minds with Danny Hillis—one of the most interesting technologists of his generation—the Fellows program ended.
In 2001, Kay founded the Viewpoints Research Institute, a nonprofit dedicated to children, learning, and advanced software development. For years, Viewpoints was based at Applied Minds in Glendale, California. The institute closed in 2018, but its research agenda captured Kay's enduring obsession: could we build software systems that are radically simpler than what we have today?
The Twenty-Thousand-Line Question
Commercial software is enormous. Operating systems contain millions of lines of code. Web browsers, office suites, databases—all bloated beyond comprehension. Even Kay couldn't read and understand all the code in a typical application.
This bothered him profoundly. In a 2006 proposal to the National Science Foundation, Kay asked a provocative question: how small could a practical computing system be if we designed it from scratch with modern ideas?
The conglomeration of commercial and most open source software consumes in the neighborhood of several hundreds of millions of lines of code these days. We wonder: how small could be an understandable practical "Model T" design that covers this functionality? 1M lines of code? 200K LOC? 100K LOC? 20K LOC?
The STEPS project—a recursive acronym standing for "STEPS Toward Expressive Programming Systems"—tried to find out. The goal was to build a complete personal computing system in around 20,000 lines of code. Not a toy. A real system with graphics, networking, document processing, the works.
Twenty thousand lines. That's about the size of one small feature in a modern application. Kay believed that if we rethought our fundamental assumptions—if we designed languages and systems that better matched human thought—we could achieve dramatic reductions in complexity.
Whether STEPS fully succeeded is debatable. But the question it asked remains urgent. Every line of code is a liability, a potential bug, a source of confusion. The simpler we can make our systems while retaining their power, the more humans can understand and control them.
The Revolution That Hasn't Happened
Kay has given this lecture many times, at conferences and universities around the world: "The Computer Revolution Hasn't Happened Yet."
It sounds paradoxical. We carry supercomputers in our pockets. We have instant access to most of human knowledge. We can video chat with anyone on the planet. How can the revolution not have happened?
Kay would answer: look at what we use these miracles for. We scroll. We consume. We react. We rarely create, rarely build, rarely think deeply. The computer was supposed to be a bicycle for the mind. Steve Jobs popularized that phrase, but the ambition behind it, amplifying human intellect, goes back to Engelbart and the research culture that shaped Kay. Instead, we've built something more like a television for the mind. We watch.
Even programmers, Kay argues, have stopped thinking carefully about fundamentals. Modern software development often means gluing together libraries and frameworks that no one fully understands. The craft of making things simple, of designing systems that humans can reason about, has been lost in the rush to ship features.
His lectures draw on his experiences with Sketchpad, Simula, Smalltalk: the pioneering systems that showed what computing could become. He saw the good ideas firsthand. He knows they were never fully carried into the mainstream. And he's frustrated that we settled for less.
The Whole Person
It would be easy to paint Kay as a grumpy prophet, shouting about unfulfilled potential. But that would miss the breadth of his life.
He was a professional jazz guitarist. He composed music. He designed theatrical sets. He still plays classical organ as an amateur. His grandfather was Clifton Johnson, an author, illustrator, and photographer. His uncle was Irving Johnson, a sailor and adventurer who wrote about his voyages. Creativity runs in the family.
Kay is married to Bonnie MacBird, a writer, actress, and producer. Together they represent a household where technology and the arts aren't separate domains but intertwined ways of understanding the world.
His honors fill pages. The Turing Award in 2003, computing's equivalent of the Nobel Prize, recognized his work on object-oriented programming and Smalltalk. The Kyoto Prize in 2004, one of Japan's highest honors for technology. The Draper Prize, shared with other PARC alumni, for their contributions to personal computing. Honorary doctorates, academy fellowships, lifetime achievement awards—all the recognition a career can accumulate.
But if you asked Kay what matters, he'd probably talk about children. About learning. About the gap between what computers could be and what they are.
The Message Behind the Objects
There's a deep irony in Kay's career. He invented ideas that became wildly successful—object-oriented programming is now the dominant paradigm, graphical interfaces are universal, laptops are everywhere. But he believes we got them wrong.
The objects shouldn't have been the point. The messaging was the point. The communication, the interaction, the way independent parts form coherent wholes.
The windows on your screen are descendants of what Kay built at PARC. But they're also cruder, more limited, less dynamic than what he envisioned. The Dynabook was supposed to be a tool for thought, not a consumption device with a touch screen.
And that's the thing about visionaries. Sometimes they succeed so thoroughly that everyone uses their inventions. But using isn't the same as understanding. Adoption isn't the same as realization.
Kay is 84 now. He's spent six decades thinking about how computers could amplify human capability—not just productivity or entertainment, but genuine understanding. The question he keeps asking is whether we'll ever finish what Engelbart started in that 1968 demo, what Kay and his colleagues prototyped at PARC, what the Dynabook was supposed to become.
The computer revolution hasn't happened yet. That's his story, and he's sticking to it. The rest of us are still deciding whether to prove him right or wrong.