The Theater of the Unreal
I’m going to start this essay with a timestamp: August 2025. It’s about a week since the disastrous release of OpenAI’s GPT-5, a couple of weeks since OpenAI claimed a valuation of $300 billion, and about three months since ChatGPT helpfully offered a 16-year-old named Adam Raine advice about the best way to hang himself. No doubt in the coming weeks and months the headlines will just keep coming, from tragedy to farce and back again. But here’s something I’m sure will not change: generative AI is theater.
Or rather, it’s a kind of theater that doesn’t acknowledge itself as such. It presents itself as a productivity tool, an encyclopedia, an educator, a therapist, a financial advisor, an editor, or any number of other things. And that category error is what makes large language models dangerous: a terrible, deformed pseudo-theater that produces strange and destabilizing effects on its “audience.”
Ever since Alan Turing first proposed the Turing Test in 1950, and reframed the question of artificial intelligence from “can machines think?” to “can machines act like they think?”, AI development has, in practice, been about sustaining the suspension of disbelief. What bolsters the illusion? What breaks it? What techniques can engineers come up with to make the machine’s outputs more plausible, more convincing, more human-like?
To take two examples: Turing himself suggests inserting hard-coded pauses into the program before the machine answers a question, to give the illusion of thinking time. He also recommends introducing deliberate mistakes into some answers, the kinds of mistakes a human would make doing a complicated arithmetic problem in her head. Even the father of AI was not above a little showmanship.
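Turing’s two stage tricks translate directly into code. What follows is a minimal sketch of my own, drawn from nothing in Turing’s paper or any real chatbot: a wrapper that delays its reply to simulate deliberation and occasionally botches a numeric answer. The function name, timing constants, and error rate are all illustrative assumptions.

```python
import random
import time

def staged_reply(answer: str, difficulty: float) -> str:
    """Deliver an answer dressed up with Turing's two stage tricks:
    a 'thinking' pause and the occasional human-style slip."""
    # Trick 1: a hard-coded pause, scaled to how hard the question
    # seems, to give the illusion of deliberation time.
    time.sleep(2.0 + 3.0 * difficulty)  # constants are illustrative

    # Trick 2: every so often, botch a numeric answer the way a
    # person doing mental arithmetic might.
    if answer.isdigit() and random.random() < 0.1:
        slip = int(answer) + random.choice([-2, -1, 1, 2])
        return str(slip)
    return answer

# staged_reply("7412", difficulty=0.8) usually returns "7412" after a
# pause, but sometimes "7411" or "7413" -- wrong in a plausibly human way.
```

Neither trick makes the machine any better at arithmetic; both exist purely to manage what the audience believes.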
There have been decades of debate ever since about what it means for a machine to “act” like it’s thinking. In the 1990s, cognitive scientist Stevan Harnad rephrased Turing’s reframed question as “whether or not machines can do what thinkers like us can do,” but this hardly resolves the ambiguity. The whole point of Turing’s formulation was to sidestep the problem that we have no idea what thinking is. By defining “acting like thinking” as “doing what thinkers do,” Harnad still leaves us nowhere.
To be clear, when Harnad writes about the Turing Test he is not trying to unravel the mystery of human consciousness. He aims rather to establish that the Turing Test is
