The Shape of Artificial Intelligence
I. Spooky shapes at a distance
The shape of things only becomes legible at a distance.
For instance, history demands temporal distance. The phrase “the Western Roman Empire fell in 476 AD” only became a fact once historians investigated the entire period, zooming in and out on the primary sources and compressing a gradual transformation into a clean endpoint. The deposition of Romulus Augustus, the last Western emperor, was recorded at the time, but its status as the fall emerged later, when distance allowed patterns across centuries of political and administrative decay to crystallize into the shape of a broken empire.
Distance can also be spatial rather than temporal. In Peru, large ground drawings—now known as the Nazca Lines—served as markers or signals on the landscape. From ground level, they are difficult to interpret; their meaning only becomes clear from above, where the full shapes can be seen at once.

Although AI is nearing its 70th birthday, it has been only three years since ChatGPT launched, eight since the transformer paper was published, and thirteen since AlexNet’s victory in the ImageNet challenge, which makes the deep learning revolution barely a wayward teenager. I think, however, that we must try to give a clearer shape to the current manifestation of AI (chatbots, large language models, etc.). We are the earliest historians of this weird, elusive technology, and as such, it’s our duty to begin a conversation that’s likely to take decades (or centuries, if we’re still around by then) to be fully fleshed out, once spatial and temporal distance reveal what we’re looking at.
(In Why Obsessing Over AI Today Blinds Us to the Bigger Picture, one of my favorite essays of 2025, I argued that new technologies take a long time to settle into our habits, traditions, and ways of working. So long, in fact, that trying to end the discussion early with a definitive theoretical claim—“AI art is not art because X”—misses the point. That kind of claim was the core of an essay by science-fiction author Ted Chiang, published in The New Yorker in 2024, which I addressed in my piece. I still stand by my position. To be clear: