Maintaining Human Intelligence in the AI Era | David Krakauer (President of the Santa Fe Institute)
Thank you to the partners who make this possible:
Brex: The banking solution for startups.
Enterpret: Transform feedback chaos into actionable customer intelligence.
Persona: Trusted identity verification for any use case.
David Krakauer is a leading complexity scientist and the president of the Santa Fe Institute, an independent research institute devoted to the cross-disciplinary study of complex systems. In this episode, David challenges conventional wisdom about AI, arguing that large language models pose a more immediate threat to humanity than the existential risks that dominate public discussion: not by destroying us directly, but by eroding our cognitive capabilities through addictive, low-quality information.
We explore:
Why David believes LLMs aren't intelligent at all and how the AI community misunderstands emergence
The three dimensions of intelligence: inference, representation, and strategy—and which one LLMs lack
How AI acts as a "competitive" rather than "complementary" cognitive technology, atrophying our thinking abilities
What makes great minds unique, from analogical reasoning to the cultivation of unconscious creativity
How Cormac McCarthy's approach to knowledge and creativity offers lessons for the AI age
Why David believes the greatest threat from AI isn't existential risk but cognitive atrophy
How to protect your mind against AI's addictive pull and maintain cognitive autonomy
Explore the episode
Timestamps
(00:00) Intro
(04:39) The Santa Fe Institute’s approach to complex systems
(06:45) Murray Gell-Mann’s ‘Odyssean’ vs. ‘Apollonian’ distinction
(10:35) How SFI was shaped by the legacy of Los Alamos
(12:45) Traits David looks for in great minds
(14:43) Cormac McCarthy on naivety and how thoughtful people treat knowledge
(19:24) A simple explanation of complexity science
(22:50) Why vantage point doesn’t matter when studying systems
(24:36) Aesthetic preferences among complexity scientists
(26:07) Films and directors with complexity science themes
(29:57) Why David argues LLMs are not intelligent
(32:10) What’s missing in the study of LLMs
(36:40) The three qualities of intelligence and how LLMs measure up
(42:19) Lessons from "The Glass Bead Game"
(44:00) David’s perspective on reinforcement learning
(45:38) The greatest threat of LLMs: overreliance and the decline of thinking
(47:40) Competitive vs. complementary cognitive artifacts
(51:55) Why exposing yourself to quality ideas matters
(54:00) How to derisk LLM use
(58:32) Cormac McCarthy’s legacy at SFI
