Richard Danzig on AI and Cyber
Deep Dives
Explore related topics with these Wikipedia articles, rewritten for enjoyable reading:
- The Game of Chess (Sofonisba Anguissola) — 12 min read
- Joshua Lederberg — 1 min read
- Creative destruction — 14 min read
We’re kicking off our Powerful AI and National Security series with the great Richard Danzig. He was Clinton’s Secretary of the Navy, is on the board of RAND, and has done a great many other things. He is also the author of the recent paper, Artificial Intelligence, Cybersecurity and National Security: The Fierce Urgency of Now. What will it take for America to, as Danzig puts it, get out of bed?
Our co-host today is Teddy Collins, who spent five years at DeepMind before serving in the Biden White House and helping to write the 2024 AI National Security Memorandum.
Thanks to the Hudson Institute for sponsoring this episode.
Note that this interview was conducted in July 2025.
We discuss:
- Why present bias and slow adaptation leave the national security establishment unprepared, and what real AI readiness requires today
- Why relying on a future “messianic” AGI instead of present-day “spiky” breakthroughs is a strategic error
- How the Department of War’s rigid, siloed structure chronically underweights domains like cyber and AI
- Parallels with the 16th century, including the age of exploration and the jump from feudalism to capitalism
- Plus: what AI is doing to expert confidence, Richard Danzig’s advice for parents, and book recommendations
Listen now in your favorite podcast app.

A Continuous Revolution
Jordan Schneider: You start this paper with a 10-page section about the sorts of things we can reasonably expect AI to unlock rapidly when it comes to cybersecurity. Why don’t you run through a few of those to give folks a sense of what’s at stake here?
Richard Danzig: As everybody is noting, AI is a vastly transformative technology. Some people analogize it to the development of electricity. One analogy that appeals to me is that it’s like the coming of the market. If people sitting in 1500 tried to anticipate the consequences of the jump from feudalism to capitalism, they’d have an extraordinarily difficult job guessing what the next two centuries might look like. From restructuring of family life because people are no longer apprenticing in the family, to movement to the cities, changes in public health, and the rise of the nation-state — we just couldn’t predict it. In the same way, I don’t think we can predict the consequences of AI with much