SAIL Media

Nukes and AI

WarTalk launches!

Jordan Schneider and Phoebe Chow
Mar 19, 2026
∙ Paid
This post originally appeared in ChinaTalk.

“Recent studies have shown that in war games, AI is substantially more prone to resorting to nuclear weapons use than humans.”

To discuss nuclear weapons and AI, we’re joined by Pranay Vaddi, former senior director for arms control, disarmament, and nonproliferation on the NSC, now in a new policy role at Sandia Labs and at MIT. Chris McGuire also joins us. Before working on chips, Chris served as the State Department’s lead subject-matter expert on U.S.-Russia nuclear weapons and arms control policy.

The first part of our conversation covers:

  • How the US and China agreed that AI should never be allowed to decide to use nuclear weapons, and why that’s only a starting point

  • Where AI could enter (and is starting to creep into) nuclear command, control, and early warning systems

  • Whether better data and decision support actually reduce nuclear risk or just make escalation faster and more opaque

  • How much automation is too much, from targeting systems to fully autonomous weapons

  • What happens when AI systems outperform humans in domains where we’ve insisted on “human in the loop”

  • Future AI capabilities that could make the oceans transparent, and what that would mean for the survivability of nuclear submarines

Plus: why AI systems in war-game simulations are more trigger-happy than humans, why the US doesn’t need an automated nuclear chain of command (but Russia does), and what “slightly less insane” nuclear decision-making might look like.
