LLAW’s NUCLEAR WORLD TODAY, #1067, Monday, (10/06/2025)

“End Nuclear Insanity Before Nuclear Insanity Ends Humanity.” ~llaw


 

On My Mind Today:

 

Today’s “Featured Story”, from Politico, is a coincidentally timely addition to my LLAW’s NUCLEAR WORLD TODAY post from yesterday, supporting and amplifying what I had to say about the deadly future we may be ignorantly creating for ourselves with “AI” and “All Things Nuclear”.

Yet we seem to be blindly bumbling forward, guided by little more than a couple of insanely mistaken but typical thoughts in our heads: corporate wealth and future human comfort. I can only say that none of this will work out as we foresee it, because neither “All Things Nuclear” nor “Artificial Intelligence” will ever give us what we expect of them, and both will usher us down the path to extinction.

I chose this story to illuminate its future gravity, but there is also an extremely important back-story unfolding right now, today, concerning the imminent dangers posed by both nuclear power and nuclear war. I encourage anyone who comes across my post today to read it in addition to Today’s Featured Story.

Here is the link and leadline to the added story:

Is Russia’s Putin gambling with the safety of Ukraine’s nuclear stations? – Al Jazeera

Al Jazeera

… nuclear power plant. Rescuers and police officers attend anti-radiation drills in case of an emergency situation at … Emergency workers attend anti …

~llaw

 

Today’s Featured Story:

 


 

 

‘Swarms of Killer Robots’: Why AI is Terrifying the American Military

 

A Q&A with a former Pentagon insider on the AI debates that could shape the future of national security.


By Calder McHugh | 10/06/2025 10:00 AM EDT

  • Calder McHugh is deputy editor of POLITICO Nightly.

AI technology is poised to transform national security. In the United States, experts and policymakers are already experimenting with large language models that can aid in strategic decision-making in conflicts and autonomous weapons systems (or, as they are more commonly called, “killer robots”) that can make real-time decisions about what to target and whether to use lethal force.

But these new technologies also pose enormous risks. The Pentagon is filled with some of the country’s most sensitive information. Putting that information in the hands of AI tools makes it more vulnerable, both to foreign hackers and to malicious inside actors who want to leak information, as AI can comb through and summarize massive amounts of information better than any human. A misaligned AI agent can also quickly lead to decision-making that unnecessarily escalates conflict.

“These are really powerful tools. There are a lot of questions, I think, about the security of the models themselves,” Mieke Eoyang, the deputy assistant secretary of Defense for cyber policy during the Joe Biden administration, told POLITICO Magazine in a wide-ranging interview about these concerns.

In our conversation, Eoyang also pointed to expert fears about AI-induced psychosis, the idea that long conversations with a poorly calibrated large language model could spiral into ill-advised escalation of conflicts. And at the same time, there’s a somewhat countervailing concern she discussed — that many of the guardrails in place on public LLMs like ChatGPT or Claude, which discourage violence, are in fact poorly suited to a military that needs to be prepared for taking lethal action.

Eoyang still sees a need to quickly think about how to deploy them — in the parlance of Silicon Valley, “going fast” without “breaking things,” as she wrote in a recent opinion piece. How can the Pentagon innovate and minimize risk at the same time? The first experiments hold some clues.

This interview has been edited for length and clarity.

Why specifically are current AI tools poorly suited for military use?

There are a lot of guardrails built into the large language models that are used by the public that are useful for the public, but not for the military. For instance, you don’t want your average civilian user of AI tools trying to plan how to kill lots of people, but it’s explicitly in the Pentagon’s mission to think about and be prepared to deliver lethality. So, there are things like that that may not be consistent between the use of a civilian AI model and the military AI model.

Why is the tweak not as simple as giving an existing, public AI agent more leeway on lethality?

A lot of the conversations around AI guardrails have been, how do we ensure that the Pentagon’s use of AI does not result in overkill? There are concerns about “swarms of AI killer robots,” and those worries are about the ways the military protects us. But there are also concerns about the Pentagon’s use of AI that are about the protection of the Pentagon itself. Because in an organization as large as the military, there are going to be some people who engage in prohibited behavior. When an individual inside the system engages in that prohibited behavior, the consequences can be quite severe, and I’m not even talking about things that involve weapons, but things that might involve leaks.

Even before AI adoption, we’ve had individuals in the military with access to national security systems download and leak large quantities of classified information, either to journalists or even just on a video game server to try and prove someone wrong in an argument. People who have AI access could do that on a much bigger scale.

What does a disaster case for internal AI misuse look like?

In my last job at the Pentagon, a lot of what we worried about was how technology could be misused, usually by adversaries. But we also must realize that adversaries can masquerade as insiders, and so you have to worry about malicious actors getting their hands on all those tools.

There are any number of things that you might be worried about. There’s information loss, there’s compromise that could lead to other, more serious consequences.

There are consequences that could come from someone’s use of AI that lead them to a place of AI psychosis, where they might engage in certain kinds of behaviors in the physical world that are at odds with reality. This could be very dangerous given the access that people have to weapons systems in the military.

There are also concerns with the “swarms of killer robots” people are worried about, which involve escalation management. How do you ensure that you’re not engaging in overkill? How do you ensure that the AI is responding in the way that you want? And those are other challenges that the military is going to have to worry about and get their AI to help them think through.

On that last point, we published a piece in POLITICO Magazine recently from Michael Hirsh in which he reported that almost all public AI models preferred aggressive escalation toward a nuclear war when presented with real-life scenarios. They didn’t seem to understand de-escalation. Has that been your experience in working with these tools?

I think one of the challenges that you have with AI models, especially those that are trained on the past opus of humans, is that the tendency toward escalation is a human cognitive bias already. It already happens without AI. So what you’re enabling with AI is for that to come through faster. And unless you’re engineering in some way to say, “Hey, check your cognitive biases,” it will give you that response.

So does the Pentagon need to develop its own AI tools?

I think that they need to be working on how to develop tools that are consistent with the ways that the Pentagon operates, which are different than the ways the civilian world operates, and for different purposes. But it really depends on which mission set we’re talking about. A lot of this conversation has been about AI around large language models and decision support. There’s a whole different branch of AI that the military needs to engage in, and it’s about navigating the physical world. That’s a totally different set of challenges and technologies.

When you think about the idea of unmanned systems, how do they navigate the world? That’s technology like self-driving cars. Those are inputs that are not the same as taking in large quantities of human text. They’re about: How do you make sense of the world?

Is there a general need for more understanding of the utility of AI in the military? What are some ways that high-ranking officials at the Pentagon misunderstand AI?

It’s not a fully baked technology yet, and so there are activities like moving consideration of AI into the research and development space for the Pentagon, which the Donald Trump administration did, that make a lot of sense. That allows you to do testing and work through some of these new features and develop these technology models in ways that refine them. This means that when they land on the desks of a wider range of Pentagon personnel, they’ve worked through some of these kinks.

What’s the way forward with AI tools when it’s so difficult to prevent misuse?

One of the things that we need to do going forward is to be much more specific about the particular missions for which we are thinking about adopting AI into the Pentagon. The Pentagon is a trillion-dollar enterprise, and it’s going to have a lot of the same business functions as any other business in the United States, such as booking travel or payroll.

And then, there are areas that are more military-unique, and those may deserve more specialized study, because there is not this civilian ecosystem that is also involved in the testing and development of these technologies. The Pentagon may have to fund their own research into things like understanding unidentified objects coming toward the United States or robots that need to navigate a battlefield or making sense of lots of different strands of intelligence reporting.

 


Thanks for reading LLAW’s All Nuclear Daily Digest! Subscribe for free to receive new posts and support my work.

ABOUT THE FOLLOWING ACCESS TO “LLAW’s All Nuclear Daily Digest” & RELATED MEDIA:

 

There are 7 categories, including a bonus non-nuclear category for news about the Yellowstone caldera and other volcanic and caldera activity around the world, which also plays an important role in the survival of human and other life.

 

The featured categories provide articles and information about ‘all things nuclear’ for you to pick from, usually with up to 3 links with headlines concerning the most important media stories in each category, but sometimes fewer and occasionally even none (especially so with the Yellowstone Caldera). The Categories are listed below in their usual order:

  1. All Things Nuclear
  2. Nuclear Power
  3. Nuclear Power Emergencies
  4. Nuclear War Threats
  5. Nuclear War
  6. Yellowstone Caldera (Note: There is one Yellowstone Caldera bonus story available in today’s Post.)
  7. IAEA News (Fridays only)

A current Digest of major nuclear media headlines with automated links is listed below by nuclear Category (in the above listed order). If a Category heading does not appear in the daily news Digest, it means there was no news reported from this Category today. Generally, the three best articles in each Category from around the nuclear world(s) are Posted. Occasionally, if a Post is important enough, it may be listed in multiple Categories.

TODAY’S NUCLEAR WORLD NEWS, Monday, (10/06/2025)

 

All Things Nuclear

 

NEWS

Killer Robots, AI Psychosis and Nuclear War: The Pentagon’s Biggest AI Fears – POLITICO

Politico

… about malicious actors getting their hands on all those tools. There are any number of things that you might be worried about. There’s information …

US Department of Energy makes surprising announcement about nuclear power – Yahoo

Yahoo

Every single United States aircraft carrier is powered by a nuclear reactor, but its ally across the pond opted for a different fuel source. Slash …

After Decades, Relief for Some Harmed by the Nuclear Weapons Industry

The Equation – Union of Concerned Scientists

This expansion doesn’t provide everything communities need, but it is a small and necessary step towards justice, and also a big win in these …

Nuclear Power

 

NEWS

New nuclear push brings old dangers back — and bigger than ever

The Hill

Kevin Kamps is the Radioactive Waste Watchdog at Beyond Nuclear. Tags Andrew Cuomo Kathy Hochul Keir Starmer nuclear Nuclear power nuclear reactor …

Duke Energy plans new nuclear buildout in 2025 strategic plan – American Nuclear Society

American Nuclear Society

Duke Energy, which operates the largest nuclear reactor fleet in the U.S., is evaluating adding large LWRs as well as SMRs. It is specifically eyeing …

Explainer: Could US and Russia extend last nuclear weapons treaty? – Reuters

Reuters

Strategic weapons are usually long-range and designed to influence the outcome of a war rather than merely a battle, by destroying centres of power, …

Nuclear Power Emergencies

 

NEWS

Is Russia’s Putin gambling with the safety of Ukraine’s nuclear stations? – Al Jazeera

Al Jazeera

… nuclear power plant. Rescuers and police officers attend anti-radiation drills in case of an emergency situation at … Emergency workers attend anti …

Brits urged to buy emergency item as Russia issues threat to strike 23 UK areas – Daily Star

Daily Star

A security expert has issued crucial advice ahead of the horrors of a nuclear apocalypse, as Russian senator and war veteran Dmitry Rogozin sent a …

Nuclear War Threats

 

NEWS

‘Swarms of Killer Robots’: Why AI is Terrifying the American Military – POLITICO

Politico

But these new technologies also pose enormous risks. The Pentagon is … nuclear war when presented with real-life scenarios. They didn’t …

Iran Hits Back After Trump’s Threat – Newsweek

Newsweek

… threats from Washington over its nuclear program. Iranian military … war in June remains in place. He added that recent naval drills across …

“On A Darkling Plain” – Humankind’s Core Obligation to Prevent Nuclear War – Modern Diplomacy

Modern Diplomacy

Prima facie, Mr. Trump has little understanding of nuclear risks in any form, and could quickly create conditions leading to direct military …

Nuclear War

 

NEWS

Russia’s Nuclear Deterrence Put to the Test by the War in Ukraine – Ifri

Ifri

From the outset of its “special military operation” (SVO) against Ukraine on February 24, 2022, Russia, which possesses one of the world’s largest …

Trump says Putin’s offer on nuclear arms control ‘sounds like a good idea’ | Reuters

Reuters

Trump, who has expressed disappointment in Putin for not moving to end the war in Ukraine, was not asked directly on Sunday about the prospect of …

Killer Robots, AI Psychosis and Nuclear War: The Pentagon’s Biggest AI Fears – POLITICO

Politico

A Q&A with a former Pentagon insider on the AI debates that could shape the future of national security.

Yellowstone Caldera

 

NEWS

Yellowstone projection shocks readers — 86,000 tremors suggest a non-Hawaiian style eruption

ECOticias.com

A recent discovery has brought this volcano back into the headlines: more than 86,000 hidden earthquakes have been identified beneath the caldera, …
