LLAW’s All Things Nuclear #699, Monday, (07/22/2024)

“End Nuclear Insanity Before Nuclear Insanity Ends Humanity”

LLOYD A. WILLIAMS-PENDERGRAFT

JUL 22, 2024


LLAW’s NUCLEAR ISSUES & COMMENTS, Monday, (07/22/2024)

Following is an in-depth and sensible primer on what AI has to do with nuclear war (and many other applications), humanity, and our ability to control AI. The thoughtless deployment of AI without proper human guidance and constant oversight has negative effects on its applications at every level, and failing to properly control its use could make it as dangerous to humanity and other life as any software-equipped tool in business, construction, or other settings, or even more so, especially in a nuclear power plant or in nuclear warfare.

Personally, my experiences with AI (on the Internet) have been all bad, causing more harm from start to finish than the beneficial services sought, and the reason is almost always improper supervision by humans. Human brains bring basic intelligence coupled with emotions, thoughts, considerations, a sense of right and wrong, and even a grasp of reality, none of which AI innately has. AI does only what it is taught, what it is told in no uncertain terms, and what it is thereby allowed to do. It is NOT a substitute or replacement for awakened, alert, expert human programming and the yin-yang logic that senses when something is potentially wrong. An uncontrolled AI system used in ‘all things nuclear’ is a monstrous, uncontrolled weapon as lethal as the nuclear power plant or the weapons of mass destruction it is designed to control. ~llaw



Humans should teach AI how to avoid nuclear war—while they still can

By Cameron Vega and Eliana Johns | July 22, 2024

The systemic use of AI-enabled technology in nuclear strategy, threat prediction, and force planning could erode human skills and critical thinking over time—and even lure policymakers and nuclear planners into believing that a nuclear war can be won. (Image: Screenshot from the 1983 movie WarGames, Metro-Goldwyn-Mayer)

When considering the potentially catastrophic impacts of military applications of Artificial Intelligence (AI), a few deadly scenarios come to mind: autonomous killer robots, AI-assisted chemical or biological weapons development, and the 1983 movie WarGames.

The film features a self-aware AI-enabled supercomputer that simulates a Soviet nuclear launch and convinces US nuclear forces to prepare for a retaliatory strike. The crisis is only partly averted because the main (human) characters persuade US forces to wait for the Soviet strike to hit before retaliating. It turns out that the strike was intentionally falsified by the fully autonomous AI program. The computer then attempts to launch a nuclear strike on the Soviets without human approval until it is hastily taught about the concept of mutually assured destruction, after which the program ultimately determines that nuclear war is a no-win scenario: “Winner: none.”

US officials have stated that an AI system would never be given US nuclear launch codes or the ability to take control over US nuclear forces. However, AI-enabled technology will likely become increasingly integrated into nuclear targeting and command and control systems to support decision-making in the United States and other nuclear-armed countries. Because US policymakers and nuclear planners may use AI models in conducting analyses and anticipating scenarios that may ultimately influence the president’s decision to use nuclear weapons, the assumptions under which these AI-enabled systems operate require closer scrutiny.

Pathways for AI integration. The US Defense Department and Energy Department already employ machine learning and AI models to make calculation processes more efficient, including for analyzing and sorting satellite imagery from reconnaissance satellites and improving nuclear warhead design and maintenance processes. The military is increasingly forward-leaning on AI-enabled systems. For instance, it initiated a program in 2023 called Stormbreaker that strives to create an AI-enabled system called “Joint Operational Planning Toolkit” that will incorporate “advanced data optimization capabilities, machine learning, and artificial intelligence to support planning, war gaming, mission analysis, and execution of all-domain, operational level course of action development.” While AI-enabled technology presents many benefits for security, it also brings significant risks and vulnerabilities.

One concern is that the systemic use of AI-enabled technology and an acceptance of AI-supported analysis could become a crutch for nuclear planners, eroding human skills and critical thinking over time. This is particularly relevant when considering applications for artificial intelligence in systems and processes such as wargames that influence analysis and decision-making. For example, NATO is already testing and preparing to launch an AI system designed to assist with operational military command and control and decision-making by combining an AI wargaming tool and machine learning algorithms. Even though it is still unclear how this system will impact decision-making led by the United States, the United Kingdom, and NATO’s Nuclear Planning Group concerning US nuclear weapons stationed in Europe, this type of AI-powered analytical tool would need to consider escalation factors inherent to nuclear weapons and could be used to inform targeting and force structure analysis or to justify politically motivated strategies.

The role given to AI technology in nuclear strategy, threat prediction, and force planning can reveal more about how nuclear-armed countries view nuclear weapons and nuclear use. Any AI model is programmed under certain assumptions and trained on selected data sets. This is also true of AI-enabled wargames and decision-support systems tasked with recommending courses of action for nuclear employment in any given scenario. Based on these assumptions and data sets alone, the AI system would have to assist human decision-makers and nuclear targeters in estimating whether the benefits of nuclear employment outweigh the cost and whether a nuclear war is winnable.

Do the benefits of nuclear use outweigh the costs? Baked into the law of armed conflict is a fundamental tension between any particular military action’s gains and costs. Though fiercely debated by historians, the common understanding of the US decision to drop two atomic bombs on Japan in 1945 demonstrates this tension: an expedited victory in East Asia in exchange for hundreds of thousands of Japanese casualties.


Understanding how an AI algorithm might weigh the benefits and costs of escalation depends on how it integrates the country’s nuclear policy and strategy. Several factors contribute to one’s nuclear doctrine and targeting strategy—ranging from fear of consequences of breaking the tradition of non-use of nuclear weapons to concern of radioactive contamination of a coveted territory and to sheer deterrence because of possible nuclear retaliation by an adversary. While strategy itself is derived from political priorities, military capabilities, and perceived adversarial threats, nuclear targeting incorporates these factors as well as many others, including the physical vulnerability of targets, overfly routes, and accuracy of delivery vehicles—all aspects to further consider when making decisions about force posture and nuclear use.

In the case of the United States, much remains classified about its nuclear decision-making and cost analysis. It is understood that, under guidance from the president, US nuclear war plans target the offensive nuclear capabilities of certain adversaries (both nuclear and non-nuclear armed) as well as the infrastructure, military resources, and political leadership critical to post-attack recovery. But while longstanding US policy has maintained to “not purposely threaten civilian populations or objects” and “not intentionally target civilian populations or targets in violation of [the law of armed conflict],” the United States has previously acknowledged that “substantial damage to residential structures and populations may nevertheless result from targeting that meets the above objectives.” This is in addition to the fact that the United States is the only country to have used its nuclear weapons against civilians in war.

There is limited public information with which to infer how an AI-enabled system would be trained to consider the costs of nuclear detonation. Certainly, any plans for nuclear employment are determined by a combination of mathematical targeting calculations and subjective analysis of social, economic, and military costs and benefits. An AI-enabled system could improve some of these analyses in weighing certain military costs and benefits, but it could also be used to justify existing structures and policies or further ingrain biases and risk acceptance into the system. These factors, along with the speed of operation and innate challenges in distinguishing between data sets and origins, could also increase the risks of escalation—either deliberate or inadvertent.

Is a nuclear war “winnable”? Whether a nuclear war is winnable depends on what “winning” means. Policymakers and planners may define winning as merely the benefits of nuclear use outweighing the cost when all is said and done. When balancing costs and benefits, the benefits need only be one “point” higher for an AI-enabled system to deem the scenario a “win.”
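To make that arithmetic concrete, here is a minimal sketch (in Python, with entirely invented numbers) of such a bare-margin rule; it illustrates the objection and is not a description of any actual decision-support system:

```python
# A minimal sketch of a bare-margin decision rule. All numbers are
# invented for illustration; no real system is implied.

def naive_win_verdict(benefits: float, costs: float) -> str:
    """Label a scenario a 'win' whenever the net score is positive at all."""
    return "win" if benefits - costs > 0 else "no win"

# A hypothetical pyrrhic scenario: enormous costs, marginally larger benefits.
print(naive_win_verdict(benefits=1001.0, costs=1000.0))  # -> "win", by one point
```

Under such a rule, a catastrophic outcome scoring one point above its costs is indistinguishable from a decisive victory.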

In this case, “winning” may be defined in terms of national interest without consideration of other threats. A pyrrhic victory could jeopardize national survival immediately following nuclear use and still be considered a win by the AI algorithm. Once a nuclear weapon has been used, it could either incentivize an AI system to not recommend nuclear use or, on the contrary, recommend the use of nuclear weapons on a broader scale to eliminate remaining threats or to preempt further nuclear strikes.

“Winning” a nuclear war could also be defined in much broader terms. The effects of nuclear weapons go beyond the immediate destruction within their blast radius; there would be significant societal implications from such a traumatic experience, including potential mass migration and economic catastrophe, in addition to dramatic climatic damage that could result in mass global starvation. Depending on how damage is calculated and how much weight is placed on long-term effects, an AI system may determine that a nuclear war itself is “unwinnable” or even “unbearable.”
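Extending the bare-margin sketch above shows how sensitive the verdict is to that weighting. The weights and figures below are invented for illustration:

```python
# The same hypothetical scenario as before, re-scored with a weight on
# long-term effects (climatic damage, mass migration, economic collapse).
# All weights and figures are invented.

def weighted_verdict(benefits: float, immediate_costs: float,
                     long_term_costs: float, long_term_weight: float) -> str:
    total_costs = immediate_costs + long_term_weight * long_term_costs
    return "win" if benefits - total_costs > 0 else "unwinnable"

# Ignoring long-term effects preserves the pyrrhic "win"; any meaningful
# weight on them flips the verdict.
print(weighted_verdict(1001.0, 1000.0, 500.0, long_term_weight=0.0))  # "win"
print(weighted_verdict(1001.0, 1000.0, 500.0, long_term_weight=0.5))  # "unwinnable"
```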

Uncovering biases and assumptions. The question of costs and benefits is relatively uncontroversial in that all decision-making involves weighing the pros and cons of any military option. However, it is still unknown how an AI system will weigh these costs and benefits, especially given the difficulty of comprehensively modeling all the effects of nuclear weapon detonations. At the same time, the question of winning a nuclear war has long been a thorn in the side of nuclear strategists and scholars. All five nuclear-weapon states confirmed in 2022 that “a nuclear war cannot be won and must never be fought.” For them, planning to win a nuclear war would be considered inane and, therefore, would not require any AI assistance. However, deterrence messaging and discussion of AI applications for nuclear planning and decision-making illuminate the belief that the United States must be prepared to fight—and win—a nuclear war.


The use of AI-assisted nuclear decision-making has the potential to reveal and exacerbate the biases and beliefs of policymakers and strategists, including the oft-disputed idea that nuclear war can be won. AI-powered analysis incorporated into nuclear planning or decision-making processes would operate on assumptions about the capabilities of nuclear weapons as well as their estimated costs and benefits, in the same way that targeters and planners have done for generations. Some of these assumptions could include missile performance, accurate delivery, radiation effects, adversary response, and whether nuclear arms control or disarmament is viable.

Not only are there risks of inherent bias in AI systems, but this technology can be purposely designed with bias. Nuclear planners have historically underestimated the damage caused by nuclear weapons in their calculations, so an AI system fed that data to make recommendations could also systemically underestimate the costs of nuclear employment and the number of weapons needed for targeting purposes. There is also a non-zero chance that nuclear planners poison the data so that an AI program recommends certain weapons systems or strategies.
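As a toy illustration of how such a bias propagates (invented figures and a hypothetical threshold rule, not any real targeting model), systematically understated per-warhead damage inflates the number of weapons a model deems necessary:

```python
import math

# If historical data understates per-warhead damage, a simple
# threshold-based targeting rule calls for more warheads than an
# unbiased estimate would. All figures are invented.

TRUE_DAMAGE_PER_WARHEAD = 100.0  # hypothetical "ground truth" damage units
UNDERESTIMATE_FACTOR = 0.6       # historical data records only 60% of it

def warheads_needed(required_damage: float, damage_per_warhead: float) -> int:
    """Naive rule: enough warheads to cross a required damage threshold."""
    return math.ceil(required_damage / damage_per_warhead)

required = 1000.0
print(warheads_needed(required, TRUE_DAMAGE_PER_WARHEAD))        # -> 10
print(warheads_needed(required,
                      TRUE_DAMAGE_PER_WARHEAD * UNDERESTIMATE_FACTOR))  # -> 17
```

Fed the biased data, the rule “needs” 17 warheads where an unbiased estimate needs 10, and the same distortion deflates the apparent costs of any given strike.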

During peacetime, recommendations based on analysis by AI-enabled systems could also be used as part of justifying budgets, capabilities, and force structures. For example, an AI model that is trained on certain assumptions and possibly underestimates nuclear damage and casualties may recommend increasing the number of deployed warheads, which will be legally permissible after New START—the US-Russian treaty that limits their deployed long-range nuclear forces—expires in February 2026. The inherent trust placed in computers by their users is also likely to provide undue credibility to AI-supported recommendations, which policymakers and planners could use to veil their own preferences behind the supposed objectivity of a computer’s outputs.

Despite this heavy skepticism, advanced AI/machine learning models could still potentially provide a means of sober calculation in crisis scenarios, where human decision-making is often clouded, rushed, or falls victim to fallacies. However, this requires that the system has been fed accurate data, shaped with frameworks that support good faith analysis, and is used with an awareness of its limitations. Rigorous training on nuclear strategy for the “humans in the loop” as well as on methods for interpreting AI-generated outputs—that is, considering all its limitations and embedded biases—could also help mitigate some of these risks. Finally, it is essential that governments practice and promote transparency concerning the integration of AI technology into their military systems and strategic processes, as well as the structures in place to prevent deception, cyberattacks, disinformation, and bias.

Human nature is nearly impossible to predict, and escalation is difficult to control. Moreover, there is arguably little evidence to support claims that any nuclear employment could control or de-escalate a conflict. Highlighting and addressing potential bias in AI-enabled systems is critical for uncovering assumptions that may deceive users into believing that a nuclear war can be won and for maintaining the well-established ethical principle that a nuclear war should never be fought.

Editor’s note: The views expressed in this article are those of the authors and do not necessarily represent the views of the US State Department.

ABOUT THE FOLLOWING ACCESS TO “LLAW’S ALL THINGS NUCLEAR” RELATED MEDIA:

There are 7 categories, the latest (#7) being a Friday weekly roundup of IAEA (International Atomic Energy Agency) global nuclear news stories. Also included is a bonus non-nuclear category for news about the Yellowstone caldera and other volcanic and caldera activity around the world that plays an important role in humanity’s lives. The feature categories provide articles and information about ‘all things nuclear’ for you to pick from, usually with up to 3 linked headlines covering the most important media stories in each category, but sometimes fewer and occasionally none (especially so with the Yellowstone Caldera). The Categories are listed below in their usual order:

  1. All Things Nuclear
  2. Nuclear Power
  3. Nuclear Power Emergencies
  4. Nuclear War
  5. Nuclear War Threats
  6. Yellowstone Caldera (Note: There are no Yellowstone Caldera bonus stories available in this evening’s Post.)
  7. IAEA Weekly News (Fridays only)

Whenever there is an underlined link to a Category media news story, pressing or clicking the link will take you directly to the article in your browser; there is no longer any need to cut and paste the address into your web browser.

A current Digest of major nuclear media headlines with automated links is listed below by nuclear Category (in the above listed order). If a Category heading does not appear in the daily news Digest, it means there was no news reported from this Category today. Generally, the three best articles in each Category from around the nuclear world(s) are Posted. Occasionally, if a Post is important enough, it may be listed in multiple Categories.

TODAY’S NUCLEAR WORLD’S NEWS, Monday, (07/22/2024)

All Things Nuclear

NEWS

How the Democrats running for N.H. governor are campaigning | WBUR News

WBUR

The Democratic Party’s primary ballot includes two candidates who share similar policy positions and point to their political resumes as proof of …

Interview: US diplomat Adam Scheinman on nonproliferation, arms control, and the NPT

Bulletin of the Atomic Scientists

In this interview, Bulletin editor in chief John Mecklin talks with Ambassador Adam Scheinman, who oversees American diplomacy around the Nuclear …

Nuclear Power

NEWS

Peter Dutton visits Queensland back country in nuclear energy push

News.com.au

Opposition Leader Peter Dutton has for the first time spruiked the Coalition’s controversial nuclear energy plan in an electorate earmarked for a …

The Notebook: Nuclear power continues to divide, but we need to think about the future

City A.M.

Kokou Agbo Bloua, Societe Generale’s global head of economics, takes the pen to talk nuclear, climate volatility and the economic outlook.

One nuclear plant could see 45,000 rooftop solar systems shut off each day | RenewEconomy

Renew Economy

“A [1,000MW] nuclear power station, which can only run down to 500 MW …would usually be supplying more energy than the system needs (Figure 6),” the …

Nuclear Power Emergencies

NEWS

World’s first meltdown-proof nuclear reactor unveiled in China – Interesting Engineering

Interesting Engineering

In 2011, the Fukushima nuclear reactor experienced a rare event in which the standard and emergency power supply to the cooling mechanism failed, …

Nuclear War

NEWS

Humans should teach AI how to avoid nuclear war—while they still can

Bulletin of the Atomic Scientists

The systemic use of AI technology in nuclear strategy, threat prediction, and force planning could erode human skills and critical thinking.

Russia’s Nuclear-Armed Spacecraft Could Supercharge Space War 1 – Forbes

Forbes

Moscow’s race to perfect spacecraft tipped with nuclear warheads could presage a rapidly expanding new phase of Space War 1, say leading American …

Missile Defense Won’t Save Us from Growing Nuclear Arsenals – Boston University

Boston University

“You can’t build the impenetrable shield,” says BU military tech expert Sanne Verschuren.

Nuclear War Threats

NEWS

Russia’s Nuclear-Armed Spacecraft Could Supercharge Space War 1 – Forbes

Forbes

… nuclear war,” he adds. The U.S. began building its nuclear command … Escalating nuclear threats underscore the urgency for all the nuclear …

Humans should teach AI how to avoid nuclear war—while they still can

Bulletin of the Atomic Scientists

The systemic use of AI technology in nuclear strategy, threat prediction, and force planning could erode human skills and critical thinking.

Breaking the Impasse on Disarmament and Implementing Article VI Obligations

Arms Control Association

We condemn the recent threats from leaders of some nuclear-armed states underscoring their readiness to use nuclear weapons. Any threat to use nuclear …
