Existing initiatives to address the risks posed by the integration of artificial intelligence (AI) into military systems fail to provide adequate safeguards against the dangers inherent in a world of continued reliance on the nuclear balance of terror. Both the widespread deployment of autonomous weapons systems (AWS) and the development of AI-enabled decision-support systems create new pathways for nuclear escalation and arms racing, yet these risks fall largely outside the scope of the primary AI and AWS risk-reduction efforts.
This leaves an opportunity for the incoming U.S. administration to widen the scope of proposals on regulating the integration of AI into nuclear operations, and help to set standards that can mitigate the potential threats to strategic stability.
The multilateral discussions now under way on options to regulate AI-governed robotic weapons are primarily concerned with battlefield effects that could violate the Laws of War, especially International Humanitarian Law (IHL). In recognition of these dangers, numerous states and non-governmental organizations, such as Human Rights Watch and the International Committee of the Red Cross, have called for the adoption of binding international constraints on AWS intended to reduce the risk of violations of IHL. The nuclear risks created by AWS and AI-enabled decision-support systems have received noticeably less attention in these processes.
Meanwhile, the United States and its allies have promoted the adoption of voluntary guidelines against the misuse of AI in accordance with the U.S.-led Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy. This initiative includes a recommendation for the presence of humans “in the loop” for all nuclear decision-making, mirroring language contained in the Biden Administration’s 2022 Nuclear Posture Review.
In practice, this phrase has historically meant having a human decisionmaker “verify and analyze the information provided by the [nuclear command, control, and communications (C3)] systems and deal with technical problems as they arose and, more importantly, make nuclear launch decisions.” More generally, preserving a human “in the loop” is intended to ensure that a human always makes “final decisions” when it comes to the potential use of nuclear weapons.
But this language, particularly given the lack of detail on how it would be implemented by nuclear powers, leaves unaddressed several dangerous ways in which AI, C3, and nuclear weapons systems could become entangled in the near future. On the one hand, these pathways exacerbate the classical fear of miscalculation under high alert, undermining “crisis stability” by raising the risk that one side might use nuclear weapons first in a crisis. On the other, they encourage arms racing behaviors over the medium term, threatening “arms race stability” by raising the risk that one side might seek a breakout advantage in advanced technology, triggering matching efforts by the other. By introducing an external shock to strategic stability between nuclear powers, AI integration could further jeopardize the already fraught balance of terror.
The widespread integration of AI into civilian products as well as its use in lower-risk military applications such as maintenance, logistics, and communications systems may generate irrational optimism about the applicability of AI algorithms for nuclear operations. But there are many intractable and unpredictable problems that could arise from the fusion of algorithms and nuclear decision-making. Without exploring, assessing, and discussing these issues—especially the three concerns described below—decision-makers may find themselves more trapped by AI advice than empowered to navigate crises.
Aggravating the Entanglement Problem
The major nuclear-armed powers, notably China, Russia, and the United States, are installing data analysis and decision-support systems powered by AI into their conventional, non-nuclear C3 systems as well as their nuclear C3 systems (NC3). Military officials claim that AI will allow battle commanders to make quicker and better-informed decisions than would be the case without the use of AI. Reliance on AI-enabled decision-support and C3 systems could, however, increase the risk of conventional combat escalation in a crisis, possibly resulting in the unintended or inadvertent escalation to nuclear weapons use. This danger is greatly amplified when conventional and nuclear C3 systems are intertwined, or “entangled.”
NC3 systems are inevitably entangled with conventional forces because the latter are needed to support nuclear missions. As a former U.S. Air Force deputy chief of staff for strategic deterrence and nuclear integration puts it, nuclear operations require “seamless integration of conventional and nuclear forces.” Unsurprisingly, the overarching architecture guiding the development of all Department of Defense conventional C3 networks, known as the Combined Joint All-Domain Command and Control (CJADC2) system, lists integration, where “appropriate,” with NC3 as a primary line of effort. The CJADC2 architecture will supposedly provide U.S. battle commanders with AI software to help digest incoming battlefield data and provide them with a menu of possible action responses.
To understand how this software could create escalation risks, consider a crisis situation at the beginning stages of an armed conflict between two nuclear-armed countries. One side might decide to take limited kinetic actions to damage or degrade the enemy’s conventional forces. Presumably, the operational plan for such a set of actions would be carefully vetted by military staff for any potential to create undesired pressure on strategic assets, mindful of how strikes on entangled enemy conventional and nuclear forces and C3 nodes could inadvertently create the perception of a preemptive attack on strategic forces.
The installation of AI decision-support software intended to assist with the development of such actions might bring some benefits to military planners in that it would assess options more quickly, more thoroughly, and with more parameters in mind. But if poorly coded or trained on incomplete or faulty data, such software could also erode attention to strategic escalation concerns and possibly lead to unintended strikes on enemy NC3 facilities. If both the weapons systems producing kinetic effects and the decision-support system developing operational plans are to some extent autonomous, there is an even greater risk that oversight of escalation potential could fall through the cracks.
Autonomous Systems and Second-Strike Vulnerability
Since the introduction of autonomous weapons systems, nuclear experts have warned that their extensive loitering capabilities and low cost could have implications for the vulnerability of nuclear weapons delivery systems widely understood as optimal for ensuring a retaliatory second strike, such as ballistic missile submarines (SSBNs). Second-strike invulnerability, meaning the assurance that certain nuclear forces can survive an enemy’s first strike, is valued for promoting strategic stability between otherwise hostile nuclear powers.
For example, some analysts have speculated that it might be possible to track adversary SSBNs by seeding key maritime passages with swarms of unmanned undersea vessels, or drone submarines. But it may not be SSBNs that lose their invulnerability first. Technologies such as light detection and ranging (LIDAR) and magnetic anomaly detection remain immature and must still be mastered and absorbed by navies before the oceans are rendered truly “transparent.”
Instead, AWS may have an impact sooner on mobile land-based systems, which are typically afforded low but non-zero chances of surviving a first strike. Land-mobile launchers could become significantly more vulnerable in the near term purely due to improvements in AI, robotics, and sensor technology. Reliably finding land-mobile launchers requires real-time surveillance and an understanding of routines and doctrine; the deployment of multiple swarms of reconnaissance drones plus algorithmic processing of data from radar, satellite, and electronic sensors could help with both.
One concerning scenario is a medium-term increase in vulnerability due to technological breakthroughs. For instance, a military power might demonstrate the capability to find and destroy missile launchers using autonomous swarms, whether in well-publicized naval maneuvers or during the course of a regional conflict. Any nuclear power that relies heavily on a second-strike doctrine and corresponding force structure may, in the short term, respond by increasing the number of warheads on delivery systems that could be launched on warning.
A different type of vulnerability problem derives from the possibility that autonomous tracking could be observed during a crisis. For example, the presence of reconnaissance drones deep within an adversary’s national air-defense system might generate destabilizing escalation pressures. And precisely because autonomous swarms may become an essential part of conventional deep-strike operational concepts, their presence near nuclear systems would undoubtedly be treated with great suspicion.
A more serious variant of this problem would arise if one state’s autonomous system accidentally caused kinetic damage to another’s nuclear weapons delivery system due to mechanical failure or an algorithmic defect. Under heightened alert scenarios, such an accident could be understood as, at best, a limited escalatory step or, at worst, the beginning of a large-scale conventional preemptive attack. Even if no kinetic impacts occur, the misidentification of reconnaissance drones as autonomous strike platforms could create escalatory pressures.
Data, Algorithms, and Data Poisoning
Beyond problems arising from how AI-enabled systems are being integrated into military operations, there are also concerns derived from AI technologies themselves. Algorithms are only as good as the data they are trained on. Given the lack of real-world data on nuclear operations, there are reasons to be skeptical about the appropriateness of synthetic, or simulated, data for training algorithms associated with nuclear systems. For that reason, it is premature to rely on such algorithms even in what are often perceived to be moderate-risk applications, such as pattern analysis in strategic early-warning systems.
An algorithm may become “overfitted” to a training dataset, learning lessons from patterns that are unique to the training data and do not generalize to real-world scenarios. A strategic early-warning algorithm might be fed thousands of synthetic simulations of a nuclear bolt from the blue (that is, an all-out surprise attack) and extrapolate warning signs that are insignificant in practice. Because of the opacity of such algorithms, it might then misinterpret a conventional strike or a demonstration nuclear detonation as the prelude to a nuclear assault, increasing the risk of an unwarranted nuclear escalation.
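For a concrete, if highly simplified, picture of this failure mode, the short Python sketch below illustrates overfitting in the abstract. It is purely hypothetical: the “warning signal” features are invented, the models are generic scikit-learn classifiers, and nothing here represents any actual early-warning system.

```python
# Hypothetical illustration of overfitting: an unconstrained decision tree
# memorizes noise in a synthetic "warning signal" dataset and generalizes
# poorly, while a constrained model does not. No real early-warning data
# or system is represented here.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def synthetic_warning_data(n_samples: int):
    """Generate toy 'sensor readings' whose labels depend only weakly on
    the observable features -- a stand-in for noisy, simulated data."""
    features = rng.normal(size=(n_samples, 5))
    hidden_noise = rng.normal(scale=2.0, size=n_samples)
    labels = (features[:, 0] + hidden_noise > 0).astype(int)
    return features, labels

X_train, y_train = synthetic_warning_data(2_000)
X_test, y_test = synthetic_warning_data(2_000)

deep = DecisionTreeClassifier(max_depth=None).fit(X_train, y_train)
shallow = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train)

# The deep tree scores near 1.0 on the data it memorized but substantially
# worse on fresh data; the shallow tree performs consistently on both.
for name, model in [("deep tree", deep), ("shallow tree", shallow)]:
    print(name,
          "train:", round(model.score(X_train, y_train), 2),
          "test:", round(model.score(X_test, y_test), 2))
```

The point of the toy example is not the particular model but the gap between performance on the training data and performance on anything new, a gap that is far harder to measure when no real-world data exist to test against.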
This is all before taking into consideration intentional tampering with the algorithms associated with nuclear C3 systems. The possibility of data “poisoning,” whereby hostile actors surreptitiously tamper with training datasets to produce unwanted or unpredictable algorithmic outcomes, can be hard to rule out when an AI system produces unexpected and dangerous results.
In the future, one critical concern will be whether the algorithms in nuclear systems are vulnerable to manipulation or whether the datasets they are trained on have been tampered with. A data poisoning attack of this kind could corrupt early-warning systems and the associated algorithms meant to recognize patterns or generate options. If undetected, such tampering could lead to misguided and potentially escalatory behaviors during a crisis. Alternatively, overconfidence in the success of pre-planned data poisoning attacks could cause the state conducting them to make dangerous risk-taking decisions that lead to inadvertent escalation.
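The sketch below gives a similarly simplified picture of one form of such tampering, a label-flipping attack on training data. The dataset, model, and “attack” are generic illustrations assembled from scikit-learn components and do not depict any actual early-warning or decision-support system.

```python
# Hypothetical illustration of label-flipping data poisoning: quietly
# relabeling a slice of one class in the training data biases the resulting
# classifier against flagging that class. Entirely synthetic; no real
# system or dataset is represented.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

X, y = make_classification(n_samples=6_000, n_features=12, random_state=7)
X_train, y_train = X[:4_000], y[:4_000]
X_test, y_test = X[4_000:], y[4_000:]

clean_model = LogisticRegression(max_iter=2_000).fit(X_train, y_train)

# The "attack": flip one quarter of the positive ("warning") training labels
# to negative, leaving the feature data and the pipeline itself untouched.
rng = np.random.default_rng(7)
positives = np.flatnonzero(y_train == 1)
flipped = rng.choice(positives, size=len(positives) // 4, replace=False)
y_poisoned = y_train.copy()
y_poisoned[flipped] = 0

poisoned_model = LogisticRegression(max_iter=2_000).fit(X_train, y_poisoned)

# Compare how often each model catches true positives on clean test data;
# the poisoned model tends to miss more of them, a quiet form of degradation.
for name, model in [("clean", clean_model), ("poisoned", poisoned_model)]:
    preds = model.predict(X_test)
    print(name,
          "accuracy:", round(model.score(X_test, y_test), 3),
          "recall on positives:", round(recall_score(y_test, preds), 3))
```

Because the attacker never touches the deployed software, only the data it learned from, this kind of degradation can be difficult to distinguish from ordinary model error until it matters most.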
Recommendations
Current U.S. policy, as affirmed in the Biden Administration’s Nuclear Posture Review of 2022, states: “In all cases, the United States will maintain a ‘human in the loop’ for all actions critical to informing and executing decisions by the president to initiate and terminate nuclear weapon employment.” This policy was reiterated in October 2024 in a broader AI policy document, the Framework to Advance AI Governance and Risk Management in National Security. This should remain U.S. policy and be affirmed by all other nuclear powers.
Beyond endorsing this human “in the loop” precaution, nuclear powers should adopt the following additional recommendations to minimize the potential risks generated by the integration of AI into C3, NC3, and decision-support systems.
Nuclear powers should separate strategic early-warning systems from the nuclear command and control systems that authorize the use of nuclear weapons. Requiring a human to translate the outputs of the first system into inputs for the second would create a firebreak that would prevent several categories of accident. It would also help mitigate concerns about the unreliability of algorithms in decision-support systems.
Congress and the executive branch should ensure that the tasks and roles of aging NC3 systems, which incorporate multiple levels of human oversight, are replicated in the course of the ongoing NC3 modernization process, with necessary improvements to cybersecurity and reliability but without incorporating extraneous new software functions that could create novel operational and technological risks. Excessive complexity in nuclear command and control systems can generate new failure modes that are unpredictable. Algorithmic complexity and opacity compound this risk.
Nuclear powers should discuss their concerns about the dangers to strategic stability posed by the operational roles of AWS and AI-enabled decision-support systems in bilateral and multilateral forums. Both official and Track II dialogues can help alleviate misconceptions about AI and AWS. In particular, Russia and the U.S. should resume their strategic stability dialogue—suspended by the United States following Russia’s invasion of Ukraine—and initiate similar talks with China, or, ideally, between all three. Such talks could lead to the adoption of formal or informal “guardrails” on the deployment of potentially destabilizing technologies along with confidence-building measures (CBMs) aimed at testing common AI standards and other regulatory measures.
The international community should adopt binding rules requiring human oversight of AWS at all times and the automatic inactivation of any such device that loses radio communication with its human controllers. This would reduce the risk that faulty AI in AWS could trigger unintended strikes on an adversary’s NC3 or second-strike retaliatory systems. Proposals to this end have been submitted by numerous governments and NGOs, including the Arms Control Association, to the Group of Governmental Experts of the Convention on Certain Conventional Weapons (CCW) and to the UN Secretary-General.
The new administration has a responsibility to build on the first draft of AI policy set down by the outgoing Biden administration. The practice of keeping humans “in the loop” is a starting point for preventing the worst outcomes of co-existence between AI and nuclear weapons within the national defense ecosystem, but much more needs to be done. Congress has an important role in ensuring that the executive branch properly assesses the potential risks of autonomous weapons and AI decision-support systems. Without oversight, the incentives to automate first and assess risks later may come to dominate U.S. policies and programs.
MICHAEL KLARE, senior visiting fellow, and XIAODON LIANG, senior policy analyst