In June 2019, the United States announced a new artificial intelligence (AI) partnership with Singapore that calls for collaboration on the development and use of AI technologies in the national security domain.1 Is this type of cooperation a harbinger of things to come? The burgeoning military use of AI — technology that carries out tasks that normally require human intelligence — has the potential to alter how states carry out military operations. AI-enabled technologies — like autonomous drone swarms and algorithms that quickly sift through massive amounts of information — can increase the speed and efficiency of warfare, but they may also exacerbate the coordination and decision-making challenges frequently associated with multinational military operations carried out by allies and security partners.
Policymakers and experts in the United States and other countries have urged international cooperation on the development and use of AI, but this guidance overlooks important questions about the challenges of AI collaboration in the security domain. President Donald Trump’s executive order on AI directs “enhance[d] international and industry collaboration with foreign partners and allies” to maintain “American leadership in AI.”2 Similarly, the congressionally chartered National Security Commission on Artificial Intelligence warns, “If the United States and its allies do not coordinate early and often on AI-enabled capabilities, the effectiveness of our military coalitions will suffer.”3 Several of Washington’s allies have echoed these calls for collaboration. Germany’s 2019 National AI Strategy advocates for “work[ing] with the nations leading in this field … to conduct joint bilateral and/or multilateral R&D activities on the development and use of AI.”4 While cooperation is important, what challenges might allies and partners encounter as they work together to develop and deploy AI in the military domain? And what steps might states take to overcome these obstacles?
States are racing to achieve superiority in the AI domain, and AI research and development is flourishing: In early 2019, the U.S. Department of Defense unveiled its AI strategy.5 Meanwhile, China has pledged to develop a $150 billion AI sector by 2030,6 and Russian President Vladimir Putin famously asserted, “whoever becomes the leader in [AI] will become the ruler of the world.”7 AI development promises to bring enhanced accuracy and efficiency to complex and dangerous tasks, but policymakers and scholars have yet to fully explore how these benefits compare with potential risks — particularly in the context of multinational military operations.8 To be sure, decision-makers have expressed concerns about the reliability of AI technologies and the ethical implications of delegating military operations to computers.9 These AI-specific challenges, however, may magnify the coordination and commitment challenges that frequently plague military operations conducted by multinational alliances and coalitions.
Drawing from theories of alliance politics and analysis of emerging AI technologies, I map out two areas where AI could hamper multinational military operations. First, AI could pose challenges to operational coordination by complicating burden-sharing and the interoperability of multinational forces. Not all alliance or coalition members will possess AI capabilities, raising barriers to military cooperation as AI-enabled warfare becomes increasingly common. States with AI technologies will also need to overcome political barriers to sharing the sensitive data required to develop and operate AI-enabled systems. At the same time, rivals can stymie multinational coordination by using AI to launch deception campaigns aimed at interfering with an alliance’s military command-and-control processes.
Second, AI could hamper alliance and coalition decision-making by straining the processes and relationships that undergird decisions on the use of force. By increasing the speed of warfare, AI could decrease the time leaders, from the tactical to strategic levels, have to debate policies and make decisions. These compressed timelines may not allow for the complex negotiations and compromises that are defining characteristics of alliance politics.10 Decision-making may be further hampered if the “black box” and unexplainable nature of AI causes leaders to lack confidence in AI-enabled systems. And, just as adversaries could use AI to interfere with command and control, they could also use AI to launch misinformation campaigns that sow discord among allies and heighten fears that allies will renege on their commitments.
To be sure, barriers to multinational military cooperation are not new, but AI may intensify these difficulties.11 To help overcome these coordination and decision-making challenges, alliance and coalition leaders can draw lessons from past cases of successful cooperation and a growing corpus of national-level AI strategies to develop international agreements and standards that streamline the integration of AI into multinational operations.
This article makes three contributions to scholarly and policy debates in international relations. First, it investigates how technology shapes alliance relationships and multinational military operations. Most scholarly work on alliances and security partnerships has focused on the reasons behind their creation,12 their institutional design and processes,13 their effectiveness at reassuring friends and deterring rivals,14 and their survival amid changing political conditions.15 Much of this work has overlooked the effects of specific technologies on alliance politics, with the exception of studies on nuclear weapons. Second, the article builds upon research examining the role of emerging technologies in international security, more broadly. Existing studies have explored how militaries adopt new technologies,16 how those technologies affect conflict initiation and escalation,17 and how they shape force structure and doctrine.18 This article broadens this line of research by investigating how technology can both stymie and advance cooperation between states in the security domain. Third, the paper contributes to policy debates surrounding the increasing use of AI in military settings. Existing analyses have explored potential applications of AI,19 its effects on the balance of power,20 and the ethical and domestic political considerations associated with battlefield AI use.21 A deeper understanding of how AI can influence security partnerships and alliances may help inform policymaking.
This paper proceeds in five parts. First, I briefly define artificial intelligence and describe its military applications. Second, I survey the scholarly literature on alliance politics and multinational operations, focusing on the challenges of planning and carrying out operations. Third, I identify how AI can magnify these challenges. Fourth, I investigate how these AI-associated challenges might be overcome. I conclude by outlining potential avenues for future research.
Artificial Intelligence and International Security Applications
Broadly defined, AI is the ability of computers and machines to perform tasks that traditionally require human intelligence.22 AI has been applied to control self-driving cars and swarms of unmanned aircraft, to assist physicians in making medical diagnoses, and at the more quotidian level, to screen spam emails and act as virtual personal assistants.23 Underlying AI technologies are a variety of approaches including mathematical optimization, statistical methods, and artificial neural networks — computer systems that attempt to perform specific tasks in a similar way to the human brain.24 Regardless of approach, AI typically uses large amounts of data to train and feed algorithms to accomplish tasks and processes that are normally associated with human cognition. Most current AI is considered to be “narrow,” designed to achieve a specific task — like identifying objects in images. Researchers, however, are working to develop artificial general intelligence that can accomplish any task the human brain can.25
Narrow AI technology has increasingly been applied in the national security domain. Although much policy and scholarly writing focuses on lethal autonomous weapon systems — “killer robots” that can identify and engage targets without human intervention — AI is far more commonly employed in a range of more mundane military and national security tasks.26 In some cases, AI is part of analytical processes, like the use of machine learning to classify targets in satellite imagery.27 In other instances, it is part of the software used to operate physical systems, like autonomous planes or ships.28 In both cases, AI is not a military capability in itself, but an enabler that can enhance the efficiency of military tasks and systems.29
Many regional and global military powers have already fielded AI-enabled military systems.30 Israel and Russia, for instance, have reportedly tested self-driving tanks and armored vehicles capable of identifying targets without human direction.31 The United States is making headway on Project Maven, the Defense Department’s effort to use machine learning — an application of AI — to streamline the analysis of video gathered by drones.32 Similarly, Japan’s Self-Defense Force announced that it will equip its P-1 maritime patrol aircraft with AI technology that will more effectively identify vessels and other potential targets.33 States have also begun incorporating AI into autonomous systems that can navigate without direction by human operators, often in swarms intended to overwhelm an enemy’s defenses. In 2017, for instance, the U.S. Naval Postgraduate School and the Defense Advanced Research Projects Agency hosted a large-scale experiment where swarms of autonomous drones flew simulated combat missions against each other.34
The development of these systems should not come as a surprise. Military and political decision-makers seek to enhance the efficiency and accuracy of their state’s military and to reduce risk and costs during operations. AI can help accomplish these objectives. In many contexts, AI can make assessments and judgements with greater speed and accuracy than humans, and with less manpower. For example, AI can help quickly dig through vast quantities of imagery and video data to pinpoint objects of interest, like military vehicles, with little human involvement.35 In contrast, geospatial intelligence exploitation that is not assisted by AI is a time-intensive and manpower-intensive process.36 AI can also be used to operate autonomous weapon systems that allow states to launch military operations without putting friendly personnel in harm’s way. These systems can decrease the risk of friendly casualties and reduce the political barriers to launching military operations.37 The efficiency-enhancing and risk-reducing characteristics of AI-enabled systems will likely appeal to casualty-averse and cost-conscious leaders. Indeed, AI technologies might allow these leaders to launch operations not previously possible because of efficiency concerns or high degrees of risk to friendly forces.
Allies, Partners, and the Challenges of Artificial Intelligence
Military operations today are commonly carried out by alliances or other multilateral coalitions — formal or informal arrangements between states.38 Allies cooperate militarily and diplomatically to respond to mutual threats and achieve common objectives, yielding both political and military benefits.39 Politically, multinational operations can impart legitimacy to military operations in the eyes of both domestic and international audiences. Support for military action from a broad coalition of allies and partners can serve as a cue to the public that the action is justified, and help counter narratives that a state’s military operations are improper or seek to upset the status quo.40 From a military perspective, alliances and coalitions allow states to share the burden of operations.41 Unlike unilateral operations, where a single state provides all personnel and equipment, alliances allow for the division of labor across all member states. To facilitate cooperation, allies often engage in consultative decision-making, develop shared operating procedures, build integrated command-and-control networks, acquire interoperable weapon systems that can integrate on the battlefield, and participate in joint military exercises.
Although alliances and multilateral coalitions can bolster the security of member states and the efficiency of their military operations, membership can create complications for decision-making and the coordination of military operations. First, alliances and coalitions must overcome operational challenges surrounding the integration and coordination of military forces. Modern military operations require the close coordination of participating forces, shared intelligence to guide planning and mission execution, and weapon systems capable of communicating with and operating alongside each other. The military of each alliance or coalition member state brings with it different equipment, policies, and tactics, meaning that a state’s forces may not fully integrate with the forces of its allies.42 Moreover, partners are often reluctant to share sensitive operational and intelligence information.43 Beyond these institutional issues, more commonplace matters — such as the different languages and military cultures of each member state — can hinder interoperability during contingency operations.44
Second, alliance and coalition leaders may have trouble deciding what policies their coalition should pursue. Although allies typically face a common threat and share many policy objectives, each state still maintains its own priorities and goals. State leaders therefore respond to domestic constituencies and pursue their own national interests, which, at times, may be at odds with alliance goals.45 At best, these divergent interests result in coordination problems that draw out decision-making timelines.46 At worst, they generate mistrust between partners and raise concerns of being abandoned during a crisis or “chain-ganged” into unwanted wars.47
While alliances and coalitions are composed of member states with shared interests, there is significant variation in the degree of formalization of security partnerships that can affect how they plan and execute military operations. On the formal end of the continuum are alliances like NATO that are governed by treaties. These formal treaties invoke obligations and a sense of trust not typically found in less formalized, tacit arrangements.48 On the less formal end of the spectrum are coalitions, security arrangements that are generally more ad hoc and focused on achieving a specific and narrow goal.49 For example, George W. Bush’s “coalition of the willing” brought together more than three dozen countries during the 2003 Iraq War.50 Because of their more limited goals, coalitions are often temporary entities that exist only until their mission is accomplished, and frequently lack the institutional arrangements that help strengthen ties and coordination between allies. The analysis in this article applies across the continuum of formalization, but the challenges that AI poses to alliance operations and decision-making should be more vexing for coalitions that lack formalization. For clarity throughout the remainder of the article, I use the term alliances to describe security partnerships across the spectrum of formalization.
AI Obstacles to Alliance Operations
At the operational level, AI can complicate burden-sharing and the interoperability of alliance military forces. The development and integration of AI technology in the security domain poses three challenges to coordination during alliance military operations. First, not all states will develop military applications of AI at the same rate. Within an alliance, some states will possess and effectively operate AI-enabled capabilities, while others will not. This unequal distribution of technology can hinder burden-sharing and interoperability. Second, allies will need to resolve the political and technical challenges associated with developing interoperable AI-enabled systems and sharing the data that underpins AI technology. Data are often difficult to share, and states are loath to reveal sensitive information. Third, adversaries are likely to use AI to disrupt allied military operations.
Complicating Burden-Sharing: Artificial Intelligence Haves and Have-Nots
Despite the surge in international attention on AI, not all states have developed robust AI capabilities, particularly for military applications. One recent study finds significant variation in the capacity of states to “exploit the innovative potential of AI” for government purposes.51 States like the United Kingdom, Germany, and the United States receive high marks for AI readiness, while other allies like Spain, Turkey, and Montenegro fall lower on the readiness scale.52 This unequal distribution of AI technology can result from differences in the organizational, financial, and human capital available to develop and deploy new technologies and differences in political support for the use of AI.53 Uneven distribution of AI technologies has important implications for the ability of allies and partners to divide military tasks during crises.
Variation in the capacity to adopt and integrate AI technology into state militaries can create AI “haves” and “have-nots.” Some states — like Germany — possess a robust technology sector, have the financial resources to fund research and acquisitions, and maintain defense bureaucracies that are sufficiently skilled and flexible to integrate new AI technologies.54 Indeed, many of these states have created government institutions to manage military AI development. The United States, for example, established the Joint Artificial Intelligence Center in 2018 to coordinate the Defense Department’s AI programs.55 Other states lack these resources and are unable to rigorously pursue new AI capabilities. For instance, many of NATO’s economically weaker members have focused their defense spending on modernizing conventional forces and updating Cold War-era hardware, and not on AI development.56
Even if a state has the resources to develop AI capabilities, limited public support for AI-enabled military systems can hamper such efforts. Opposition can stem from the uncertainty surrounding AI’s functionality, or from moral and ethical objections to delegating decisions on the use of force to computers. One recent cross-national survey, for instance, finds significant public disapproval of the use of lethal autonomous weapons among key U.S. allies. To be sure, autonomous weapons and AI are distinct, but AI is incorporated into the software architecture of most autonomous systems, and pundits and the public often conflate the two.57 In South Korea and Germany, 74 and 72 percent of the local populations, respectively, oppose their use (compared to 52 percent opposition among the U.S. public).58 These two countries are close U.S. allies that host dozens of U.S. military installations and over 60,000 American troops.59
Tepid public support at home and abroad can stymie alliance military operations in two ways. First, public opposition to the use of AI among allied populations may lead policymakers to restrict the use of AI-enabled technologies for military operations. In the event of future hostilities, for example, the South Korean or German governments might oppose an ally’s use of AI-enabled lethal weapon systems on their territory.60 Indeed, advocacy from the public and activist groups has led a growing number of states — including U.S. allies like Pakistan and Jordan — to call for bans on the use of lethal autonomous weapon systems.61
Second, civilian engineers and researchers who develop AI technology may refuse to work on military AI contracts. Disruptions to AI development can hinder the fielding of new capabilities and generate mistrust between the government and civilian firms. Google employees, for instance, protested the company’s involvement in Project Maven, the Defense Department’s drone-video analysis program.62 In a letter to their CEO, the employees argued that “Google should not be in the business of war,” explaining that the company should not “outsource the moral responsibility of [its] technologies to third parties,” and that work on Defense Department-backed AI would “irreparably damage Google’s brand.”63 The resistance ultimately led Google to terminate its involvement in the contract and generated public criticism of the Defense Department’s AI efforts.64
The existence of “AI haves” and “have-nots” within an alliance can complicate burden-sharing — a central tenet of military alliances. On one hand, states with robust AI capabilities can specialize their contributions to alliance operations and focus on providing AI-related capabilities. If, however, AI applications become a necessity for warfighting in the future, states that lack AI capabilities may be less able to contribute to alliance operations. States better equipped with AI capabilities may subsequently be forced to take on a greater share of work, generating both political and operational challenges. Politically, “AI haves” may complain that “AI have-nots” are not adequately contributing to a mission, straining relations between allies. Operationally, capability gaps can hamper an alliance’s ability to deploy forces or achieve military objectives. During the NATO-led air war over Kosovo in 1999, for instance, many NATO members possessed limited numbers of precision-guided munitions in their arsenals and often lacked the training to employ them, curtailing their ability to contribute to operations.65 As a result, responsibility for carrying out the air campaign fell to a small number of allies. In a larger conflict, burden-sharing might be critical to sustaining operations or securing battlefield victories.
Data Sharing and Standardization
As the number of states that employ military AI applications grows, the ability of allies to operate collectively will depend, in part, on the sharing of data that fuels AI systems. AI requires massive amounts of data to train and feed algorithms and models. To identify a surface-to-air missile site, for instance, an AI image classifier must learn to differentiate missile sites from other facilities by studying images of known missile sites. The more data used to train these systems, the more accurate the system will be.66 Once fielded, AI-enabled systems like the image classifier must continue to be fed imagery from reconnaissance aircraft, satellites, or other assets in a format that allows for target identification. Shared data might be needed to enhance the accuracy of AI-enabled systems or to increase the effectiveness of multinational operations. For example, some member states may be better positioned than others to gather data on a shared rival, increasing the amount of data available to AI systems.67
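The relationship between training-data volume and classifier accuracy can be illustrated with a minimal sketch. The example below is a toy, not any fielded system: it uses synthetic feature vectors in place of real imagery and a nearest-centroid classifier as a stand-in for an AI image classifier (the class labels and all parameters are hypothetical). The same model trained on a handful of examples per class is markedly less accurate than one trained on thousands.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for imagery features: two overlapping classes, e.g.,
# "missile site" vs. "other facility" (labels are hypothetical).
def sample(n, mean):
    return rng.normal(mean, 2.0, size=(n, 8))

def train_centroids(n_per_class):
    # A nearest-centroid "model": just the mean feature vector of each
    # class, estimated from n_per_class training examples.
    return (sample(n_per_class, 0.0).mean(axis=0),
            sample(n_per_class, 1.0).mean(axis=0))

def accuracy(c0, c1, x0, x1):
    def predict(x):
        # Assign each sample to the nearer class centroid.
        return (np.linalg.norm(x - c1, axis=1) <
                np.linalg.norm(x - c0, axis=1)).astype(int)
    return 0.5 * ((predict(x0) == 0).mean() + (predict(x1) == 1).mean())

# One fixed evaluation set, drawn once.
test0, test1 = sample(2000, 0.0), sample(2000, 1.0)

# Average several data-starved classifiers to smooth out sampling noise.
acc_small = np.mean([accuracy(*train_centroids(3), test0, test1)
                     for _ in range(20)])
acc_large = accuracy(*train_centroids(3000), test0, test1)
print(f"3 examples/class (avg): {acc_small:.3f}")
print(f"3000 examples/class:    {acc_large:.3f}")
```

The gap between the two accuracies is the point: with overlapping classes, a model built from a few examples misestimates each class's profile, while thousands of examples pin it down — which is why pooling allied data can matter.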
Because of its central role in AI development and operations, the U.S. military has described data as a “strategic asset,” yet sharing data — even within the U.S. military — has posed a significant challenge.68 Lt. Gen. Jack Shanahan, founding director of the Department of Defense’s Joint Artificial Intelligence Center, lamented that data “has stymied most of the [military] services when they dive into AI.” Specifically, “they realize how hard it is to get the right data to the right place, get it cleaned up, and train algorithms on it.”69 Two primary factors underlie these challenges. First, data resides in thousands of different repositories and often lacks standardized formatting. Video from the U.S. military’s fleet of reconnaissance aircraft, for instance, is stored on multiple separate networks and in different data formats. Second, significant amounts of data collected by weapons and sensor systems are considered proprietary by the contractors that design and maintain the equipment. Firms must first release or “unlock” this data before it can be analyzed or fed into other systems.70
Although shared data is needed to develop AI technologies that can integrate with allied equipment, states face both political and technical barriers to sharing security sector information. From a political standpoint, even the closest allies may be hesitant to share the sensitive data that undergirds military AI systems. States fear that sharing sensitive data might reveal intelligence sources and methods, the revelation of which could compromise ongoing operations or strain political relationships. During the Vietnam War, for example, the United States was hesitant to share intelligence with its ally South Vietnam. Officials feared that communist sympathizers in the ranks of South Vietnam’s military and intelligence services would pass information to North Vietnam and the Vietcong. They were also concerned that intelligence might highlight that the United States was planning operations that did not align with South Vietnam’s government priorities.71 States also worry that shared information could be used for purposes other than initially intended or in ways that are at odds with the sharing state’s interests. Turkey, for instance, may have used intelligence shared as part of counter-Islamic State operations to instead target Kurdish forces in northern Syria.72
To minimize these perceived risks, states often impose restrictions on information sharing. One of the most common control measures is sharing only finished intelligence — products such as briefings or reports derived from a variety of different intelligence sources.73 These products provide assessments, but generally omit technical data — like details about the information source — that could reveal intelligence-gathering procedures and methods. Although data sharing is a type of intelligence sharing, developing and operating AI-enabled systems may require the exchange of more complete raw data in far larger quantities than traditional intelligence sharing. Raw data, which includes imagery files and signals intercepts, can include metadata such as spectral signatures of imagery or characteristics of electronic emissions that can be used to feed AI systems.74 Since this information can expose precise capabilities and shortcomings of a state’s intelligence systems, decision-makers may be hesitant to share it — especially in the large quantities needed to develop and run many AI-enabled systems.
There are also technical obstacles to data sharing. Just as the U.S. intelligence community and military store information in nonstandardized formats on multiple systems, so too do national security institutions in other allied states. Across an alliance, the same type of data might reside on hundreds of different networks and in different formats, making it difficult to share data or to develop interoperable systems. To use an ally’s data, a state must first locate it, transfer it out of classified computer networks, and reformat it into a standardized, usable form. Given that the U.S. military has faced significant data management challenges in its own AI development, we should expect alliances — with their greater number of institutional actors and data sources — to encounter even greater obstacles to data sharing.
Vulnerabilities: AI and Data
In addition to barriers to sharing, allies face the possibility that the data that they do share may be especially vulnerable to adversary manipulation. Engineers and military leaders worry that rivals could hack into data repositories and “poison” data — inserting fake data or making existing data deliberately flawed.75 In one recent academic study, researchers used data poisoning to cause an algorithm designed to identify street signs to misclassify stop signs as speed limit signs.76 In the military domain, a rival could poison imagery data in order to throw off AI target recognition systems, leading the system to miss military targets, classify them as nonmilitary ones, or identify civilian infrastructure as military facilities. At best, this could require manpower-intensive efforts to secure and sanitize data or lead states to turn back to manual analysis of targets. At worst, this could lead to the inadvertent targeting of noncombatants.
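The poisoning mechanism can be made concrete with a sketch. The example below is illustrative only — synthetic data and a deliberately simple nearest-centroid classifier, not any fielded target recognition system: an adversary who can slip mislabeled records into the “military” class's training data drags that class's learned profile away from real sites, driving detection of genuine military targets toward zero.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample(n, mean):
    return rng.normal(mean, 1.0, size=(n, 8))

# Hypothetical feature space: military sites cluster around 1.0,
# civilian facilities around 0.0.
mil_train, civ_train = sample(500, 1.0), sample(500, 0.0)
mil_test = sample(1000, 1.0)  # real military targets to be detected

def military_recall(mil_data, civ_data):
    # Nearest-centroid classifier: each class is summarized by its mean
    # feature vector; test points go to the nearer centroid.
    c_mil, c_civ = mil_data.mean(axis=0), civ_data.mean(axis=0)
    d_mil = np.linalg.norm(mil_test - c_mil, axis=1)
    d_civ = np.linalg.norm(mil_test - c_civ, axis=1)
    return (d_mil < d_civ).mean()  # share of military targets detected

clean_recall = military_recall(mil_train, civ_train)

# Poisoning: inject fake "military" records with far-off features,
# shifting the military centroid so that real sites look civilian.
poison = rng.normal(-6.0, 1.0, size=(150, 8))
poisoned_recall = military_recall(np.vstack([mil_train, poison]), civ_train)
print(f"recall on clean data:    {clean_recall:.2f}")
print(f"recall on poisoned data: {poisoned_recall:.2f}")
```

A modest number of poisoned records (here 150 against 500 genuine ones) suffices to make the classifier label nearly every real military target as civilian — the "missed targets" failure mode described above.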
While the risk of data poisoning plagues all AI users, alliance military operations may be particularly susceptible because data inputs from multiple states are used to train and operate AI-enabled systems across the alliance. Flawed data inputs from one state can therefore have cascading effects across an alliance’s operations. Rivals will recognize that different members of an alliance defend their networks and data with different levels of safeguards. As a result, rivals may target data stored by states where they have easier access.77
Adversaries can also use AI to launch deception campaigns designed to interfere with alliance military command and control. Militaries have long tried to deceive their adversaries during wartime and crises. During World War II, for instance, Allied forces used a complex ruse involving imaginary armies equipped with inflatable tank and plane decoys to deceive Nazi planners about the location of the D-Day landings.78 While states and other actors have a range of tools with which to carry out deception operations, AI allows them to launch deception campaigns using digital decoys and misinformation rather than physical ones.
One AI tool that actors can use to complicate alliance operations is the deepfake: manipulated video or audio that realistically mimics the behavior or speech of an actual person. In 2018, for instance, the digital media outlet BuzzFeed produced a film in which a deepfake of former President Barack Obama appeared to utter obscenities and criticize Trump.79 Deepfake creation relies on deep-learning algorithms that learn by observing photos, audio, and video of an individual to produce lifelike representations that can be programmed to say or do things that the actual person never did. Although early deepfakes were easily detectable to the naked eye, techniques such as generative adversarial networks have enhanced the quality and believability of deepfakes. This technique features two competing neural networks: a generator and a discriminator. The generator produces an initial deepfake, while the discriminator compares the AI-generated “fake” with genuine images from a training data set. The generator then updates the fakes until the discriminator can no longer distinguish the AI-generated image from the actual images.80 As AI technology advances, rivals may be better able to use AI to carry out deception campaigns.
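The generator–discriminator loop described above can be sketched in miniature. The example below is a toy, not a production deepfake pipeline: an affine “generator” faces a logistic “discriminator” on one-dimensional data, and the target distribution, learning rates, and network forms are all illustrative assumptions. Driven only by the discriminator's feedback, the generator's output drifts from noise around 0 toward the “real” data centered at 4.

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# "Real" data the generator must learn to mimic: samples from N(4, 1).
real_mean, real_std = 4.0, 1.0

# Deliberately tiny stand-ins for neural networks: the generator is
# g(z) = a*z + b and the discriminator is D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr, batch = 0.05, 64

for _ in range(2000):
    z = rng.normal(size=batch)
    fake = a * z + b
    real = rng.normal(real_mean, real_std, size=batch)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator step (non-saturating loss): push D(fake) toward 1,
    # i.e., produce fakes the discriminator can no longer reject.
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

fake_mean = float(np.mean(a * rng.normal(size=10000) + b))
print(f"generator output mean after training: {fake_mean:.2f} (target 4.0)")
```

Real deepfake systems replace the affine map and logistic function with deep networks trained on images or audio, but the adversarial structure is the same: the generator improves precisely until the discriminator can no longer tell fake from real.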
Deepfakes could be used in a variety of ways. An adversary might create deepfakes of senior alliance commanders to issue incorrect or contradictory orders to troops in the field, or use AI to produce fake intelligence reports.81 A rival might use video or audio recordings of an actual commander obtained from public media reports or intercepted communications to generate deepfake commands. Or, they could use generative adversarial networks to create fake satellite intelligence imagery that misrepresents the ground truth.82 Once transmitted via video teleconference, phone, email, or radio, these false commands and intelligence reports could cause troops to redeploy in a way that aids the rival or simply generates confusion. Nefarious actors have already successfully employed these types of ruses. In 2019, for example, criminals used AI to clone the voice of a British energy firm executive and directed a company employee to transfer hundreds of thousands of dollars into a bank account controlled by the criminals.83 The software needed to carry out these efforts is easily available, demands little data for training, and increasingly requires minimal computer programming knowledge. Indeed, some voice cloning programs are available for free or at a low cost on the internet.84
Alliance military forces may be particularly vulnerable to AI-enabled misinformation and deception because multinational command-and-control processes involve coordination across multiple states.85 Personnel may have limited previous experience working with international partners, and as a result, be unfamiliar with their ally’s operating protocols and less adept at working within a multinational chain of command. Adversaries can exploit this unfamiliarity with coalition operations to inject AI-generated false commands. The time pressure, stressors, and complexity of military operations increase the likelihood that lower-level commanders will carry out these deepfake commands. These challenges will become more vexing as the quality of deepfakes increases and deciphering real from tampered content becomes more difficult.
Obstacles to Alliance Decision-Making
In addition to creating obstacles to the conduct of multinational military operations, AI can also strain the ability of alliance leaders to make decisions during a crisis. Alliance decision-making is often characterized as a contentious process in which policymakers from states with different national interests, military capabilities, and risk tolerances coordinate their preferences.86 Policymakers seek to advance their state’s own interests during deliberations, frequently leading to negotiated policy compromises. NATO allies routinely have policy disagreements — consider the clashes over the response to Egypt’s nationalization of the Suez Canal in 1956 and over the 2003 U.S. invasion of Iraq.87 Alliances and coalitions are also fraught with commitment problems, where states fear that allies will back out of agreements or drag them into unwanted conflicts.88 Divergent national positions and fears of abandonment can lead decision-making consultations between states to be drawn out, and, if conducted in the midst of a crisis, leave alliances unable to respond decisively to threats.89
AI can complicate the coordination required for alliance decision-making and the subsequent ability to command and control multinational forces in three key ways. First, AI technologies promise to accelerate the speed of military operations, reducing the amount of time available for deliberations between states. Second, there are varying levels of uncertainty surrounding the reliability and effectiveness of AI technologies. If decision-makers from different states hold different degrees of trust in the ability of AI systems to provide accurate information or take appropriate actions, they may be hesitant to use these systems when making decisions on the use of force. Third, adversaries may use AI-enabled disinformation campaigns to degrade trust between allies and heighten fears that member states will renege on their alliance commitments.
Compressed Decision-Making Timelines
The proliferation of AI-enabled technologies among both friends and rivals will compress the time policymakers and military commanders have to deliberate over political and military decisions. In the hands of allies, AI-assisted intelligence, surveillance, and reconnaissance or command-and-control systems may identify adversary military maneuvers faster than non-AI systems. Once presented with this information, alliance decision-makers may need to quickly decide how to respond — particularly if adversary forces pose an immediate threat or must be targeted within a narrow window of opportunity.
The U.S. military has already started to develop this type of capability. As part of a series of exercises, the Defense Department demonstrated a command-and-control network that uses AI to automatically detect enemy activity and pass targeting information between multiple intelligence and military assets. During one of these exercises, a space asset detected a simulated enemy ship, but was unable to identify it. The network automatically cued an intelligence, surveillance, and reconnaissance platform to collect additional information on the adversary vessel, which it then sent to a command-and-control asset. The command-and-control platform used AI to select the best platform available to strike the enemy ship and passed targeting data to the nearby U.S. naval destroyer that would engage the adversary vessel. AI significantly shortened the targeting process relative to efforts without AI technology. When describing the AI-enabled network in November 2019, U.S. Air Force Chief of Staff Gen. David Goldfein announced, “This is no longer PowerPoint. It’s real.”90
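The cueing chain in this exercise can be sketched as a simple detect-identify-assign pipeline. The asset names, data fields, and selection rule below are hypothetical stand-ins, not the actual network's logic.

```python
def detect(contact):
    """A space-based sensor flags a contact it cannot identify."""
    return {"track": contact, "identified": False}

def cue_isr(track):
    """A cued ISR platform collects detail and identifies the contact."""
    track["identified"] = True
    track["type"] = "enemy_ship"
    return track

def select_shooter(track, assets):
    """The C2 node picks the closest asset able to engage the target type."""
    capable = [a for a in assets if track["type"] in a["can_engage"]]
    return min(capable, key=lambda a: a["distance_km"])

# Hypothetical force laydown.
assets = [
    {"name": "destroyer", "can_engage": {"enemy_ship"}, "distance_km": 40},
    {"name": "fighter",   "can_engage": {"enemy_ship"}, "distance_km": 120},
    {"name": "battery",   "can_engage": {"aircraft"},   "distance_km": 15},
]

track = detect({"lat": 21.3, "lon": 157.9})
track = cue_isr(track)
shooter = select_shooter(track, assets)
print(shooter["name"])  # → destroyer
```

The speed advantage comes from automating the hand-offs between these stages: no human has to notice the detection, request collection, or match targets to shooters by hand.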
At the strategic level, this type of AI-enabled command-and-control system could present decision-makers with intelligence that a rival is preparing to deploy strategic forces — like ballistic missile submarines or mobile missile launchers — from its garrisons during a crisis. In such a case, senior policymakers from various alliance member states might hold differing opinions on how best to respond, but would have little time to debate their options before the adversary’s forces are dispersed and more difficult to locate.91 Commanders at the operational and tactical levels of alliance operations will face similar challenges as AI-enabled systems more rapidly provide battlefield intelligence about rival forces. As a result, commanders may be forced to quickly decide whether to strike a fleeting target detected by an AI-enabled system. To be sure, decision-makers in unilateral operations will confront these same issues, but settling on the best course of action is more complex in settings where multiple actors have a say in the decision-making process.92
An adversary’s use of AI-enabled systems can also compress timelines and complicate alliance decision-making. Just as AI can boost the tempo of allied operations, it can increase the frequency and speed of a rival’s military actions. AI-enabled autonomous weapon systems that allow states to launch military operations without putting personnel in harm’s way may lead rival leaders to launch operations that they might not otherwise carry out.93 China, for instance, has developed and exported autonomous drones capable of identifying targets and carrying out lethal strikes with little or no human oversight.94 Further, a rival’s integration of AI into its command-and-control networks may speed its decision-making process. Indeed, China’s military has expressed an interest in leveraging AI for military decision-making.95 A publication from the Central Military Commission Joint Operations Command Center, for example, described how the use of AI to play the complex board game Go “demonstrated the enormous potential of artificial intelligence in combat command, program deduction, and decisionmaking.”96 These systems could be employed against the United States and its allies in the Indo-Pacific region, forcing allied commanders to respond more quickly to these threats.
Uncertainty Surrounding AI Technology
AI can also strain alliance decision-making by fueling uncertainty about information and military actions. Unlike human analysts or military personnel who can be asked to explain and justify their findings or decisions, AI generally operates in a “black box.”97 The neural networks that underpin many cutting-edge AI systems are opaque and offer little insight into how they arrive at their conclusions.98 These networks rely on deep learning, a process that passes information from large data sets through a hierarchy of digital nodes that analyze data inputs and make predictions using mathematical rules. As data flows through the neural network, the net makes internal adjustments to refine the quality of outputs. Researchers are often unable to explain how neural nets make these internal adjustments. Because of this lack of “explainability,” users of AI systems may have difficulty understanding failures and correcting errors.99
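A stripped-down forward pass illustrates why the internals resist interpretation. The weights and inputs below are made up for the example: the point is that the individual numbers inside each layer carry no human-readable meaning, so tracing a prediction back through them is difficult even in a network this small.

```python
import math

def sigmoid(z):
    """Squash a weighted sum into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """One layer of nodes: each node takes a weighted sum of inputs plus a bias."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Hypothetical "trained" weights -- individually meaningless to a human
# reader, which is the root of the explainability problem.
hidden_w = [[0.9, -1.2, 0.4], [-0.3, 0.8, 1.1]]
hidden_b = [0.1, -0.2]
out_w = [[1.5, -0.7]]
out_b = [0.05]

features = [0.2, 0.9, 0.4]  # e.g., normalized sensor readings
hidden = layer(features, hidden_w, hidden_b)
prediction = layer(hidden, out_w, out_b)[0]
print(round(prediction, 3))
```

Production networks stack dozens of such layers with millions of weights, all adjusted automatically during training, which is why even the system's designers often cannot say which inputs drove a given output.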
Policymakers have called for the development of more transparent AI systems, and researchers are working to develop explainable AI tools that peer inside the AI black box.100 Yet, many decision-makers remain uncomfortable with the uncertainty surrounding AI-enabled systems. The commander of the U.S. Air Force’s Air Combat Command, for instance, publicly explained that he was not yet willing to rely on AI programs to analyze the full-motion video collected by reconnaissance drones. He argued that although systems are improving, they are still unable to consistently provide accurate analysis.101 So long as the decisions and analysis of AI systems remain opaque, military commanders may be reluctant to trust AI-enabled systems. And if used, AI may contribute to the fog of war, rather than reduce it, making it difficult to make decisions using information delivered by AI technologies.
The operational implications associated with uncertainty and lack of trust in AI would likely be exacerbated in multinational alliance contexts. There is significant cross-national variation in trust in AI technologies, even among close allies. One 2018 survey, for instance, found that just 13 percent of respondents in Japan and 17 percent of respondents in South Korea trust artificial intelligence, compared to 25 percent of respondents in the United States. Similar disparities exist between the United States and many of its NATO allies. In Spain, 34 percent of respondents trust artificial intelligence, compared to 21 percent in Canada, 40 percent in Poland, and 43 percent in Turkey.102 Given this variation, policymakers and commanders from some states may be more reluctant to use AI-enabled systems or trust the information they deliver than leaders from other states during multinational operations.
Allied decision-makers will also face uncertainty when confronting a rival’s use of AI-enabled technologies. Leaders will be forced to wrestle with whether to respond to actions carried out by AI-enabled systems — like autonomous aircraft or ships — in the same way as actions carried out by traditionally manned assets. Existing doctrine and law are generally silent on these issues, providing no guidance on the appropriate response. States have drafted domestic policies to govern their own use of autonomous weapon systems, but these regulations and international law make no distinction between how states should react to a rival’s AI-enabled military actions versus “traditional” military actions.103 Yet, decision-makers may believe that a rival’s use of AI technologies demands different responses than those involving manned platforms.104 What happens if a rival claims that an attack carried out by an AI-enabled system was the result of a flawed algorithm? Should air defense forces respond differently to an adversary’s autonomous drones that penetrate friendly airspace than to a manned aircraft that does the same? Decision-makers may find themselves with little time to consider these complicated issues, particularly as AI technology accelerates the speed of a rival’s military operations.
Adversary Manipulation and Interference
Even if states were to trust their own AI technologies, rivals and malicious actors can use AI to sow discord that can hamper decision-making. Trust and close relationships are crucial when multiple states coordinate security-related decisions since policymakers must be confident that allies will not renege on commitments. Leaders have long held fears of being abandoned by allies or of being drawn into unwanted conflicts.105 These fears are magnified when leaders suggest they might not follow through with their alliance commitments or engage in provocative actions.106 Trump, for instance, raised questions about Washington’s commitment to its allies when he publicly questioned the value of defending certain NATO member states.107 An adversary could use AI to drive misinformation campaigns that latch onto these concerns in an effort to strain ties or deepen cleavages between allies.
Just as adversaries can use deepfakes to interfere with operational-level coordination, they can also use AI technologies to breed confusion and mistrust that hamper strategic decision-making. Actors seeking to disrupt alliance cohesion might create deepfakes depicting leaders of alliance member states questioning the value of an alliance, criticizing other leaders, or threatening to take actions that could draw an alliance into an unwanted conflict. These falsified videos or recordings could heighten uncertainty about an ally’s commitments or induce panic over fears of abandonment during a crisis. The decision-making process may be slowed as policymakers try to understand their allies’ true intentions and preferences, or convince domestic publics that an ally’s “statements” are in fact AI-produced misinformation.
The Way Forward
Although the proliferation of military AI technology has the potential to frustrate alliance military operations and decision-making, these obstacles are not insurmountable. Allies have previously worked together on missions that involved new technology, shared highly sensitive information, and learned to cope with compressed decision-making timelines. Drawing lessons from historical cases in which allies wrestled with new technology, coupled with guidance from emerging national AI policies, I identify ways that alliances can overcome the pitfalls of AI integration in an environment in which AI is increasingly common.
Increasing AI Interoperability and Data Sharing
To ensure alliances and coalitions are able to leverage AI technologies during their operations, states will need to remove barriers to data sharing and access. One initial step to enabling this type of interoperability is to establish formal agreements that govern the development and use of AI-enabled technologies and associated data. These formal agreements will not only prescribe procedures for collaboration, but help assuage fears that allies will renege on commitments.108 Agreements that explicitly define the responsibilities and expectations of member states help eliminate vagaries that otherwise allow a state to back out of commitments with partners.109
To integrate AI into alliance operations, policymakers will need to first establish how they will jointly develop and employ AI capabilities. This entails identifying the types of operations in which allies are willing to use AI-enabled technologies. Some states may only be willing to employ AI military systems in limited areas and eschew using AI for certain tasks. The U.S.-Singapore agreement, for example, stipulates that the two states will focus their AI efforts on humanitarian assistance and disaster relief operations.110 More narrowly scoped agreements that focus on noncombat operations may prove more palatable to policymakers and their domestic publics. These narrow agreements could serve as useful first steps to collaboration, but still yield lessons and best practices applicable across the full range of military operations.
Developing data-sharing policies and technical standards may be difficult given the sensitive nature of national security information and the variation in technical standards across alliance member states. Allies, however, have found ways to coordinate cooperation, even in sensitive areas. The United States and its Five Eyes partners — the United Kingdom, Canada, Australia, and New Zealand — have long maintained agreements that govern intelligence collaboration. The 1946 United Kingdom-United States Agreement, for example, established formal rules for sharing signals intelligence — intercepted electronic emissions and communications.111 The agreement spelled out how the states would cooperate on the collection, analysis, and dissemination of signals intelligence, while a technical appendix provided detailed technical and procedural guidance on communications intercept equipment and decryption and translation processes.112 Specifically, the agreement called on states to “make available to the other [states] continuously, currently, and without request, all raw traffic, [communications intelligence] end-product and technical material acquired or produced.”113 Some existing intelligence sharing agreements might allow for the exchange of the sensitive data needed to train and operate AI systems. When existing agreements are not in place or do not cover the types of data required for AI-enabled warfare, policymakers will need to develop new bilateral or multilateral agreements that enable interoperability and data sharing. These agreements and the procedures used to implement them will likely vary depending on the states involved and the degree and purpose of cooperation. In some cases, cooperation may be narrowly scoped to limited data sharing in support of a specific operation. In other cases, agreements may be far broader and cover issues related to research and development, interoperability, and extensive data sharing.
Even when formalized agreements establish the processes and institutions that enable AI cooperation between states, many leaders may remain hesitant to share the sensitive data that underpins AI development and operations. Information-sharing arrangements are plagued by commitment problems as states can back out of their agreements to exchange data if they fear that data will be leaked or their capabilities and shortcomings will be revealed.114 This might be particularly true in ad hoc coalitions or larger alliances, where relationships between member states may be weaker. Recent technological advances, however, may help overcome these commitment problems by convincing member states that their data will remain secure even when shared.
In particular, developments in the field of cryptology allow states to share data with partners for use in AI systems, while hiding the exact content of input data. Secure multiparty computation, for example, is a privacy-preserving technique in which AI algorithms perform their computations using an input that remains secret, but provide an output that is public to all authorized users.115 Secure multiparty computation has been increasingly used in the medical and financial sectors where analysts seek to assess trends but need to protect individual-level health and fiscal data to avoid violating privacy regulations.116 This and other privacy-preserving approaches could be applied to a range of AI-enabled alliance military tasks, such as the classification of objects in satellite and reconnaissance imagery. Member states might feed sensitive intelligence data into a secure multiparty computation-based system managed by an alliance’s intelligence fusion center, which would then return information about potential targets, without revealing attributes about each state’s intelligence inputs.
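One building block of secure multiparty computation, additive secret sharing, can be shown in a few lines. The scenario and figures are invented for illustration: three allies each hold a private count and want the coalition total without revealing national numbers.

```python
import random

random.seed(7)
MODULUS = 2**31 - 1  # arithmetic modulo a large number keeps shares uninformative

def share(secret, n_parties):
    """Split a secret into n additive shares that sum to it mod MODULUS."""
    shares = [random.randrange(MODULUS) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MODULUS)
    return shares

# Hypothetical private inputs: each state's count of detected contacts.
national_counts = {"state_a": 17, "state_b": 42, "state_c": 8}

# Each state distributes one share to every participant; any single share
# (or any incomplete set of shares) looks like random noise.
distributed = [share(v, 3) for v in national_counts.values()]

# Each participant locally sums the shares it received...
partial_sums = [sum(col) % MODULUS for col in zip(*distributed)]

# ...and only combining all the partial results reveals the aggregate.
total = sum(partial_sums) % MODULUS
print(total)  # → 67
```

Real secure multiparty computation protocols extend this idea to multiplication and comparison so that full algorithms, including machine-learning inference, can run over shared data, but the privacy guarantee rests on the same principle: no party ever sees another's raw input.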
To successfully integrate AI and share data, however, partners will also need to establish technical standards to ensure data is stored and formatted in ways that make it easily accessible to and usable by various alliance members. In designing these agreements, alliance policymakers might draw insights from existing state-level AI guidelines and alliance standardization protocols. The U.S. National Institute of Standards and Technology, for example, released its AI standards in February 2019. The guidance calls for defining data specifications that ensure AI technologies meet “critical objectives for functionality, interoperability, and trustworthiness.”117 In the alliance military context, this might mean ensuring that data associated with geospatial or signals intelligence are formatted and labeled in a common manner and stored on shared alliance networks. Or, it could mean establishing alliance-wide protocols for data security and integrity to minimize the risks of data poisoning. These specifications could be codified in formal arrangements like NATO’s standardization agreements, which provide standards for thousands of systems and processes ranging from aerial refueling equipment to satellite imagery products.118 These standards ensure “doctrine, tactics, and techniques are developed in harmony” to help allies “operate effectively together while optimizing the use of resources.”119
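In practice, such a data specification might resemble a shared record schema with agreed field names and formats that each member validates before contributing data. The field names, sensor categories, and rules below are purely illustrative, not drawn from any actual standardization agreement.

```python
from datetime import datetime

# Hypothetical alliance-wide schema for contributed intelligence records.
REQUIRED_FIELDS = {"collected_at", "sensor_type", "lat", "lon", "label"}
VALID_SENSORS = {"geoint", "sigint", "elint"}

def validate_record(record):
    """Check that a contributed record follows the agreed common format."""
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        return False, f"missing fields: {sorted(missing)}"
    if record["sensor_type"] not in VALID_SENSORS:
        return False, f"unknown sensor type: {record['sensor_type']}"
    try:
        datetime.fromisoformat(record["collected_at"])  # common timestamp format
    except ValueError:
        return False, "collected_at must be ISO 8601"
    return True, "ok"

record = {
    "collected_at": "2020-03-01T12:00:00+00:00",
    "sensor_type": "geoint",
    "lat": 52.52, "lon": 13.40,
    "label": "surface_vessel",
}
ok, msg = validate_record(record)
print(ok, msg)
```

The value of such a check is that malformed or mislabeled contributions are rejected at the point of entry, before they can corrupt a shared training set or operational picture.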
Streamlining Decision-Making and Command and Control
AI is not the first military development to reduce the amount of time alliance leaders have for crisis decision-making. Warsaw Pact military modernization in the 1970s, for instance, led NATO to reevaluate the amount of warning it would have in advance of an invasion of Western Europe. Prior to 1978, analysts estimated that the Soviets and their allies needed 30 days to prepare for an attack, giving alliance leaders a week to decide on response options. The expansion of Warsaw Pact offensive military capabilities reduced the preparation timeline to 14 days, slashing the window for NATO deliberation to just four days.120 To mitigate the risks of protracted decision-making timelines, NATO took several steps to improve its ability to rapidly react. Specifically, senior NATO military commanders were given greater authority to order defensive measures in time-sensitive circumstances that precluded political authorization. The alliance also revamped and streamlined communications systems and procedures that facilitated alliance consultations and engaged in additional exercises focused on military alerts and mobilizations.121
More recently, NATO’s development of an alliance ballistic missile defense capability again raised the prospect that military commanders might be forced to make decisions on the use of force — albeit in a defensive manner — without time for political deliberations. In the event a rival were to fire missiles at Europe, intercept timelines would not allow for political consultation.122 To prepare for the potentiality of defending Europe from missile attack, NATO considered pre-delegating launch authority to lower-level commanders.123 Under specific rules of engagement, NATO commanders would be authorized to make decisions on the targeting of inbound missiles without waiting for approval from higher headquarters. These guidelines would ensure the alliance would be able to defend itself even if there was insufficient time for more senior commanders and policymakers to debate policy choices.
Just as pre-delegation of authorities to lower-level commanders helped NATO streamline crisis decision-making in the past, it may also help alliance decision-makers respond to “machine speed” operations that leave insufficient time for deliberation.124 Military commanders need guidelines for how to respond to an adversary’s AI-enabled actions and for how to employ information provided by friendly AI systems. As states increasingly deploy autonomous weapon systems that incorporate AI technologies, military commanders also need to know whether to react differently to a rival’s operations that are carried out using traditional platforms than to those conducted using AI-enabled systems. More importantly, they need the authority to make these decisions without real-time direction from superiors. While pre-delegation may increase the ability of decision-makers to respond quickly, it has its downsides. Junior commanders may inadvertently use force in ways not desired by alliance policymakers, and pre-delegation may create more opportunities for rivals to launch AI-enabled deception campaigns.
In addition to streamlining decision-making processes, it is crucial that alliance leaders find ways to mitigate the risks that AI-enabled misinformation or deception campaigns pose to alliance solidarity and military command and control. The development of strategic communication strategies helps counter misinformation, and technical and procedural updates can harden command-and-control processes against AI-enabled interference. NATO has already taken steps in this direction, establishing a Strategic Communications Center of Excellence that supports the development of best practices to minimize the effects of disinformation.125 Among the center’s priorities is boosting resilience to misinformation campaigns by raising awareness about the ways that rivals might disseminate fake information.126 These efforts can be bolstered by leveraging technological advances like deepfake detection software that quickly identifies falsified information.127 Alliances and coalitions could also create agencies charged with detecting deepfakes that threaten alliance cohesion or military operations and then informing the public or military units about these falsified videos, recordings, and images. Creating these organizations, however, requires manpower and funding that allies may be unwilling to contribute.
A Path Forward for Alliance AI Integration
In recent years, alliances have successfully relied on a mix of formal agreements and technical measures — like those described above — to streamline interoperability and decision-making. For example, NATO established the Afghan Mission Network, a computer system that enabled participants in the NATO-led International Security Assistance Force to communicate and exchange battlefield information. At its height, this force included personnel from more than three dozen states working to train Afghan security forces, rebuild Afghan government institutions, and conduct counter-insurgency operations. The computer networks of each of these member states were initially isolated and generally unable to communicate with those of other states. As a result, there was no common operating picture for critical warfighting functions such as intelligence, surveillance, and reconnaissance or coordinating artillery strikes.128 These insulated networks slowed decision-making and command and control and complicated battlefield coordination because information could not easily be transmitted up and down the chain of command. To allow the International Security Assistance Force to exchange information from the headquarters to the tactical level, NATO planners drafted intelligence sharing agreements and built the Afghan Mission Network.129
To be sure, establishing a shared computer network is a far different task from developing interoperable, AI-enabled military capabilities. The Afghan Mission Network, however, demonstrates that a combination of policy and technical fixes can help members of a large, multinational coalition remove barriers to decision-making and operations and enable interoperability and the sharing of sensitive data. Indeed, the Afghan Mission Network was so successful that NATO used it as a foundation for its Federated Mission Network, which helps ensure connectivity and information sharing between NATO members outside the Afghan theater.130
The institutional changes described above will take time to implement fully, and requirements will evolve as AI technology matures. In the meantime, there are several steps policymakers can take to ensure alliances remain sufficiently flexible and postured to integrate the latest advances in military AI technology. First, alliance member states can work to develop a corps of subject-matter experts with deep technical knowledge of AI and AI-enabled operations. These experts, trained through graduate education programs or fellowships in the private sector, could staff alliance-run AI centers of excellence, AI development labs, and working groups. Using their knowledge, they would identify where and how AI can best contribute to alliance operations from the tactical through strategic levels and help update alliance doctrine and policies as AI technology evolves. Individual states have already taken some of these steps. The U.S. Department of Defense activated its Joint AI Center in 2018 and, in 2019, the U.S. Air Force and the Massachusetts Institute of Technology launched a jointly staffed organization to develop AI algorithms and systems for military applications.131
Second, incorporating AI-enabled capabilities into alliance planning exercises and wargames will help prepare policymakers and commanders to better employ AI.132 Wargames, for instance, might ask leaders to employ AI-enabled capabilities or respond to a rival’s use of AI-enabled weapons. These events allow leaders to test and refine institutional processes in a low-risk environment, while also socializing practitioners to the potential uses, limitations, and risks of AI-enabled warfare.
As additional funding and research drive increases in the effectiveness and reliability of AI, the military use of AI technologies will likely expand. And as more states integrate AI into their armed forces, the United States will find itself working with allies to build and exercise AI capabilities that are interoperable and support alliance decision-making processes. Failure to cooperate early and often on the development and use of AI may leave allies ill-prepared for operations in an era in which AI is an increasingly common fixture in the arsenals of both friends and foes.133
Alliances face two broad sets of challenges when integrating AI into operations. First, AI complicates alliance operations. The resource and data requirements needed to build and maintain AI systems pose obstacles to burden-sharing and interoperability. Adversaries can also use AI to launch military deception campaigns that complicate operational coordination. Second, AI can significantly strain alliance decision-making. New AI technologies promise to increase the speed with which allies and adversaries conduct operations, decreasing the time partners have to debate potential courses of action. Decision-making can also be disrupted if adversaries use AI to generate misinformation that can degrade trust among allies. To overcome these challenges, allies will need to establish multinational agreements and standardization guidelines that help ensure data is structured in ways that promote interoperability, while technical measures will help preserve data privacy, allow for data sharing, and minimize the consequences of AI use on the part of adversaries.
Whether and how states grapple with these challenges will shape the conduct of multinational operations and has implications for alliance politics and the global balance of power. Alliances that effectively integrate AI technology will be better positioned to counter threats, while those that allow AI to stymie decision-making and operations may find themselves disadvantaged on the battlefield. Within alliances, member states that quickly master the integration of AI into their militaries may gain significant influence, even if they are less powerful than other alliance partners in conventional terms. Because of their AI know-how, these states may play a dominant role in developing the norms, standards, and doctrine for AI use and help set an alliance’s AI strategy. In a similar vein, Estonia leveraged its cyber warfare expertise to bolster its position in NATO. Despite being territorially small and weak in conventional military terms, Estonia’s specialized expertise allowed it to play a leading role in shaping NATO’s cyber doctrine.134 A state’s successful development of AI can therefore increase its voice and sway within complex multinational institutions.
This article represents a first step in understanding how the burgeoning development of AI technologies will affect alliances, and offers a framework for future hypothesis testing. Future work might more systematically explore the ways in which AI-enabled systems influence multinational military decision-making and operations. For instance, do national security decision-makers trust information provided by AI technologies more or less than information delivered by non-AI enabled sources? Under what conditions are decision-makers more or less likely to believe this information? Are military leaders from certain states more willing than those from other states to rely on AI technologies? If so, what drives this variation? Scholars might also try to identify the types of technical or institutional solutions that best promote AI interoperability. Do alliance decision-makers see formal agreements or technical solutions as a more effective means of ensuring data sharing? Scholars can explore these questions using a variety of methodological approaches including experimental research involving alliance decision-makers or in-depth case studies informed by interviews of senior policymakers.
Researchers might also consider the effects of AI on alliances in areas beyond decision-making and interoperability. For example, how does the use of AI affect strategic stability, nuclear deterrence, and alliance reassurance? Does the increased tempo of AI-enabled warfare make it harder or easier for states to deter rivals and reassure allies? Studies that address these questions would not only expand our scholarly understanding of the relationship between emerging technology and international security, but would help policymakers design better processes and institutions for a security environment in which AI use is becoming widespread.
As AI becomes increasingly common in military arsenals around the world, it is crucial for states to understand the potential challenges AI poses to multinational operations and work to overcome them. To prepare for warfare at machine speed, alliances should develop policies and practices that streamline data sharing and decision-making, and take procedural and technical measures to bolster their defenses against AI-equipped rivals.
Erik Lin-Greenberg is a postdoctoral fellow at the University of Pennsylvania’s Perry World House.
Acknowledgements: I am grateful to Jonathan Chu, Theo Milonopoulos, Michael Horowitz, Jordan Sorensen, Alexander Vershbow, participants at 2019 Perry World House Global Order Colloquium, and the anonymous reviewers for comments on earlier drafts.
Image: Staff Sgt. Jacob Osborne