AI in warfare has numerous impacts, including shaping human responses to target recommendations and increasing the speed at which lawful targets can be recommended.
This essay is part of a series from Carnegie’s Digital Democracy Network, a diverse group of thinkers and activists engaged in work on technology and politics. The series is produced by Carnegie’s Democracy, Conflict, and Governance Program. The full set of essays is scheduled for publication in summer 2026.
In contemporary warfare, one specific and increasingly salient factor has attracted growing attention: the role of new military technologies, particularly military artificial intelligence, in shaping patterns of destruction in high-intensity conflicts. Among the most prominent examples are AI-based decision-support systems such as the Gospel and Lavender, which media reports describe as analyzing extensive intelligence datasets and assisting in generating operational strike decisions in Gaza, and which may have contributed to the extremely high number of civilian casualties and the scale of destruction reported in the conflict.1 More recently, reports have pointed to the central role these systems have played in U.S. and Israeli operations against Iran.2 Rather than asking the general question of whether AI systems lead to violations of international humanitarian law (IHL), this essay considers two ways in which decision-support systems (DSS) may affect warfare: how they shape human responses to targeting recommendations and how they expand the scale and speed at which lawful targets can be generated and attacked.
I offer insights from my recent work on the subject, focusing on two points. The first concerns the importance of carefully assessing the potential impact of military AI in armed conflict. To fulfill the core function of IHL—minimizing human suffering in warfare—we must better understand how technological innovations affect battlefield decisionmaking. Military AI may have positive effects, such as improving the ability to distinguish between lawful and unlawful targets.3 At the same time, it may generate harmful effects, such as producing vast numbers of inaccurate targets that are approved without sufficient human scrutiny.4 At a theoretical level, all of these possibilities appear plausible. Yet theory alone is not enough. What is required is rigorous empirical inquiry into how these systems actually affect decisionmaking in war.
Consider one potential harmful effect that appears repeatedly in the legal and ethical literature on DSS: automation bias. Legal and ethical scholars often assume that human decisionmakers will defer to AI targeting recommendations, to the extent that human judgment becomes little more than a rubber stamp.5 Much of this literature, however, overlooks a countervailing tendency known as algorithmic aversion, which suggests that in high-risk settings, humans are often reluctant to rely on machines.6 Targeting decisions in war clearly fall within such high-stakes contexts. More importantly, claims about automation bias in this area are frequently made in the absence of empirical evidence and largely ignore the main findings of the broader empirical literature on AI and decisionmaking. The assumption is repeated so often that it risks becoming conventional wisdom, even though we do not yet know whether it holds true in practice.
In a recent experimental study that I conducted with Ryan Shandler from Georgia Tech and Michael Gross from the University of Haifa, we examined this assumption directly.7 We found that participants were more willing to approve strikes when the information came from human intelligence officers rather than from a DSS, especially in cases involving high levels of collateral damage. We also found that providing participants with more detailed information eliminated the gap between the groups. Crucially, we found no evidence of automation bias in this context. Of course, this is a single study with clear limitations, and its findings must be treated with caution. Still, the results underscore the importance of empirically testing even the most widely accepted assumptions about the effects of military AI. They also suggest that future work should focus more carefully on the specific conditions under which reliance on AI may increase or decrease. Understanding the circumstances that shape the balance between automation bias and algorithmic aversion will be essential if regulatory frameworks are to respond to the real dynamics of battlefield decisionmaking.
The second point concerns what I see as a more profound risk posed by DSS on the battlefield, one that is closely related to the increasing pace and scale of target production. Unlike the first point, this observation is not grounded in empirical findings but in theoretical analysis. The key distinction between DSS and traditional intelligence processes lies in the scale and speed at which DSS can generate targets. In an article co-authored with Yuval Shany from the Hebrew University of Jerusalem, we argue that this capacity threatens to erode what we describe as the principle of restraint in IHL.8
IHL is built around prohibitions, such as the ban on intentionally targeting civilians, and permissions to use force, rather than obligations to use it. Focusing on the latter, combatants may strike enemy forces and objects, but they are not required to attack every lawful target. In practice, armed forces often refrain from doing so for moral, pragmatic, or operational reasons, or because they lack the capacity to attack every target. DSS, however, can drastically increase the number of available targets. To illustrate, while a traditional human-led targeting process might identify one hundred legitimate targets over the course of a year, an AI system could generate fifty targets a day.9 The effect of such technologies is to narrow the space between the legal ceiling of permissible conduct and the operational floor of what can actually be achieved in the field, pushing practice closer to the outer legal limits of warfare.
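To make the scale of that difference concrete, here is a minimal arithmetic sketch using the essay's own illustrative figures (one hundred targets per year versus fifty per day); the numbers are hypothetical examples, not empirical estimates.

```python
# Illustrative arithmetic only: the figures are the essay's hypothetical
# examples of target-generation capacity, not empirical estimates.

HUMAN_TARGETS_PER_YEAR = 100  # traditional, human-led targeting process
DSS_TARGETS_PER_DAY = 50      # AI-based decision-support system

dss_targets_per_year = DSS_TARGETS_PER_DAY * 365
scale_factor = dss_targets_per_year / HUMAN_TARGETS_PER_YEAR

print(f"DSS targets per year: {dss_targets_per_year:,}")         # 18,250
print(f"Increase over human-led process: ~{scale_factor:.0f}x")  # ~182x
```

Even under these stylized assumptions, the pool of available targets grows by more than two orders of magnitude.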
This expansion of capacity risks facilitating levels of destruction far greater than previously possible, without any formal violation of IHL. Such a dynamic is especially troubling in contexts where hostility between the parties reduces the role of moral or pragmatic restraint and incentivizes striking as many targets as possible. The concern, then, is not only that DSS may facilitate unlawful attacks, but that they may enable extensive, lawful destruction that nonetheless significantly undermines the restraint at the heart of IHL. This concern also raises broader questions about whether IHL, with its architecture of prohibitions and permissions, is sufficiently equipped for an era in which technological capacity has expanded so dramatically.
Shany and I explore several responses to the risk that the proliferation of DSS will undermine the principle of restraint by dramatically expanding the pool of lawful targets. Here I will mention only one, which focuses on recalibrating the principle of proportionality.10 Current rules prohibit excessive collateral harm relative to the anticipated military advantage, but they leave broad discretion and apply across all legitimate targets. A stricter approach could introduce presumptions of illegality when civilian harm exceeds fixed thresholds in DSS-assisted strikes, or could permit such strikes only against high-value targets. These measures would reorient proportionality toward a stronger humanitarian baseline, counterbalancing the DSS-driven expansion of lawful destruction. There are, of course, significant challenges to adopting such an approach, including determining what constitutes a high-value target. It would also succeed only if it did not incentivize the defending party to abuse the principle of distinction by further embedding its military personnel and objects within the civilian population.
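As a purely illustrative sketch, the decision logic of such a presumption might look like the following; the threshold value, the harm units, and the high-value-target carve-out are all hypothetical assumptions for exposition, not rules of existing IHL or a concrete legislative proposal.

```python
# Hypothetical sketch of the recalibrated proportionality presumption
# discussed above. All thresholds, units, and categories are illustrative
# assumptions, not rules of existing IHL.

from dataclasses import dataclass

CIVILIAN_HARM_THRESHOLD = 10  # hypothetical fixed threshold of expected civilian harm


@dataclass
class StrikeProposal:
    dss_assisted: bool           # was the target generated by a decision-support system?
    expected_civilian_harm: int  # estimated collateral harm, in illustrative units
    high_value_target: bool      # contested category; its definition is itself a challenge


def presumptively_unlawful(strike: StrikeProposal) -> bool:
    """Return True if the strike triggers the proposed presumption of illegality.

    The presumption applies only to DSS-assisted strikes whose expected
    civilian harm exceeds a fixed threshold, unless the target qualifies
    as high-value.
    """
    if not strike.dss_assisted:
        return False  # ordinary proportionality analysis applies unchanged
    if strike.high_value_target:
        return False  # the stricter presumption is relaxed for high-value targets
    return strike.expected_civilian_harm > CIVILIAN_HARM_THRESHOLD


# A DSS-generated, non-high-value target with harm above the threshold
# would be presumptively unlawful under this rule.
print(presumptively_unlawful(StrikeProposal(True, 12, False)))  # True
```

Encoding the rule this way also makes the open design questions visible: who sets the threshold, in what units harm is estimated, and who certifies a target as high-value.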
One major open question is whether, in high-intensity urban conflicts characterized by deep enmity, the absence of decision-support systems would meaningfully reduce levels of destruction, or whether states would still pursue widespread targeting within the limits of human capacity. Answering this question requires further empirical study. What is clear, however, is that the integration of DSS into military practice compels a reassessment not only of compliance with IHL’s prohibitions, but also of whether its broader architecture of permissions and restraints remains adequate in an era of rapidly expanding technological capacity.
It is important to emphasize the limited scope of this essay. It does not address a range of other serious and well-documented concerns regarding military AI, including questions about the ability of such systems to accurately identify legitimate military targets, the quality and bias of the underlying data, and the risks of error and misclassification at scale. Those issues are critically important and warrant sustained attention. The focus here has instead been on a different, and less discussed, policy challenge: how even lawful, good-faith uses of military AI may incrementally erode restraint by narrowing the gap between what IHL permits and what military forces are technologically capable of doing. Recognizing and addressing this structural risk should be part of any serious effort to govern the future use of military AI in armed conflict.
Yahli Shereshevsky
Senior Lecturer, University of Haifa Law School; Principal Investigator, Minerva Center for the Rule of Law under Extreme Conditions
Yahli Shereshevsky is a senior lecturer at the University of Haifa Law School and a principal investigator at the Minerva Center for the Rule of Law under Extreme Conditions. His research and teaching focus on international humanitarian law, the intersection of law and technology, international lawmaking, and international criminal law.
Carnegie does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie, its staff, or its trustees.