Autonomous Weapons Under International Humanitarian Law

Expert asserts that military use of autonomous weapons could be meaningfully regulated under international humanitarian law.

Imagine walking down the street when a drone flies up alongside you—and then shoots you. Although this exact scenario is currently fiction, some artificial intelligence researchers worry that autonomous weapons could shape our lives profoundly in the future.

As a result, some lawyers are considering how to regulate such weapons. Writing in his personal capacity, Charles Trumbull, a U.S. Department of State attorney-adviser, has argued that existing international humanitarian law (IHL) is flexible enough to cover autonomous weapons, even though such weapons blur the distinction between human and machine decision-making.

Artificial intelligence (AI) applies computer science to develop machines capable of completing work that historically required human operators, such as recognizing images and analyzing data. AI systems can perform some of these tasks better and faster than humans can.

AI allows autonomous weapons, for example, to respond faster to incoming threats such as high-speed missiles, endure harsh environments that human soldiers could not bear, and serve without later experiencing post-traumatic stress disorder.

Underscoring the value of AI for national security, Russia’s President Vladimir Putin has opined that “whoever becomes the leader in this sphere will become the ruler of the world.”

Autonomy in weapons varies. Semi-autonomous weapons, such as South Korea’s SGR-A1 sentry gun, can identify targets but require a human to initiate an attack. Autonomous weapons, such as Israel’s Harpy, can attack targets without further human decision-making. Trumbull envisions a future in which technology progresses to the point that “fully autonomous weapons” take over more decision-making from humans, such as by assessing military objectives themselves.

Trumbull argues that, although IHL bans weapons that cause “unnecessary suffering” or “are inherently indiscriminate,” autonomous weapons do not by their nature necessarily fall into such categories. He emphasizes that the design of such weapons matters more under IHL than their effects in a particular scenario do, and that autonomous weapons could be designed to avoid these banned qualities.

Trumbull asserts, however, that autonomous weapons will pose challenges because IHL focuses on whether a human operator’s decisions are reasonable rather than on the effects of those decisions. Whether a decision violates IHL depends on the information available at the time of the attack, not on what is known in hindsight, and this standard affords commanders a range of acceptable responses in any given scenario.

IHL, in other words, governs human decisions. Although machines have no obligations under IHL, Trumbull claims that even if an autonomous weapon makes a targeting decision that a human previously would have made, IHL still applies to the human who deploys the weapon. He argues that rather than placing responsibility for military actions on machines, humans would remain responsible, with accountability shifting from the soldier firing a gun to the people who develop and deploy autonomous weapons.

But greater autonomy in weapons complicates the process of determining whether a human judgment to use force is reasonable.

Trumbull emphasizes that autonomous weapons can make unpredictable decisions after their deployment, weakening the link between lethal action and the decision to deploy. Furthermore, because reasonableness is judged by comparing a commander’s actions with what other commanders hypothetically would have done under the same conditions, what counts as reasonable use of autonomous weapons will remain unclear until militaries gain more experience with them.

Autonomous weapons also introduce uncertainty about what information was available to support an attack. Trumbull outlines a “distributed knowledge problem” in which commanders depend on information from a whole host of sources: “computer programmers, the weapon’s testers, intelligence units and friendly forces, satellite imagery, weather forecasters, and the weapon’s sensors.” He claims that this range of potential sources will make it difficult to assign individual accountability for military outcomes.

Still, Trumbull asserts that using autonomous weapons should not create an accountability gap. Even though identifying the specific cause of wrongdoing may be more difficult with autonomous weapons, war crimes against civilians remain illegal.

Assessing potential war crimes becomes more difficult, however, when intent is unclear, as it likely would be with autonomous weapons. In these situations, Trumbull proposes that an IHL violation should require a determination that the offending human acted recklessly.

Although evaluating recklessness is challenging given militaries’ relative lack of experience with autonomous weapons, Trumbull recommends that states “develop norms of professional behavior” and “national rules of engagement” for autonomous weapons, which could provide a basis for determining when a commander’s behavior veers from reasonable to reckless. In addition, he suggests that an emphasis on state responsibility, as opposed to individual responsibility, could help prevent an accountability gap.

Trumbull predicts that autonomous weapons will also impact how new weapons are tested and reviewed under the Geneva Conventions’ Additional Protocol I. The weapons review process depends on reliability and predictability: Does the weapon behave consistently as expected? If so, and if commanders and programmers of autonomous weapons deliberately direct them against civilians, Trumbull believes IHL violations will be clear.

Because autonomous weapons can use their learning capabilities to respond to their environments and act accordingly, how they behave in testing environments may not reflect their behavior in the field. Through learning, these weapons may even behave in ways entirely unpredicted by humans.

“This inherent degree of unpredictability does not mean that autonomous weapons are unlawful,” Trumbull argues. He claims that, given the lack of transparency surrounding existing weapons review processes and the coordination challenges among states, a universal testing standard for autonomous weapons would be impractical.

Instead, he recommends that states “develop non-binding, good practices for improving autonomous weapon reliability.” He suggests that the Convention on Certain Conventional Weapons’ Protocol on Explosive Remnants of War and the U.S. Department of Defense’s Directive on Autonomy in Weapon Systems could provide models for practices such as quality control, testing, failsafe measures, and personnel training.

The potential advantages of autonomous weapons give the world’s militaries strong incentives to develop them. Acknowledging the challenges that autonomous weapons pose for IHL’s human-centered standards, Trumbull argues that the time has come for discussions to shift from whether to ban autonomous weapons to how to apply IHL to them in practice.