In 2002, the Magna Science Centre in South Yorkshire witnessed a surprising event: a two-foot-tall robot, Gaak, escaped from a gladiatorial experiment with learning robots. The experiment, part of the “Living Robots” project, simulated a predator-and-prey scenario in which some robots searched for food (prey) and others hunted them (predators).

Gaak, a predator, was left unattended for fifteen minutes and, in that time, managed to find and navigate along a barrier, find a gap, move through it and continue across a car park to the M1 motorway.

Gaak was found rather quickly when a motorist almost collided with it.  This story of robot liberation helps us to understand a simple fact about learning machines: they are unpredictable.

This should guide us when thinking through the role of artificial intelligence and robotics in contemporary warfare, especially if we think there are morally right and wrong ways of using lethal force.

While we have no evidence that learning weapons systems are currently being deployed in combat, we are at a critical juncture in terms of how far artificial intelligence (AI) is utilized on weapons systems or platforms, and whether such learning weapons will be deployed in the future.

One major concern for deploying learning weapons systems is that they cannot be adequately tested, verified or validated because they are constantly changing, developing new capacities, and are not static traditional systems.

Another is that they may be vulnerable to hijacking by an adversary, either for use against friendly troops or to commandeer the platforms to gather intelligence. This is a particular worry when the system constantly requires new sensor data, because this data could potentially be “spoofed” to mimic safe code while in reality being malicious.


Rising to the latter challenge, the US Defense Advanced Research Projects Agency (DARPA) recently unveiled a project whose ultimate goal is to produce software “that can write near-flawless code on its own”.

The project, the High Assurance Cyber Military Systems (HACMS) [pronounced Hack-Ems], is DARPA’s solution to software vulnerabilities on unmanned systems, including aerial vehicles, ground vehicles, sensors, and control systems (such as SCADA systems).

HACMS uses formal mathematical models to automatically generate new code that is invulnerable to spoofing attacks that might attempt to directly gain control of an operating or control system.
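The “correct by construction” idea behind such formal approaches can be loosely illustrated in a few lines of Python. This is a toy analogy, not DARPA's actual toolchain, and every name and number below is invented for illustration: an implementation is generated from a declarative specification, and a safety property is then machine-checked against it over the whole modelled input domain.

```python
# Toy analogy of "correct by construction" code generation.
# All names and numbers are illustrative assumptions, not HACMS itself.

SPEC = {"min_alt": 50, "max_alt": 500}   # hypothetical flight envelope

def generate_clamp(spec):
    """'Generate' a controller from the spec: clamp commanded altitude."""
    lo, hi = spec["min_alt"], spec["max_alt"]
    return lambda cmd: max(lo, min(hi, cmd))

controller = generate_clamp(SPEC)

# Verification step: machine-check the safety property for every
# input we model, rather than trusting hand-written control code.
assert all(SPEC["min_alt"] <= controller(c) <= SPEC["max_alt"]
           for c in range(-1000, 2000))
```

Real formal-methods toolchains prove such properties over unbounded inputs with mathematical logic rather than exhaustive testing, but the division of labour is the same: the specification, not the programmer, is the source of truth.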

While HACMS differs from Gaak in that it does not learn through goal-driven algorithms, it will more than likely be part of a software package that does. HACMS will act as an automated software compiler, secure code generator, and operating system that guards against incoming cyber threats by sniffing out potential intrusions. In short, HACMS is the cyber protector for the (learning) machine.

What does all of this technology have to do with military ethics?  Well, let’s start with the fact that Gaak is a technological relic by today’s standards, but still managed to escape the confines of its creators – both literally and figuratively – despite its relative simplicity.

Today we are facing the potential for much more sophisticated learning machines that have the capacity to generate near perfect code and can guard against external manipulation.

This is not to say that they are as intelligent as human beings (or that they are “super” intelligent). But we will almost certainly be dealing with some version of “intelligence” when states create and produce learning weapons and deploy them against a variety of perceived “threats.”

From an ethical perspective, the overarching worry is that if one gives a machine both the ability to learn while deployed and an architecture that perpetually generates and checks its own code, making it virtually unhackable, then one has created a very dangerous machine indeed.

We will have created an artefact that can decide which orders to follow, or at the very least which sub-tasks under a general order to follow.

Giving a system the ability to change its sub-goals may inadvertently mean that one day it begins changing the goals themselves. This would take war, and the moral evaluation of lethal acts during war, out of the hands of humans. This leads to two specific ethical concerns.


Firstly, while we would surely want somewhat intelligent machines to be able to refuse unjust (or illegal) orders, they may also decide against saying “no” to patently unjust wars, and to unjust actions within wars, because they may have learned that these actions are the most expedient way to achieve their specified goal. Secondly, and conversely, they may opt to say “no” to just orders.


Moreover, learning systems are capable of self-improvement, and at present we do not know at what rate such a system would improve, or in which direction it would develop, in complex and unconstrained environments. To get a better sense of the potential for rapid and almost unbounded learning, consider the tech industry’s gains in this area.

In 2014 Google DeepMind unveiled a generic AI that had the capacity to learn and succeed at a wide range of Atari 2600 games, almost overnight.
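The principle behind such systems can be illustrated with a tiny tabular Q-learning loop. This is a deliberately simplified sketch (DeepMind’s Atari agent used deep neural networks), but it shows the key point: the agent’s behaviour is driven by values learned from reward, not by hand-written rules.

```python
import random

# Minimal tabular Q-learning sketch: a corridor of 5 states,
# start at state 0, reward only on reaching state 4.
random.seed(0)

N_STATES = 5
ACTIONS = [+1, -1]              # step right or left
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.1

for _ in range(500):            # training episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit learned values, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: nudge the value toward reward plus
        # the discounted best value of the next state
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# After training, the greedy policy in every non-terminal state:
policy = [max(ACTIONS, key=lambda act: q[(s, act)])
          for s in range(N_STATES - 1)]
print(policy)  # learns to move right in every state: [1, 1, 1, 1]
```

Nothing in the loop tells the agent to move right; that behaviour emerges from reward alone, which is exactly why the trained behaviour of such systems is hard to predict in advance.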

This week it unveiled its capacity to win at 3-D maze games through “sight.” Next month, we will see whether the company’s claims to have mastered the Chinese game Go hold up when its AI player AlphaGo takes on the world Go champion Lee Sedol.

While DeepMind has emphatically rejected the use of its technology for military purposes, it is not the only game in town. That such learning is possible means others are likely developing it too.

Taking this type of technological capacity and weaponizing it carries great risk. For even if we worry about fallible humans waging unjust wars, the creation of machines capable of deciding their own actions in (or out of) war is at least morally questionable.

Moreover, these are the emerging technologies that will change the face of combat.  Just war theorists will do well to remember that.  While we can talk in the abstract about “weapons” and “war,” focusing only on the logic of the argument, we often fail to begin from the correct assumptions about how war is presently fought.

Instead, many contemporary just war theorists start from outmoded or idealized conceptions of what war is like, ignoring empirical realities about modern combat.

For example, many current missile systems have the capacity for automatic target correlation (ATC).  This is where an image of a pre-selected target is loaded onto the missile’s onboard computer.  Missiles like this typically fly (automatically) to a location, then scan that location with sensors looking for a “match” to the picture of the target.

When they find that match, they “decide” and fire on the target.
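A minimal sketch of such a matching loop, with bit-strings standing in for images and every name and threshold invented for illustration (this is not a real weapons API), shows how thin this kind of “decision” really is: a similarity score crossing a threshold.

```python
# Hypothetical sketch of an automatic-target-correlation loop.
# All names and thresholds are illustrative assumptions.

def correlation(template, frame):
    """Fraction of agreement between a stored target image and a sensor frame."""
    matches = sum(1 for t, f in zip(template, frame) if t == f)
    return matches / len(template)

def scan_for_target(template, sensor_frames, threshold=0.9):
    """Return the index of the first frame that 'matches' the pre-loaded
    image closely enough, or None if nothing crosses the threshold."""
    for i, frame in enumerate(sensor_frames):
        if correlation(template, frame) >= threshold:
            return i          # the missile "decides": this is the target
    return None               # no match: no engagement

# Toy demonstration with bit-strings standing in for images:
target = [1, 0, 1, 1, 0, 1, 0, 1]
frames = [
    [0, 0, 0, 0, 0, 0, 0, 0],   # no resemblance
    [1, 0, 1, 1, 0, 1, 0, 0],   # 7/8 agreement: below the 0.9 threshold
    [1, 0, 1, 1, 0, 1, 0, 1],   # exact match: engage
]
print(scan_for_target(target, frames))  # -> 2
```

The ethically loaded step is hidden in the threshold and the template: everything that counts as “the target” was fixed before launch, and the machine merely executes the comparison.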

This type of precision guided munition is presently fielded and often deployed.   While this raises its own ethical worries, more troubling still is the technology’s potential for automatic targeting without using pre-loaded images.

For example, if a state deploys a learning weapons system, how will that system “target” combatants? Will it utilize facial recognition? Will it draw on intention recognition, gesture recognition, connections to surveillance systems, or even other intelligence, surveillance and reconnaissance inputs? What proxies will it utilize as indicators of “unjust threat” or “combatant”?

Will that targeting be more or less reliable in upholding the principles of just war than the average human soldier? All these practical questions need to be addressed when making all-things-considered judgments about the morality of war.


In addition, we should extend our ethical discussion of AI weapons more broadly, beyond narrow questions of targeting criteria.  What would be the wider effects of creating and deploying such machines? And how would these impact upon the practice of war?

We know, for example, that weapons development and modernization is inevitable, and some might say preferable if those weapons cause less collateral damage and minimize civilian harm.

Some might even claim that weapons development yields a net benefit to society when the technology “trickles down” to the general public, through things like the Global Positioning System (GPS), the Internet, self-driving cars or automatic pilots. But what of learning weapons with perfect self-generating code?

I cannot here answer this question fully.  However, I can point to a few areas for consideration.  First, we would want to consider seriously who is developing the weapons and for what purpose.  We might then have a very good idea of the likelihood of beneficial “trickle down.”

Second, we would want to take into our estimations whether these machines will be isolated platforms “thinking” on their own, or whether they will be connected to one another and to other systems, networks and databases.

The extent of their reach may have serious deleterious effects beyond the battlespace.  And finally, I would raise the empirical point that whenever something is developed, it spreads.  As in the case of “drones,” when technology is relatively cheap, highly desirable and generally effective, rapid proliferation inevitably follows.

For example, the RAND Corporation estimates that every country in the world will possess armed drones within 20 years. It is no great leap to think that weaponized AI technology will likewise not remain solely in the hands of states.

As with Gaak’s escape, we see yet another example of the ‘Sorcerer’s Apprentice’ problem on the horizon. Technology often liberates itself from the hands and intentions of its creators. This is not always a problem, but those concerned with the ethics of warfare should be well aware of its likelihood as well as its potential effect on the future of combat.

 

 

Credits:

Article published on OxPol and The Ethical War Blog of the Stockholm Centre for the Ethics of War and Peace

Featured image: DARPA

Audio: SoundCloud/WBEZ’s Worldview

Video: YouTube/BAE Systems

 

CC BY 4.0 Escape of the Gaak: new technologies and the ethics of war by OxPol is licensed under a Creative Commons Attribution 4.0 International License.