Can Defence Shape Public Perception of Non-volitional Weapon Systems?

What is the difference between blinking your eye and pulling a trigger? Blinking is a non-volitional act: it occurs regardless of your intention, decision or desire. Pulling a trigger, on the other hand, requires a certain level of decision-making; it requires a will, an intention and a deliberate action. But what about locking onto a target simply by looking at it, or discharging a weapon with only a thought? As modern militaries search for operational advantage, the speed with which a human operator can control their equipment becomes paramount. This is fuelling investment in non-volitional machine interfaces, which remove the need for a user to perform voluntary movements to issue commands.

The ethics of autonomous weapon systems have already been extensively discussed. The controversial idea of “killer robots” indiscriminately wreaking havoc among human combatants recalls a litany of popular culture references. From German expressionist Fritz Lang’s 1927 sci-fi epic Metropolis to the more recent Blade Runner, Terminator and Matrix Hollywood franchises, these fictionalised dystopian worlds share a foreboding sense of inevitability: that humanity’s thirst for technological progress will ultimately end in its own self-destruction at the hands of rogue autonomous machines.

While these potential problems have been widely discussed, the ethics of non-volitional weapons have received comparatively little attention. Groups have been campaigning against the use of autonomous weapon systems for years. Action groups such as Article 36 promote the banning of “killer robots”, supported by larger international organisations like Amnesty International and Human Rights Watch. These groups form an organised front against the research and development of autonomous weapons, calling for weapons to remain under human control and for the UK government to actively challenge their use at the UN and other international fora. When it comes to the suite of non-volitional weapons, however, opposition is conspicuously muted. This lack of public profile should not obscure the potential of these systems or the new set of ethical considerations they pose.

Non-volitional weapon systems form part of a larger technology family of Human Machine Interfaces (HMI) which have become ubiquitous in military research laboratories around the world. The HMI suite spans a broad spectrum of technologies and use cases that would be familiar to the general public, from image-recognition cameras to smart clothing and other wearables such as smartwatches. Yet alongside these sits the military’s increasing exploration of non-volitional machine interfaces, including eye-tracking to support target selection and brain-computer interfaces (BCIs) to deliver functions such as drone control.
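To give a concrete sense of how eye-tracking might support target selection, the sketch below shows a minimal dwell-based selection loop: the system infers a “selection” when the estimated gaze point rests on a candidate target for long enough. It is purely illustrative; the dwell threshold, the coordinate scheme and all function names are assumptions made for this example rather than features of any real or proposed system.

```python
from dataclasses import dataclass
from typing import Iterable, Optional, Tuple

# Purely illustrative: a dwell-based gaze selection loop. The dwell threshold
# and the rectangular "target box" are assumptions for this sketch, not
# parameters of any fielded or proposed system.

DWELL_THRESHOLD_S = 0.3  # how long the gaze must rest on a target to count as a selection


@dataclass
class GazeSample:
    t: float  # timestamp in seconds
    x: float  # estimated horizontal gaze position
    y: float  # estimated vertical gaze position


def gaze_in_box(sample: GazeSample, box: Tuple[float, float, float, float]) -> bool:
    """True if the estimated gaze point falls inside the target's bounding box."""
    x0, y0, x1, y1 = box
    return x0 <= sample.x <= x1 and y0 <= sample.y <= y1


def dwell_select(samples: Iterable[GazeSample],
                 box: Tuple[float, float, float, float]) -> Optional[float]:
    """Return the time at which the gaze has dwelt on the box long enough to
    'select' it, or None if no selection occurs. No button is ever pressed:
    the selection is inferred entirely from where the eyes happen to rest."""
    dwell_start = None
    for s in samples:
        if gaze_in_box(s, box):
            if dwell_start is None:
                dwell_start = s.t  # gaze has just arrived on the target
            elif s.t - dwell_start >= DWELL_THRESHOLD_S:
                return s.t  # dwell threshold reached: the selection fires
        else:
            dwell_start = None  # gaze left the target: reset the timer
    return None
```

Even in this toy form, the “decision” is something the software infers from where the operator’s eyes rest, which is precisely the point at which the questions about intent and accountability discussed below arise.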

In a military context, these technologies are designed to increase the speed and intuitiveness with which weapon systems can be deployed, making their control more reflexive. Their use aims to shorten the observe-orient-decide-act (OODA) loop by coupling the control of military systems to the most rapid human responses, specifically reducing the time between deciding and acting. The extent to which this compression blurs the line between a decision and an action remains largely unexamined. Using non-volitional machine interfaces to issue commands to military platforms will require extensive research, public consultation and new doctrinal frameworks to address a plethora of ethical issues.

Public perception of non-volitional weapon systems will largely be informed by the research conducted by the military itself. The US Defense Advanced Research Projects Agency (DARPA) is currently funding research on the potential use of invasive BCIs in remote weaponry directly controlled by the operators’ brain signals. Moreover, a US patent has been granted jointly to Duke University and DARPA relating to “apparatus for acquiring and transmitting neural signals” for the purposes of “weapons or weapons systems, robots or robot systems”. It is expected that these organisations will provide a suitable level of transparency and public scrutiny for such projects, not least because many of these technologies remain untested in military settings and pose a risk to military personnel through blue-on-blue incidents.

One of the key ethical challenges relates to their impact on determining responsibility for military accidents. How accountable can we hold an operator for a non-traditional control input? Physical actions such as button presses seem more obviously to be the deliberate, volitional acts of an individual. Yet where eye-tracking is the input controlling a weapon, it is far harder to distinguish what was intended from what actually occurred.

At the heart of the ethical issues surrounding non-volitional weapon systems lies the philosophical concept of volition. In antiquity, Aristotle was the first to establish the philosophical connection between volition and the will. This was built upon by Christian thinkers including St Augustine and Thomas Aquinas, who framed volition around the will of God, and later by the post-Cartesians, who understood volition as an inner mental event resulting in a physical act, such as a voluntary movement of the body.

The conceptualisation of volition took a sharp philosophical turn with Kant and Schopenhauer. Kantian deontological ethics recognised that a person who regularly thinks bad thoughts and yet does good deeds may prove to be a moral exemplar. This approach places a moral sense of duty above the gift of being virtuous and has been absorbed into Western concepts of criminal law, which prosecutes those who perform bad deeds rather than those who simply contemplate them. Non-volitional machine interfaces potentially disrupt this widely accepted position by merging the internal (thoughts about actions) with the external (execution of the actions themselves).

In 2019, the Royal Society commissioned a public dialogue to examine attitudes to neural interfaces. The project involved 73 participants from a “broadly representative demographic of society” with varying propensities to adopt new technologies. Nearly all participants approved of using neural interfaces for medical purposes. Non-medical uses were met with far greater caution, although some, such as enhancing entertainment experiences, were considered positive developments for society. The main areas of concern centred on equality of access, control and transparency of data use, and a potential future in which minds could be read or behaviour controlled without prior consent. The study suggests that the public’s perception of non-volitional machine interfaces is intimately connected with their specific use, meaning that public attitudes towards non-volitional weapon systems could depend on how the final use case is presented and articulated by Defence.

The most taxing ethical questions for military systems using non-volitional machine interfaces concern the moral and legal responsibility for unintended consequences while operating weapons. In both eye-tracking and BCIs, the relationship between human intent and action is mediated by an additional technological layer of interpretation. Complex and opaque algorithms and models are required to interpret the detected signals (whether light reflecting off the cornea or brain waves) and determine where the eyes are pointing or what is being thought. Even as the technologies improve, there are likely to be enduring issues with the reliability of both the signal measurement and its interpretation. So while military personnel may use these systems to perform an action, it may not be clear what specific action was actually intended. There is little room for ambiguity in the act of pulling a trigger or pressing a button, but non-volitional machine interfaces will capture inputs from their human users that are far less binary. This is likely to make it far more difficult to pass judgement on where responsibility ultimately lies.
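A minimal sketch can make this interpretive layer concrete. Below, a stand-in for the opaque model converts a window of pre-processed signal features into a confidence score, which is then forced through a threshold to become a binary engage-or-hold decision. The averaging “model”, the 0.9 threshold and the command names are assumptions made purely for illustration, not features of any real or proposed system.

```python
import statistics
from typing import Sequence

# Purely illustrative: turning a noisy, non-binary estimate of operator intent
# into a binary act. Everything here (the averaging "model", the threshold,
# the command names) is an assumption made for the sake of the example.

ENGAGE_THRESHOLD = 0.9  # minimum confidence before the inferred intent is acted on


def interpret(intent_features: Sequence[float]) -> float:
    """Stand-in for the opaque interpretation layer. Here it simply averages a
    window of pre-processed 'intent' features into a 0-1 confidence score; a
    real system would use a trained model whose reasoning the operator cannot
    inspect."""
    return statistics.fmean(intent_features)


def decide(intent_features: Sequence[float]) -> str:
    """Force the graded confidence estimate into a binary command."""
    confidence = interpret(intent_features)
    return "ENGAGE" if confidence >= ENGAGE_THRESHOLD else "HOLD"


# Two signal windows that may reflect near-indistinguishable mental states:
print(decide([0.93, 0.91, 0.89]))  # ENGAGE (mean confidence ~0.91)
print(decide([0.91, 0.89, 0.87]))  # HOLD   (mean confidence ~0.89)
```

The gap the sketch exposes is the one described above: a confidence of 0.91 and one of 0.89 may arise from indistinguishable mental states, yet only one results in a weapon being discharged, and neither maps cleanly onto a courtroom notion of a volitional act.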

There have been calls for the UK government to step up its investigation of non-volitional technologies in various contexts. The Royal Society report warned that a failure by the government to act in this area may leave innovation to be led by for-profit companies at the expense of government. The report’s recommendations included a call for a “national investigation” into the ethical issues presented by neural interfaces, addressing what data should be collected, how it should be kept safe, and the acceptability of enhancements.

There is certainly a case for expanding military doctrine to provide greater clarity on the use of non-volitional weapons. The points where command and control intersect with non-volitional weapons will need to be incorporated into existing legal principles, such as criminal accountability for war crimes. Even if the use of a weapon such as a BCI-guided drone or missile is not prohibited per se under international criminal law, criminal liability may still apply to military personnel who use such weapons to commit crimes under domestic or international law.

Non-volitional weapon systems also present significant challenges for the legal system. They cloud a court’s ability to determine whether a volitional act occurred before a weapon was discharged, and may make it harder to prove guilt for misdirected attacks on civilians. The central problem is that non-volitional weapon systems deprive the actor of authority over his or her actions, potentially rendering them unaccountable because they may lack conscious control over weapons capable of lethal force.

Building a broad public consensus in support of non-volitional weapon systems will be very difficult. First, the issue is likely to be conflated with the debate over “killer robots” that dominates public consciousness. Second, there will be privacy concerns about technologies that are seen as able, in some sense, to read minds or intuit intentions. Third, there could be strong resistance to the notion that an intention to act should be interpreted as a command to act. Finally, the application of imperfect technologies to life-or-death decisions in a military context is unlikely to escape scrutiny. To overcome these barriers to acceptance, the advantages to military capability will need to be significant and incontrovertible. Continuing to fund research into non-volitional weapon systems without painstakingly addressing legitimate ethical concerns and robustly establishing the potential benefits risks wasting a considerable portion of the Defence science and technology budget. Ultimately, we might decide that while using a joystick, aiming a weapon manually or pressing a sequence of buttons are ponderously slow on today’s lightning-paced battlefield, the non-volitional alternatives bypass something too precious to lose: human will.