Shane Riza
The Spectra of Impunity in Warfare
Ideas about killing from ever greater distance and with ever increasing force are as old as warfare itself, as is the penchant for warriors to invent ways of protecting themselves. These drive the evolutionary stages of the weapons and counter-weapons of war. Evolution that increases lethality for one side relative to the other always causes perturbations in thinking about what is right, wrong, and legal in war. Such perturbations anchor the argument for continuing to sharpen the technological edge and remove soldiers ever further from risk. From a limited view of morality, many claim an imperative to protect soldiers by any means feasible. Many claim increasing automation in warfare follows a basic vector of technology, and that norms of behavior will change to catch up with the technology just as they did in the past. Many claim the slide toward robotic war is simply the next step along the trend line of seeking greater distance from killing in battle. They are wrong. Nothing in our past has prepared us for an artificial intelligence that may decide to kill.
Three trends shape warfare, carrying it faster, farther, and longer than ever before. These trends are enabled by technologies – some lethal, some mundane – that drive war ever closer to an alteration threatening to strip it of its very meaning as a human activity. The drive for total impunity in warfare carries far more moral weight than claims that simple distance from the killing has always been part of armed conflict. The confluence of technology trends and the desire for impunity creates moral issues in warfare that humans have never dealt with before. The most ominous for the future of the Just War tradition is a “risk inversion” whereby non-combatants will, for the first time, face greater risk than those charged with the fighting. We will explore this concept and its consequences for the future of meaningful war in “The Spectra of Impunity in Warfare.”
Derek Gregory
The God trick and the administration of military violence
Advocates have made much of the extraordinary ability of the full-motion video feeds from Predators and Reapers to provide persistent surveillance (‘the all-seeing eye’), such that they become vectors of the phantasmatic desire to produce a fully transparent battlespace. Critics – myself included – have insisted that vision is more than a biological-instrumental capacity, however, and that it is transformed into a conditional and highly selective visuality through the activation of a distinctively political and cultural technology. Seen thus, these feeds interpellate their distant viewers to create an intimacy with ground troops while ensuring that the actions of others within the field of view remain obdurately Other.
But the possibility of what Donna Haraway famously criticised as ‘the God-trick’ – the ability to see everything from nowhere in particular – is also compromised by the networks within which these remote platforms are deployed. In this presentation I revisit an air strike on three vehicles in Uruzgan province, Afghanistan, in February 2010, in which 20 civilians were killed. Most commentaries – including mine – have treated this in terms of a predisposition on the part of the Predator crew involved to (mis)read every action by the victims as a potential threat. But a close examination of the official investigations that followed, by the US Army and the US Air Force, reveals a much more complicated situation. The Predator was not the only ‘eye in the sky’: its feeds entered into a de-centralized, distributed and dispersed geography of vision in which different actors at different locations saw radically different things, and the breaks and gaps in communication were as significant as the connections. In short, much of later modern war may be ‘remote,’ but there is considerably less ‘control’ than most people think.
Christiane Wilke
The Optics of Bombing: International Law and the Visibility of Civilians
The distinction between civilians and combatants is fundamental to the international law of armed conflict, which prohibits the intentional targeting of civilians. How do participants in armed conflict recognize civilians? While combatants are obliged to carry their weapons openly, to wear uniforms, and to have a “distinctive sign recognizable at a distance,” there is nothing distinctive about civilians. Many debates about the legality and ethics of aerial bombardments focus on the ability of new technologies to observe the distinction between civilians and combatants. Through a case study of a single NATO air strike in Afghanistan, this paper shows that the vital distinction between civilians and combatants is constitutively blurry. Yet this distinction is continuously enacted by participants in conflicts through visual technologies that range from close observation to aerial surveillance. As a result of specific visual technologies and practices of violence, the burden of visually distinguishing themselves has shifted from combatants to those who would like to be considered civilians.
Patrick Crogan
Un-securing the Territory
This talk will explore robotic weapons systems as leading-edge technology that exemplifies and intensifies central questions of contemporary technoculture. My approach, inspired by the philosophy of technology, examines drones from longer and shorter historical distances – from their inheritance of military-influenced ‘classical’ Greek formulations of mathematics, tekhne and architecture, to the ‘classic’ questioning concerning modern technology articulated from the 1940s by the post-war, post-Nazi Martin Heidegger, who described the action of modern technics as an ambivalent ‘un-securing’ and ‘harbouring-forth’. The resort to philosophical conceptions to discuss such a material, politically and morally urgent phenomenon is arguably (perhaps even undoubtedly) problematic, but it is defensible on two grounds, as I hope to show. First, nothing is materialised (by ‘we’ humans at least), and nothing becomes political or attains moral agency or impact, without abstraction resulting from philosophical and conceptual labour – materiality is composed with ideas just as the reverse is true. Secondly, as Bernard Stiegler has argued, time may be short in the accelerating speed race of technological change, but this is all the more reason to take some time to come to terms with the dynamics of the transforming situation in which ‘we’ humans, under or potentially under drones, try to find ‘ourselves’ today.
Jutta Weber
Making Algorithms Kill. On the Epistemic Logic of Robotic Warfare
Traditionally, autonomy was understood as the individual self-determination of the liberal subject and signified the essential difference between humans and animals or machines. These boundaries were blurred with the rise of new sciences such as systems theory and cybernetics. The new concept of systems made it possible to black-box organic as well as technical entities and thereby make them compatible. Autonomy was redefined as self-regulation, adaptation and self-exploration – not only in cybernetics but also in today’s behaviour-based robotics. In contrast, symbolic AI held on to a more traditional, top-down understanding of intelligence and autonomy, hoping to rebuild human thinking in silico.
In my paper I explore the diverse epistemological and (techno)scientific principles and rationalities that underlie contemporary automated/autonomous weapon systems. By analysing their mechanisms and effects, I hope to foster a better understanding of the complex human-machine networks that make algorithms kill.