Beyond Heaven and Hell

"AI, what will it mean? Helpful robots washing and caring for an ageing population, or pink-eyed terminators sent back from the future to cull the human race?"

These are the words of Boris Johnson, the UK Prime Minister, who, in his speech at the UN General Assembly in September 2019, colourfully warned of technologies and pink-eyed Terminators that threaten our very existence. The stories of scary Terminators and sci-fi scenarios appear [see update below] to have dominated the media landscape, with titles such as ‘The Doomsday Invention’, as AI, following its winter, has entered another spring season.

The fears and hopes that accompany intelligent machines have deep roots and have once again been revived by futurist Ray Kurzweil’s recent discussions of the Singularity, and by existential warnings from prominent figures such as Professor Nick Bostrom, Elon Musk, and the late Professor Stephen Hawking.

The response to this barrage of doom is equally interesting. Responding to the recurring theme of the Terminator in all its variations, there has been an impressive pushback, with reports calling out the hyperbole which characterises much of the AI debate. For example, in 2017, Rodney Brooks[3], the former director of the Computer Science and Artificial Intelligence Laboratory at MIT, chastised the “hysteria” that surrounds us about the future of AI and robotics – “about how powerful they will become, how quickly, and what they will do to jobs” – and warned that “Mistaken extrapolations, limited imagination, and other common mistakes […] distract us from thinking more productively about the future.” CognitionX, a key knowledge network platform in AI, has organised conference panels titled “Stop the Terminator chat”, while the prominent philosopher Luciano Floridi went so far, in one of his talks, as to urge despairingly, ‘enough with this Terminator stuff!’.

The key point of those who voice such warnings is that this hype and hysteria blind us not only to the important opportunities, but also to the questions and problems, of the here and now. As Kate Crawford and Ryan Calo argue convincingly, fears about the future impacts of artificial intelligence are distracting researchers from the real risks of already deployed systems.

Instead of participating in these important discussions (perhaps a task for another day), what interests me here is to start the process of observing how they are conducted. In other words, I want to understand the work that the Terminator does in configuring AI, and in particular ethical AI. How does the invocation of this figure – either by highlighting it or by rejecting and pushing against it – configure ethical AI in particular ways?

As neatly illustrated by the PM’s words above, there is a tension between the two figures of the malevolent Terminator and the benevolent robot (which here I read as the embodiment of good, ethical AI, although there is a whole interesting issue of embodiment that I shall leave unexplored for the moment). Yet I think there is more to this relationship, and if I were to push it further I would suggest that, besides tension, there is also a symbiosis going on.

This interesting symbiosis is captured by Lucy Suchman’s acute observation in her post ‘Which Sky is falling?’:

“In much of the coverage of debates regarding AI and robotics it seems that to reject the premise of superintelligence is to reject the alarm that Musk raises and, by slippery elision, to reaffirm the benevolence of AI and the primacy of human control.”

There are two points here that I would like to explore further, starting with this idea of the slippery elision that reaffirms the benevolence of AI.

Proclamations of ‘This isn’t the Terminator’ have, indeed, become somewhat commonplace in their frequency and nonchalance. It is a phrase which is meant to reassure, but in the paternal way of a father who eases his child’s irrational fears, as demonstrated in the words of Mathieu Webster of NHS Shared Business Services, who advocates “helping people understand that it is not something to be frightened of. It’s not the Terminator”. Such phrases are uttered in a sincere and light-hearted way, becoming the perfect carrier for all fears, concerns, and objections. Namely, a way to acknowledge them without really addressing them. Who, after all, is really worried that this is the Terminator! The phrase becomes a shortcut that bundles all of the fears, concerns, and objections together and then disperses them with a fleeting comment, one that works to confirm their naivety and imply their ignorance (and hence, in turn, the naivety and ignorance of the ‘public’ imagined as holding these fears) while implicitly affirming that ‘everything is fine’.

Ultimately, this is a trope which – just like its inverse, which worries about the technological singularity, superintelligence, and distant future impacts – achieves the same result of moving the gaze away from valid fears, problems, and concerns. Yet while the latter, as Crawford has identified, ‘[is] distracting researchers from the real risks of deployed systems’, the former achieves much the same by rendering those concerns and their holders insignificant and naïve. Bringing Crawford, Calo, and Suchman together, I would argue that it is indeed true that worrying about Terminators and superintelligence distracts us from valid concerns. Yet it is also necessary to stress that “Not worrying about superintelligence […] doesn’t mean that there’s nothing about which we need to worry”.

Secondly, I would like to think more carefully about the ‘and’ in Suchman’s quote above. This is an interesting link that connects the benevolence of AI with the assertion and primacy of human control. One that, as I will try to explain, folds in a complex, messy, and contradictory relation, yet one that is still productive and goes places.

To illustrate my point, here is a lengthy quote from Mustafa Suleyman, the co-founder of the AI company Google DeepMind, who in 2016 shared a panel with Suchman and others titled ‘Inequality, Labor, Health, and Ethics in AI’, organised by the AI Now Institute. Responding to the question ‘What would the best AI look like? What would an unbiased, just, fair AI look like?’, he said:

I think the first thing I would say is that we should try to avoid talking about AI as if it’s a person or that it has status, just like a human does. I think we have a tendency to project that into these systems. We have to remember that these are machine learning systems that we design, we control, we direct, and there are problems that we point them at, and we are in control of that. So I would say that a just or a good machine learning system is one that we have control over, that is reasonably transparent, in that we can see the training data that goes into those systems. That we keep attempting to do a better job of understanding the models that have been learned and looking at the classifications and the predictions and the recommendations that are generated; and that the benefits of those systems as we develop them over the coming decades accrue to the majority and not just the few. And I mean that with regard to the world, not just the west.

Suleyman starts with the assertion of control. He objects to talking about AI as a person not because he objects to its anthropomorphisation (as Suchman has written) but because he objects to its being granted agency. As he asserts, “We have to remember that these are machine learning systems that we design, we control, we direct, and there are problems that we point them at and we are in control of that”. Moving well away from AI’s mentalist origins – a tradition which prides itself precisely on this technology’s ability to go beyond the limitations that we might impose on it – Suleyman offers a figuration of an agency-less, neutral, purely instrumental technology that is controlled and manipulated at will (it is interesting that even linguistically he moves from the subject AI to the more abstract, generic, object-like “machine learning systems”). This is in sharp contrast with the human ‘we’, which is presented as the technological puppet-master. Restaging the familiar tension between technology and the free-willed liberal humanist subject, Suleyman relies here on the well-known political (and moral) tradition for which claims to autonomy, agency, and being in control are the sole and defining preserve of the latter.

But not for long.

As he moves to his next sentence, describing what a good or just AI would be, the goalposts shift. Now it is not all machine learning systems that are under our control, but only the good ones, and the concessions – “reasonably transparent […] keep attempting to do a better job of understanding” – keep coming, as the assertion and primacy of control appear to be slipping through our collective human hands.

The morally neutral, instrumental, agency-less figuration of technology gives way to the ‘good AI’, which is deemed good precisely because it remains controllable and benign. The implication, of course, is that things could be or become otherwise, as the possibility of a bad, malevolent AI lurks in the shadows, always in danger of getting out of control. And hence the Terminator re-enters the picture. A controllable, good, ethical, benevolent AI or robot, if we are to come full circle, cannot make sense without the pink-eyed Terminator as its antipode. This demonstrates that the myth of the instrumental, benign, ethical, and good AI is just that: a figure which is very successful and travels frequently and far, yet one which folds into itself contradictions and a mess that can be unfolded, taken apart, and hence potentially put together in different ways.

In ways, for example, that do not deny this technology’s agency, or that enable us to understand this agency not as the technology’s property but as a dynamic attribute manifested through the human, technological, regulatory, corporatist, political, and discursive relations that configure it – hence urging us to broaden our gaze and our analysis.

—— UPDATE ——

In the first paragraph above, I write ‘The stories of scary Terminators and sci-fi scenarios appear to have dominated the media landscape’. Since I wrote this sentence, I have come across the work of Garvey and Maskal (2019), which makes the use of the word ‘appear’ rather pertinent. To explain: Garvey and Maskal published an article with the explanatory title ‘Sentiment Analysis of the News Media on Artificial Intelligence Does Not Support Claims of Negative Bias Against Artificial Intelligence’ (2019). There, the authors ‘examine the belief that negative news media coverage of AI—and specifically, the alleged use of imagery from the movie Terminator—is to blame for public concerns about AI. This belief is identified as a potential barrier to meaningful engagement of AI scientists and technology developers with journalists and the broader public,’ only to conclude that ‘Contrary to the alleged negative sentiment in news media coverage of AI, we found that the available evidence does not support this claim.’ As such, the results of this study add another interesting thread that further complicates and enriches my quest to explore how such stories and tropes form, transform, and travel.