Thoughts and Reflections

Ethical AI – the ‘holy grail’ of innovation?

It has been more than half a century since Artificial Intelligence (AI) entered public consciousness cloaked in an air of shock and awe. Visions of futuristic utopias, where intelligent agents would facilitate our lives at every step, or where fears of mortality would be eased by downloadable brains and other technological fixes, have been accompanied by fears that these uncannily human technological companions would eventually turn against us – now more intelligent than we are, and hence arguably impossible to contain and control. But as robots prove more awkward than intelligent, and the masculinist visions of a ‘strong AI’ ever more difficult to realise, a different story is coming to the foreground. One far more mundane, yet no less socially disruptive.

Indeed, the story has shifted from utopian/dystopian masculinist visions of AI to more mundane, instrumental visions of tools that help us address key everyday problems, promising advances in healthcare, transport, security, policing, the humanitarian sector, social services and beyond.

But all the promises of the benefits that AI can deliver are accompanied by warnings of the risks that such technologies also carry. And not without good reason. From data privacy violations, to discrimination and bias, to evidence of automated systems that entrench inequalities and target the poor (Eubanks 2018), there are plenty of valid reasons to be wary of these new technologies. It is in this landscape that public commitments to an ethical and trustworthy AI strike a reassuring note.

Indeed, the ambition of AI visions is matched by numerous public declarations of earnest commitment to ‘get this right for the benefit of all humanity’. These have led to various efforts, such as the creation of the Centre for Data Ethics and Innovation (CDEI) in the UK, the adoption of the AI for Humanity strategy in France, and the establishment of the global tech industry consortium Partnership on Artificial Intelligence to Benefit People and Society, which counts Google, Amazon, Facebook and Microsoft among its partners.

Such high-profile commitments notwithstanding, as the recent collapse of Google’s ethics board demonstrates, the task is not proving easy, leading me to wonder: has ethical AI become the holy grail of innovation – almost mythical in its existence, yet powerful enough to capture the imagination of industry, academics and policy-makers alike?

About the project

Configuring ethical AI in healthcare

AI’s promise for healthcare is ambiguous. Calls for the speedy integration of innovative technologies to rescue an over-burdened NHS clash with longstanding principles of medical ethics, raising questions about how such technologies might transform practices. Commitments to develop ethical AI – by establishing a Centre for Data Ethics and Innovation, an AI code of conduct, and the RSA’s Forum for Ethical AI – are meant to ease such concerns. Yet, considering that core ethical principles (privacy, responsibility, transparency, trust) are being challenged in this technological landscape (EDPS 2018; Clarke et al. 2006; Ananny and Crawford 2016; Stocker 2016), it is far from obvious what ethical AI is.

This project – funded by the Wellcome Trust and hosted by Lancaster University – will address this question by leveraging insights from feminist Science and Technology Studies, which support a strategic shift from philosophical definitions to a relational sociological analysis of situated processes, asking how an ethical AI is configured in contemporary healthcare discourses and practices. Such a shift from the representational to the performative builds on the recognition that figurations of an ethical AI are not only descriptive but also performative, carrying normative implications for policy, technology development and use. It also recognises that ethics is a contextual socio-material practice, not a feature that can be designed ‘into’ technology (Liegl et al. 2016; Nissenbaum 2009; Introna 2007). This basis enables us to understand AI as a material-semiotic actor (Haraway 1997) whose power lies not only in its computational capabilities but also in the compelling imaginaries and visions it can muster. By understanding the process of its configuration, the hope is that we open up the possibility of changing it.

EDPS Ethics Advisory Group (2018) ‘Towards a digital ethics’. Available at: https://edps.europa.eu/sites/edp/files/publication/18-01-25_eag_report_en.pdf

Ananny, M. and Crawford, K. (2016) ‘Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability’. New Media & Society, pp. 1–17.

Clarke, K., Hardstone, G., Rouncefield, M. and Sommerville, I. (2006) Trust in Technology: A Socio-Technical Perspective (Computer Supported Cooperative Work). New York: Springer-Verlag.

Haraway, D. J. (1997) Modest_Witness@Second_Millennium.FemaleMan_Meets_OncoMouse: Feminism and Technoscience. London: Routledge.

Introna, L. D. (2007) ‘Maintaining the reversibility of foldings: Making the ethics (politics) of information technology visible’. Ethics and Information Technology, 9(1), pp. 11–25.

Nissenbaum, H. (2009) Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford, CA: Stanford University Press.

Stocker, M. (2016) ‘Decision-making: Be wary of “ethical” artificial intelligence’. Nature, 540(7634), p. 525.