About the project

Configuring ethical AI in healthcare

AI’s promise for healthcare is ambiguous. Calls for the speedy integration of innovative technologies to rescue an over-burdened NHS clash with longstanding principles of medical ethics, raising questions about how such technologies might transform practices. Commitments to develop ethical AI – through the establishment of a Centre for Data Ethics and Innovation, an AI code of conduct, and the RSA’s Forum for Ethical AI – are meant to ease such concerns. Yet, considering that ethical principles (e.g. privacy, responsibility, transparency, trust) are being challenged in this technological landscape (EDPS 2018; Clarke et al. 2006; Ananny and Crawford 2016; Stocker 2016), it is far from obvious what ethical AI is.

This project – funded by the Wellcome Trust and hosted by Lancaster University – will address this question by leveraging insights from feminist Science and Technology Studies, which support a strategic shift from philosophical definitions to a relational sociological analysis of situated processes, asking: how is an ethical AI configured in contemporary healthcare discourses and practices? This shift from the representational to the performative builds on the recognition that figurations of an ethical AI are not only descriptive but also performative, carrying normative implications for policy and for technology development and use. It also recognises that ethics is a contextual socio-material practice, not a feature that can be designed ‘into’ technology (Liegl et al. 2016; Nissenbaum 2009; Introna 2007). On this basis, we can understand AI as a material-semiotic actor (Haraway 1997) whose power lies not only in its computational capabilities but also in the compelling imaginaries and visions it can muster. By understanding the process of its configuration, we hope to open up the possibility of changing it.

EDPS Ethics Advisory Group (2018). ‘Towards a digital ethics’. Report. https://edps.europa.eu/sites/edp/files/publication/18-01-25_eag_report_en.pdf

Ananny, M. and Crawford, K. (2016). ‘Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability’. New Media & Society, pp. 1-17.

Clarke, K., Hardstone, G., Rouncefield, M., Sommerville, I. (2006). Trust in Technology: A Socio-Technical Perspective (Computer Supported Cooperative Work). New York: Springer-Verlag.

Haraway, D. J. (1997). Modest_Witness@Second_Millennium.FemaleMan_Meets_OncoMouse: Feminism and Technoscience. London: Routledge.

Introna, L.D. (2007) ‘Maintaining the reversibility of foldings: Making the ethics (politics) of information technology visible’. Ethics and Information Technology, 9(1), pp.11-25.

Nissenbaum, H. (2009). Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford: Stanford University Press.

Stocker, M. (2016). ‘Decision-making: Be wary of “ethical” artificial intelligence’. Nature, 540(7634), p. 525.