Thoughts and Reflections

AI in Medicine

The book Future Morality has just been published by Oxford University Press.

The book includes a chapter co-authored by Angeliki Kerasidou and myself, titled ‘AI in Medicine’.

Abstract: AI promises major benefits for healthcare. But along with the benefits come risks. Not so much the risk of powerful super-intelligent machines taking over, but the risk of structural injustices, biases, and inequalities being perpetuated in a system that cannot be challenged because nobody actually knows how the algorithms work. Or, the risk that there might be no doctor or nurse present to hold your hand and reassure you when you are at your most vulnerable. There are many initiatives to come up with ethical or trustworthy AI and these efforts are important. Yet we should demand more than this. Technological solutionism and the urge to “move fast and break things” often dominate the tech industry but are inappropriate for the healthcare context and incompatible with basic healthcare values of empathy, solidarity, and trust. So how can such socio-political and ethical issues get resolved? It is at this juncture that we have the opportunity to imagine different futures.

Using Grace’s fictional story, this chapter argues that in order to shape the future of healthcare we need to decide whether, to what extent, and how—under what regulatory frameworks and safeguards—these technologies could and should play a part in this future. AI can indeed improve healthcare but, instead of casting ourselves loose and at the mercy of this seemingly inevitable technological drift, we should be actively paddling towards a future of our choice.

You can find our chapter (and the rest of the book) here.

 

Before and beyond trust: reliance in medical AI

Our article ‘Before and beyond trust: reliance in medical AI’ has just been published in the Journal of Medical Ethics.

Here is the abstract:

Artificial intelligence (AI) is changing healthcare and the practice of medicine as data-driven science and machine-learning technologies, in particular, are contributing to a variety of medical and clinical tasks. Such advancements have also raised many questions, especially about public trust. As a response to these concerns there has been a concentrated effort from public bodies, policy-makers and technology companies leading the way in AI to address what is identified as a “public trust deficit”. This paper argues that a focus on trust as the basis upon which a relationship between this new technology and the public is built is, at best, ineffective, at worst, inappropriate or even dangerous, as it diverts attention from what is actually needed to actively warrant trust. Instead of agonising about how to facilitate trust, a type of relationship which can leave those trusting vulnerable and exposed, we argue that efforts should be focused on the difficult and dynamic process of ensuring reliance underwritten by strong legal and regulatory frameworks. From there, trust could emerge but not merely as a means to an end. Instead, as something to work in practice towards; that is, the deserved result of an ongoing ethical relationship where there is the appropriate, enforceable and reliable regulatory infrastructure in place for problems, challenges and power asymmetries to be continuously accounted for and appropriately redressed.

You can find the article here.

Kerasidou CX, Kerasidou A, Buscher M, Wilkinson S. ‘Before and beyond trust: reliance in medical AI’, Journal of Medical Ethics.

WHO – Ethics & Governance of Artificial Intelligence for Health

The World Health Organisation (WHO) has just published its first global report on Artificial Intelligence (AI) in health.

The report is titled: Ethics and governance of artificial intelligence for health: WHO guidance

It identifies the following key ethical principles for the use of AI for health:

  • Protecting human autonomy.
  • Promoting human well-being and safety and the public interest.
  • Ensuring transparency, explainability and intelligibility.
  • Fostering responsibility and accountability.
  • Ensuring inclusiveness and equity.
  • Promoting AI that is responsive and sustainable.

You can find the report here.

 

Interested in shaping the UK’s National AI Strategy?

The AI Council, supported by The Alan Turing Institute, has put out a call inviting those researching, developing, working with, or using AI technologies to share what they think is important for the National AI Strategy.

The National AI Strategy will be written by The Office for AI (OAI), the unit responsible for overseeing the implementation of the Government’s AI and Data Grand Challenge. The survey forms one part of the wider information-gathering work the OAI is undertaking to help inform the National AI Strategy.

The questions centre around the four components of the AI Roadmap:

  • Research and innovation
  • Skills
  • Data, infrastructure, and public trust
  • National cross-sector adoption

The deadline is Sunday 20 June 2021, 23:59 BST.

Take Part In The Survey

Methodological adventures

This project set out to explore the way that ethical AI is being figured in healthcare, with a particular focus on the UK, and with the aim of opening up possibilities for alternative reconfigurations. Such a sentiment is at the heart of a feminist STS perspective, which seeks to unsettle dominant narratives and voices and open up possibilities for new ways of understanding, designing and living with technology (see Suchman 2007).

While my original thinking was to bring together people and practitioners who design, work with and live with these technologies and, with them, co-design alternative figurations and come up with alternative stories for ethical AI, Covid-19 forced a rethink.

The disruption of the pandemic meant that getting healthcare practitioners, and people in general, physically into the same room was now impossible. This limitation presented an opportunity, as it forced deeper methodological thinking.

If people and settings (in this case, those connected with the healthcare setting) are out of reach and out of bounds, where and how can we find these alternative stories and figurations? What is the role of technology in this case, when we all sit in front of our screens on Zoom, Teams or other online platforms trying to make a connection?

But was it ever thus? Is there such a thing as a pure/unmediated interaction, or is this technologically mediated interaction changing the stories that can be told?

The lessons of feminist STS tell us that, indeed, these different comings-together are changing the stories that can be told and the figurations that emerge. But they remind us that it was ever thus! In Verran’s words, ‘[e]xperience and stories of that experience are not the same thing’ (2002: 731). The stories and figurations are not immutable mobiles that pre-exist in the minds of our participants, which we can access, successfully or not, by cleverly devising ways to get them out of their brains and into our notepads!

These stories and figurations are performed, worked and re-worked, shaping and shaped by our very own methods (Lury and Wakeford 2012). As my own research shows, they are messy and inconsistent, yet they can still hold in places; sometimes they travel successfully, other times not (Kerasidou 2019; 2017).

With these points firmly and freshly in mind, and with the ‘disruptive innovation’ opportunity presented by Covid, the workshop for this project was reimagined and redesigned.

We allowed ourselves to experiment and explore with no clear sight of where this would lead us, but with a firm belief that it would ‘move’ us to a different, alternative place. That’s all we could ask. And that is a lot!

Trust in AI

There are ongoing discussions about the possibility of AI making doctors obsolete versus the importance of doctors’ humanity and humans’ unique ability to care, empathise and be trusted. Recently, Hatherley (2020) argued that, while capable of being reliable, AI systems cannot be trusted and are not capable of being trustworthy because “[t]rusting relations […] are exclusive to beings with agency”, which, according to the author, nonhuman AI systems lack.

Such views rely on traditional humanistic theorisations which pit ‘cold technologies’ against ‘warm human care’.[4] However, sociological studies have shown that relations between humans and technologies are not only functional but also social, affective, playful and frustrating, and often all of the above at the same time [ref]. They (re)configure each other in particular ways, allowing practices such as treating, caring and trusting to shape and unfold in different ways, some successful, others less so [ref the logic of care].

Changing our analytic perspective from the humanistic dipole of human vs. machine to human-machine configurations allows us to recognise that interpersonal trust between doctors and patients develops not (only) because they are both human, but because there is an orchestration of bodies, materials, knowledge systems, technologies and regulatory bodies which work together to enable and foster this trust in practice (Buscher). To put it simply, when we enter a doctor’s office, we don’t trust our health to any human who happens to be present but to the one who wears the stethoscope.

Yet this does not mean that trust in AI, as it is currently developed, is or should be granted. Instead, as we argue here, a focus on trust as the basis upon which a relationship between AI and the public is built is, at best, ineffective, at worst, inappropriate or even dangerous, as it diverts attention from what is actually needed to actively warrant trust. Instead of agonising about how to facilitate trust, a type of relationship which can leave those trusting vulnerable and exposed, efforts should be focused on the difficult and dynamic process of ensuring reliance underwritten by strong legal and regulatory frameworks. From there, trust could emerge but not merely as a means to an end. Instead, as something to work in practice towards; that is, the deserved result of an ongoing ethical relationship where there is the appropriate, enforceable and reliable regulatory infrastructure in place for problems, challenges and power asymmetries to be continuously accounted for and appropriately redressed.

Beyond Heaven and Hell

“AI, what will it mean? Helpful robots washing and caring for an ageing population, or pink-eyed terminators sent back from the future to cull the human race?”

These are the words of Boris Johnson, the UK Prime Minister, who in his speech at the UN General Assembly in September 2019 colourfully warned against technologies and pink-eyed Terminators that will threaten our very existence. The stories of scary Terminators and sci-fi scenarios appear [see update below] to have dominated the media landscape with titles such as ‘The Doomsday Invention’ as, following its winter, AI has entered another spring season.

The fears and hopes that accompany intelligent machines have long roots and have once again been revived by recent discussions on the Singularity by futurist Ray Kurzweil, and existential warnings by prominent figures such as Professor Nick Bostrom, Elon Musk, and the late Professor Stephen Hawking.

The response to this barrage of doom is equally interesting. Responding to the recurring theme of the Terminator in all its variations, there has been an impressive pushback, with reports calling out the hyperbole which characterises much of the AI debate. For example, in 2017, Rodney Brooks[3], the former director of the Computer Science and Artificial Intelligence Laboratory at MIT, chastised the “hysteria” that surrounds us about the future of AI and robotics – “about how powerful they will become, how quickly, and what they will do to jobs” – and warned that “Mistaken extrapolations, limited imagination, and other common mistakes […] distract us from thinking more productively about the future.” CognitionX, a key knowledge network platform in AI, has organised conference panels titled “Stop the Terminator chat”, while the prominent philosopher Luciano Floridi went so far in one of his talks as to urge, despairingly, ‘enough with this Terminator stuff!’.

The key point of those who voice such warnings is that this hype and hysteria blinds us to the important opportunities, but also questions and problems of the here and now. As Kate Crawford and Ryan Calo argue convincingly, fears about the future impacts of artificial intelligence are distracting researchers from the real risks of the already deployed systems.

Instead of participating in these important discussions (perhaps a task for another day), what interests me here is to start the process of observing how they are conducted. In other words, I want to understand the work that the Terminator does in configuring AI and, in particular, ethical AI. How does the invocation of this figure – either by highlighting it or by rejecting and pushing against it – configure ethical AI in particular ways?

As neatly illustrated by the PM’s words above, there is a tension between the two figures of the malevolent Terminator and the benevolent robot (which here I read as the embodiment of good, ethical AI, although there is a whole interesting issue of embodiment that I shall leave unexplored for the moment). Yet I think there is more to this relationship and, if I were to push it further, I would suggest that besides tension there is also a symbiosis going on.

This interesting symbiosis is captured by Lucy Suchman’s acute observation in her post ‘Which Sky is falling?’:

“In much of the coverage of debates regarding AI and robotics it seems that to reject the premise of superintelligence is to reject the alarm that Musk raises and, by slippery elision, to reaffirm the benevolence of AI and the primacy of human control.”

There are two points here that I would like to explore further starting with this idea of the slippery elision that reaffirms the benevolence of AI.

Proclamations of ‘This isn’t the Terminator’ have, indeed, become commonplace in their frequency and nonchalance. It is a phrase meant to reassure, but in the paternal way of a father who eases his child’s irrational fears, as demonstrated in the words of Mathieu Webster of NHS Shared Business Services, who advocates “helping people understand that it is not something to be frightened of. It’s not the Terminator”. Such phrases are uttered in a sincere and light-hearted way, becoming the perfect carrier for all fears, concerns and objections; a way to acknowledge them without really addressing them. Who, after all, is really worried that this is the Terminator? The phrase becomes a shortcut that bundles all of the fears, concerns and objections together and then disperses them with a fleeting comment that works to confirm their naivety and imply their ignorance (and hence, in turn, the naivety and ignorance of the ‘public’ imagined as holding these fears), while implicitly affirming that ‘everything is fine’.

Ultimately, this is a trope which – just like its inverse, which worries about the technological singularity, superintelligence and distant future impacts – achieves the same result of moving the gaze away from valid fears, problems and concerns. Yet while the latter, as Crawford has identified, ‘[is] distracting researchers from the real risks of deployed systems’, the former achieves much the same by rendering those concerns and their holders insignificant and naïve. Bringing Crawford, Calo and Suchman together, I would argue that it is indeed true that worrying about Terminators and superintelligence distracts us from valid concerns. Yet it is also necessary to stress that “Not worrying about superintelligence, […], doesn’t mean that there’s nothing about which we need to worry”.

Secondly, I would like to think more carefully about the ‘and’ in Suchman’s quote above. This is an interesting link that connects the benevolence of AI with the assertion and primacy of human control. One that, as I will try to explain, folds in a complex, messy and contradictory relation, yet one that is still productive and goes places.

To illustrate my point, here is a lengthy quote from Mustafa Suleyman, the co-founder of the AI company Google DeepMind, who in 2016 shared a panel titled ‘Inequality, Labor, Health, and Ethics in AI’, organised by the AI Now Institute, with Suchman and others. Asked ‘What would the best AI look like? What would an unbiased, just, fair AI look like?’, he responded:

I think the first thing I would say is that we should try to avoid talking about AI as if it’s a person or that it has status, just like a human does. I think we have a tendency to project that into these systems. We have to remember that these are machine learning systems that we design, we control, we direct, and there are problems that we point them at, and we are in control of that. So I would say that a just or a good machine learning system is one that we have control over, that is reasonably transparent, in that we can see the training data that goes into those systems. That we keep attempting to do a better job of understanding the models that have been learned and looking at the classifications and the predictions and the recommendations that are generated; and that the benefits of those systems as we develop them over the coming decades accrue to the majority and not just the few. And I mean that with regard to the world, not just the west.

Suleyman starts with the assertion of control. He objects to talking about AI as a person not because he objects to its anthropomorphisation (as Suchman has written) but because he objects to it being granted agency. As he asserts, “We have to remember that these are machine learning systems that we design, we control, we direct, and there are problems that we point them at and we are in control of that”. Moving well away from AI’s mentalist origins, which pride themselves precisely on this technology’s ability to go beyond the limitations that we might impose on it, Suleyman offers a figuration of an agency-less, neutral, purely instrumental technology that is controlled and manipulated at will (it is interesting that even linguistically he moves from the subject AI to the more abstract, generic, object-like “machine learning systems”). This is in sharp contrast with the human ‘we’, which is presented as the technological puppet-master. Restaging the familiar tension between technology and the free-willed liberal humanist subject, Suleyman relies here on the well-known political (and moral) tradition for which claims to autonomy, agency and being in control are the sole and defining preserve of the latter.

But not for long.

As he moves to his next sentence, describing what a good or just AI would be, the goalposts change. Now it is not all machine learning systems that are under our control, but only the good ones, and the concessions – “reasonably transparent […] keep attempting to do a better job of understanding” – keep coming, as the assertion and primacy of control appears to slip through our collective human hands.

The morally neutral, instrumental, agency-less figuration of technology gives way to the ‘good AI’, which is deemed good precisely because it remains controllable and benign. The implication, of course, is that things could be or become otherwise, as the possibility of a bad, malevolent AI is lurking in the shadows, always in danger of getting out of control. And hence the Terminator re-enters the picture. A controllable, good, ethical, benevolent AI or robot, if we are to come full circle, cannot make sense without the pink-eyed Terminator as its antipode. This demonstrates that the myth of the instrumental, benign, ethical and good AI is just that: a figure which is very successful and travels frequently and far, yet one which folds into it contradictions and a mess that can be unfolded, taken apart and hence potentially put together in different ways.

In ways, for example, that do not deny this technology’s agency, or which enable us to understand this agency not as the technology’s property but as a dynamic attribute manifested through the human, technological, regulatory, corporatist, political and discursive relations that configure it, hence urging us to broaden our gaze and our analysis.

—— UPDATE ——

In the first paragraph above, I write ‘The stories of scary Terminators and sci-fi scenarios appear to have dominated the media landscape’. Since I wrote this sentence, I have come across the work of Garvey and Maskal (2019), which makes the use of the word ‘appear’ rather pertinent. To explain, Garvey and Maskal published an article with the explanatory title ‘Sentiment Analysis of the News Media on Artificial Intelligence Does Not Support Claims of Negative Bias Against Artificial Intelligence’ (2019). There, the authors ‘examine the belief that negative news media coverage of AI—and specifically, the alleged use of imagery from the movie Terminator—is to blame for public concerns about AI. This belief is identified as a potential barrier to meaningful engagement of AI scientists and technology developers with journalists and the broader public,’ only to conclude that ‘Contrary to the alleged negative sentiment in news media coverage of AI, we found that the available evidence does not support this claim.’ As such, the results of this study add another interesting thread that further complicates and enriches my quest to explore how such stories and tropes form, transform and travel.

NHSX – Have your say on the Tech Plan for health and care

Unfortunately, with the madness of the Covid pandemic upon us, I missed the opportunity to ‘Have [my] say on the Tech Plan for health and care’, as per the NHSX invitation.

Active engagement has been paused due to the pandemic, but this is only Phase 1 of 5, and one can still register for updates for when things resume.

This is part of the Join the Conversation initiative, which seems like a great idea, but, as some contributors have pointed out, the limited number of responses it received also raises the question of how visible, and hence how inclusive, such initiatives are. The fact that I didn’t come across the call until it was too late, even though I am subscribed to the NHSX newsletter, demonstrates that this is a valid question worth asking.

Still, it is very interesting to read the existing comments and ideas of the people who did have their say. From my point of view, it is fascinating to see how dominant stories and figurations – such as that of ‘techno-solutionism’ or the hyped language of Tech Visions – are being confronted and challenged (“Considering the average time for new technologies to filter through the NHS is up to 17 years, why tech solutions are a priority?” – Nicholas Kelly. Or, “Why is the NHS tech plan provided in a PDF?” – Suzannah). This is further evidence that besides the (messy, contradictory but still powerful) stories that dominate in policy circles, there is a plethora of alternative ones which are worth seeking out.

—— UPDATE ——

I just came across this very interesting report by Amanda Lenhart and Kellie Owens at Data & Society, Good Intentions, Bad Inventions: The Four Myths of Healthy Tech. Interesting to read against the above.

Public Trust and Data Sharing practices: When not all data subjects are made equal

On the 3rd of October, the campaign organisations the3million and Open Rights Group lost their High Court challenge (though they are planning an appeal) over the Immigration Exemption clause in the Data Protection Act, which came into force last year.

The controversial immigration exemption is a section of the UK’s newly introduced Data Protection Act 2018, a national law which adapts the EU’s GDPR and updates the Data Protection Act 1998. The exemption is being challenged on the grounds that it breaches fundamental rights, as it denies certain data subjects the right to transparency and access to their personal records. According to the Open Rights Group,

The exemption has never existed in UK law before its introduction last year. It allows data controllers, including public bodies such as the Home Office or a school or hospital and private bodies such as employers or private landlords, to restrict access to personal data if releasing the information would “prejudice effective immigration control.”

The legal challenge has revealed that the UK government has used this exemption in response to 60% of its immigration-related data requests since the beginning of 2019, while it was further confirmed that the individuals affected are not being informed that the immigration exemption is applied, further obstructing their ability to appeal. It is also worth noting that the Home Office has a high error rate, so appeals – which the applicants usually win, but not before going through a lengthy, costly and highly traumatic process – are commonplace.

The developments described above are particularly interesting considering that the Data Protection Act 2018 is the cornerstone of the UK’s NHS AI strategy, which is meant to provide the necessary assurances to the British public that the handling of its most sensitive personal data deserves its trust.

Public Trust

The issue of public trust appears to be one of the biggest concerns in policy circles. As the House of Lords report on AI in the UK (2018) stressed, “Maintaining public trust over the safe and secure use of their data is paramount to the successful widespread deployment of AI and there is no better exemplar of this than personal health data” (93).

In the same report, Dame Fiona Caldicott, the National Data Guardian, warned the select committee:

What we have not done is take the public with us in these discussions, and we really need their views. What is the value? Are they happy for their data to be used when it is anonymised for the purposes we have described? We need to have the public with us on it, otherwise they will be upset that they do not know what is happening to their data and be unwilling to share it with the people to whom they turn for care. That is the last thing we want to happen in our health service (91).

Ill-thought-out and costly scandals such as care.data (Carter et al 2015, Sterckx et al 2016) and the DeepMind “fiasco” (Powles and Hodson 2017), as the Lords report characterises it, demonstrate that things can indeed go wrong.

As such, public trust is an issue that the government and institutions such as the NHS stress they are taking very seriously. Concerns over privacy or safety are being appeased by a commitment to ethical values and principles inscribed in documents such as the Code of conduct for data-driven health and care technology. The code has been drawn up with the help of industry, academics and patient groups, and aims to “encourage” (its recommendations are not legally binding) technology companies to meet a gold-standard set of principles to protect patient data to the highest standards in order to capitalise – ethically and responsibly – on the opportunities of AI.

The 10 key principles that the code outlines are underwritten by the Data Protection Act 2018, and as the introduction asserts:

People need to know that their data is being used for their own good and that their privacy and rights are safeguarded. They need to understand how and when data about them is shared, so that they can feel reassured that their data is being used for public good, fairly and equitably.

However, what are we to make of such proclamations in light of the legal Immigration Exemption outlined above, which indicates that not all data subjects are equal and that perhaps not all ‘people’ are worthy of the same consideration?

The exemption adds to a long and ongoing list of scandals and controversial cases that have coloured the experience of UK immigrants and minority communities with a deep distrust towards the British state and its handling of citizens’ personal data. From the long and still unfolding Windrush scandal, to revelations that there is a secret database run by counter-terror police across the UK and accessed by all police forces and the Home Office, which contains personal information of individuals referred to the government’s anti-radicalisation programme Prevent without their knowledge, to reports that the Department for Education is sharing children’s data with the Home Office and immigration enforcement services, to the ongoing struggle of healthcare practitioners and advocacy groups to stop the sharing of data with the Home Office, these controversial policies demonstrate that the hostile environment – the systematic and coordinated policy strategy by the UK government to make the lives of Others anything from difficult to impossible – is not only alive and kicking. It is also enabled by the ambitious data sharing strategy of the UK government and its piecemeal and intimidating implementation.

In a recent report, the National Audit Office recognised the role that the government’s data strategy played in events such as the Windrush scandal, and added that “despite years of effort and many well-documented failures, government has lacked clear and sustained strategic leadership on data”. This report was followed by an open letter published by civil society groups, including the Open Data Institute (ODI), which warned of serious problems with the government’s collection, use and sharing of data. The letter urged the government to adopt a “transformative” data strategy which will focus on “earning the public’s trust”. As it states:

Debate and discussion about the appropriate extent of using citizens’ data within government needs to be had in public, with the public. Great public benefit can come from more joined-up use of data in government and between government and other sectors. But this will only be possible, sustainable, secure and ethical with appropriate safeguards, transparency, mitigation of risks and public support.

Distrust

Distrust by minority, migrant and vulnerable groups of state intervention and the sharing of personal information is not a new phenomenon (Hoopman et al 2007, Lane and Tribe 2010, Jutlla and Raghavan 2017). Yet this does not make it any less pressing. This problem of distrust cannot be remedied by rather condescending statements of “people need[ing] to know” and “need[ing] to understand that what is done is for their own good” so that they can feel reassured, as the quote from the code above put it, or by wishful proclamations of ‘not leaving anyone behind’, as the NHS proclaims.

Such responses patronise these communities and the public at large, as they wilfully ignore the fact that these people and communities distrust precisely because they know the real and painful impact these data sharing practices can have on their lives, rather than in spite of it. Unless the voices of these communities are sought out and listened to, and these groups become not just recipients of policy but central to its formation, these public promises and proclamations of adopting an ethical approach that will guarantee public trust risk, at the very least, ringing hollow.

—— UPDATE ——

On 19 March 2020, with the Covid-19 pandemic taking hold in the UK, the Home Office made public the independent inquiry, the Windrush Lessons Learned Review, led by Wendy Williams. The report was a scathing indictment of the Home Office’s handling of the Windrush generation, with Williams writing:

While I am unable to make a definitive finding of institutional racism within the department, I have serious concerns that these failings demonstrate an institutional ignorance and thoughtlessness towards the issue of race and the history of the Windrush generation within the department, which are consistent with some elements of the definition of institutional racism. (7)

Unfortunately, the inquiry was published just as the Covid-19 virus was spreading in the UK (the official lockdown was announced on the 23rd of March) and hence it received little attention. However, with digital tracking being touted as a key strategy to combat the pandemic, on the one hand, and with reports emerging that the virus appears to disproportionately affect BAME communities, on the other, the synergies between public (dis)trust and data sharing practices, especially where minority, migrant and vulnerable groups are concerned, warrant close, careful and critical consideration.[1]

[1] https://www.independent.co.uk/news/uk/home-news/coronavirus-undocumented-migrants-deaths-cases-nhs-matt-hancock-a9470581.html