Trust in AI

There are ongoing discussions about whether AI will render doctors obsolete, set against the importance of doctors’ humanity and the uniquely human capacity to care, empathise and be trusted. Recently, Hatherley (2020) argued that, while capable of being reliable, AI systems cannot be trusted and are not capable of being trustworthy because “[t]rusting relations […] are exclusive to beings with agency”, which, according to the author, nonhuman AI systems lack.

Such views rely on traditional humanistic theorisations which pit ‘cold technologies’ against ‘warm human care’.[4] However, sociological studies have shown that relations between humans and technologies are not only functional but also social, affective, playful and frustrating, and often all of these at the same time [ref]. Humans and technologies (re)configure each other in particular ways, allowing practices such as treating, caring and trusting to take shape and unfold in different ways, some successful, others less so [ref the logic of care].

Shifting our analytic perspective from the humanistic dipole of human vs. machine to human-machine configurations allows us to recognise that interpersonal trust between doctors and patients develops not (only) because both are human, but because an orchestration of bodies, materials, knowledge systems, technologies and regulatory institutions works together to enable and foster this trust in practice (Buscher). To put it simply, when we enter a doctor’s office, we do not trust our health to any human who happens to be present but to the one who wears the stethoscope.

Yet this does not mean that trust in AI, as it is currently developed, is or should be granted. Instead, as we argue here, a focus on trust as the basis of the relationship between AI and the public is at best ineffective and at worst inappropriate or even dangerous, as it diverts attention from what is actually needed to warrant trust. Rather than agonising over how to facilitate trust, a type of relationship that can leave those who trust vulnerable and exposed, efforts should focus on the difficult and dynamic process of ensuring reliance underwritten by strong legal and regulatory frameworks. From there, trust could emerge, not merely as a means to an end but as something to work towards in practice: the deserved result of an ongoing ethical relationship in which an appropriate, enforceable and reliable regulatory infrastructure is in place for problems, challenges and power asymmetries to be continuously accounted for and appropriately redressed.