Articles

Deep fakes – a cause for concern?

An example of how deep fake images can be dangerous.

In this issue we wanted to take a look at deep fakes and how easily they can be detected. Image manipulation and editing are nothing new, and deep fakes are the latest in a long line of techniques used for manipulation. Joseph Stalin, for example, had people removed from photographs so that he would not be seen associating with the “wrong type of people”.

What is a deep fake? Deep fakes are audio, images, text or video that have been automatically synthesised by a machine learning or AI system. Deep fake technology can be used to create highly realistic images or videos that depict people saying or doing things they never said or did. For example, images recently circulated of the Pope wearing a large white “puffer” coat, something he never wore. Link here: https://www.bloomberg.com/news/newsletters/2023-04-06/pope-francis-white-puffer-coat-ai-image-sparks-deep-fake-concerns

  • Public concern: The public are concerned about the misuse of deep fakes: they are hard to detect and the technology is advancing rapidly. Public understanding is limited, and there is a risk of misinformation, especially as deep fakes become more sophisticated. When trying to decide whether an image is a fake, it is good to look for inconsistencies such as mismatched earrings or unnatural eye blinking.
  • Worries and considerations: Deep fakes are increasingly being used for malicious purposes, such as the creation of pornography, and the tools for creating them are readily available and growing in sophistication, yielding better and better results. Although public awareness is increasing, the ability to detect a deep fake is not. However, some recent research has shed light on who might be better at detecting them.
  • Research by Ganna Pogrebna: Ganna is a decision theorist and behavioural scientist working at the Turing Institute. She recently gave a talk by Zoom on her empirical study into “Temporal Evolution of Human Perceptions and Detection of Deep fakes”. Ganna identified a range of 37 personality traits that can be measured using psychological measurement scales, e.g. anxiety, extraversion and self-esteem. Based on the descriptions of these traits she then developed an algorithm. The hypothesis was based on the “big five” personality traits (openness, conscientiousness, extraversion, neuroticism and agreeableness).

The study commenced with a small group of 200 people and has now grown to 3,000 people in each of five Anglophone countries: the UK, US, Canada, Australia and New Zealand. Because Ganna has a large dataset of deep fakes, she can test with images of many different people, not just actors and politicians as in some studies. This has yielded copious amounts of data, including cross-sectional data from representative samples. Each participant was exposed to six deep fake algorithm variations in a between-subjects design.

  • Results: People’s ability to detect deep fakes gradually declines as the quality of the deep fakes improves. However, people who show high emotional intelligence, conscientiousness and a prevention focus are better at detecting them. Neuroticism, resilience, empathy, impulsivity and risk aversion came a close second, with people scoring highly on these traits also achieving better results. Only 2% of participants were very good at detecting deep fakes (although no exact definition of “very good” was presented); they scored statistically higher than other participants on three traits: conscientiousness, emotional intelligence and prevention focus. General intelligence and knowledge of technology do not make you better at detecting deep fakes; testing for general versus emotional intelligence could be an interesting addition to the data. It will be good to see the full results, in terms of exact performance and effect sizes, when published.
  • So What: We are becoming familiar with deep fakes and with talk of “hallucinations”, such as content created by ChatGPT: assertions confidently made by algorithms even though they are far removed from the truth. The future of this technology is exciting, with possibilities that seem endless and new technologies emerging at an exponential rate, but we need to question what we see and what we read more than ever.

Let us know what work you are doing in the deep fake arena – we’d love to hear from you – CAISS@lancaster.ac.uk