Literature Reviews

Review of paper: “Fooled twice: People cannot detect deep fakes but think they can”

– Nils C. Köbis, Barbora Doležalová & Ivan Soraperra

In this study the authors show that people cannot reliably detect deep fakes: even when their awareness was raised and they received a financial incentive, their detection accuracy remained poor. People appear biased towards mistaking deep fakes for authentic videos rather than the other way around, and they also overestimate their own detection abilities. Is seeing really believing?

These manipulated videos, whilst entertaining, can have a dark side. Facial images are being harvested at scale to create fake pornographic videos of both men and women, which can damage their reputations; a faked voice can be used to drain someone's life savings. Caldwell et al. (2020) ranked the malicious use of deep fakes as the number one emerging AI threat to consider.

This is an issue because the ability to create a deep fake using Generative Adversarial Networks (GANs) is no longer confined to experts: it is accessible to anyone, and no specialist knowledge is required. Extensive research on judgement and decision-making shows that people often rely on mental shortcuts (heuristics) when assessing the veracity of online content. This could, the authors posit, lead to people becoming oversensitive to online content and failing to believe anything, even genuine, authentic announcements by politicians. The counter-argument is that fake videos are the exception to the rule, and "seeing is believing" remains the dominant heuristic. The study tested these two competing biases: the "liar's dividend" versus "seeing is believing".

The results of the study showed that people struggled to identify deep fake videos because of a genuine inability to do so, not merely a lack of motivation. Participants were also overly optimistic, exhibiting a systematic bias towards guessing that the videos were authentic.

It could be argued that humans process moving visual information more effectively than other sensory data, yet the results showed only slightly better-than-chance performance, worse than that reported for static images. Could this be due to inattention? More research is needed in this area.

The authors also found two related biases in human deep fake detection. First, although participants were told that 50% of the videos were fake, they still judged 67.4% of them to be authentic; this was not a product of random guessing but of their deliberate judgement. The second bias relates to the "Dunning-Kruger"* effect: people overestimated their ability to detect deep fakes, and low performers in particular were overconfident. Overall, people really did think that "seeing is believing".
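As a rough illustration of how strong that authenticity bias is (this is not the authors' own analysis, and the sample size below is hypothetical since the review does not report the study's exact n), a one-sample proportion test can check whether a 67.4% authenticity-guess rate differs from the 50% expected under unbiased judging:

```python
import math

def proportion_z_test(p_hat: float, p0: float, n: int) -> float:
    """Z-statistic for a one-sample test of an observed proportion against p0."""
    se = math.sqrt(p0 * (1 - p0) / n)  # standard error under the null hypothesis
    return (p_hat - p0) / se

# Hypothetical number of judgements, for illustration only.
n_judgements = 1000
z = proportion_z_test(0.674, 0.5, n_judgements)
print(f"z = {z:.2f}")  # well beyond the 1.96 cut-off for significance at p < 0.05
```

Even with a modest assumed sample, a deviation of this size from 50% is far too large to attribute to chance, which is why it reads as a systematic bias rather than noise.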

Conclusion – Deep fakes will undermine knowledge acquisition, because our failure to detect them stems not from a lack of motivation but from an inability to do so. The videos used in this study had no emotional content, and emotionally charged material may have yielded different results. More work is definitely needed in this area.

Link to the paper: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8602050/

Reference: Caldwell, M., Andrews, J. T., Tanay, T., & Griffin, L. D. (2020). AI-enabled future crime. Crime Science, 9, 1–13.

*The Dunning-Kruger effect occurs when a person's lack of knowledge and skills in a certain area causes them to overestimate their own competence.