Podoletz – Emotional AI in action: will it change crime, crime prevention, policing and security, and why this matters?

FACTOR is pleased to announce our next talk in the 2024-2025 academic year by Dr Lena Podoletz (Law, Lancaster University):

TITLE

Emotional AI in action: will it change crime, crime prevention, policing and security, and why this matters?

ABSTRACT

Emotional AI is an emerging technology used to make probabilistic predictions about the emotional states, intentions and attitudes of people, drawing on data sources such as facial (micro-)movements, body language, vocal tone and word choice. Where language is the data source, experimental tools are being developed to detect and, in some cases, even attempt to forecast criminal behaviour. Cyberbullying, terrorism, and digital forms of hate speech and fraud are among the main areas where such tools are being developed. At this moment, these tools are still in their very early years. This talk considers their implications for ethics, privacy, and human and procedural rights, as well as the existing evidence base for their performance.

TIME & PLACE

W05, 1500-1550, Thu 7th Nov 2024, Bowland North SR10. (Please note that this talk will not be streamed or recorded.)

Find information on how to get to campus here, and how to navigate campus buildings here.

REGISTRATION

For accessibility, fire, and safety compliance, please register before attending.

McVean – Sounds familiar: How does a listener’s familiarity with a voice affect their ability to detect an AI-generated version of that voice?

FACTOR is pleased to announce our next talk in the 2024-2025 academic year by FACTOR PhD student Hope McVean (LAEL, Lancaster University):

TITLE

Sounds familiar: How does a listener’s familiarity with a voice affect their ability to detect an AI-generated version of that voice?

ABSTRACT

Recent technological advancements mean that AI-generated speech has rapidly improved in quality, to the point where it can be virtually indistinguishable from genuine human speech. This opens the door to a new breed of cybercrime: one which utilises AI to create fraudulent representations of a target’s speech in an attempt to deceive listeners into believing that they are genuine. Presently, curbing this deception is difficult, since the conditions dictating accurate discrimination between genuine and synthetic speech are little understood. This talk examines one factor: a potential victim’s familiarity with the speaker’s voice. When the speaker’s voice is well known to a listener (e.g. that of a celebrity or loved one), does this affect the listener’s ability to recognise an AI-generated sample of that speaker’s voice? If so, why? What other factors may also be at play, and how might those factors interact with familiarity to influence the potential victim’s performance? With the insights gained by addressing these questions, we begin to give ourselves the tools to mitigate AI-mediated cybercrime.

TIME & PLACE

W04, 1500-1550, Thu 31st Oct 2024, Bowland North SR20. (Please note that this talk will not be streamed or recorded.)

Find information on how to get to campus here, and how to navigate campus buildings here.

REGISTRATION

For accessibility, fire, and safety compliance, please register before attending.