Barber – The Reframing of Rape in Extremist Online Discourses

The FORGE is delighted to announce our external guest speaker: Kate Barber (Cardiff University). Details of her talk are below:

TITLE
The reframing of rape in extremist online discourses

NOTES
THIS TALK IS ON A TOPIC, AND WILL CONTAIN EXTRACTS OF DATA, THAT SOME MAY FIND DISTRESSING.

DISCRETION IS STRONGLY ADVISED.

ABSTRACT

Linguistic analyses of far-right discourses have traditionally focused on nationalist rhetoric or racist and ethnoreligious invective. The explicit anti-feminist stance held by some far-right groups, specifically in relation to sexual offences against women, remains underexplored. This paper outlines initial findings from an ongoing corpus-assisted critical discourse analysis of 100 blog posts on sites identifying as belonging to the Alternative Right (Alt-Right) or to the right-wing men’s activist movement known as the Manosphere. While these factions can be distinguished by their primary concerns (racial diversity for the Alt-Right, men’s rights for the Manosphere), this study aims to highlight how their discourses converge in their portrayal of victims and perpetrators of sexual violence against women.

This paper outlines preliminary findings from the second and third years of my PhD research. Using corpus linguistics and a discourse analytical framework based largely on van Dijk’s (1984) and Koller’s (2012) sociocognitive approaches to discourse studies and collective identity analysis, the paper discusses how inhabited and ascribed identities promote white male victimhood and portray the mainstream concept of rape culture as a ‘feminist-produced moral panic’ (Gotell & Dutton 2016, p. 65). The presentation includes details of the network analysis I undertook in order to locate the websites and blogs from which I selected my data; the construction of the corpora; and a comparative analysis of racist and misogynistic constructions of identity in narrative and non-narrative discourses. Finally, some of the ongoing challenges this research has presented will be discussed, along with the importance of applying linguistic analyses to the development of inoculation narratives (Braddock 2019) and other counter-extremism measures.
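By way of illustration only: the network analysis mentioned above is described in the talk itself, but a hyperlink-network approach to locating candidate sites for corpus building could, in principle, look something like the minimal sketch below. The site names, link data, and use of PageRank are invented for this example and are not the speaker's actual pipeline.

```python
# Minimal sketch (illustrative only, not the speaker's pipeline): ranking
# candidate sites for corpus inclusion from a hand-collected set of hyperlinks.
# The seed data below is invented purely for illustration.
import networkx as nx

# Each tuple is (linking_site, linked_site), e.g. harvested from blogrolls.
links = [
    ("blogA.example", "blogB.example"),
    ("blogA.example", "blogC.example"),
    ("blogB.example", "blogC.example"),
    ("blogD.example", "blogC.example"),
]

G = nx.DiGraph()
G.add_edges_from(links)

# Sites that many others link to are candidates for data collection.
scores = nx.pagerank(G)
for site, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{site}\t{score:.3f}")
```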

REFERENCES
Braddock, K. (2019). Vaccinating Against Hate: Using Attitudinal Inoculation to Confer Resistance to Persuasion by Extremist Propaganda. Terrorism and Political Violence. DOI: 10.1080/09546553.2019.1693370
Gotell, L. & Dutton, E. (2016). Sexual Violence in the ‘Manosphere’: Antifeminist Men’s Rights Discourses on Rape. International Journal for Crime, Justice and Social Democracy. 5(2), 65-80.
Koller, V. (2012). How to Analyse Collective Identity in Discourse – Textual and Contextual Parameters. Critical Approaches to Discourse Analysis across Disciplines. 5(2), 19-38.
van Dijk, T.A. (1984). Prejudice and Discourse. Amsterdam: John Benjamins Publishing Company.

The talk will be approximately 30-40 minutes, with around 10-20 minutes at the end for Q&A.

TIME & PLACE
1400-1500, Thu 13th Feb, County South D72.

All are welcome to attend.

Popoola – “It’s the story, stupid!” How MARV (Multivariate Analysis of Register Variation) can save the world from fake news.

The FORGE is delighted to announce our third external speaker: Olu Popoola (University of Birmingham). Details of his talk are below:

TITLE
“It’s the story, stupid!” How MARV (Multivariate Analysis of Register Variation) can save the world from fake news.

ABSTRACT
Computer-aided fake news detection can be a useful complement to human efforts. On its own, fact-checking is often too slow to prevent the viral spread of disinformation; debunking news stories and communicating corrections can also have a backfire effect, reinforcing the false belief (Lazer et al. 2018). Most computational methods frame fake news detection as a text classification task (Shu et al. 2017) and so require data pre-labelled for veracity. However, the complexities of defining fake news (e.g. fabricated facts or undisclosed advertising?), the different types of fake news (imposter news vs. low-quality news vs. inaccurate news), the difficulty of establishing objective ground truth, and the weaponization and dilution of ‘fake news’ as a concept leave the collection of pre-labelled data fraught with epistemological issues.

Semi-supervised multivariate statistical techniques may overcome these limitations by modelling news veracity as a latent variable whose value can be estimated from the presence of deception clues via a novel deception scoring approach. This study tested the hypotheses that (i) there is significant linguistic variation within the online news genre and (ii) this variation is correlated with deceptive situational parameters of communication. Multivariate register analysis was conducted on 5,000 stories from the political sections of 15 online news sources selected as representative of the online news ecosystem (i.e. a mix of UK and US legacy and new online media from across the full political spectrum). Linguistic parameters were defined from a feature set combining lexico-grammatical and cohesion-based features; situational parameters were drawn from expert-defined fake news detection heuristics and used to calculate a deception score. Visualisation techniques (Diwersy, Evert and Neumann, 2014) were used to assess whether this situational analysis revealed any dimensions of deception and deceptive text clusters.
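As a rough, hypothetical illustration of the semi-supervised idea described above (the features, cue weights, and data below are invented and are not the study's actual parameters or tooling), one might score each story against heuristic situational cues and then check whether any dimension of the linguistic feature space tracks that score:

```python
# Illustrative sketch only: heuristic deception scoring plus a simple
# multivariate reduction of linguistic features. Feature sets, weights and
# data are invented; they are not the study's actual parameters.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Rows = news stories; columns = normalised counts of lexico-grammatical and
# cohesion features (e.g. past-tense verbs, first-person pronouns, connectives).
ling_features = rng.random((200, 12))

# Situational parameters from expert heuristics (1 = cue present), e.g.
# no named author, undisclosed ownership, headline-body mismatch.
situational = rng.integers(0, 2, size=(200, 3))
weights = np.array([0.5, 0.3, 0.2])          # hypothetical cue weights
deception_score = situational @ weights       # one score per story

# Reduce the linguistic feature space and check whether any dimension
# tracks the deception score (the 'latent veracity' idea).
pca = PCA(n_components=3)
dims = pca.fit_transform(ling_features)
for i in range(dims.shape[1]):
    r = np.corrcoef(dims[:, i], deception_score)[0, 1]
    print(f"dimension {i + 1}: r = {r:.2f}")
```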

The study found that linguistic variation in the online news genre is highly correlated with the probability of veracity, with the absence of narrative being the main indicator of potential deception. This result was unexpected, as storytelling is generally associated with deception. However, in the context of a profession which places supreme value on the news story, it makes sense that narrative register is a key veracity indicator. Semi-supervised multivariate analysis with deception scoring therefore emerges as a viable alternative to text classification for automated deception detection in epistemologically challenging genres.

REFERENCES
Diwersy, S., Evert, S. and Neumann, S., 2014. A weakly supervised multivariate approach to the study of language variation. In: Aggregating Dialectology, Typology, and Register Analysis: Linguistic Variation in Text and Speech, pp.174-204.

Lazer, D.M., Baum, M.A., Benkler, Y., Berinsky, A.J., Greenhill, K.M., Menczer, F., Metzger, M.J., Nyhan, B., Pennycook, G., Rothschild, D. and Schudson, M., 2018. The science of fake news. Science, 359(6380), pp.1094-1096.

Shu, K., Sliva, A., Wang, S., Tang, J. and Liu, H., 2017. Fake news detection on social media: A data mining perspective. ACM SIGKDD Explorations Newsletter, 19(1), pp.22-36.

BIO
Olu Popoola is a PhD candidate researching methods for cross-domain deception detection at the University of Birmingham, and moonlights as a deception detection trainer and OSINT investigator. By day, Olu is a Teaching Fellow at Aston University where he teaches information integrity to future health professionals (a third career, following ten years in advertising and consumer research and another ten in English language teaching). Olu is married with two canal boats and a cat.

TIME & PLACE
1100-1200, Wed 20th Mar, County South B89

Gillings – Building a corpus of deception: methodological and analytical considerations

The FORGE is delighted to announce a talk by our upcoming internal speaker: Mathew Gillings (Linguistics & English Language). Details of his talk are below:

TITLE
Building a corpus of deception: methodological and analytical considerations

ABSTRACT
The field of deception detection has largely been dominated by researchers from psychology, and it is clear that linguists have far more to offer than they have contributed thus far. Previous work in this area has primarily been carried out using LIWC (Pennebaker et al., 2001), which calculates the percentage of words in a given text that fall into categories associated with particular personality traits and mental states. More recently, Archer and Lansley (2015) and McQuaid et al. (2015) have applied corpus linguistic methods to the field, using Wmatrix to compare truthful and deceptive corpora. However, there has never been a large-scale, systematic corpus study using deceptive spoken data. Similarly, the sociolinguistic nature of deception has, up until now, never been investigated.
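For readers unfamiliar with the LIWC approach, the toy sketch below illustrates the general idea of reporting what percentage of a text falls into each dictionary category. The mini-dictionaries and example text are invented; LIWC's own (proprietary) dictionaries and software are not reproduced here.

```python
# Toy illustration of LIWC-style category counting (not LIWC itself):
# report the percentage of tokens in a text that match each category.
from collections import Counter

categories = {                      # invented mini-dictionaries
    "negative_emotion": {"hate", "awful", "worried"},
    "cognitive_process": {"think", "because", "maybe"},
}

text = "i think he was worried because the weather was awful".split()
counts = Counter(text)
total = sum(counts.values())

for name, words in categories.items():
    hits = sum(counts[w] for w in words)
    print(f"{name}: {100 * hits / total:.1f}% of tokens")
```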

In this talk, I will discuss the state of the art in deception detection and identify a series of issues with that early work. In particular, I will discuss how certain methodological decisions can affect the quality and validity of the results that arise from the data, and how a different method of analysis can lead to more intuitive and nuanced findings. I will explain how I created a corpus of deception by following best practice in increasing motivation, cognitive load, and ecological validity. I will then discuss how more traditional corpus linguistic methods (such as keyword analysis, effect size measures, dispersion, and concordance analysis), combined with a more flexible, user-friendly analysis tool, can provide further insight into the sociolinguistic nature of deceptive discourse.
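As a generic illustration of keyword analysis with an effect-size measure (not the speaker's own corpus, tool, or figures), the sketch below computes log-likelihood keyness and a log-ratio effect size for two toy word lists standing in for truthful and deceptive sub-corpora:

```python
# Minimal, generic keyword-analysis sketch: log-likelihood keyness plus a
# log-ratio effect size. The two word lists are toy stand-ins for tokenised
# truthful vs. deceptive sub-corpora, invented for illustration.
import math
from collections import Counter

truthful = "i saw him leave the house at nine and then i went home".split()
deceptive = "someone may have left the house at some point i think maybe".split()

f1, f2 = Counter(truthful), Counter(deceptive)
n1, n2 = sum(f1.values()), sum(f2.values())

def keyness(word):
    a, b = f1[word], f2[word]
    e1 = n1 * (a + b) / (n1 + n2)            # expected frequencies
    e2 = n2 * (a + b) / (n1 + n2)
    ll = 2 * sum(o * math.log(o / e) for o, e in ((a, e1), (b, e2)) if o > 0)
    # Log ratio as an effect-size measure (0.5 smoothing avoids log(0)).
    log_ratio = math.log2(((a or 0.5) / n1) / ((b or 0.5) / n2))
    return ll, log_ratio

for w in sorted(set(f1) | set(f2)):
    ll, lr = keyness(w)
    print(f"{w:10s} LL={ll:5.2f}  LogRatio={lr:+.2f}")
```

Pairing the two measures reflects a common design choice in corpus work: log-likelihood indicates how confident we can be that a frequency difference is not chance, while the log ratio indicates how large that difference is.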

REFERENCES
Archer, D. and C. Lansley. (2015). Public appeals, news interviews and crocodile tears: an argument for multi-channel analysis. Corpora, 10(2): 231-258.

McQuaid, S., M. Woodworth, E. Hutton, S. Porter, and L. ten Brinke. (2015). Automated insights: verbal cues to deception in real-life high-stakes lies. Psychology, Crime and Law, pp. 1-

Pennebaker, J.W., M.E. Francis and R.J. Booth. (2001). Linguistic Inquiry and Word Count (LIWC): LIWC2001. Mahwah, NJ: Lawrence Erlbaum Associates.

TIME & PLACE
1100-1200, Wed 6th Feb, County South B89

All are welcome to attend.