Misogyny, machines, and the media, or: how science should not be reported

Yesterday (26 May 2016), the thinktank Demos released a blog post entitled The Scale of Online Misogyny, in which the author Jack Dale discussed “new research by Demos’ Centre for the Analysis of Social Media”. This research, in a nutshell, intimates that around half of misogynistic abuse on Twitter is sent by women. I’m going to go through that post and put my thoughts here as I progress, with the side note that a fair amount of what I say about the blog post also applies to Demos’ 2014 report by Jamie Bartlett et al.

In this research, Dale says that they analysed their Twitter data using “a Natural Language Processing Algorithm”. This is remarkably under-explained. For context, it’s a bit like saying that they used “a surgical procedure”. Well yes, but could we have more details? (Edited to add: I later discovered a press release that contains the following: “Demos conducts digital research through its Centre for the Analysis of Social Media (CASM), using its own in-house technology – Method 52, which is a Natural Language Processing tool.” Still not much to go on there though.) Whatever the case, this Natural Language Processing, or NLP, algorithm is then described as a process “whereby a computer can be taught to recognise meaning in language like we do.”
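To see why “a Natural Language Processing Algorithm” is so uninformative, it helps to know what the unglamorous reality usually looks like. Method 52’s internals aren’t public, so the sketch below is not their method; it’s a minimal, invented example of the kind of supervised bag-of-words classifier (here, Naive Bayes with Laplace smoothing) that such tools are typically built on. All the tweets and labels are made up for illustration.

```python
from collections import Counter, defaultdict
import math


def tokenize(text):
    """Crude tokeniser: lowercase and split on whitespace."""
    return text.lower().split()


class NaiveBayesClassifier:
    """A toy bag-of-words Naive Bayes text classifier."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word -> count
        self.label_counts = Counter()            # label -> number of examples
        self.vocab = set()

    def train(self, examples):
        """examples: iterable of (text, label) pairs."""
        for text, label in examples:
            self.label_counts[label] += 1
            for word in tokenize(text):
                self.word_counts[label][word] += 1
                self.vocab.add(word)

    def predict(self, text):
        """Return the label with the highest (log) posterior probability."""
        total = sum(self.label_counts.values())
        best_label, best_logprob = None, float("-inf")
        for label in self.label_counts:
            # Log prior for this label.
            logprob = math.log(self.label_counts[label] / total)
            n_words = sum(self.word_counts[label].values())
            for word in tokenize(text):
                # Add-one (Laplace) smoothing so unseen words don't zero out.
                logprob += math.log(
                    (self.word_counts[label][word] + 1)
                    / (n_words + len(self.vocab))
                )
            if logprob > best_logprob:
                best_label, best_logprob = label, logprob
        return best_label


# Invented training data -- real systems need thousands of labelled tweets.
clf = NaiveBayesClassifier()
clf.train([
    ("you are an idiot", "abusive"),
    ("what an idiot you are", "abusive"),
    ("lovely weather today", "benign"),
    ("have a lovely day", "benign"),
])
```

The point is that nothing in there “recognises meaning like we do”: it counts word frequencies per label and picks the statistically more likely label. Which words it was trained on, how the training data was labelled, and how it handles sarcasm or quotation are exactly the details the blog post omits.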

Fundamentally, no.