Literature Reviews

Social Media Algorithms Warp How People Learn from Each Other

William Brady, Assistant Professor of Management and Organisations at Northwestern University.

Interactions, especially on social media, are influenced by the flow of information controlled by algorithms. These algorithms amplify the information that sustains engagement – what could be described as “clickbait”. Brady suggests that a side effect of this clicking and returning to the platforms is that “algorithms amplify information that people are strongly biased to learn from”. He calls this “PRIME” information – prestigious, in-group, moral and emotional. This type of learning is not new and would have served a purpose from an evolutionary perspective: learning from prestigious individuals is efficient because we can copy their successful behaviour, and, from a moral point of view, sanctioning those who violate moral norms helps a community maintain cooperation. On social media, however, PRIME information gives a poor signal: prestige can be faked, and our feeds can fill with negative and moral content, which leads to conflict rather than cooperation. This can foster dysfunction, because social learning should support cooperation and problem solving, but the algorithms are designed only to increase engagement. Brady calls this mismatch “functional misalignment”.

So what? Why does this matter?

People can start to form incorrect perceptions of their social world, and this can lead to a polarisation of their political views, with the “in-group” and “out-group” seen as more sharply divided than they really are. The author also found that the more a post is shared, the more outrage it generates. So when these algorithms amplify moral and emotional information, it becomes harder to think about what we see in a critical way – is it genuine, or a coordinated campaign? – and misinformation gets swept up in this amplification and is itself then amplified.

What next?

Research in this area is new, and whether this kind of amplified online polarisation spills over into the offline world is still debated. More research is needed to understand the outcomes that occur “when humans and algorithms interact in feedback loops of social learning”. For research to continue, ethical concerns such as privacy need to be considered. Brady would like to see “what can be done to make algorithms foster accurate human social learning rather than exploit social learning biases”, and suggests we need an algorithm that “increases engagement while also penalising PRIME information”.
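
Brady frames this suggestion as a goal rather than a design. Purely as an illustrative sketch – none of the feature names, weights or functions below come from the article; they are invented for the example – a feed-ranking rule in that spirit might trade predicted engagement off against a PRIME-style penalty:

```python
# Hypothetical illustration only: a toy feed-ranking score that rewards
# predicted engagement but penalises PRIME (prestigious, in-group, moral,
# emotional) signals. All feature names and weights are invented.
from dataclasses import dataclass


@dataclass
class Post:
    predicted_engagement: float  # e.g. a modelled probability of a click or share
    prime_score: float           # 0..1 strength of PRIME cues in the content


def rank_score(post: Post, prime_penalty: float = 0.5) -> float:
    """Higher scores are ranked higher in the feed."""
    return post.predicted_engagement - prime_penalty * post.prime_score


posts = [
    Post(predicted_engagement=0.9, prime_score=0.8),  # engaging outrage bait
    Post(predicted_engagement=0.6, prime_score=0.1),  # quieter, informative post
]
for post in sorted(posts, key=rank_score, reverse=True):
    print(post, round(rank_score(post), 2))
```

With the penalty set to zero this collapses back to pure engagement ranking; the open questions are how to estimate the PRIME signal reliably and how large the penalty can be before engagement suffers.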

Link: https://www.scientificamerican.com/article/social-media-algorithms-warp-how-people-learn-from-each-other/

Talk Review

CAISS Talk Series Reports: Professor Wendy Moncur

CAISS were privileged to have Professor Wendy Moncur from the University of Strathclyde deliver our second talk in October.

Wendy leads the Cybersecurity Group and her research, which crosses many disciplinary boundaries, focuses on online identity, reputation, trust and cybersecurity. Her current research – the 3.6 million AP4L project – develops privacy-enhancing technologies (PETs) to support people going through sensitive life transitions. The project is looking at four transition groups: (i) living with cancer, (ii) leaving the armed forces, (iii) LGBT+ and (iv) relationship breakdowns.

Wendy talked to us about “Navigating bias in online privacy research”. She stressed that it is important to ask the right questions and, whilst doing this, to ask the right people. As researchers, what do we ourselves “bring” to the research, given that we each use our own “interpretive lens”? It is also important that we communicate our findings clearly so that others can understand the results.

Wendy then went on to discuss our individual online identities. These are co-constructed, made up of data about an individual posted by themselves and by other people and organisations. The internet in general is swimming in personal data, and the minute we share anything we have lost control – once it is “out there”, this information persists. Her explanation of how threads of personal data can be pieced together to build a picture of an individual was very thought-provoking; for example, if you share your Strava run data then someone can easily ascertain your home address, or where you work if you run in your lunch break!

To mitigate against bias in the research Wendy advocated the following:

  • Allow for self-reflection
  • Draw out information on digital privacy in sensitive contexts
  • Foster participants’ ability for self-expression
  • Facilitate richer, more comprehensive stories and descriptions
  • Enable non-experts to be heard
  • Avoid assumptions and bias.

To further reduce researcher bias and ensure that the vocabulary was robust, the research team worked hard to increase the list of descriptive terms they used, checking further terms with the University Librarian and with an advisory board of people living with the transitions under investigation. This led to a very big list! In the workshops that ensued, participants were asked to map their life transition online, with questions as prompts. Then empathy mapping was used to further remove bias and deliver a shared understanding of the user across the research team. Next, metaphor cards were used, with the groups asked to consider potential technological solutions as opposed to just challenges, needs and practices. Finally, participants’ ideas were prioritised using the MoSCoW tool (Must have, Should have, Could have, Will not have).

Sociodemographic differences were also discussed: people aged 70 plus tend to read but not comment online, 30-to-60-year-olds have a lot to say, and younger people are happy to share information but in general have a more robust awareness of online security.

Results have indicated that the “Transition Continuum” is not a straight line, and this is being explored further. A useful design insight for developing privacy-enhancing tools is that people’s experiences are not necessarily linear or instantaneous and can extend over a long period. In future, privacy settings ideally need to be more like a dial than a switch.

Talk Review

CAISS Talk Series Reports: Dr Sharon Glaas

The first CAISS talk was held in September with a fabulous session from Dr Sharon Glaas on “Mitigating Researcher Bias in Linguistic Studies”.

Sharon started by defining bias in terms of “who gets to talk and who is listened to”. For example, do stay-at-home mums have a voice? Sharon studies linguistics – the systematic study of language and communication, functional and descriptive rather than prescriptive – for example, linguistic sources of persuasion. She reminded us that the social world is studied based on how it is constructed. Some highlights:

  • Linguists frequently work in an interdisciplinary, multidisciplinary way – working with other disciplines highlights the issue of bias in ways of thinking.
  • How you talk about something affects how you view it, e.g. “pro-life” versus “anti-abortion”.
  • Constructivist versus positivist perspectives, the social world versus the real or natural world – what is the truth and how is meaning perceived?
  • Bias is part of the world that we live in. We cannot remove it but we need to be aware of it and try to mitigate it.
  • Media literacy is most important.
  • One of the biggest red flags is the use of large language models (LLMs) and how they are being framed. AI does not know things; it just repeats them.
  • People “pull down” large chunks of language, and an LLM will just predict what the next word is (a minimal sketch of next-word prediction follows this list).
  • Language is an issue as LLMs do not learn.
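
To make the “just predicts the next word” point concrete, here is a minimal, purely illustrative sketch based on word-pair counts (real LLMs use neural networks over tokens and vastly more data, but the predict-rather-than-know principle is the same):

```python
# Toy next-word predictor: count which word follows which in a tiny corpus,
# then always suggest the most frequent follower. There is no understanding
# here, only prediction from observed patterns.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1


def predict_next(word: str) -> str:
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"


print(predict_next("the"))  # 'cat' - the most frequent word after 'the' here
print(predict_next("sat"))  # 'on'
```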

Sharon also elaborated on her interesting work in a corpus-assisted study of political and media discourses around the EU in the lead-up to Brexit. She found that the pro-EU stance of the Guardian was systematically undermined by three key themes:

  • Discourses of Conflict between the UK and the EU, and between the EU and member states
  • Discourses of Disparity in citizens’ experience (the EU not working)
  • Discourses of Threat to the UK and an existential risk to the EU.

We all have linguistic biases – ways of conceiving and talking about things that are grounded in our world view. Sharon does not believe it is possible to entirely eliminate bias from our work, but awareness and transparency help mitigate the issue. She stressed in her conclusion that it is important to understand the impact of those biases as the use of LLMs and AI tools becomes more prevalent.

We had excellent feedback from Sharon’s talk; one delegate said it was “the best one-hour briefing they had heard in a very long time”.

Byte

CAISS Bytes: A.I. Safety Volunteers

A global group of AI experts and data scientists have put together a voluntary framework for developing AI products safely. The group has 25,000 members, including Meta, Google and Samsung staff, and the framework offers a checklist of 84 questions for developers to consider at the start of an AI project. The checklist covers, amongst other things, data protection laws from various territories and whether it is clear to a user that they are interacting with AI. There are questions for individual developers, for the teams involved and for people who may be testing products, and the public are invited to submit their own questions. As the field of AI rapidly evolves, raising questions around bias, legal compliance, transparency and fairness, developers can use the framework to contribute to building responsible and trustworthy AI systems. Link: https://www.bbc.co.uk/news/technology-66225855

Byte

CAISS Bytes: A.I. Literature Reviews

Nature magazine tells us how Mushtaq Bilal is using a new generation of search engines, powered by machine learning and large language models, that move beyond keyword searches to pull connections from the mass of scientific literature and speed up work. Some programs, such as Consensus, give research-backed answers to yes-or-no questions; others, such as Semantic Scholar, Elicit and Iris, act as digital assistants. Collectively, the platforms facilitate many of the early steps in the writing process. Critics note, however, that the programs remain relatively untested and run the risk of perpetuating existing biases in the academic publishing process. Can using AI free up time for more focus on innovation and discovery by drawing information from a massive body of literature and papers? Link below:

https://www.nature.com/articles/d41586-023-01907-z

Byte

CAISS Bytes: Noam Chomsky

Noam Chomsky, a professor of linguistics, has dismissed ChatGPT as “hi-tech plagiarism”; it has also been described as a “parrot”, as it can only repeat what has been said before. There are concerns around how we will teach people to research, think and write – is using ChatGPT a “way of avoiding learning”? Further questions are posed: is the essay now dead, could a machine-learning algorithm produce a better result, and are students being failed if they do not find the content interesting enough to engage with? Link: https://www.openculture.com/2023/02/noam-chomsky-on-chatgpt.html

Conference Report

Behavioural & Social Science Conference, Bath July 2023

The 2023 CREST (Centre for Research & Evidence on Security Threats) BASS (Behavioural and Social Science in Security) conference was a packed three days of very diverse and interesting talks. There were two great keynotes. Firstly, David Matthews talked about “Intersubjectivity and communicative rationality in defence and national security contexts”. He illustrated how the strategic environment is rapidly changing with what he described as “lessons from the field” from Timor-Leste, Iraq and Afghanistan, where a social science team was deployed to discover how conflict and Western troops were affecting the local population and to create socio-culturally appropriate recommendations and interventions. What looks good in policy terms when briefing and planning remotely may be very different when you are trying to ascertain “ground truth” in the field, and may have unintended consequences. Using adversarial approaches risks alienating local communities and can affect credibility. The role of social scientists should be to mobilise empathy for the “other” and avoid adversarial approaches. Local needs have local priorities, and behaviour must be transparent, especially when building a long-term relationship.

The conference closed with a keynote from Professor Martin Innes, Co-Director of the Security, Crime and Innovation Centre at Cardiff University, who gave us a fascinating talk on the use of OSINT (open source intelligence). OSINT needs a blend of art, craft and science to be successful. Warfare introduces uncertainty, so how do we know what is real? The important issue is near-real-time monitoring, although this sometimes means dealing with cold, hard facts; retroactive knowledge can update past events with information to aid future work. Modern war combines brutal kinetic activity with high-level intelligence. The problem is that we have an ecosystem based on the publicity of exposure, and this gets more attention than the original threats.

During the three days there was a packed programme across the three themes of Risk Assessment and Management, Gathering Human Intelligence, and Deterrence and Disruption. Some highlights were:
Wendy Moncur gave a thought-provoking talk on the potential risks and harms of revealing personal data online. She covered digital literacy, cyber harms in the context of organisations, identity theft, unwelcome attention (e.g. cyberbullying) and how even using Strava to record your run can give away valuable personal information. The collection of aggregated data can give a malign actor real insight, and this is relevant for everyone.
Lewys Brace talked to us about his fascinating work on the Con.Cel project, which focuses on the Incel (involuntary celibate) community, showcasing work on extremism, online spaces, ideological contagion and UK-based Incel activity. Analysis of their behaviours and psychological traits is key to discovering the relationship between online discussion and offline violence.
Our very own Matt Asher updated us on work into whether AI can really predict political affiliation. Such claims have historically been based on physiognomy, and although previous research reports 73% accuracy, results in the replicated study (n = 1,998) never passed 66% accuracy, with biases in terms of who is and isn’t accurately classified. Research is ongoing into the implications of AI predicting behaviour.
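
As a purely illustrative aside (the numbers below are invented, not taken from the study), “biases in terms of who is and isn’t accurately classified” usually means that a single headline accuracy hides very different accuracies for different subgroups, which a per-group breakdown makes visible:

```python
# Toy illustration: overall accuracy can mask large per-group differences.
# The records below are invented example data, not results from the study.
from collections import defaultdict

# (true_label, predicted_label, demographic_group) for ten hypothetical people
records = [
    ("left", "left", "A"), ("left", "left", "A"), ("right", "right", "A"),
    ("right", "right", "A"), ("left", "right", "A"),
    ("left", "right", "B"), ("right", "left", "B"), ("left", "left", "B"),
    ("right", "right", "B"), ("right", "left", "B"),
]

correct, total = defaultdict(int), defaultdict(int)
for true_label, predicted, group in records:
    total[group] += 1
    correct[group] += (true_label == predicted)

print(f"overall accuracy: {sum(correct.values()) / sum(total.values()):.0%}")  # 60%
for group in sorted(total):
    print(f"group {group} accuracy: {correct[group] / total[group]:.0%}")  # A: 80%, B: 40%
```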
Link: https://crestresearch.ac.uk/

Conference Report

Conference Report: IC2S2 – The 9th International Conference on Computational Social Science, Copenhagen

By G. Mason

The 9th International Conference on Computational Social Science was held in Copenhagen in July and saw 728 delegates from across the world enjoy a day of tutorials followed by a further three packed days of keynotes and shorter talks. This was complemented by a huge range of posters exhibited each day.

Keynote – Jevin West, University of Washington (US) Information Science:

Jevin kicked off the conference with two questions: “Can we muster enough elective energy to take action for ourselves in politics, health etc.?” and “How do we solve a problem like misinformation?” His answer to both was, basically, “you don’t, but you can do things better”.

One myth he wanted to debunk: anti-vaccine equals anti-science. This is not true; people who are anti-vaxxers look to science to inform their views too. The public like to think that they can do their own research, but they often depend on the “credentials of experts”, whether those are scientists or lay experts. He then asked, “do journals still matter as credential makers?” The conclusion was “yes they do”; however, highly regarded professionals are less likely to publish in journals, as they do not see the need to be “published” when their research is readily available and “out there”.

Perceived expertise: what are the prevalence and relative influence of perceived/lay experts who spread Covid-19 vaccine misinformation? Data and methods: 4.3 million tweets and 5.5 million users that included vaccine discussion during April 2021 were examined and manually labelled. It was found that anti-vaccine supporters shared a lot of perceived expert knowledge – tweets, likes etc. Perceived experts were more influential in the community than real experts such as health professionals, and had a sizeable presence in the anti-vaccine community, where they are seen as important.

Perceived expertise to perceived consensus around Covid-19: papers were being discussed in both communities – the science-based community and the anti-mask-wearing community. How papers were referenced had an effect: in the science community the papers split into epidemiology and physical experiments, while the online community split along mask-wearing and non-mask-wearing lines. The conclusion was that perceived expertise leads to a perceived consensus.

Much laughter was heard when he explained Brandolini’s Law, summarised (via Wikipedia) as: “Easy to create BS, hard to clean it up!” When asked, ChatGPT produced a definition of Brandolini’s Law that was complete BS; the irony of it!

For the future: Jevin stated that we need to find out how to measure the prevalence of, and exposure to, misinformation. We need to teach science from an early age alongside social science, so that future generations understand the bridge between science and social science; and we need to save the planet from social media so we do not destroy ourselves! (Anecdote on the next page. Ed.)

Take-away main message – “We need critical thinking skills more than ever”.

Please note that conference reports reflect the opinions and views of the presenters

Byte

Students using AI Chatbots to learn a language

The BBC have reported that students are switching to AI to learn a language. AI has benefits: it will not judge you if you make a mistake, and with Spanish, for example, it can give regional variations such as Argentinian and Mexican Spanish. However, as useful as it can be for practising, it can be repetitive, corrections can be missing and words can be invented. As a supplement to other methods, AI could have a place in cementing knowledge and making practice fun. Link here: https://www.bbc.co.uk/news/business-65849104