Literature Reviews

Social Media Algorithms warp how people learn from each other

William Brady, Assistant Professor of Management and Organisations at Northwestern University.

Interactions, especially on social media, are shaped by the flow of information that algorithms control. These algorithms amplify the information that sustains engagement, which could be described as "clickbait". Brady suggests that a side effect of this clicking and returning to the platforms is that "algorithms amplify information that people are strongly biased to learn from". He calls this "PRIME" information: prestigious, in-group, moral and emotional information.

This type of learning is not new and would have served a purpose from an evolutionary perspective: learning from prestigious individuals is efficient because we can copy their successful behaviour, and attending to moral information helps a community sanction those who violate its norms and so maintain cooperation. On social media, however, PRIME information gives a poor signal, because prestige can be faked and feeds can fill with negative and moral content that leads to conflict rather than cooperation. This can foster dysfunction: social learning should support cooperation and problem solving, but the algorithms are designed only to increase engagement. Brady calls this mismatch "functional misalignment".
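
To make the mechanism concrete, here is a minimal sketch of the kind of engagement-only ranking the article describes, in which whatever sustains clicks and shares rises to the top. The posts, field names and weights are invented for illustration and are not taken from Brady's work or any real platform.

```python
# Toy engagement-only feed ranker (illustrative only).
# All posts, field names and weights below are invented for this example.

posts = [
    {"text": "Local park clean-up this weekend", "clicks": 40, "shares": 5},
    {"text": "You won't BELIEVE what this politician did", "clicks": 900, "shares": 320},
    {"text": "New bus timetable published", "clicks": 25, "shares": 2},
]

def engagement_score(post):
    """Rank purely by predicted engagement; nothing else is considered."""
    return post["clicks"] + 3 * post["shares"]

# The emotionally charged post dominates the feed.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(engagement_score(post), post["text"])
```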

So what? Why does this matter?

People can start to form incorrect perceptions of their social world, which can polarise their political views, making the "in-group" and "out-group" seem more sharply divided than they really are. The author also found that the more a post is shared, the more outrage it generates. So when these algorithms amplify moral and emotional information, people are less inclined to think in a critical way about whether a post is misinformation or part of a coordinated campaign; misinformation is included in what gets amplified and is itself then spread further.

What next?

Research in this area is new, and whether this amplified online polarisation spills over into the offline world is still debated. More research is needed to understand the outcomes that occur "when humans and algorithms interact in feedback loops of social learning". For research to continue, ethical concerns such as privacy need to be addressed. Brady would like to see "what can be done to make algorithms foster accurate human social learning rather than exploit social learning biases". He suggests we need an algorithm that "increases engagement while also penalising PRIME information".
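
As a rough sketch of what that suggestion might look like, an engagement-based score could be combined with a penalty for PRIME-like content. The crude outrage-word count, the penalty weight and the example posts here are assumptions made purely for illustration; the article does not describe an implementation.

```python
# Sketch of "increases engagement while also penalising PRIME information".
# The outrage_words field is a crude stand-in for a real moral/emotional
# content classifier; all numbers here are invented for illustration.

posts = [
    {"text": "Local park clean-up this weekend", "clicks": 40, "shares": 5, "outrage_words": 0},
    {"text": "You won't BELIEVE what this politician did", "clicks": 900, "shares": 320, "outrage_words": 6},
    {"text": "New bus timetable published", "clicks": 25, "shares": 2, "outrage_words": 0},
]

def engagement_score(post):
    return post["clicks"] + 3 * post["shares"]

def adjusted_score(post, penalty_weight=400):
    # Keep rewarding engagement, but subtract a penalty for PRIME-heavy posts.
    return engagement_score(post) - penalty_weight * post["outrage_words"]

print("Engagement only:")
for post in sorted(posts, key=engagement_score, reverse=True):
    print(" ", post["text"])

print("With PRIME penalty:")
for post in sorted(posts, key=adjusted_score, reverse=True):
    print(" ", post["text"])
```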

Link: https://www.scientificamerican.com/article/social-media-algorithms-warp-how-people-learn-from-each-other/