Reducing Social Media Engagement With Affectively Polarised Content via Inoculation

Bonisiwe Shabane

Communications Psychology, volume 3, Article number 11 (2025)

The generation and distribution of hyper-partisan content on social media has gained millions of exposures across platforms, often allowing malevolent actors to influence and disrupt democracies. The spread of this content is facilitated by real users engaging with it on platforms. The current study tests the efficacy of an ‘inoculation’ intervention via six online survey-based experiments in the UK and US. Experiments 1–3 (total N = 3276) found that the inoculation significantly reduced self-reported engagement with polarising stimuli. However, Experiments 4–6 (total N = 1878) found no effects on participants’ self-produced written text discussing the topic.

The implications of these findings are discussed in the context of the literature on polarisation and previous interventions to reduce engagement with disinformation.

In the last few decades, political polarisation has grown in numerous countries worldwide [1,2,3]. Of particular concern is the rise in affective polarisation, that is, the disparity between feelings of warmth towards political in-groups versus political out-groups [4]. This concept is rooted in Social Identity Theory [5], which posits that humans are naturally inclined to categorise themselves and others into in-groups and out-groups, with greater salience of these identities encouraging in-group favouritism and out-group derogation. However, rather than observing any great increase in in-group warmth, researchers have instead noted declining warmth towards political foes, a growth in so-called negative partisanship [6]. The rise in affective polarisation may in part be driven by our increased dependence upon social media for news gathering and the explosion in the number of hyper-partisan outlets (e.g. Breitbart) that create and spread hyper-partisan content on social media [7,8,9,10]. Much of this content could be classed as disinformation: the deliberate creation and distribution of false or manipulated information [7,8,11]. Creators of such content have varied objectives, including monetisation of the sharing of sensationalist or partisan news, but often there is an incentive to influence and reduce trust in democratic processes by increasing group divisions. Indeed, exposure to partisan reporting that is critical of out-groups has been found to decrease ratings of trust in and liking of those groups, feeding into negative partisanship [6,16]. The full scale of the problem has become clearer over the last decade. For example, investigations into the reach of foreign disinformation operations, including the Internet Research Agency based in St Petersburg, have found hundreds of millions of exposures to disinforming and hyper-partisan posts on Twitter and Facebook.

Inoculating Against Affective Polarization: A Series of Experiments Examining the Effectiveness of Preemptive Interventions

A team of researchers from the University of Bristol conducted a series of six experiments to investigate whether "inoculation theory," a communication framework used to preemptively counter misinformation, could be effectively applied to mitigate affective polarization. Affective polarization, characterized by animosity and distrust towards opposing political groups, poses a significant threat to democratic discourse and societal cohesion. The researchers hypothesized that preemptively exposing individuals to information about the manipulative tactics often used to fuel affective polarization would reduce their susceptibility to such tactics and decrease their likelihood of engaging with and sharing such content. The research program was pre-registered, with detailed methodologies and analysis plans outlined in advance to ensure transparency and rigor. Ethical approvals were obtained from the University of Bristol’s Psychology Ethics Committee, and informed consent was secured from all participants.

The first two experiments focused on the context of Brexit in Great Britain. Participants, recruited through YouGov, were exposed to either an inoculation video explaining common manipulation techniques used in affectively polarized content or a control video about the British political system. Subsequently, they were presented with synthetic news headlines related to Brexit, some employing derogatory, affectively polarized language and others using more neutral phrasing. The key outcome measures were participants’ self-reported likelihood of clicking on and sharing the headlines. Experiment 2 refined the inoculation video and utilized a broader range of outcome measures. These initial studies provided promising results, suggesting that inoculation could indeed reduce engagement with affectively polarized content.
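The design described above implies a between-groups comparison of self-reported engagement ratings (inoculation vs. control). As a loose illustration only (the data and the analysis below are invented for demonstration, not the authors' actual analysis), such a comparison can be sketched with Welch's t-statistic:

```python
# Illustrative sketch: comparing mean self-reported sharing likelihood
# between a hypothetical inoculation group and a control group.
# The ratings below are invented; this is not the study's data or analysis.
from statistics import mean, variance

inoculated = [2, 1, 3, 2, 2, 1, 3, 2]   # hypothetical 1-7 sharing ratings
control    = [4, 5, 3, 4, 5, 4, 3, 5]

def welch_t(a, b):
    """Welch's t-statistic for two independent samples with unequal variances."""
    va, vb = variance(a), variance(b)          # sample variances
    se = (va / len(a) + vb / len(b)) ** 0.5    # standard error of the mean difference
    return (mean(a) - mean(b)) / se

t = welch_t(inoculated, control)
# A large negative t indicates lower sharing intent in the inoculation group.
print(round(t, 2))
```

In practice the published analyses would involve pre-registered models and corrections, but the direction of the effect reported in Experiments 1 and 2 corresponds to a comparison of this general shape.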

Experiment 3 shifted the focus to the United States, examining affective polarization in the context of the abortion debate. Using a methodology similar to the previous experiments, participants were randomly assigned to an inoculation or control condition and then presented with real-world tweets containing news links related to Roe v. Wade. The headlines were selected based on their level of affective polarization, and some were subtly modified to heighten the presence of affectively charged language. This experiment further solidified the findings of the earlier studies, demonstrating the generalizability of the inoculation approach across different political contexts and highly divisive issues. Experiments 4, 5, and 6 sought to investigate the impact of inoculation on the language used by individuals when expressing their own views on politically charged topics.

Specifically, Experiment 4 focused on the abortion debate in the US, asking participants to write short essays expressing their opinions. The researchers then analyzed the text using natural language processing techniques to quantify the level of affective polarization present in participants’ writing. Experiments 5 and 6 refined this methodology, prompting participants to respond to simulated social media posts that opposed their own stance on abortion. Experiment 6 was a direct replication of Experiment 4, with added pre-screening to ensure a balanced representation of pro-choice and pro-life participants.
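To make the idea of quantifying affective polarization in free text concrete, here is a minimal, invented sketch of one common family of approaches: lexicon-based scoring. The lexicon, function name, and scoring rule below are hypothetical illustrations, not the method used in the study:

```python
# Toy illustration of lexicon-based scoring of affectively polarised language.
# The lexicon below is invented for demonstration; real analyses use
# validated dictionaries or trained classifiers.
import re

# Hypothetical lexicon of affectively charged, out-group-derogating terms.
CHARGED_TERMS = {"traitor", "evil", "disgusting", "idiots", "corrupt", "hate"}

def polarisation_score(text: str) -> float:
    """Fraction of word tokens that appear in the charged-term lexicon."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in CHARGED_TERMS)
    return hits / len(tokens)

neutral = "I think the policy has both costs and benefits for the country."
charged = "Those corrupt idiots are evil and I hate them."
assert polarisation_score(neutral) == 0.0
assert polarisation_score(charged) > polarisation_score(neutral)
```

A per-token proportion like this gives each essay a comparable score regardless of length; the null result in Experiments 4–6 means scores of this general kind did not differ between conditions.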

Received 2024 Mar 9; Accepted 2025 Jan 10; Collection date 2025.

Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Subject terms: Human behaviour, Psychology

A series of experiments tested an inoculation intervention to reduce engagement with affectively polarized content on social media.

The intervention successfully reduced self-reported sharing of polarizing content but did not affect how users wrote about polarized topics.
