Social Media Algorithms and Affective Polarization
New research shows the impact that social media algorithms can have on partisan political feelings, using a new tool that hijacks the way platforms rank content. How much does someone’s social media algorithm really affect how they feel about a political party, whether it’s one they identify with or one they feel negatively about? Until now, the answer has escaped researchers because they’ve had to rely on the cooperation of social media platforms. New multi-institution research published Nov. 27 in Science, co-led by Northeastern University researcher Chenyan Jia, sidesteps this issue by installing an extension on consenting participants’ browsers that automatically reranks the posts those users see in real time, while still appearing seamless to the user. Jia and her team discovered that after one week, users’ feelings toward the opposing party shifted by about two points, an effect normally seen over three years, revealing how strongly algorithms can influence affective polarization.
Assistant Professor of Computer Science, Johns Hopkins University. (This research was partially supported by a Hoffman-Yee grant from the Stanford Institute for Human-Centered Artificial Intelligence.) I’m a computer scientist who studies social computing, artificial intelligence and the web. Reducing the visibility of polarizing content in social media feeds can measurably lower partisan animosity. To arrive at this finding, my colleagues and I developed a method that let us alter the ranking of people’s feeds, previously something only the social media companies could do. Reranking social media feeds to reduce exposure to posts expressing antidemocratic attitudes and partisan animosity changed people’s emotions and their views of people with opposing political views.
Because only social media platforms can modify their algorithms, we developed and released an open-source web tool that allowed us to rerank the feeds of consenting participants on X, formerly Twitter, in real time. This web-based method was shown to mitigate political polarization on X by nudging antidemocratic and extremely negative partisan posts lower in a user’s feed. The tool, which is independent of the platform, has the potential to give users more say over what they see on social media. A new tool shows it is possible to turn down the partisan rancor in an X feed without removing political posts and without the direct cooperation of the platform. The study, from researchers at the University of Washington, Stanford University and Northeastern University, also indicates that it may one day be possible to let users take control of their social media algorithms. The researchers created a seamless, web-based tool that reorders content to move posts lower in a user’s feed when they contain antidemocratic attitudes and partisan animosity, such as advocating for violence or jailing supporters of the opposing party.
Researchers published their findings Nov. 27 in Science.
A study shows that the order in which platforms like X display content to their users affects their animosity towards other ideological groups. A team of U.S. researchers has shown that the order in which political messages are displayed on social media platforms does affect polarization, one of the most debated issues since the rise of social media. The phenomenon is equally strong regardless of the user’s political orientation, the academics note in an article published on Thursday in Science. Social media is an important source of political information.
For hundreds of millions of people worldwide, it is even the main channel for political engagement: they receive political content, share it, and express their opinions through these platforms. Given the relevance of social media in this sphere, understanding how the algorithms that operate on these platforms work is crucial, but opacity is the norm in the industry. That makes it extremely difficult to estimate the extent to which the selection of highlighted content shapes users’ political views. How did the researchers overcome algorithmic opacity to alter the order of posts that social media users see? Tiziano Piccardi from Stanford University and his colleagues developed a browser extension that intercepts and reorders a user’s feed (the ranked stream of posts the platform displays) in real time. The tool uses a large language model (LLM) to assign a score to each piece of content, measuring the extent to which it expresses “antidemocratic attitudes and partisan animosity” (AAPA).
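The coverage above does not reproduce the study’s prompt or model configuration, but the scoring step can be sketched roughly as follows. This is a minimal illustration, assuming an OpenAI-style chat API: the model name, prompt wording, and 0-to-1 score scale are placeholders, not the researchers’ actual setup.

```python
import re

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY in the environment

# Hypothetical prompt: the study's actual prompt and scale are not
# published in the coverage above, so this wording is illustrative only.
AAPA_PROMPT = (
    "Rate the following social media post from 0 to 1 for expressions of "
    "antidemocratic attitudes and partisan animosity (AAPA), such as calls "
    "for political violence, attacks on democratic norms, or extreme "
    "hostility toward the opposing party. Reply with only the number.\n\n"
    "Post: {post}"
)

def aapa_score(post_text: str) -> float:
    """Ask the LLM for an AAPA score in [0, 1] for a single post."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the study's model may differ
        messages=[{"role": "user", "content": AAPA_PROMPT.format(post=post_text)}],
    )
    reply = response.choices[0].message.content.strip()
    match = re.search(r"\d+(?:\.\d+)?", reply)
    # Treat unparseable replies as non-AAPA rather than crashing the feed.
    return min(max(float(match.group()), 0.0), 1.0) if match else 0.0
```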
Once scored, the posts were reordered accordingly, without any collaboration from the platform or reliance on its algorithm. The experiment involved 1,256 participants, all of whom gave informed consent. The study focused on X, as it is the social network most used in the U.S. for expressing political opinions, and it was conducted during the weeks leading up to the 2024 presidential election to ensure a high circulation of political messages.
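Once each post carries a score, the demotion step can be as simple as a stable sort. The sketch below is an illustrative reconstruction, not the study’s published code; the 0.5 threshold and the Post fields are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str
    aapa: float  # score from aapa_score() above, in [0, 1]

def rerank_feed(posts: list[Post], threshold: float = 0.5) -> list[Post]:
    """Demote AAPA-flagged posts without removing them.

    Python's sort is stable, so posts keep their original platform
    order within the flagged and non-flagged groups.
    """
    return sorted(posts, key=lambda p: p.aapa > threshold)

# Example: the hostile post sinks below the neutral ones but stays in the feed.
feed = [
    Post("1", "Lock up everyone who votes for the other side!", 0.9),
    Post("2", "County-level turnout data from the last election.", 0.1),
    Post("3", "Our candidate's town hall starts at 7pm tonight.", 0.0),
]
print([p.post_id for p in rerank_feed(feed)])  # -> ['2', '3', '1']
```

Because the sort is stable, unflagged posts keep the platform’s original order, which mirrors the articles’ point that nothing is deleted, only moved lower.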
Abstract: There is widespread concern about the negative impacts of social media feed ranking algorithms on political polarization. Leveraging advancements in LLMs, we develop an approach to re-rank feeds in real-time to test the effects of content that is likely to polarize: expressions of antidemocratic attitudes and partisan animosity (AAPA). In a preregistered 10-day field experiment on X/Twitter with 1,256 consented participants, we increase or decrease participants' exposure to AAPA in their algorithmically curated feeds. We observe more positive outparty feelings when AAPA exposure is decreased and more negative outparty feelings when AAPA exposure is increased. Exposure to AAPA content also results in an immediate increase in negative emotions, such as sadness and anger. The interventions do not significantly impact traditional engagement metrics such as re-post and favorite rates.
These findings highlight a potential pathway for developing feed algorithms that mitigate affective polarization by addressing content that undermines the shared values required for a healthy democracy. In the ongoing discourse on the influence of social media on political polarization, the paper authored by Piccardi et al. is a timely exploration of how algorithmic curation of social media feeds can affect user sentiment and polarization. The paper specifically investigates the effects of exposure to content reflecting antidemocratic attitudes and partisan animosity (AAPA) in social media feeds, focusing on whether intervention in real-time content ranking can influence affective polarization. A new Stanford-led study is challenging the idea that political toxicity is simply an unavoidable element of online culture.
Instead, the research suggests that the political toxicity many users encounter on social media is a design choice that can be reversed. Researchers have unveiled a browser-based tool that can cool the political temperature of an X feed by quietly downranking hostile or antidemocratic posts. Remarkably, this can occur without requiring any deletions, bans, or cooperation from X itself. The study offers the takeaway that algorithmic interventions can meaningfully reduce partisan animosity while still preserving political speech. It also advances a growing movement advocating user control over platform ranking systems and the algorithms that shape what users see, which have traditionally been guarded as proprietary, opaque, and mainly optimized for engagement rather than users’ well-being. The research tool was built by a multidisciplinary team across Stanford, Northeastern University, and the University of Washington, composed of computer scientists, psychologists, communication scholars, and information scientists.
Their goal in the experiment was to counter the engagement-driven amplification of divisive content that tends to reward outrage, conflict, and emotionally charged posts, without silencing political speech. Using a large language model, the tool analyzes posts in real time and identifies several categories of harmful political subject matter, including calls for political violence, attacks on democratic norms, and extreme hostility toward the opposing party. When the system flags such content, it simply pushes those posts lower in the feed so they are less noticeable, like seating your argumentative uncle at the far end of the table.