[2411.14652] Reranking Partisan Animosity in Algorithmic Social Media
This research was partially supported by a Hoffman-Yee grant from the Stanford Institute for Human-Centered Artificial Intelligence. Reducing the visibility of polarizing content in social media feeds can measurably lower partisan animosity. To reach this finding, my colleagues and I developed a method that let us alter the ranking of people’s feeds, something previously only the social media companies could do. Reranking social media feeds to reduce exposure to posts expressing antidemocratic attitudes and partisan animosity affected people’s emotions and their views of people with opposing political views. I’m a computer scientist who studies social computing, artificial intelligence and the web. Because only social media platforms can modify their algorithms, we developed and released an open-source web tool that allowed us to rerank the feeds of consenting participants on X, formerly Twitter, in real time.
New research shows the impact that social media algorithms can have on partisan political feelings, using a new tool that hijacks the way platforms rank content. How much does someone’s social media algorithm really affect how they feel about a political party, whether it’s one they identify with or one they feel negatively about? Until now, the answer has eluded researchers because they’ve had to rely on the cooperation of social media platforms. New intercollegiate research published Nov. 27 in Science, co-led by Northeastern University researcher Chenyan Jia, sidesteps this issue by installing an extension on consenting participants’ browsers that automatically reranks the posts those users see, in real time and still...
Jia and her team discovered that after one week, users’ feelings toward the opposing party shifted by about two points — an effect normally seen over three years — revealing algorithms’ strong influence on... Abstract: There is widespread concern about the negative impacts of social media feed ranking algorithms on political polarization. Leveraging advancements in LLMs, we develop an approach to rerank feeds in real time to test the effects of content that is likely to polarize: expressions of antidemocratic attitudes and partisan animosity (AAPA). In a preregistered 10-day field experiment on X/Twitter with 1,256 consenting participants, we increase or decrease participants' exposure to AAPA in their algorithmically curated feeds.
We observe more positive outparty feelings when AAPA exposure is decreased and more negative outparty feelings when AAPA exposure is increased. Exposure to AAPA content also results in an immediate increase in negative emotions, such as sadness and anger. The interventions do not significantly impact traditional engagement metrics such as re-post and favorite rates. These findings highlight a potential pathway for developing feed algorithms that mitigate affective polarization by addressing content that undermines the shared values required for a healthy democracy. In the ongoing discourse on the influence of social media on political polarization, the paper authored by Piccardi et al.
is a timely exploration of how algorithmic curation of social media feeds can affect user sentiment and polarization. The paper specifically investigates the effects of exposure to content reflecting antidemocratic attitudes and partisan animosity (AAPA) in social media feeds, focusing on whether intervening in real-time content ranking can influence affective polarization. A web-based method was shown to mitigate political polarization on X by nudging antidemocratic and extremely negative partisan posts lower in a user’s feed. The tool, which is independent of the platform, has the potential to give users more say over what they see on social media. A new tool shows it is possible to turn down the partisan rancor in an X feed — without removing political posts and without the direct cooperation of the platform. The study, from researchers at the University of Washington, Stanford University and Northeastern University, also indicates that it may one day be possible to let users take control of their social media algorithms.
The researchers created a seamless, web-based tool that reorders content to move posts lower in a user’s feed when they contain antidemocratic attitudes and partisan animosity, such as advocating for violence or jailing supporters... Researchers published their findings Nov. 27 in Science. Today, social media platforms hold sole power to study the effects of feed ranking algorithms. We developed a platform-independent method that reranks participants’ feeds in real time and used this method to conduct a preregistered 10-day field experiment with 1,256 participants on X during the 2024 U.S. presidential campaign.
Our experiment used a large language model to rerank posts that expressed antidemocratic attitudes and partisan animosity (AAPA). Decreasing or increasing AAPA exposure shifted out-party partisan animosity by two points on a 100-point feeling thermometer, with no detectable differences across party lines, providing causal evidence that exposure to AAPA content alters affective... This work establishes a method to study feed algorithms without requiring platform cooperation, enabling independent evaluation of ranking interventions in naturalistic settings. Social media algorithms profoundly impact our lives: they curate what we see (?) in ways that can shape our opinions (?, ?, ?), our moods (?, ?, ?), and our actions (?, ?, ?,... Due to the power that these ranking algorithms have to direct our attention, the research literature has articulated many theories and results detailing the impact that ranking algorithms have on us (?, ?, ?,... However, validating these theories and results has remained extremely difficult because the ranking algorithm behavior is determined by the social media platforms, and only the platforms themselves can test alternative feed designs and causally...
Platforms, however, face political and financial pressures that constrain the kinds of experiments they can launch and share (?). Concerns about lawsuits and the need to preserve engagement-driven revenue further limit what platforms are willing to test, leaving massive gaps in the design space of ranking algorithms that have been explored in naturalistic... To address this gap, in this work, we present an approach that enables researchers to rerank participants’ social media feeds in real time as they browse, without requiring platform permission or cooperation. We built a browser extension—a small add-on to a web browser that modifies how web pages appear or behave, similar to an ad blocker. Our extension intercepts and modifies X’s web-based feed in real time and reranks the feed using Large Language Model (LLM)-based rescoring, with only a negligible increase in page load time. This web extension allows us to rerank content according to experimentally controlled conditions.
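The core reranking step described above — scoring each post for AAPA with an LLM and reordering the feed by those scores under an experimental condition — can be sketched roughly as follows. This is a minimal illustration, not the authors' released code: the function name, the score scale, and the condition labels are hypothetical stand-ins, and the LLM classifier is assumed to have already produced a per-post score.

```python
def rerank_feed(posts, aapa_scores, condition):
    """Reorder a feed based on per-post AAPA scores.

    posts: list of post IDs in the platform's original ranked order.
    aapa_scores: dict mapping post ID -> AAPA score in [0, 1], assumed
        to come from an upstream LLM classifier (not shown here).
    condition: 'decrease' pushes high-AAPA posts down the feed,
        'increase' pulls them up; any other value keeps the
        platform's original order (a control condition).
    """
    if condition == "decrease":
        # Stable sort: low-AAPA posts rise; ties keep the platform's order.
        return sorted(posts, key=lambda p: aapa_scores[p])
    if condition == "increase":
        return sorted(posts, key=lambda p: -aapa_scores[p])
    return list(posts)

feed = ["p1", "p2", "p3", "p4"]
scores = {"p1": 0.9, "p2": 0.1, "p3": 0.5, "p4": 0.1}
print(rerank_feed(feed, scores, "decrease"))  # ['p2', 'p4', 'p3', 'p1']
```

Using a stable sort means posts with equal scores retain their platform-assigned relative order, so the intervention only perturbs the feed along the AAPA dimension rather than replacing the platform's ranking wholesale.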
The design opens a new paradigm for algorithmic experimentation: it provides external researchers with a tool for conducting independent field experiments and evaluating the causal effects of algorithmic content curation on user attitudes and... This capability allowed us to investigate a pressing question: can feed algorithms cause affective polarization—hostility toward opposing political parties (?, ?, ?, ?)? This concern has grown since the 2016 U.S. presidential election (?), and the debate remains ongoing after the 2020 and 2024 elections. If social media algorithms are causing affective polarization, they might not only bear responsibility for rising political incivility online (?) but also pose a risk to trust in democratic institutions (?). In this case, isolating the algorithmic design choices that cause polarization could offer alternative algorithmic approaches (?).
A major hypothesized mechanism for how feed algorithms cause polarization is a self-reinforcing engagement loop: users engage with content aligning with their political views, the feed algorithm interprets this engagement as a positive signal,... Some studies support this hypothesis, finding that online interactions exacerbate polarization (?), potentially because of the increased visibility of hostile political discussions (?), divisive language (?, ?, ?, ?, ?), and content that reinforces... However, large-scale field experiments aimed at reducing polarization by intervening on the feed algorithm—for example, by increasing exposure to out-party content—have found both a decrease (?) and an increase (?) in polarization. Similarly, recent large-scale experiments on Facebook and Instagram found no evidence that reduced exposure to in-party sources or a simpler reverse-chronological algorithm affected polarization and political attitudes (?, ?) during the 2020 U.S. election. These mixed results reveal the difficulty in identifying what, if any, algorithmic intervention might help reduce polarization, especially during politically charged times.