Reranking Partisan Animosity in Algorithmic Social Media Feeds Alters Affective Polarization
A small tweak to your social media feed can make your opponents feel a little less like enemies.
In a new study published in Science, a Stanford-led team used a browser extension and a large language model to rerank posts on X during the 2024 U.S. presidential campaign, showing that changing the visibility of the most hostile political content can measurably dial down partisan heat without deleting a single post or asking the platform for permission. The experiment, run with 1,256 Democrats and Republicans who used X in the weeks after an attempted assassination of Donald Trump and the withdrawal of Joe Biden from the race, targeted a particular kind of political content. The researchers focused on posts that expressed antidemocratic attitudes and partisan animosity, such as cheering political violence, rejecting bipartisan cooperation, or suggesting that democratic rules are expendable when they get in the way of partisan goals. To reach inside a platform they did not control, first author Tiziano Piccardi and colleagues built a browser extension that quietly intercepted the web version of the X timeline. Every time a participant opened the For You feed, the extension captured the posts, sent them to a remote backend, and had a large language model score each political post on eight dimensions of antidemocratic attitudes and partisan animosity.
If a post hit at least four of those eight factors, it was tagged as the kind of content most likely to inflame. The tool then reordered the feed for consenting users in real time. In one experiment, it pushed those posts down the feed so participants would need to scroll further to hit the worst material. In a parallel experiment, it did the opposite and pulled that content higher. “Social media algorithms directly impact our lives, but until now, only the platforms had the ability to understand and shape them,” said Michael Bernstein, a professor of computer science in Stanford’s School of Engineering. “We have demonstrated an approach that lets researchers and end users have that power.”
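Mechanically, the flag-and-rerank step described above can be sketched in a few lines. This is a simplified illustration of the published design, not the authors' actual implementation: the eight 0/1 dimension scores are assumed to come from the language-model classifier, and the reordering rule here is a plain stable partition.

```python
def flag_aapa(dimension_hits, threshold=4):
    """Flag a post when it hits at least `threshold` of the eight dimensions.

    `dimension_hits` is a list of eight 0/1 scores, one per dimension,
    as produced by the language-model classifier (assumed input format).
    """
    return sum(dimension_hits) >= threshold

def rerank(posts, direction="down"):
    """Stable reorder of a captured feed.

    direction="down" pushes flagged posts below everything else (the
    depolarizing arm); direction="up" pulls them to the top (the
    amplifying arm). Relative order within each group is preserved.
    """
    flagged = [p for p in posts if p["flagged"]]
    rest = [p for p in posts if not p["flagged"]]
    return rest + flagged if direction == "down" else flagged + rest

feed = [
    {"id": 1, "flagged": flag_aapa([1, 1, 1, 1, 0, 0, 0, 0])},  # 4 of 8 -> flagged
    {"id": 2, "flagged": flag_aapa([1, 0, 0, 0, 0, 0, 0, 0])},  # 1 of 8 -> not flagged
    {"id": 3, "flagged": flag_aapa([1, 1, 1, 1, 1, 0, 0, 0])},  # 5 of 8 -> flagged
]
print([p["id"] for p in rerank(feed, "down")])  # [2, 1, 3]
print([p["id"] for p in rerank(feed, "up")])    # [1, 3, 2]
```

Because the reorder is stable, unflagged posts keep exactly the order the platform's own algorithm gave them; only the flagged subset moves as a block.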
New research shows the impact that social media algorithms can have on partisan political feelings, using a new tool that hijacks the way platforms rank content. How much does someone’s social media algorithm really affect how they feel about a political party, whether it’s one they identify with or one they feel negatively about? Until now, the answer has escaped researchers because they’ve had to rely on the cooperation of social media platforms. New, intercollegiate research published Nov. 27 in Science, co-led by Northeastern University researcher Chenyan Jia, sidesteps this issue by installing an extension on consenting participants’ browsers that automatically reranks the posts those users see, in real time and without the platform’s cooperation.
Jia and her team discovered that after one week, users’ feelings toward the opposing party shifted by about two points — an effect normally seen over three years — revealing algorithms’ strong influence on political attitudes. American Association for the Advancement of Science (AAAS): A new experiment using an AI-powered browser extension to reorder feeds on X (formerly Twitter), conducted independently of the X platform’s algorithm, shows that even small changes in exposure to hostile political content can measurably shift how users feel about the opposing party. The findings provide direct causal evidence of the impact of algorithmically controlled post ranking on a user’s social media feed. Social media has become an important source of political information for many people worldwide. However, these platforms’ algorithms exert a powerful influence on what we encounter during use, subtly steering thoughts, emotions, and behaviors in poorly understood ways.
Although many explanations for how these ranking algorithms affect us have been proposed, testing these theories has proven exceptionally difficult. This is because the platform operators alone control how their proprietary algorithms behave and are the only ones capable of experimenting with different feed designs and evaluating their causal effects. To sidestep these challenges, Tiziano Piccardi and colleagues developed a novel method that lets researchers reorder people’s social media feeds in real time as they browse, without permission from the platforms themselves. Piccardi et al. created a lightweight, non-intrusive browser extension, much like an ad blocker, that intercepts and reshapes X’s web feed in real time, leveraging large language model-based classifiers to evaluate and reorder posts based on their expression of antidemocratic attitudes and partisan animosity (AAPA). This tool allowed the authors to systematically vary how AAPA content appeared in a user’s feed and observe the effects under controlled experimental conditions.
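The classifier side of such a pipeline can be sketched as a prompt that asks the model for one 0/1 label per dimension, plus a parser that fails closed on malformed output. The dimension labels and prompt wording below are illustrative assumptions, not the study's actual rubric, and the model call itself is stubbed with a fixed reply.

```python
import json

# Illustrative dimension labels -- the study's exact rubric may differ.
AAPA_DIMENSIONS = [
    "support for political violence",
    "support for undemocratic practices",
    "opposition to bipartisan cooperation",
    "partisan dehumanization",
    "social distrust of opposing partisans",
    "biased evaluation of politicized facts",
    "extreme hostility toward the opposing party",
    "support for undemocratic candidates",
]

def build_prompt(post_text):
    """Ask the model to return JSON with one 0/1 label per dimension."""
    dims = "\n".join(f"{i + 1}. {d}" for i, d in enumerate(AAPA_DIMENSIONS))
    return (
        'For the post below, return JSON {"labels": [...]} with one 0/1 '
        f"entry per dimension:\n{dims}\n\nPost: {post_text}"
    )

def parse_labels(model_response, n_dims=len(AAPA_DIMENSIONS)):
    """Parse the model's JSON reply into booleans; treat bad output as non-AAPA."""
    try:
        labels = json.loads(model_response)["labels"]
    except (ValueError, KeyError):
        return [False] * n_dims  # unparseable reply -> fail closed
    labels = [bool(x) for x in labels[:n_dims]]
    return labels + [False] * (n_dims - len(labels))  # pad short replies

# With a stubbed model reply instead of a live API call:
reply = '{"labels": [1, 0, 1, 1, 0, 1, 0, 0]}'
hits = parse_labels(reply)
print(sum(hits))  # 4 -> meets the four-of-eight flagging threshold
```

Failing closed on unparseable model output matters here: a classifier glitch should leave a post unranked rather than wrongly flag it and move it in a participant's feed.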
In a 10-day field experiment on X involving 1,256 participants, conducted during a volatile stretch of the 2024 U.S. presidential campaign, individuals were randomly assigned to feeds with heightened, reduced, or unchanged levels of AAPA content. Piccardi et al. discovered that, relative to the control group, reducing exposure to AAPA content made people feel warmer toward the opposing political party, shifting the baseline by more than 2 points on a 100-point scale. Increasing exposure produced a comparable shift toward colder feelings. According to the authors, the observed effects are substantial, roughly comparable to three years’ worth of change in affective polarization compressed into the duration of the intervention, though it remains unknown whether these effects persist beyond the intervention period.
What’s more, these shifts did not appear to fall disproportionately on any particular group of users. They also extended to emotional experience: participants reported changes in anger and sadness through brief in-feed surveys, demonstrating that algorithmically mediated exposure to political hostility can shape both affective polarization and moment-to-moment emotional states. “One study – or set of studies – will never be the final word on how social media affects political attitudes. What is true of Facebook might not be true of TikTok, and what was true of Twitter 4 years ago might not be relevant to X today,” write Jennifer Allen and Joshua Tucker in an accompanying commentary. “The way forward is to embrace creative research and to build methodologies that adapt to the current moment. Piccardi et al.
present a viable tool for doing that.”

Reranking partisan animosity in algorithmic social media feeds alters affective polarization
Tiziano Piccardi, Martin Saveski, Chenyan Jia, Jeffrey Hancock, Jeanne Tsai, Michael Bernstein

Today, social media platforms hold the sole power to study the effects of feed ranking algorithms. We developed a platform-independent method that reranks participants’ feeds in real time and used this method to conduct a preregistered 10-day field experiment with 1,256 participants on X during the 2024 U.S. presidential campaign. Our experiment used a large language model to rerank posts that expressed antidemocratic attitudes and partisan animosity (AAPA). Decreasing or increasing AAPA exposure shifted out-party partisan animosity by two points on a 100-point feeling thermometer, with no detectable differences across party lines, providing causal evidence that exposure to AAPA content alters affective polarization. This work establishes a method to study feed algorithms without requiring platform cooperation, enabling independent evaluation of ranking interventions in naturalistic settings.
A web-based method was shown to mitigate political polarization on X by nudging antidemocratic and extremely negative partisan posts lower in a user’s feed. The tool, which is independent of the platform, has the potential to give users more say over what they see on social media. A new tool shows it is possible to turn down the partisan rancor in an X feed — without removing political posts and without the direct cooperation of the platform. The study, from researchers at the University of Washington, Stanford University and Northeastern University, also indicates that it may one day be possible to let users take control of their social media algorithms. The researchers created a seamless, web-based tool that reorders content to move posts lower in a user’s feed when they contain antidemocratic attitudes and partisan animosity, such as advocating for violence or jailing supporters of the opposing party. Researchers published their findings Nov. 27 in Science.