New Stanford Algorithm Deranks Divisive Political Posts on X

Bonisiwe Shabane

A web-based method was shown to mitigate political polarization on X by nudging antidemocratic and extremely negative partisan posts lower in a user’s feed. The tool, which is independent of the platform, has the potential to give users more say over what they see on social media.

A new tool shows it is possible to turn down the partisan rancor in an X feed — without removing political posts and without the direct cooperation of the platform. The study, from researchers at the University of Washington, Stanford University and Northeastern University, also indicates that it may one day be possible to let users take control of their social media algorithms. The researchers created a seamless, web-based tool that reorders content, moving posts lower in a user’s feed when they contain antidemocratic attitudes and partisan animosity, such as advocating for violence or jailing supporters... The researchers published their findings Nov. 27 in Science.

A new Stanford-led study challenges the idea that political toxicity is simply an unavoidable element of online culture. Instead, the research suggests that the political toxicity many users encounter on social media is a design choice that can be reversed. Researchers have unveiled a browser-based tool that can cool the political temperature of an X feed by quietly downranking hostile or antidemocratic posts. Remarkably, it does so without any deletions, bans, or cooperation from X itself. The takeaway: algorithmic interventions can meaningfully reduce partisan animosity while still preserving political speech.

It also advances a growing movement advocating user control over platform ranking systems, the algorithms that shape what users see, which have traditionally been guarded as proprietary, opaque, and mainly optimized for engagement rather... The research tool was built by a multidisciplinary team across Stanford, Northeastern University, and the University of Washington, composed of computer scientists, psychologists, communication scholars, and information scientists. Their goal was to counter the engagement-driven amplification of divisive content, which tends to reward outrage, conflict, and emotionally charged posts, without silencing political speech. Using a large language model, the tool analyzes posts in real time and identifies several categories of harmful political subject matter, including calls for political violence, attacks on democratic norms, and extreme hostility toward... When the system flags such content, it simply pushes those posts lower in the feed so they are less noticeable, like seating your argumentative uncle at the far end of the table during the...

A small tweak to your social media feed can make your opponents feel a little less like enemies.

In a new study published in Science, a Stanford-led team used a browser extension and a large language model to rerank posts on X during the 2024 U.S. presidential campaign, showing that changing the visibility of the most hostile political content can measurably dial down partisan heat without deleting a single post or asking the platform for permission. The experiment, run with 1,256 Democrats and Republicans who used X in the weeks after an attempted assassination of Donald Trump and the withdrawal of Joe Biden from the race, targeted a particular kind... The researchers focused on posts that expressed antidemocratic attitudes and partisan animosity, such as cheering political violence, rejecting bipartisan cooperation, or suggesting that democratic rules are expendable when they get in the way of... To reach inside a platform they did not control, first author Tiziano Piccardi and colleagues built a browser extension that quietly intercepted the web version of the X timeline. Every time a participant opened the “For you” feed, the extension captured the posts, sent them to a remote backend, and had a large language model score each political post on eight dimensions of...

If a post hit at least four of those eight factors, it was tagged as the kind of content most likely to inflame. The tool then reordered the feed for consenting users in real time. In one experiment, it pushed those posts down the feed so participants would need to scroll further to hit the worst material. In a parallel experiment, it did the opposite and pulled that content higher. “Social media algorithms directly impact our lives, but until now, only the platforms had the ability to understand and shape them,” said Michael Bernstein, a professor of computer science in Stanford’s School of Engineering... “We have demonstrated an approach that lets researchers and end users have that power.”
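As a rough illustration of that scoring step, here is a minimal Python sketch. The dimension names, prompt, and model choice below are assumptions for illustration only (the article does not reproduce the study’s exact classifier or prompt), with the OpenAI client standing in for whichever LLM the team actually used:

```python
# Illustrative sketch only. The eight dimension names below are assumed
# stand-ins for the study's coding scheme, not the published labels.
import json
from openai import OpenAI

DIMENSIONS = [
    "partisan_animosity",
    "support_for_partisan_violence",
    "support_for_undemocratic_practices",
    "support_for_undemocratic_candidates",
    "opposition_to_bipartisan_cooperation",
    "social_distrust",
    "social_distance",
    "biased_evaluation_of_facts",
]
FLAG_THRESHOLD = 4  # per the article: a post is tagged if it hits >= 4 of 8

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def score_post(text: str) -> dict[str, bool]:
    """Ask the LLM for a true/false judgment on each dimension, as JSON."""
    prompt = (
        "For the social media post below, answer true or false for each "
        "dimension. Reply with a single JSON object keyed by dimension name.\n"
        f"Dimensions: {', '.join(DIMENSIONS)}\n\nPost: {text}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)

def should_flag(text: str) -> bool:
    """Flag a post when at least FLAG_THRESHOLD dimensions come back true."""
    return sum(map(bool, score_post(text).values())) >= FLAG_THRESHOLD
```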

New research from Stanford University demonstrates that algorithmic intervention can reduce partisan animosity on X feeds.

A groundbreaking study from Stanford University has unveiled a new web-based AI research tool capable of significantly cooling down the partisan rhetoric on social media platforms like X, all without the platform’s direct involvement. The multidisciplinary research, published in the journal Science, not only offers a concrete way to reduce political polarization but also paves the way for users to gain more control over the proprietary algorithms that... The researchers sought to counter the toxic cycle in which social media algorithms amplify emotionally charged, divisive content to maximize user engagement. The tool acts as a seamless web extension, leveraging a large language model (LLM) to scan a user’s X feed for posts containing antidemocratic attitudes and partisan animosity. This harmful content includes things like advocating for violence or extreme measures against the opposing party.

Instead of removing the content, the AI tool simply reorders the feed, pushing these incendiary posts lower down the timeline. The research was partially supported by a Hoffman-Yee grant from the Stanford Institute for Human-Centered Artificial Intelligence.

First author Tiziano Piccardi, an assistant professor of computer science at Johns Hopkins University, describes the work in his own words: Reducing the visibility of polarizing content in social media feeds can measurably lower partisan animosity. To come up with this finding, my colleagues and I developed a method that let us alter the ranking of people’s feeds, previously something only the social media companies could do. Reranking social media feeds to reduce exposure to posts expressing antidemocratic attitudes and partisan animosity affected people’s emotions and their views of people with opposing political views.

I’m a computer scientist who studies social computing, artificial intelligence and the web. Because only social media platforms can modify their algorithms, we developed and released an open-source web tool that allowed us to rerank the feeds of consenting participants on X, formerly Twitter, in real time.

Stanford researchers created a browser extension that reduces partisan hostility on X by reordering posts instead of removing them. In a one-week field trial with 1,256 users ahead of the 2024 U.S. election, downgrading antidemocratic and highly polarizing posts led to more positive attitudes toward political opponents. The effect was observed among both liberals and conservatives.

Experts say the approach shows promise but should be replicated and ethically evaluated in other contexts.

Researchers at Stanford developed a browser extension that noticeably reduced partisan hostility on X by changing the order of posts in users' feeds rather than hiding or deleting content. The team's extension used an AI language model to evaluate posts in real time and dynamically reorder each participant's feed. The system identified content expressing antidemocratic sentiments or partisan hostility — including calls for violence or jailing political opponents — and pushed those posts lower in the timeline without removing them. The field trial ran for one week ahead of the 2024 U.S. presidential election with 1,256 X users who were randomly assigned to two parallel conditions: one in which polarizing posts were ranked higher and another in which those posts were ranked lower.
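Mechanically, the reordering described here amounts to a stable partition of the feed: every post is kept, flagged posts move toward one end, and relative order is otherwise preserved. A minimal sketch under that reading, using hypothetical Post and flagged names rather than the project's actual data model:

```python
# Minimal sketch: nothing is deleted; flagged posts are shifted down (or, in
# the parallel condition, up) while all other ordering is left intact.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str
    flagged: bool  # True if the classifier tagged the post

def rerank(feed: list[Post], direction: str = "down") -> list[Post]:
    """Stable partition: 'down' pushes flagged posts lower, 'up' pulls them higher."""
    flagged = [p for p in feed if p.flagged]
    unflagged = [p for p in feed if not p.flagged]
    return unflagged + flagged if direction == "down" else flagged + unflagged

# Example: the flagged post slides below the others but stays in the feed.
feed = [Post("1", "hostile rant", True), Post("2", "local news", False)]
print([p.post_id for p in rerank(feed, "down")])  # ['2', '1']
```

Keeping the partition stable matters: because no post is removed and unflagged posts keep their platform-assigned order, the intervention changes only how far a user must scroll to reach the flagged material.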

Participants whose feeds downgraded antidemocratic and highly partisan posts showed more positive attitudes toward the opposing political party. The effect appeared across the political spectrum and was observed among both self-identified liberals and conservatives.

Reranking partisan animosity in algorithmic social media feeds alters affective polarization

Social media algorithms profoundly impact our lives: They curate what we see (1) in ways that can shape our opinions (2-4), moods (5-7), and actions (8-12).

Because of the power that these ranking algorithms have to direct our attention, the research literature has articulated many theories and results detailing the impact that ranking algorithms have on us (13-17). However, validating these theories and results has remained extremely difficult because the ranking algorithm behavior is determined by the social media platforms, and only the platforms themselves can test alternative feed designs and causally... Platforms, however, face political and financial pressures that constrain the kinds of experiments they can launch and share (18). Concerns about lawsuits and the need to preserve engagement-driven revenue further limit what platforms are willing to test, leaving massive gaps in the design space of ranking algorithms that have been explored in naturalistic... To address this gap, we present an approach that enables researchers to rerank participants' social media feeds in real time as they browse, without requiring platform permission or cooperation. We built a browser extension, a small add-on to a web browser that modifies how web pages appear or behave, similar to an ad blocker.

Our extension intercepts and modifies X's web-based feed in real time and reranks the feed using large language model (LLM)-based rescoring, with only a negligible increase in page load time. This web extension allows us to rerank content according to experimentally controlled conditions. The design opens a new paradigm for algorithmic experimentation: It provides external researchers with a tool for conducting independent field experiments and evaluating the causal effects of algorithmic content curation on user attitudes and... This capability allowed us to investigate a pressing question: Can feed algorithms cause affective polarization, i.e., hostility toward opposing political parties (19-22)? This concern has grown since the 2016 US presidential election (23), and the debate remains ongoing after the 2020 and 2024 elections. If social media algorithms are causing affective polarization, they might not only bear responsibility for rising political incivility online (24), they might also pose a risk to trust in democratic institutions (25).

In this case, isolating the algorithmic design choices that cause polarization could offer alternative algorithmic approaches (26). A major hypothesized mechanism for how feed algorithms cause polarization is a self-reinforcing engagement loop: users engage with content aligning with their political views, the feed algorithm interprets this engagement as a positive signal,... Some studies support this hypothesis, finding that online interactions exacerbate polarization (27), potentially because of the increased visibility of hostile political discussions (28), divisive language (29-33), and content that reinforces existing beliefs (34). However, large-scale field experiments aimed at reducing polarization by intervening on the feed algorithm -- for example, by increasing exposure to out-party content -- have found both a decrease (35) and an increase (36)... Similarly, recent large-scale experiments on Facebook and Instagram found no evidence that reduced exposure to in-party sources or a simpler reverse-chronological algorithm affected polarization and political attitudes (23, 37) during the 2020 US election. These mixed results reveal the difficulty in identifying what, if any, algorithmic intervention might help reduce polarization, especially during politically charged times.

We distilled the goals of these prior interventions to a direct hypothesis that we could operationalize through real-time LLM reranking: that feed algorithms cause affective polarization by exposing users specifically to content that polarizes. An algorithm that up-ranks content reflecting genuine political dialogue is less likely to polarize than one that up-ranks demagoguery. This content-focused hypothesis has been difficult to operationalize into interventions, making studies that intervene on cross-partisan exposure and reverse-chronological ranking attractive but more diffuse in their impact and thus more likely to observe mixed... However, by connecting our real-time reranking infrastructure with recent advances in LLMs, we could create a ranking intervention that more directly targets the focal hypothesis (38) without needing platform collaboration. We drew, in particular, on a recent large-scale field experiment that articulated eight categories of antidemocratic attitudes and partisan animosity (AAPA) as bipartisan threats to the healthy functioning of democracy (39). We operationalized these eight categories into an artificial intelligence (AI) classifier that labels expressions of these constructs in social media posts, does so with accuracy comparable to trained annotators, and produces depolarization effects in...

This real-time classification enabled us to perform a scalable, content-based reranking experiment on participants' own feeds in the field (41). We conducted a preregistered field experiment on X, the most used social media platform for political discourse in the US (42), using our extension to dynamically rerank participants' social media content by either increasing... The experiment was conducted during a pivotal moment in the 2024 US election cycle, from July to August 2024, an important period for understanding how social media feeds impact affective polarization. Major political events during the study period included the attempted assassination of Donald Trump, the withdrawal of Joe Biden from the 2024 presidential race, and the nomination of Kamala Harris as the Democratic Party's... These events allow us to examine the impact of heterogeneous AAPA content on partisan polarization and hostility. We measured the intervention's effect on affective polarization (43) and emotional experience (44).
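One plausible way to wire that random assignment to the reranking direction is sketched below; the arm names and the deterministic per-participant coin flip are assumptions, since the article states only that participants were randomly assigned to an increased- or decreased-exposure condition:

```python
# Hypothetical assignment sketch; arm names and scheme are illustrative.
import random

ARMS = ("decrease_aapa", "increase_aapa")

def assign_arm(participant_id: str, seed: int = 0) -> str:
    """Deterministic per-participant draw, so the arm is stable across sessions."""
    rng = random.Random(f"{seed}:{participant_id}")
    return rng.choice(ARMS)

def rerank_direction(arm: str) -> str:
    # Decreased AAPA exposure -> push flagged posts down; increased -> pull up.
    return "down" if arm == "decrease_aapa" else "up"

print(rerank_direction(assign_arm("participant-042")))
```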

Compared with control conditions that did not rerank the feed, decreased AAPA exposure led to warmer feelings toward the political outgroup, whereas increased AAPA exposure led to colder feelings. These changes also affected participants' levels of negative emotions (anger and sadness) as measured through in-feed surveys.
