Tweaking The Feed On X Can Change Our Political Polarisation
American Association for the Advancement of Science (AAAS)

A new experiment using an AI-powered browser extension to reorder feeds on X (formerly Twitter), conducted independently of the X platform’s algorithm, shows that even small changes in exposure to hostile political content can measurably shift how warmly users feel toward the opposing political party. The findings provide direct causal evidence of the impact of algorithmically controlled post ranking on a user’s social media feed. Social media has become an important source of political information for many people worldwide. However, platform algorithms exert a powerful influence on what we encounter during use, subtly steering thoughts, emotions, and behaviors in poorly understood ways. Although many explanations for how these ranking algorithms affect us have been proposed, testing these theories has proven exceptionally difficult.
This is because platform operators alone control how their proprietary algorithms behave and are the only ones capable of experimenting with different feed designs and evaluating their causal effects. To sidestep these challenges, Tiziano Piccardi and colleagues developed a novel method that lets researchers reorder people’s social media feeds in real time as they browse, without permission from the platforms themselves. Piccardi et al. created a lightweight, non-intrusive browser extension, much like an ad blocker, that intercepts and reshapes X’s web feed in real time, leveraging large language model-based classifiers to evaluate and reorder posts based on their content. This tool allowed the authors to systematically identify and vary how content expressing antidemocratic attitudes and partisan animosity (AAPA) appeared on a user’s feed and observe the effects under controlled experimental conditions.
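The article does not reproduce the extension’s code, but the mechanism it describes (intercept the rendered feed, score each post, re-rank client-side) can be sketched. The TypeScript below is a minimal illustration of that flow for a browser content script; the DOM selectors, the keyword-based scoreAapa stub, and the observer guard are assumptions, not the authors’ implementation.

```typescript
// Illustrative content script: watch the timeline, score each post,
// and re-rank in place. Nothing is hidden or deleted, only reordered.

// Placeholder classifier: a real implementation would query an LLM.
async function scoreAapa(text: string): Promise<number> {
  return /\b(traitor|enemy|destroy)\b/i.test(text) ? 0.9 : 0.1;
}

let reranking = false; // guard so our own DOM edits don't retrigger the observer

async function rerankTimeline(timeline: HTMLElement): Promise<void> {
  reranking = true;
  // Assumption: each post is rendered inside an <article> element.
  const posts = Array.from(timeline.querySelectorAll("article"));
  const scored = await Promise.all(
    posts.map(async (el) => ({ el, score: await scoreAapa(el.innerText) }))
  );
  // Stable sort: calmer posts rise, hostile posts sink, and ties keep
  // their original order.
  scored.sort((a, b) => a.score - b.score);
  // Simplified re-insertion; X's real markup nests posts in wrapper cells.
  for (const { el } of scored) timeline.appendChild(el);
  reranking = false;
}

// Re-run whenever the site injects new posts into the page.
new MutationObserver(() => {
  if (reranking) return;
  const timeline = document.querySelector<HTMLElement>('[aria-label*="Timeline"]');
  if (timeline) void rerankTimeline(timeline);
}).observe(document.body, { childList: true, subtree: true });
```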
In a 10-day field experiment on X involving 1,256 participants, conducted during a volatile stretch of the 2024 U.S. presidential campaign, individuals were randomly assigned to feeds with heightened, reduced, or unchanged levels of AAPA content. Piccardi et al. discovered that, relative to the control group, reducing exposure to AAPA content made people feel warmer toward the opposing political party, shifting the baseline by more than 2 points on a 100-point scale. Increasing exposure resulted in a comparable shift toward colder feelings toward the opposing party. According to the authors, the observed effects are substantial, roughly comparable to three years’ worth of change in affective polarization over the duration of the intervention, though it remains unknown whether these effects persist beyond the study period. What’s more, these shifts did not appear to fall disproportionately on any particular group of users.
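The causal claim rests on the three-arm randomized design described above: each participant is assigned to one condition, and the outcome is the change on a 0-to-100 feeling thermometer relative to the control group. A minimal sketch of that comparison, with hypothetical data shapes:

```typescript
// Skeleton of the three-arm design: random assignment, then a
// difference-in-means estimate against the control arm. Hypothetical data.
type Arm = "increase" | "decrease" | "control";

function assignArm(): Arm {
  const arms: Arm[] = ["increase", "decrease", "control"];
  return arms[Math.floor(Math.random() * arms.length)];
}

interface Result {
  arm: Arm;
  thermometerChange: number; // post-study minus pre-study, 0-100 scale
}

function treatmentEffect(results: Result[], arm: "increase" | "decrease"): number {
  const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  const changes = (a: Arm) =>
    results.filter((r) => r.arm === a).map((r) => r.thermometerChange);
  // A positive value means warmer feelings toward the opposing party.
  return mean(changes(arm)) - mean(changes("control"));
}
```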
These shifts also extended to emotional experience: participants reported changes in anger and sadness through brief in-feed surveys, demonstrating that algorithmically mediated exposure to political hostility can shape both affective polarization and moment-to-moment emotional experience. “One study – or set of studies – will never be the final word on how social media affects political attitudes. What is true of Facebook might not be true of TikTok, and what was true of Twitter 4 years ago might not be relevant to X today,” write Jennifer Allen and Joshua Tucker in an accompanying Perspective. “The way forward is to embrace creative research and to build methodologies that adapt to the current moment. Piccardi et al. present a viable tool for doing that.”
Reranking partisan animosity in algorithmic social media feeds alters affective polarization

Stanford University researchers may have hit on something surprisingly effective: a small change in how posts appear on X (formerly Twitter) can make people less hostile toward the other side of the political divide. And the twist is that nothing is censored. Nothing is deleted. The posts are simply pushed a little lower in the feed. The study, run during the heated 2024 US election season, used a browser-based tool that sat on top of X’s existing algorithm.
Around 1,200 volunteers installed it and continued using X as usual for ten days. The tool scanned their timelines for posts that contained extreme rhetoric, partisan hostility, or anti-democratic sentiments. Instead of blocking them, it quietly shifted those posts further down the feed so they didn’t appear front and centre. At the conclusion of the experiment, both liberal and conservative participants reported significantly warmer attitudes toward people on opposing political sides than participants in the control group, who saw unchanged levels of hostile content. The key insight is simple: posts at the top of your feed determine your emotional baseline.
When content that triggers emotions no longer hits first, temperatures typically decrease. The posts can still be seen by scrolling; they just lose some of their power because they are no longer directly in your face.

A study shows that the order in which platforms like X display content to their users affects their animosity towards other ideological groups

A team of U.S. researchers has shown that the order in which political messages are displayed on social media platforms does affect polarization — one of the most debated issues since the rise of social media. The phenomenon is equally strong regardless of the user’s political orientation, the academics note in an article published on Thursday in Science. Social media is an important source of political information.
For hundreds of millions of people worldwide, it is even the main channel for political engagement: they receive political content, share it, and express their opinions through these platforms. Given the relevance of social media in this sphere, understanding how the algorithms that operate on these platforms work is crucial — but opacity is the norm in the industry. That makes it extremely difficult to estimate the extent to which the selection of highlighted content shapes users’ political views. How did the researchers overcome algorithmic opacity to alter the order of posts that social media users see? Tiziano Piccardi from Stanford University and his colleagues developed a browser extension that intercepts and reorders the feed (the timeline of posts) of certain social networks in real time. The tool uses a large language model (LLM) to assign a score to each piece of content, measuring the extent to which it contains “antidemocratic attitudes and partisan animosity” (AAPA).
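The article does not reproduce the study’s prompt or model, so the sketch below is only illustrative: it asks an OpenAI-style chat-completions endpoint to rate a post from 0 to 1 for AAPA. The model name, prompt wording, and clamping logic are all assumptions.

```typescript
// Hedged sketch of the scoring step: ask a chat-completions endpoint to
// rate one post for antidemocratic attitudes and partisan animosity.
const PROMPT =
  "Rate the following social media post from 0 (none) to 1 (extreme) for " +
  "antidemocratic attitudes and partisan animosity. Reply with a number only.";

async function scoreAapa(postText: string): Promise<number> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // In a browser extension the key would live in extension storage,
      // not an environment variable; this is Node-style for brevity.
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // illustrative; the study's model is not named here
      messages: [
        { role: "system", content: PROMPT },
        { role: "user", content: postText },
      ],
      temperature: 0, // deterministic scoring
    }),
  });
  const data = await res.json();
  const score = parseFloat(data.choices[0].message.content);
  // Clamp malformed replies into [0, 1]; default to 0 (no demotion).
  return Number.isFinite(score) ? Math.min(1, Math.max(0, score)) : 0;
}
```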
Once scored, the posts were reordered accordingly — without any collaboration from the platform or reliance on its algorithm. The experiment involved 1,256 participants, all of whom had been duly informed. The study focused on X, as it is the social network most used in the U.S. for expressing political opinions, and it was conducted during the weeks leading up to the 2024 presidential election to ensure a high circulation of political messages. The tool could also open up ways to design measures that not only reduce partisan hostility but foster greater social trust and a healthier democratic discourse across party lines, said Michael Bernstein, a senior author of the study.
LOS ANGELES: Researchers at Stanford University say they have developed a tool that can noticeably reduce partisan hostility in X feeds by reordering posts rather than blocking them. The study, published in the journal Science, suggests it may one day be possible to let users control their own social media algorithms, not only on X, formerly Twitter, but on other platforms. After acquiring Twitter in 2022, tech billionaire Elon Musk removed many restrictions intended to protect users from hate speech and misinformation. Users who share Musk’s right-leaning political views saw their voices gain more weight on the service. The Stanford team, working without collaboration from X, built a browser extension that re-sorted participants’ X feeds.

A new study led by researchers at Stanford demonstrates that social media polarisation can be eased through modest adjustments to content ranking rather than content removal or heavy moderation.
The team developed a browser-based tool that reorders users’ feeds on X to push down posts containing extreme partisan hostility, calls for political violence, or antidemocratic rhetoric. When applied in a field experiment during the 2024 US election, this intervention softened negative attitudes toward opposing political groups across the partisan spectrum. The experiment involved 1,256 individuals who consented to have their feed content re-ranked over a 10-day period. Some participants had their exposure to aggressive and antidemocratic posts reduced, while others experienced increased exposure. Participants in the reduced-hostility group exhibited a shift toward warmer feelings for supporters of the other party, while those exposed to more hostility became colder. The change, though modest in absolute terms, is equivalent to roughly three years of the growth in affective polarization measured historically between 1978 and 2020.
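The “three years” framing follows from dividing the observed shift by the historical rate at which affective polarization has drifted. The numbers below are illustrative only; the assumed drift rate is chosen to make the arithmetic visible, not taken from the paper.

```typescript
// Back-of-the-envelope: convert a thermometer shift into "years" of
// historical drift. Both numbers are illustrative, not from the paper.
const effectPoints = 2.0;        // observed shift on the 0-100 thermometer
const driftPerYear = 2.0 / 3.0;  // assumed long-run drift, points per year
console.log(`${(effectPoints / driftPerYear).toFixed(1)} years`); // "3.0 years"
```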
The research underscores that platform algorithms play a decisive role in shaping political discourse. By using a large language model to flag and re-score content in real time, the tool identifies posts that breach democratic norms or express intense partisan animosity — without removing any content or requiring cooperation from the platform. All posts remain available, but those judged toxic appear later in the feed hierarchy, reducing their visibility. The findings challenge the notion that political toxicity and division are an unavoidable byproduct of social media. Instead, they suggest that design decisions — particularly around algorithmic ranking — influence how divisive content spreads and how users perceive opposing viewpoints. Crucially, the intervention did not significantly alter standard engagement metrics like likes or reposts, indicating that it may be possible to curb polarisation without sacrificing platform activity or user engagement.
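Because posts are demoted rather than deleted, the core re-ranking step is easy to express. A minimal sketch, assuming a scored Post shape and an arbitrary 0.5 threshold (both illustrative):

```typescript
// Demotion without removal: every post stays in the feed; posts whose
// AAPA score crosses the threshold sink below the rest as a block.
interface Post {
  id: string;
  text: string;
  aapaScore: number; // 0..1, from the LLM classifier
}

function demoteHostile(feed: Post[], threshold = 0.5): Post[] {
  // filter() preserves order, so ranking within each group is untouched.
  const calm = feed.filter((p) => p.aapaScore < threshold);
  const hostile = feed.filter((p) => p.aapaScore >= threshold);
  return [...calm, ...hostile]; // same posts, new order, nothing deleted
}
```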
Authors of the study believe this opens the door to giving users and researchers more control over the algorithms shaping their experience — a shift away from opaque, engagement-driven systems toward more socially purposeful designs. They envisage a future in which individualized feed algorithms prioritise civic health and cross-party empathy rather than outrage and conflict.