Algorithms Do Widen the Divide: Social Media Feeds Shape Political Views
A study shows that the order in which platforms like X display content to their users affects their animosity towards other ideological groups. A team of U.S. researchers has shown that the order in which political messages are displayed on social media platforms does affect polarization — one of the most debated issues since the rise of social media and the... The phenomenon is equally strong regardless of the user’s political orientation, the academics note in an article published on Thursday in Science. Social media is an important source of political information. For hundreds of millions of people worldwide, it is even the main channel for political engagement: they receive political content, share it, and express their opinions through these platforms.
Given the relevance of social media in this sphere, understanding how the algorithms that operate on these platforms work is crucial — but opacity is the norm in the industry. That makes it extremely difficult to estimate the extent to which the selection of highlighted content shapes users’ political views. How did the researchers overcome algorithmic opacity to alter the order of posts that social media users see? Tiziano Piccardi from Stanford University and his colleagues developed a browser extension that intercepts and reorders the feed (the stream of posts a platform shows the user) of certain social networks in real time. The tool uses a large language model (LLM) to assign each piece of content a score measuring the extent to which it contains “antidemocratic attitudes and partisan animosity” (AAPA). Once scored, the posts are reordered one way or another — without any collaboration from the platform or reliance on its algorithm.
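To make that mechanism concrete, here is a minimal Python sketch of the scoring-and-reranking loop. It is not the researchers' code (the actual tool is a browser extension backed by a remote LLM service); the Post class, the keyword-based score_aapa stub, and rerank_feed are hypothetical stand-ins that only illustrate the idea of scoring every captured post and re-sorting the feed without deleting anything.

```python
# Minimal sketch of platform-independent feed reranking (illustrative only).
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str


def score_aapa(post: Post) -> float:
    """Stand-in for the LLM call that rates how strongly a post expresses
    antidemocratic attitudes and partisan animosity (AAPA). A toy keyword
    count is used here; the real tool queries a large language model."""
    hostile_markers = ("traitor", "enemy of the people", "lock them up")
    return float(sum(marker in post.text.lower() for marker in hostile_markers))


def rerank_feed(posts: list[Post], downrank: bool = True) -> list[Post]:
    """Reorder a captured timeline without removing anything: high-AAPA posts
    sink toward the bottom (downrank=True) or rise to the top (downrank=False)."""
    return sorted(posts, key=score_aapa, reverse=not downrank)


if __name__ == "__main__":
    feed = [
        Post("1", "New polling data released in the Senate race."),
        Post("2", "Anyone who votes for them is a traitor. Lock them up!"),
        Post("3", "Debate recap: candidates clash on the economy."),
    ]
    for post in rerank_feed(feed, downrank=True):
        print(post.post_id, post.text)
```

The key design point the study describes is that this all happens client-side, on the feed the browser has already received, which is why no cooperation from the platform is needed.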
The experiment involved 1,256 participants, all of whom had given informed consent. The study focused on X, as it is the social network most used in the U.S. for expressing political opinions, and it was conducted during the weeks leading up to the 2024 presidential election to ensure a high circulation of political messages. New research shows the impact that social media algorithms can have on partisan political feelings, using a new tool that hijacks the way platforms rank content. How much does someone’s social media algorithm really affect how they feel about a political party, whether it’s one they identify with or one they feel negatively about? Until now, the answer has escaped researchers because they’ve had to rely on the cooperation of social media platforms.
New, intercollegiate research published Nov. 27 in Science, co-led by Northeastern University researcher Chenyan Jia, sidesteps this issue by installing an extension on consenting participants’ browsers that automatically reranks the posts those users see, in real time and still... Jia and her team discovered that after one week, users’ feelings toward the opposing party shifted by about two points — an effect normally seen over three years — revealing algorithms’ strong influence on... Researchers in the United States have developed a new tool that allows independent scientists to study how social media algorithms affect users—without needing permission from the platforms themselves. The findings suggest that platforms could reduce political polarisation by down-ranking hostile content in their algorithms. The tool, a browser extension powered by artificial intelligence (AI), scans posts on X, formerly Twitter, for any themes of anti-democratic and extremely negative partisan views, such as posts that could call for violence...
It then re-orders posts on the X feed in a “matter of seconds,” the study showed, so the polarising content was nearer to the bottom of a user’s feed. The team of researchers from Stanford University, the University of Washington, and Northeastern University then tested the browser extension on the X feeds of over 1,200 participants who consented to having them modified for... A new Stanford-led study is challenging the idea that political toxicity is simply an unavoidable element of online culture. Instead, the research suggests that the political toxicity many users encounter on social media is a design choice that can be reversed. Researchers have unveiled a browser-based tool that can cool the political temperature of an X feed by quietly downranking hostile or antidemocratic posts. Remarkably, this can occur without requiring any deletions, bans, or cooperation from X itself.
The study offers the takeaway that algorithmic interventions can meaningfully reduce partisan animosity while still preserving political speech. It also advances a growing movement advocating user control over platform ranking systems and the algorithms that shape what they see, which were traditionally guarded as proprietary, opaque, and mainly optimized for engagement rather... The research tool was built by a multidisciplinary team across Stanford, Northeastern University, and the University of Washington, composed of computer scientists, psychologists, communication scholars, and information scientists. Their goal in the experiment was to counter the engagement-driven amplification of divisive content that tends to reward outrage, conflict, and emotionally charged posts, without silencing political speech. Using a large language model, the tool analyzes posts in real time and identifies several categories of harmful political subject matter, including calls for political violence, attacks on democratic norms, and extreme hostility toward... When the system flags such content, it simply pushes those posts lower in the feed so they are less noticeable, like seating your argumentative uncle at the far end of the table during the...
A small tweak to your social media feed can make your opponents feel a little less like enemies. In a new study published in Science, a Stanford-led team used a browser extension and a large language model to rerank posts on X during the 2024 U.S. presidential campaign, showing that changing the visibility of the most hostile political content can measurably dial down partisan heat without deleting a single post or asking the platform for permission. The experiment, run with 1,256 Democrats and Republicans who used X in the weeks after an attempted assassination of Donald Trump and the withdrawal of Joe Biden from the race, targeted a particular kind... The researchers focused on posts that expressed antidemocratic attitudes and partisan animosity, such as cheering political violence, rejecting bipartisan cooperation, or suggesting that democratic rules are expendable when they get in the way of... To reach inside a platform they did not control, first author Tiziano Piccardi and colleagues built a browser extension that quietly intercepted the web version of the X timeline.
Every time a participant opened the For you feed, the extension captured the posts, sent them to a remote backend, and had a large language model score each political post on eight dimensions of... If a post hit at least four of those eight factors, it was tagged as the kind of content most likely to inflame. The tool then reordered the feed for consenting users in real time. In one experiment, it pushed those posts down the feed so participants would need to scroll further to hit the worst material. In a parallel experiment, it did the opposite and pulled that content higher. “Social media algorithms directly impact our lives, but until now, only the platforms had the ability to understand and shape them,” said Michael Bernstein, a professor of computer science in Stanford’s School of Engineering...
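The article does not spell out all eight scoring dimensions, but the flagging rule itself is simple, and the sketch below (continuing the hypothetical Python from above) shows one way to express it: the LLM labels each post on eight binary AAPA dimensions, a post is flagged once at least four are present, and the two experimental arms then either push flagged posts down or pull them up. The dimension labels here are placeholders assembled from behaviors the excerpts mention (calls for violence, rejecting bipartisan cooperation, jailing opponents, and so on), not the study's exact taxonomy.

```python
# Illustrative flagging rule: a post counts as high-AAPA content when the LLM
# marks at least four of eight dimensions as present. The labels below are
# placeholders, not the study's exact taxonomy.
AAPA_DIMENSIONS = [
    "calls_for_political_violence",
    "attacks_democratic_norms",
    "rejects_bipartisan_cooperation",
    "supports_jailing_opponents",
    "treats_democratic_rules_as_expendable",
    "extreme_hostility_toward_out_party",
    "dehumanizes_political_opponents",   # hypothetical label
    "celebrates_opponents_misfortune",   # hypothetical label
]
FLAG_THRESHOLD = 4  # "at least four of those eight factors"


def is_flagged(llm_labels: dict[str, bool]) -> bool:
    """True when the LLM judged four or more AAPA dimensions to be present."""
    return sum(llm_labels.get(dim, False) for dim in AAPA_DIMENSIONS) >= FLAG_THRESHOLD


# A post labeled positive on five dimensions is flagged; the downranking arm
# then moves it lower in the feed, while the opposing arm moves it higher.
example = {dim: (i < 5) for i, dim in enumerate(AAPA_DIMENSIONS)}
print(is_flagged(example))  # True
```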
“We have demonstrated an approach that lets researchers and end users have that power.” In the lead-up to the 2024 U.S. presidential election, social media has become the big tent where everyone, from serious political commentators to your uncle who shares cat memes, comes to discuss politics. But here's the kicker: a lot of the political content you see on social platforms isn't just from people you follow. In fact, about half of the tweets on the "For You" timeline come from accounts you don’t even know exist! So, what political thoughts are sneaking into your feed, and how does this play into our democratic conversations?
Social media algorithms are like tense magicians constantly at work behind the curtains. They decide which tweets pop up on our screens, all while we sip our coffee and swipe away. It’s no wonder that figuring out how these algorithms work feels like trying to guess a magician's next trick. Previous studies show that certain political voices get amplified within the tweets of accounts we do follow. But what about those mysterious tweets from accounts we don’t follow? That’s where the real curiosity lies.
As we gear up for the elections, we need to dig into how these algorithms influence the political content we consume and what that means for our points of view. To get to the bottom of this algorithmic mess, we created 120 "sock-puppet" accounts: basically, fake accounts with different political leanings. We set them loose in the wilds of social media to see what political content they were exposed to over three weeks. Our goal? To find out whether users with different political beliefs were being fed different kinds of political content, and whether there was any bias in the recommendations they received. Spoiler alert: we found some intriguing results.
It turns out that the algorithm tends to favor a few popular accounts across the board, but right-leaning users got the short end of the stick when it came to exposure inequality. Both left- and right-leaning users saw more of what they agreed with, while encountering less of what they didn’t like. A web-based method was shown to mitigate political polarization on X by nudging antidemocratic and extremely negative partisan posts lower in a user’s feed. The tool, which is independent of the platform, has the potential to give users more say over what they see on social media. A new tool shows it is possible to turn down the partisan rancor in an X feed — without removing political posts and without the direct cooperation of the platform. The study, from researchers at the University of Washington, Stanford University and Northeastern University, also indicates that it may one day be possible to let users take control of their social media algorithms.
The researchers created a seamless, web-based tool that reorders content to move posts lower in a user’s feed when they contain antidemocratic attitudes and partisan animosity, such as advocating for violence or jailing supporters... Researchers published their findings Nov. 27 in Science. In early 2025, tens of thousands of users across the U.S. reported something unusual: they opened their social media apps and discovered that they were suddenly following Republican or MAGA-themed pages they had never previously engaged with.
Some were being recommended AI tools branded with conservative slogans. Others found that they had unintentionally unfollowed progressive news outlets, activist accounts, or Democratic political figures, without ever adjusting their settings. These reports weren’t isolated. They came from everyday users across Facebook, Instagram, X (formerly Twitter), and TikTok, representing a diverse range of political views. This wasn’t a glitch—it was the result of a combination of algorithmic recalibration, political influence, and deliberate platform-level changes that occurred after key events, including Elon Musk’s open endorsement of Trump and structural shifts... In some cases, platform engineers quietly updated “engagement scoring” models, boosting content aligned with Republican messaging under the guise of “neutrality” or “balance.” On TikTok, users who watched even a few seconds of patriotic...
Liberal and moderate users weren’t the only ones impacted. Conservative users also reported their feeds shifting, but toward increasingly radical, rage-inducing content. Even those who identified as traditional Republicans saw more conspiracy-themed posts, less nuance, and fewer centrist or policy-focused voices. Meanwhile, apolitical users were inadvertently drawn into political feeds, often through lifestyle or trending content that served as Trojan horses for ideological messaging. This wasn’t about individual choice—it was systemic. Algorithms were updated to prioritize “stickiness,” keeping users engaged at any cost.
Political content, especially when framed as urgent, emotional, or provocative, tends to perform well. And the platforms know it. That’s why your feed changed—because their business model requires it. You didn’t consent to be reprogrammed, but that’s what happened, and it’s still happening, whether you notice it or not.