The Hidden Influence Of Social Media Algorithms In Politics
In the lead-up to the 2024 U.S. presidential election, social media has become the big tent where everyone, from serious political commentators to your uncle who shares cat memes, comes to discuss politics. But here's the kicker: a lot of the political content you see on social platforms isn't just from people you follow. In fact, about half of the tweets on the "For You" timeline come from accounts you don't even know exist. So, what political thoughts are sneaking into your feed, and how does this play into our democratic conversations? Social media algorithms are like magicians constantly at work behind the curtain.
They decide which tweets pop up on our screens, all while we sip our coffee and swipe away. It’s no wonder that figuring out how these algorithms work feels like trying to guess a magician's next trick. Previous studies show that certain political voices get amplified within the tweets of accounts we do follow. But what about those mysterious tweets from accounts we don’t follow? That’s where the real curiosity lies. As we gear up for the elections, we need to dig into how these algorithms influence the political content we consume and what that means for our points of view.
To get to the bottom of this algorithmic mess, we created 120 "sock-puppet" accounts: fake accounts with different political leanings. We set them loose in the wilds of social media to see what political content they were exposed to over three weeks. Our goal? To find out whether users with different political beliefs were being fed different kinds of political content, and whether there was any bias in the recommendations they received. Spoiler alert: we found some intriguing results. It turns out that the algorithm tends to favor a few popular accounts across the board, but right-leaning users got the short end of the stick when it came to exposure inequality. Both left- and right-leaning users saw more of what they agreed with, while encountering less of what they didn't like.
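To make "exposure inequality" concrete: one common way to quantify it is a Gini coefficient over how often each recommended account appears in a timeline, where 0 means exposure is spread evenly across accounts and values near 1 mean a handful of accounts dominate. The sketch below is illustrative only, not the study's actual metric or pipeline; the `timeline` input format is an assumption for the example.

```python
from collections import Counter

def exposure_gini(timeline: list[str]) -> float:
    """Gini coefficient of how unevenly exposure is spread across
    the accounts appearing in a timeline.

    `timeline` is a list of author IDs, one entry per recommended
    post (a hypothetical input format, for illustration only).
    """
    counts = sorted(Counter(timeline).values())  # exposures per account, ascending
    n = len(counts)
    total = sum(counts)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula for sorted values: G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n
    weighted = sum(i * x for i, x in enumerate(counts, start=1))
    return (2 * weighted) / (n * total) - (n + 1) / n

# A feed dominated by one account is far more unequal than a balanced one.
print(exposure_gini(["a", "a", "a", "a", "b", "c"]))  # ~0.33, skewed
print(exposure_gini(["a", "b", "c", "d", "e", "f"]))  # 0.0, perfectly even
```

Under a measure like this, "right-leaning users got the short end of the stick" would mean their timelines scored higher, i.e., their recommendations were concentrated on fewer accounts.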
New research shows the impact that social media algorithms can have on partisan political feelings, using a new tool that hijacks the way platforms rank content. How much does someone’s social media algorithm really affect how they feel about a political party, whether it’s one they identify with or one they feel negatively about? Until now, the answer has escaped researchers because they’ve had to rely on the cooperation of social media platforms. New multi-university research published Nov. 27 in Science, co-led by Northeastern University researcher Chenyan Jia, sidesteps this issue by installing an extension on consenting participants’ browsers that automatically reranks the posts those users see, in real time and still...
Jia and her team discovered that after one week, users’ feelings toward the opposing party shifted by about two points — an effect normally seen over three years — revealing algorithms’ strong influence on... A web-based method was shown to mitigate political polarization on X by nudging antidemocratic and extremely negative partisan posts lower in a user’s feed. The tool, which is independent of the platform, has the potential to give users more say over what they see on social media. A new tool shows it is possible to turn down the partisan rancor in an X feed — without removing political posts and without the direct cooperation of the platform. The study, from researchers at the University of Washington, Stanford University and Northeastern University, also indicates that it may one day be possible to let users take control of their social media algorithms. The researchers created a seamless, web-based tool that reorders content to move posts lower in a user’s feed when they contain antidemocratic attitudes and partisan animosity, such as advocating for violence or jailing supporters...
Researchers published their findings Nov. 27 in Science. Separately, a Moody College researcher led an unprecedented study with Meta exploring the role of social media in elections. In the aftermath of the 2016 election, politicians, the media and everyday people raised numerous concerns about the effects of social media on democracy and how platforms like Facebook and Instagram influence people’s political... What role do these powerful social networks and the algorithms that run them have in how people view candidates or feel about important issues? Over the past several years, a multi-university academic team has been working alongside Meta to answer these very important questions as part of an unprecedented research project co-led by a Moody College Communication Studies professor...
As part of the project, the team had access to data from Meta that had never before been made available to researchers and was given the ability to alter the Facebook and Instagram feeds... In the summer of 2023, researchers released their first findings from the project in a series of papers published in both Nature and Science. And while they found that algorithms have a tremendous effect on what people see in their feeds, changing these algorithms to change what people see doesn’t necessarily affect people’s political attitudes. Also, when the researchers looked at platform-wide data from U.S. adults, they found that many political news URLs were seen, and engaged with, primarily by conservatives or liberals, but not both.
A new experiment using an AI-powered browser extension to reorder feeds on X (formerly Twitter), conducted independently of the X platform’s algorithm, shows that even small changes in exposure to hostile political content... The findings provide direct causal evidence of the impact of algorithmically controlled post ranking on a user’s social media feed. Social media has become an important source of political information for many people worldwide. However, these platforms’ algorithms exert a powerful influence on what we encounter while browsing, subtly steering thoughts, emotions, and behaviors in poorly understood ways. Although many explanations for how these ranking algorithms affect us have been proposed, testing these theories has proven exceptionally difficult. This is because the platform operators alone control how their proprietary algorithms behave and are the only ones capable of experimenting with different feed designs and evaluating their causal effects.
To sidestep these challenges, Tiziano Piccardi and colleagues developed a novel method that lets researchers reorder people’s social media feeds in real time as they browse, without permission from the platforms themselves. Piccardi et al. created a lightweight, non-intrusive browser extension, much like an ad blocker, that intercepts and reshapes X’s web feed in real time, leveraging large language model-based classifiers to evaluate and reorder posts based on their... This tool allowed the authors to systematically identify and vary how content expressing antidemocratic attitudes and partisan animosity (AAPA) appeared on a user’s feed and observe the effects under controlled experimental conditions. In a 10-day field experiment on X involving 1,256 participants and conducted during a volatile stretch of the 2024 U.S. presidential campaign, individuals were randomly assigned to feeds with heightened, reduced, or unchanged levels of AAPA content.
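A minimal sketch of how such platform-independent reranking could work, assuming each intercepted post has already been given an AAPA score between 0 and 1 (a scoring sketch follows further below). The `Post` type, the threshold value, and the condition names are illustrative assumptions, not the authors' actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    aapa_score: float  # 0..1, from an upstream classifier (see scoring sketch below)

def rerank(feed: list[Post], condition: str, threshold: float = 0.5) -> list[Post]:
    """Reorder an intercepted feed without touching the platform's ranking.

    condition: "reduce" demotes high-AAPA posts, "increase" promotes them,
    "control" leaves the platform's order unchanged. The 0.5 threshold is
    an illustrative assumption, not the study's actual cutoff.
    """
    if condition == "control":
        return list(feed)
    flagged = [p for p in feed if p.aapa_score >= threshold]
    rest = [p for p in feed if p.aapa_score < threshold]
    # List comprehensions preserve order, so within each group the
    # platform's original relative ranking is kept intact.
    return flagged + rest if condition == "increase" else rest + flagged

# Example: a "reduce" feed pushes hostile posts toward the bottom.
feed = [Post("a", 0.9), Post("b", 0.1), Post("c", 0.7)]
print([p.post_id for p in rerank(feed, "reduce")])  # ['b', 'a', 'c']
```

Note that the extension only reorders what the platform already served: no posts are added or removed, which matches the study's description of demoting rather than filtering hostile content.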
Piccardi et al. discovered that, relative to the control group, reducing exposure to AAPA content made people feel warmer toward the opposing political party, shifting the baseline by more than 2 points on a 100-point scale. Increasing exposure resulted in a comparable shift toward colder feelings toward the opposing party. According to the authors, the observed effects are substantial, roughly comparable to three years’ worth of change in affective polarization over the duration of the intervention, though it remains unknown if these effects persist... What’s more, these shifts did not appear to fall disproportionately on any particular group of users. These shifts also extended to emotional experience; participants reported changes in anger and sadness through brief in-feed surveys, demonstrating that algorithmically mediated exposure to political hostility can shape both affective polarization and moment-to-moment emotional...
“One study – or set of studies – will never be the final word on how social media affects political attitudes. What is true of Facebook might not be true of TikTok, and what was true of Twitter 4 years ago might not be relevant to X today,” write Jennifer Allen and Joshua Tucker in... “The way forward is to embrace creative research and to build methodologies that adapt to the current moment. Piccardi et al. present a viable tool for doing that.” The study is titled “Reranking partisan animosity in algorithmic social media feeds alters affective polarization.”
A study shows that the order in which platforms like X display content to their users affects their animosity toward other ideological groups. A team of U.S. researchers has shown that the order in which political messages are displayed on social media platforms does affect polarization — one of the most debated issues since the rise of social media and the... The phenomenon is equally strong regardless of the user’s political orientation, the academics note in an article published on Thursday in Science. Social media is an important source of political information. For hundreds of millions of people worldwide, it is even the main channel for political engagement: they receive political content, share it, and express their opinions through these platforms.
Given the relevance of social media in this sphere, understanding how the algorithms that operate on these platforms work is crucial — but opacity is the norm in the industry. That makes it extremely difficult to estimate the extent to which the selection of highlighted content shapes users’ political views. How did the researchers overcome algorithmic opacity to alter the order of posts that social media users see? Tiziano Piccardi from Stanford University and his colleagues developed a browser extension that intercepts and reorders the feed (the stream of posts a user sees) of certain social networks in real time. The tool uses a large language model (LLM) to assign a score to each piece of content, measuring the extent to which it contains “antidemocratic attitudes and partisan animosity” (AAPA). Once scored, the posts were reordered one way or another — without any collaboration from the platform or reliance on its algorithm.
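To illustrate the scoring step: the sketch below shows one plausible way to use an LLM as an AAPA classifier by prompting it for a 0-to-1 score. The prompt wording, the `llm_complete` callable, and the fallback behavior are hypothetical stand-ins; the study's actual prompts and model are not described here.

```python
from typing import Callable

# `llm_complete` is a hypothetical stand-in for any text-completion call
# (e.g., a hosted model API); the study's actual model is not specified here.
def aapa_score(post_text: str, llm_complete: Callable[[str], str]) -> float:
    """Ask an LLM to rate a post for antidemocratic attitudes and
    partisan animosity (AAPA) on a 0-1 scale."""
    prompt = (
        "Rate the following social media post for antidemocratic attitudes "
        "and partisan animosity (e.g., advocating violence against or "
        "jailing political opponents). Answer with only a number from 0.0 "
        "(none) to 1.0 (extreme).\n\nPost: " + post_text
    )
    reply = llm_complete(prompt)
    try:
        return min(1.0, max(0.0, float(reply.strip())))
    except ValueError:
        return 0.0  # unparseable reply: treat as non-AAPA rather than demote

# Usage with a trivial fake model, just to show the plumbing:
fake_llm = lambda prompt: "0.8"
print(aapa_score("They should all be locked up!", fake_llm))  # 0.8
```

Scores like these would then feed a reordering step such as the `rerank` sketch shown earlier: in the "reduce" condition, high-scoring posts are moved toward the bottom of the feed rather than hidden.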
The experiment involved 1,256 participants, who had all been duly informed. The study focused on X, as it is the social network most used in the U.S. for expressing political opinions, and it was conducted during the weeks leading up to the 2024 presidential election to ensure a high circulation of political messages.

The last decade taught us painful lessons about how social media can reshape democracy: misinformation spreads faster than truth, online communities harden into echo chambers, and political divisions deepen as polarization grows. Now, another wave of technology is transforming how voters learn about elections—only faster, at scale, and with far less visibility. Large language models (LLMs) like ChatGPT, Claude, and Gemini, among others, are becoming the new vessels (and sometimes, arbiters) of political information.
Our research suggests their influence is already rippling through our democracy. LLMs are being adopted at a pace that makes social media uptake look slow. At the same time, traffic to traditional news and search sites has declined. As the 2026 midterms near, more than half of Americans now have access to AI, which can be used to gather information about candidates, issues, and elections. Meanwhile, researchers and firms are exploring the use of AI to simulate polling results or to understand how to synthesize voter opinions. These models may appear neutral—politically unbiased, and merely summarizing facts from different sources found in their training data or on the internet.
At the same time, they operate as black boxes, designed and trained in ways users can’t see. Researchers are actively trying to answer the question of whose opinions LLMs reflect. Given their immense power, prevalence, and ability to “personalize” information, these models have the potential to shape what voters believe about candidates, issues, and elections as a whole. And we don’t yet know the extent of that influence.