Platform Independent Experiments On Social Media
Changing algorithms with artificial intelligence tools can influence partisan animosity. A new experiment using an AI-powered browser extension to reorder feeds on X (formerly Twitter), conducted independently of the X platform’s algorithm, shows that even small changes in exposure to hostile political content can measurably shift how people feel about the opposing political party. The findings provide direct causal evidence that the algorithmic ranking of posts in a social media feed shapes users’ political attitudes. Social media has become an important source of political information for many people worldwide. However, these platforms’ algorithms exert a powerful influence on what we encounter during use, subtly steering thoughts, emotions, and behaviors in poorly understood ways.
Although many explanations for how these ranking algorithms affect us have been proposed, testing these theories has proven exceptionally difficult. This is because the platform operators alone control how their proprietary algorithms behave and are the only ones capable of experimenting with different feed designs and evaluating their causal effects. To sidestep these challenges, Tiziano Piccardi and colleagues developed a novel method that lets researchers reorder people’s social media feeds in real time as they browse, without permission from the platforms themselves. Piccardi et al. created a lightweight, non-intrusive browser extension, much like an ad blocker, that intercepts and reshapes X’s web feed in real time, leveraging large language model-based classifiers to evaluate and reorder posts based on their content. This tool allowed the authors to systematically identify and vary how content expressing antidemocratic attitudes and partisan animosity (AAPA) appeared in a user’s feed and to observe the effects under controlled experimental conditions.
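To make the mechanics concrete, here is a minimal sketch, in TypeScript, of how a content script of this kind could intercept a rendered feed and score posts with an external classifier. The endpoint, the CSS selectors, and the 0-to-1 score are illustrative assumptions, not the authors’ actual implementation.

```typescript
// Sketch of a content script that intercepts X's rendered feed and scores
// posts with a researcher-hosted LLM classifier. All names here (scoreAapa,
// the endpoint, the selectors, the 0-1 scale) are hypothetical.

const CLASSIFIER_URL = "https://research.example.org/classify"; // hypothetical

// Ask the classifier how strongly a post expresses antidemocratic
// attitudes or partisan animosity (AAPA), on a 0-1 scale.
async function scoreAapa(text: string): Promise<number> {
  const res = await fetch(CLASSIFIER_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
  const { score } = (await res.json()) as { score: number };
  return score;
}

// Watch the timeline for newly rendered posts and tag each one with its
// score, so a later reranking step can reorder them before the user
// scrolls to them.
const timeline = document.querySelector('[aria-label="Timeline"]');
new MutationObserver(async (mutations) => {
  for (const m of mutations) {
    for (const node of m.addedNodes) {
      if (!(node instanceof HTMLElement)) continue;
      const post = node.querySelector('[data-testid="tweet"]');
      if (!post || post.hasAttribute("data-aapa")) continue;
      const score = await scoreAapa(post.textContent ?? "");
      post.setAttribute("data-aapa", score.toFixed(3));
    }
  }
}).observe(timeline!, { childList: true, subtree: true });
```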
In a 10-day field experiment on X involving 1,256 participants, conducted during a volatile stretch of the 2024 U.S. presidential campaign, individuals were randomly assigned to feeds with heightened, reduced, or unchanged levels of AAPA content. Piccardi et al. discovered that, relative to the control group, reducing exposure to AAPA content made people feel warmer toward the opposing political party, shifting the baseline by more than 2 points on a 100-point scale. Increasing exposure produced a comparable shift toward colder feelings. According to the authors, the observed effects are substantial, roughly equivalent to three years’ worth of change in affective polarization compressed into the duration of the intervention, though it remains unknown whether these effects persist once the intervention ends.
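A compact sketch of the design and its headline estimate, with hypothetical names throughout: participants are deterministically assigned to one of the three arms, and the effect is read off as the difference between a treatment arm’s mean feeling-thermometer shift and the control arm’s.

```typescript
type Arm = "increase" | "decrease" | "control";

// Deterministically assign each consenting participant to an arm.
// (A simple string hash for illustration; a real study would use proper
// randomization with balance checks.)
function assignArm(participantId: string): Arm {
  let h = 0;
  for (const ch of participantId) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return (["increase", "decrease", "control"] as const)[h % 3];
}

// The headline outcome is a 0-100 "feeling thermometer" toward the opposing
// party, measured before and after the intervention.
interface Outcome { arm: Arm; pre: number; post: number }

// Treatment effect = mean thermometer shift in a treatment arm minus the
// mean shift in the control arm (a difference in means).
function treatmentEffect(data: Outcome[], arm: Exclude<Arm, "control">) {
  const mean = (xs: number[]) => xs.reduce((s, x) => s + x, 0) / xs.length;
  const shift = (a: Arm) =>
    mean(data.filter((d) => d.arm === a).map((d) => d.post - d.pre));
  return shift(arm) - shift("control"); // e.g. ≈ +2 points for "decrease"
}
```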
What’s more, these shifts did not appear to fall disproportionately on any particular group of users. They also extended to emotional experience; participants reported changes in anger and sadness through brief in-feed surveys, demonstrating that algorithmically mediated exposure to political hostility can shape both affective polarization and moment-to-moment emotional states. “One study – or set of studies – will never be the final word on how social media affects political attitudes. What is true of Facebook might not be true of TikTok, and what was true of Twitter 4 years ago might not be relevant to X today,” write Jennifer Allen and Joshua Tucker in a related Perspective. “The way forward is to embrace creative research and to build methodologies that adapt to the current moment. Piccardi et al.
present a viable tool for doing that.”

Reranking partisan animosity in algorithmic social media feeds alters affective polarization

New research shows the impact that social media algorithms can have on partisan political feelings, using a new tool that hijacks the way platforms rank content. How much does someone’s social media algorithm really affect how they feel about a political party, whether it’s one they identify with or one they feel negatively about? Until now, the answer has eluded researchers, because they’ve had to rely on the cooperation of social media platforms. New, intercollegiate research published Nov.
27 in Science, co-led by Northeastern University researcher Chenyan Jia, sidesteps this issue by installing an extension on consenting participants’ browsers that automatically reranks the posts those users see, in real time and without the platform’s cooperation. Jia and her team discovered that after one week, users’ feelings toward the opposing party shifted by about two points on a 100-point scale, an effect normally seen over three years, revealing algorithms’ strong influence on political attitudes.

Political polarization is a defining feature of modern society, and increasingly, research points to a surprising culprit: the algorithms powering our online experiences. Artificial intelligence (AI) tools, designed to personalize content and maximize engagement, can inadvertently exacerbate partisan animosity by creating echo chambers and reinforcing existing biases. This is not a deliberate attempt to divide, but rather an unintended outcome of optimizing for metrics like clicks and time spent on platforms.

- What: AI algorithms are contributing to increased political polarization.
- Where: Primarily online, across social media platforms and search engines.
- When: The effect has become increasingly pronounced in the last decade, coinciding with the widespread adoption of AI-driven personalization.
- Why it matters: Increased polarization undermines democratic discourse and can lead to political instability.

A web-based method was shown to mitigate political polarization on X by nudging antidemocratic and extremely negative partisan posts lower in a user’s feed. The tool, which is independent of the platform, has the potential to give users more say over what they see on social media. A new tool shows it is possible to turn down the partisan rancor in an X feed, without removing political posts and without the direct cooperation of the platform.
The study, from researchers at the University of Washington, Stanford University and Northeastern University, also indicates that it may one day be possible to let users take control of their social media algorithms. The researchers created a seamless, web-based tool that reorders content to move posts lower in a user’s feed when they contain antidemocratic attitudes and partisan animosity, such as advocating for violence or jailing supporters of the opposing party. The researchers published their findings Nov. 27 in Science.
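A minimal sketch of that demotion step, assuming posts already carry a classifier score (the names and threshold are hypothetical): flagged posts sink toward the bottom of the feed while everything else keeps the platform’s original order, so nothing is removed.

```typescript
interface RankedPost { id: string; aapa: number } // aapa: 0-1 classifier score

// Demote, don't delete: stably partition the feed so posts flagged as AAPA
// (score above a chosen threshold) sink below the rest, while the relative
// order within each group, i.e. the platform's own ranking, is preserved.
function demoteAapa(feed: RankedPost[], threshold = 0.5): RankedPost[] {
  const kept = feed.filter((p) => p.aapa <= threshold);
  const demoted = feed.filter((p) => p.aapa > threshold);
  return [...kept, ...demoted]; // same posts, new order
}
```

Because flagged posts are only pushed lower rather than filtered out, a user who keeps scrolling still encounters all of them, which is what distinguishes this kind of intervention from content removal or moderation.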
As social media platforms become ever more ubiquitous, policymakers are grappling with how to counter their effects on political attitudes and electoral outcomes, addiction and mental health, and misinformation and toxic content. This column suggests practical ways that academic researchers can help guide the design of government regulations aiming to address these issues. The authors explain how to run experiments that use social media platforms to recruit subjects, how to harness platform features and technologies to collect data and generate variation, and the limitations to consider when conducting such experiments. Social media platforms have become ubiquitous in modern economies. As of 2023, there were more than five billion active social media users worldwide, representing over 60% of the world population. In the US, the average user spent 10.5% of their lives on these services (Kemp 2024). Partially due to the increasing share of time that users spend on social media, policymakers have raised concerns that these platforms can influence political attitudes and electoral outcomes (Fujiwara et al. 2020), lead to significant mental health and addiction challenges (Braghieri et al. 2022), and expose consumers to misinformation and toxic content (Jiménez-Durán et al. 2022).
In addition, the dominant social media platforms have considerable market power; as such, it is not clear that market competition can help resolve these policy concerns. Regulators in the EU have implemented several policies to deal with these issues, such as the Digital Markets Act (DMA), the General Data Protection Regulation (GDPR), and the Digital Services Act (DSA). How can the research community provide evidence to help guide the design of such regulations? One option is to empirically evaluate policies after they have been implemented, as has been the case for the EU’s GDPR and DMA, Apple’s ATT, and the German NetzDG law, which can yield useful evidence (… 2022, Aridor et al. 2024, Johnson 2024, Pape et al. 2025). This provides policymakers with meaningful evidence only after years of implementation, and only by evaluating policies that were actually implemented, not counterfactual policies that were considered. Another option is to have platforms explicitly conduct experiments simulating the effects of proposed policy interventions (Guess et al. 2023, Nyhan et al. 2023, Wernerfelt et al. 2025). This option comes with its own set of challenges: it provides platforms with outsized influence on the type of questions and interventions that can be studied, as the platforms are not impartial agents (… et al. 2023a, b). In a forthcoming chapter in the Handbook of Experimental Methods in the Social Sciences (Aridor et al. 2025), we provide a practical guide to a third option that exploits how third-party technologies and platform features can be used for researcher-generated experimental variation.
Our method combines the best of the two aforementioned options: it is accessible to researchers without requiring explicit platform cooperation, and it allows for counterfactual policy evaluation before the implementation of a chosen policy. Our paper provides detailed documentation for running such experiments, starting from using social media platforms to recruit experimental subjects, and documenting how to use a combination of platform features and technologies such as Chrome extensions to collect data and generate variation. Overall, this methodology serves as a powerful toolkit to study policy issues not only on social media platforms, but also on platforms such as Amazon (Farronato et al. 2024), Google Search (Allcott et al. 2025), and YouTube (Aridor forthcoming). We document several experiments that we conducted and explain how they relate to policy challenges.
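As one concrete illustration of that toolkit, the same extension that creates the experimental variation can double as the data-collection layer, logging what each participant was actually exposed to. A hedged sketch, with a hypothetical research endpoint:

```typescript
// Hypothetical exposure log sent from the extension to a researcher-hosted
// server: which posts a participant saw, in what order, under which arm.
interface ExposureEvent {
  participantId: string;
  arm: "increase" | "decrease" | "control";
  postId: string;
  rank: number;   // position in the reranked feed
  aapa: number;   // classifier score at time of exposure
  seenAt: string; // ISO timestamp
}

async function logExposure(event: ExposureEvent): Promise<void> {
  // navigator.sendBeacon survives page unloads, which matters for capturing
  // the last posts seen before a participant closes the tab.
  const url = "https://research.example.org/exposure"; // hypothetical
  const ok = navigator.sendBeacon(url, JSON.stringify(event));
  if (!ok) {
    // Fall back to a keepalive fetch if the beacon is rejected.
    await fetch(url, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(event),
      keepalive: true,
    });
  }
}
```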