Social Media Research Tool Lowers the Political Temperature
New research led by Stanford University demonstrates that an algorithmic intervention can reduce partisan animosity and cool political hostility on X feeds.

A new study has unveiled a web-based research tool capable of significantly cooling partisan rhetoric on social media platforms like X, all without the platform’s direct involvement. The multidisciplinary research, published in the journal Science, not only offers a concrete way to reduce political polarization but also paves the way for users to gain more control over the proprietary algorithms that shape what they see. The researchers sought to counter the cycle in which social media algorithms amplify emotionally charged, divisive content to maximize user engagement. The tool runs as a seamless web extension and uses a large language model (LLM) to scan a user’s X feed for posts expressing antidemocratic attitudes and partisan animosity, such as advocating violence or extreme measures against the opposing party.
Instead of removing the content, the tool simply reorders the feed, pushing these incendiary posts lower down the timeline; a minimal sketch of that reordering step appears below. Because the tool operates independently of the platform, it has the potential to give users more say over what they see on social media, and it suggests that people may one day take control of their own social media algorithms. The study comes from researchers at the University of Washington, Stanford University, and Northeastern University.
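As a rough illustration of the downranking idea, the example below reorders a list of posts so that flagged ones sink toward the bottom of the feed without being removed. This is a minimal sketch under assumed data structures, not the study’s actual implementation; the `Post` class and `downrank_flagged` function are hypothetical names used only for this illustration.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str
    flagged: bool  # True if a classifier marked the post as antidemocratic or highly hostile

def downrank_flagged(feed: list[Post]) -> list[Post]:
    """Reorder a feed so flagged posts sink lower while everything stays visible.

    Python's sort is stable, so unflagged posts keep their original relative
    order at the top and flagged posts keep theirs at the bottom.
    """
    return sorted(feed, key=lambda post: post.flagged)

# Example: the middle post is pushed to the end; nothing is removed.
feed = [
    Post("1", "Local election results are in.", flagged=False),
    Post("2", "Supporters of the other party should be jailed!", flagged=True),
    Post("3", "Here's my take on the new transit plan.", flagged=False),
]
for post in downrank_flagged(feed):
    print(post.post_id, post.text)
```

A stable sort is a natural fit here because it preserves the original relative order within the unflagged and flagged groups, so the feed is reshuffled as little as possible.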
The researchers published their findings Nov. 27 in Science. In an experiment with about 1,200 participants over 10 days during the 2024 U.S. election, those who had antidemocratic content downranked in their feeds showed more positive views of the opposing party.
The effect was also bipartisan, holding true for participants who identified as liberal or conservative. “Social media algorithms directly impact our lives, but until now, only the platforms had the ability to understand and shape them,” said Michael Bernstein, a professor of computer science in Stanford’s School of Engineering. “We have demonstrated an approach that lets researchers and end users have that power.” The study challenges the idea that political toxicity is simply an unavoidable element of online culture; instead, it suggests that the toxicity many users encounter on social media is a design choice that can be reversed.
Remarkably, this happens without any deletions, bans, or cooperation from X itself. The takeaway is that algorithmic interventions can meaningfully reduce partisan animosity while still preserving political speech. The work also advances a growing movement advocating user control over platform ranking systems, which have traditionally been proprietary, opaque, and optimized mainly for engagement. The research tool was built by a multidisciplinary team across Stanford, Northeastern University, and the University of Washington, composed of computer scientists, psychologists, communication scholars, and information scientists. Their goal was to counter the engagement-driven amplification of divisive content, which tends to reward outrage, conflict, and emotionally charged posts, without silencing political speech. Using a large language model, the tool analyzes posts in real time and identifies several categories of harmful political content, including calls for political violence, attacks on democratic norms, and extreme hostility toward the opposing party; a rough sketch of this classification step appears below.
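The article does not describe how the model is prompted or what its outputs look like, so the following is only a hedged sketch of what the classification step could resemble. The category list mirrors the kinds of content named above; `classify_post` and `should_downrank` are hypothetical names, and the keyword heuristic inside the placeholder exists solely so the example runs on its own, where a real system would call a large language model instead.

```python
# Categories of harmful political content described in the article.
CATEGORIES = [
    "calls for political violence",
    "attacks on democratic norms",
    "extreme hostility toward the opposing party",
]

def classify_post(text: str) -> dict[str, bool]:
    """Placeholder classifier.

    A real system would send the post text and the category definitions to a
    large language model and parse its labels; the crude keyword check below
    only stands in so this sketch is self-contained and runnable.
    """
    lowered = text.lower()
    return {
        "calls for political violence": any(w in lowered for w in ("take up arms", "attack them")),
        "attacks on democratic norms": any(w in lowered for w in ("ignore the election", "jail the opposition")),
        "extreme hostility toward the opposing party": any(w in lowered for w in ("they are evil", "traitors")),
    }

def should_downrank(text: str) -> bool:
    """A post is downranked if it matches any harmful category."""
    return any(classify_post(text).values())

print(should_downrank("We should jail the opposition after the vote."))  # True
print(should_downrank("Here is my argument for the new budget."))        # False
```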
When the system flags a post in one of these categories, it simply pushes that post lower in the feed so it is less noticeable, like seating an argumentative uncle at the far end of the table.
Beyond the experiment itself, the approach points toward letting users personalize how political content is ranked in their own feeds. Even people who tend to engage with partisan content could choose to reduce its visibility, an emphasis on individual agency in curating the online experience. The implications extend beyond user experience to the broader societal impact of social media: with political polarization on the rise, the ability to modulate exposure to divisive content may contribute to a healthier online environment.