New Tool Aims To Reduce Political Discourse Tension On X Platform

Bonisiwe Shabane

A web-based method was shown to mitigate political polarization on X by nudging antidemocratic and extremely negative partisan posts lower in a user’s feed. The tool, which is independent of the platform, has the potential to give users more say over what they see on social media.

A new tool shows it is possible to turn down the partisan rancor in an X feed, without removing political posts and without the direct cooperation of the platform. The study, from researchers at the University of Washington, Stanford University and Northeastern University, also indicates that it may one day be possible to let users take control of their social media algorithms. The researchers created a seamless, web-based tool that reorders content, moving posts lower in a user’s feed when they contain antidemocratic attitudes or partisan animosity, such as advocating for violence against, or the jailing of, supporters of the opposing party. The findings were published Nov. 27 in the journal Science.

The researchers sought to counter the toxic cycle in which social media algorithms amplify emotionally charged, divisive content to maximize user engagement. The tool acts as a seamless web extension, leveraging a large language model (LLM) to scan a user’s X feed for posts containing antidemocratic attitudes and partisan animosity.

This harmful content includes posts advocating violence or extreme measures against the opposing party. Instead of removing such content, the tool simply reorders the feed, pushing these incendiary posts lower down the timeline. The study challenges the idea that political toxicity is an unavoidable element of online culture, suggesting instead that the toxicity many users encounter on social media is a design choice that can be reversed: the browser-based tool cools the political temperature of an X feed by quietly downranking hostile or antidemocratic posts, with no deletions, no bans, and no cooperation required from X itself.

The study’s takeaway is that algorithmic interventions can meaningfully reduce partisan animosity while still preserving political speech. It also advances a growing movement advocating user control over platform ranking systems and the algorithms that shape what people see, systems traditionally guarded as proprietary, opaque, and optimized mainly for engagement. The research tool was built by a multidisciplinary team across Stanford, Northeastern University, and the University of Washington, composed of computer scientists, psychologists, communication scholars, and information scientists. Their goal was to counter the engagement-driven amplification of divisive content, which tends to reward outrage, conflict, and emotionally charged posts, without silencing political speech. Using a large language model, the tool analyzes posts in real time and identifies several categories of harmful political content, including calls for political violence, attacks on democratic norms, and extreme hostility toward the opposing party. When the system flags such content, it simply pushes those posts lower in the feed so they are less noticeable, like seating an argumentative uncle at the far end of the table.
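The reordering step described above can be sketched in a few lines. The snippet below is a minimal illustration, not the researchers’ actual implementation: it assumes a hypothetical upstream classifier (such as an LLM) has already flagged each post, and it performs a stable reorder so flagged posts sink to the bottom while everything else keeps its original order.

```python
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str
    flagged: bool  # True if a classifier (e.g. an LLM) marked the post as
                   # antidemocratic or extremely hostile (hypothetical field)


def rerank(feed: list[Post]) -> list[Post]:
    """Push flagged posts lower in the feed without removing them.

    Python's sort is stable, so unflagged posts keep their original
    relative order at the top, and flagged posts keep theirs at the bottom.
    """
    return sorted(feed, key=lambda p: p.flagged)


feed = [
    Post("1", "Local election results are in", False),
    Post("2", "Supporters of the other party should be jailed", True),
    Post("3", "New transit line opens downtown", False),
]

reranked = rerank(feed)
print([p.post_id for p in reranked])  # flagged post "2" moves to the end
```

Nothing is deleted here: the flagged post still appears, just further down, which mirrors the study’s distinction between downranking and removal.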

The research tool demonstrates the potential to reduce political hostility in online discussions on platforms like X (formerly Twitter). It enables users to adjust their feeds, lowering the intensity of partisan content without eliminating political posts entirely or requiring direct cooperation from the platform itself. The researchers focused on understanding how users interact with political content on social media. Their findings suggest that through algorithm adjustments, users can gain greater control over what appears in their feeds, which could lead to more civil discourse online and address concerns about the negativity that often dominates political discussions. The tool operates by reordering the content that the platform’s own algorithm has already selected for a user’s feed.

Instead of simply suppressing political content, it allows users to choose how much partisan material they wish to engage with, the goal being a more balanced environment that promotes healthy debate rather than divisive rhetoric. Participants who used the tool reported a significant decrease in feelings of anger and frustration when engaging with political posts, which suggests that user agency over content exposure could lead to a more constructive online experience. The researchers believe that implementing such tools could be a step toward alleviating the toxic atmosphere that often permeates social media platforms. The study emphasizes the importance of user control in shaping online interactions, which may ultimately foster a more respectful exchange of ideas.
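A user-controlled dial of the kind described above could work by blending a partisanship score into the ranking with a user-chosen weight. The sketch below is purely illustrative and not from the study: the per-post engagement and partisan-intensity scores (both assumed normalized to 0–1) are hypothetical inputs a real system would have to supply.

```python
def adjusted_score(engagement: float, partisan: float, user_weight: float) -> float:
    """Rank posts by engagement, discounted by partisan intensity.

    user_weight in [0, 1]: 0 leaves the ranking untouched, 1 applies the
    full penalty to highly partisan content. Both score inputs are assumed
    to be normalized to [0, 1] by upstream components (hypothetical).
    """
    return engagement - user_weight * partisan


# A neutral post vs. a highly partisan one with equal engagement:
neutral = adjusted_score(engagement=0.8, partisan=0.1, user_weight=0.5)
heated = adjusted_score(engagement=0.8, partisan=0.9, user_weight=0.5)
assert neutral > heated  # the partisan post ranks lower, but still appears
```

Because the penalty scales with `user_weight`, setting the dial to zero reproduces the feed as-is, which matches the article’s framing of user choice rather than suppression.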

According to the researchers, reducing exposure to antagonistic political content on social media decreases partisan hostility and negative emotions, with the intervention producing a shift in partisan animosity comparable to roughly three years of typical polarization change, a finding that offers platforms a concrete way to improve democratic discourse. Skeptics caution, however, that while algorithms can control content visibility, they may neither reshape beliefs nor decrease ideological division in a sustained way; with the role of echo chambers and filter bubbles often overstated, they argue, the root causes of social tension also deserve attention.
