Countering Disinformation Effectively: An Evidence-Based Policy Guide

Bonisiwe Shabane

This report offers a high-level, evidence-informed guide to some of the major proposals for how democratic governments, platforms, and others can counter disinformation. The most effective investments, based on a case study analysis, appear to be: supporting local journalism; media literacy education; and changing recommendation algorithms. The key findings below draw on insights from 10 case studies.

Empowering Citizens Against Disinformation: The Promise and Challenges of Media Literacy Training

In today’s digital age, the proliferation of misinformation and disinformation poses a significant threat to informed decision-making and democratic processes. Recognizing this challenge, media literacy training has emerged as a crucial countermeasure, empowering individuals to critically evaluate information and navigate the complex media landscape.

A growing body of research indicates that effective media literacy programs can significantly enhance people’s ability to identify false narratives and untrustworthy sources, acting as a vital defense against manipulation. The core principle of media literacy lies in equipping individuals with the skills to access, analyze, evaluate, create, and act upon information from diverse communication channels. This involves fostering an understanding of media industry practices, recognizing common disinformation tactics, and developing proficiency in navigating digital technologies. While numerous media literacy initiatives exist, their effectiveness hinges on the specific pedagogical approaches employed. The most successful programs cultivate "actionable skepticism" or "information literacy," empowering individuals to take ownership of their media consumption and actively seek out reliable sources. Research reveals that media literacy training yields the most positive outcomes when it not only imparts skills but also fosters a sense of agency and responsibility.

Individuals who feel confident in their ability to find credible information and who prioritize responsible media consumption demonstrate greater resilience against misinformation. This internal locus of control, coupled with the ability to discern factual accuracy, promotes proactive engagement with the information landscape. Studies have shown, for example, that individuals with a high locus of control are more likely to take corrective actions on social media, such as reporting misinformation or engaging constructively with those spreading it. Modern media literacy education emphasizes the importance of "lateral reading," a technique where individuals verify information by consulting multiple trusted sources. This approach has proven more effective than traditional methods focused on identifying superficial markers of unreliable websites, as misinformation sources are increasingly sophisticated in their presentation. Studies have demonstrated that lateral reading training significantly enhances students’ ability to distinguish between credible and fabricated claims, fostering a more discerning approach to online content.

RESEARCH: Countering Disinformation Effectively: An Evidence-Based Policy Guide, by Jon Bateman & Dean Jackson, published by the Carnegie Endowment for International Peace (Available Here)

A few days ago, the Carnegie Endowment for International Peace released a new report focused on how a variety of stakeholders, including democratic governments and platforms, can counter disinformation. Using evidence-based recommendations, the report highlights that there is no one-size-fits-all approach and that policymakers should “act like investors, pursuing a diversified mixture of counter-disinformation efforts while learning and rebalancing over time.”

RESEARCH: Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training, by Evan Hubinger et al., published in Cryptography and Security (Available Here)

As the US government zeroes in on technology and safety, this study looks at the ability of individuals to manipulate large language models (LLMs), such as Anthropic’s chatbot Claude, into switching off their safety controls. The study worryingly finds that, “once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety,” with consequences that could resonate deeply across communities,...

NEWS: Carmakers ditching AM radio is ‘unsafe,’ some lawmakers say, by Valerie Yurk, published in Roll Call (Available Here)
