Misinformation Interventions and Online Sharing Behaviour: Lessons Learned from Two Preregistered Field Studies
Jon Roozenbeek*, Jana Lasser, Malia Marks, Tianzhu Qin, David Garcia, Beth Goldberg, Ramit Debnath, Sander van der Linden, Stephan Lewandowsky
Research output: Contribution to journal › Article (Academic Journal) › peer-review

The spread of misinformation on social media continues to pose challenges. While prior research has shown some success in reducing susceptibility to misinformation at scale, how individual-level interventions affect the quality of content shared on social networks remains understudied. Across two pre-registered longitudinal studies, we ran two Twitter/X ad campaigns, targeting a total of 967,640 Twitter/X users with either a previously validated 'inoculation' video about emotional manipulation or a control video. We hypothesized that Twitter/X users who saw the inoculation video would engage less with negative-emotional content and share less content from unreliable sources. We do not find evidence for our hypotheses, observing no meaningful changes in posting or retweeting post-intervention. Our findings are most likely compromised by Twitter/X's 'fuzzy matching' policy, which introduced substantial noise in our data (approximately 7.5% of targeted individuals were actually exposed to the intervention). Our findings are thus probably the result of treatment non-compliance rather than 'true' null effects. Importantly, we also demonstrate that different statistical analyses and time windows (looking at the intervention's effects over 1 h versus 6 h or 24 h, etc.) can yield different and even opposite significant effects,...

Keywords: field study; inoculation theory; intervention; misinformation; null results.

We declare we have no competing interests.

[Figure: Examples of a Twitter/X ad with the inoculation intervention (left) and the control…]
[Figure: Tweet counts of tweets containing language related to anger, happiness, affection, fear and…]
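Two methodological points in the abstract, the roughly 7.5% exposure rate and the sensitivity of results to the chosen post-intervention time window, are easy to make concrete. The sketch below is purely illustrative and hypothetical (simulated data, invented effect sizes, a generic Welch t-test); it is not the authors' analysis pipeline or their data. It shows how strongly treatment non-compliance dilutes an intention-to-treat estimate, and how testing the same outcome over 1 h, 6 h and 24 h windows can return different significance verdicts.

```python
# Illustrative sketch only: simulated data, hypothetical effect sizes and a
# generic two-sample test; this is not the authors' pipeline or their data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# (a) Dilution through non-compliance: if only ~7.5% of targeted users are
# actually exposed, an intention-to-treat (ITT) effect is roughly the
# per-exposed-user effect scaled down by the exposure rate.
exposure_rate = 0.075
effect_on_exposed = 0.20          # hypothetical relative drop in engagement
itt_effect = exposure_rate * effect_on_exposed
print(f"ITT effect implied by 7.5% exposure: {itt_effect:.3f}")  # ~0.015

# (b) Window sensitivity: running the same test over several
# post-intervention windows can yield different significance verdicts.
n = 5000
for window_hours in (1, 6, 24):
    # Hypothetical engagement counts; longer windows accumulate more activity.
    control = rng.poisson(lam=0.5 * window_hours, size=n)
    treated = rng.poisson(lam=0.5 * window_hours * (1 - itt_effect), size=n)
    t_stat, p_val = stats.ttest_ind(treated, control, equal_var=False)
    print(f"{window_hours:>2} h window: t = {t_stat:.2f}, p = {p_val:.3f}")
```

Under these assumptions, a 20% effect among exposed users shrinks to an intention-to-treat effect of about 1.5%, which is one way to see why a null result under 7.5% exposure is hard to interpret. The per-window loop illustrates the analytic flexibility the abstract warns about: without pre-registered analysis choices, any single 'significant' window could be reported selectively.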
Related article: Toolbox of individual-level interventions against online misinformation
Nature Human Behaviour, volume 8, pages 1044–1052 (2024)

The spread of misinformation through media and social networks threatens many aspects of society, including public health and the state of democracies. One approach to mitigating the effect of misinformation focuses on individual-level interventions, equipping policymakers and the public with essential tools to curb the spread and influence of falsehoods. Here we introduce a toolbox of individual-level interventions for reducing harm from online misinformation. Comprising an up-to-date account of interventions featured in 81 scientific papers from across the globe, the toolbox provides both a conceptual overview of nine main types of interventions, including their target, scope and examples,... The nine types of interventions covered are accuracy prompts, debunking and rebuttals, friction, inoculation, lateral reading and verification strategies, media-literacy tips, social norms, source-credibility labels, and warning and fact-checking labels.