Tackling Misinformation to Reinforce Efforts in Global Disaster Risk Reduction
When an outbreak occurs, fake news and conspiracy theories can circulate rapidly, often going viral. For those of us working in animal health, the comparison between how misinformation spreads and how viruses propagate is strikingly apt. In 2020, the World Health Organization (WHO) warned that, in addition to the challenges brought by the pandemic, the world was grappling with an “infodemic” sparked by the proliferation of conspiracy theories and falsehoods. This was the first pandemic in history in which misinformation spread on such an unprecedented scale, thanks to technological advancements and the Internet. A study published in the American Journal of Tropical Medicine and Hygiene estimated that at least 800 people may have died worldwide due to coronavirus-related misinformation during the first three months of 2020. At the time, it was not uncommon to see the pandemic described as a “hoax” or the virus labeled a “bioweapon” in online content. The pandemic underscored the role that misinformation plays during health crises.
The COVID-19 emergency, however, uniquely prepared us for future infodemics, leaving behind a legacy of awareness for generations to come. Misinformation is false, deceptive, misleading or manipulated information that is not disseminated with the intention to deceive. It is often spread by people who do not realise it is false and do not intend to cause harm. Propaganda and conspiracy theories about diseases can offer people simple and easy answers to complex questions. Throughout history, deceptive and misleading information has been used to manipulate people, especially those who lack the scientific knowledge to see through falsehoods, leading to widespread mistrust, anxiety and fear. When information is deliberately created, presented and disseminated with the intent to deceive, mislead or cause harm, in order to advance specific agendas or distort public opinion, the phenomenon is described as ‘disinformation’.
The rise of social media has only compounded the problem. In recent years, platforms such as Facebook and X (formerly Twitter) have become places where people seek answers and reassurance during times of uncertainty, including pandemics and natural disasters. Unfortunately, these platforms also create fertile ground for unverified statements and harmful content. Dr Helen Roberts, a G7 Advisor on Exotic Disease Control at the UK’s Department for Environment, Food & Rural Affairs (DEFRA), has pointed to recent examples: in the United States, some social media users were suggesting drinking raw milk containing highly pathogenic avian influenza (HPAI) virus, falsely claiming that it would vaccinate people against the flu. But it is not just social media users who fall prey to misinformation. During outbreaks, even reputable news outlets can sometimes misinterpret information from official sources.
Identifying misinformation may be a daunting task due to its nature and pervasiveness. However, our collective resilience against it can be strengthened. Misinformation significantly challenges disaster risk management by increasing risks and complicating response efforts. This technical note introduces a methodology toolbox designed to help policy makers, decision makers, practitioners, and scientists systematically assess, prevent, and mitigate the risks and impacts of misinformation in disaster scenarios. The methodology consists of eight steps, each offering specific tools and strategies to help address misinformation effectively.
The process begins with defining the communication context using PESTEL analysis and Berlo’s communication model to assess external factors and information flow. It then focuses on identifying misinformation patterns through data collection and analysis using advanced AI methods. The impact of misinformation on risk perceptions is assessed through established theoretical frameworks, guiding the development of targeted strategies. The methodology includes practical measures for mitigating misinformation, such as implementing AI tools for prebunking and debunking false information. Evaluating the effectiveness of these measures is crucial, and continuous monitoring is recommended to adapt strategies in real-time. Ethical considerations are outlined to ensure compliance with international laws and data privacy regulations.
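To make the pattern-identification step more concrete, the minimal sketch below shows one way collected social-media posts could be screened with an off-the-shelf zero-shot text classifier. The model choice, candidate labels, sample posts, and threshold are illustrative assumptions for this note, not tools prescribed by the methodology itself.

```python
# Minimal sketch: screening collected social-media posts for likely misinformation
# with a zero-shot text classifier. Labels, threshold, and model choice are
# illustrative assumptions, not prescriptions from the technical note.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

CANDIDATE_LABELS = [
    "verified emergency guidance",
    "rumour or unverified claim",
    "conspiracy theory or health misinformation",
]

def flag_posts(posts, threshold=0.6):
    """Return posts whose top label suggests misinformation above the threshold."""
    flagged = []
    for post in posts:
        result = classifier(post, candidate_labels=CANDIDATE_LABELS)
        top_label, top_score = result["labels"][0], result["scores"][0]
        if top_label != "verified emergency guidance" and top_score >= threshold:
            flagged.append({"text": post, "label": top_label, "score": round(top_score, 2)})
    return flagged

if __name__ == "__main__":
    sample_posts = [
        "Authorities confirm the evacuation route via Highway 9 is open until 20:00.",
        "Drinking raw milk containing the virus will vaccinate you against the flu.",
    ]
    for hit in flag_posts(sample_posts):
        print(hit)
```

In practice, outputs like these would feed into the later steps of the methodology (impact assessment, prebunking and debunking), with human review rather than automated action on individual posts.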
The final step emphasizes managerial aspects, including clear communication and public education, to build trust and promote reliable information sources. This structured approach provides practical insights for enhancing disaster response and reducing the risks associated with misinformation. Misinformation during disasters can intensify risks and hinder effective disaster risk management (DRM). The paper introduces a systematic methodology to assess the risks and impacts of social media misinformation in DRM. By offering structured tools and strategies, it aids researchers, policymakers, decision makers, and practitioners in understanding, preventing, and mitigating misinformation, ultimately fostering more resilient communities and enhancing response efforts.
The scientific literature identifies different types of information disorders, among which misinformation is commonly understood as “false” or “misleading” information, shared without the intent to deceive. Lazer et al. (2018) define it in contrast to disinformation, which is deliberately false and spread with the intent to mislead. They place both within the broader context of “fake news,” a term they describe as fabricated content mimicking news but lacking journalistic intent or process. Ireton and Posetti (2018a) similarly highlight misinformation and disinformation as core categories of information disorder and caution against the use of “fake news” due to its politicization and its use to discredit journalism. While DiFonzo and Bordia (2007) focus on rumors, unverified and socially meaningful information circulating in uncertain contexts, the present study emphasizes misinformation and rumors as broad, commonly used terms in scholarly work that encompass various forms of false or unverified content.
The Infodemic: How Misinformation Complicates Disease Outbreaks and Disaster Response
In an increasingly interconnected world, the rapid spread of misinformation poses a significant threat, impacting public health, disaster response, and societal trust. During disease outbreaks, the proliferation of false or misleading information can be as contagious as the disease itself, hindering effective control measures and exacerbating the crisis. This phenomenon, often termed an "infodemic," was starkly evident during the COVID-19 pandemic, where conspiracy theories and falsehoods about the virus circulated widely online. The World Health Organization (WHO) recognized this infodemic as a major challenge alongside the pandemic itself, highlighting the unprecedented scale of misinformation driven by technological advancements and the internet. A study even estimated that hundreds of deaths in the early months of the pandemic were directly attributable to misinformation about the virus.
The COVID-19 pandemic served as a harsh wake-up call, demonstrating the real-world consequences of misinformation during health crises. False narratives about the virus’s origins, treatments, and preventative measures not only fueled fear and anxiety but also undermined public trust in health authorities and governments. The pandemic exposed vulnerabilities in information ecosystems and underscored the urgent need for strategies to combat misinformation effectively. One key takeaway is the recognition that misinformation can spread just like a virus, exploiting social networks and digital platforms to reach vast audiences quickly. This understanding is crucial for developing targeted interventions and communication strategies to counter the spread of false narratives. The pervasiveness of misinformation extends beyond pandemics, impacting a range of disaster risk reduction efforts.
During natural disasters, the spread of inaccurate information about evacuation routes, emergency services, or the nature of the threat can lead to confusion, panic, and hinder rescue operations. Similarly, in animal health emergencies, misinformation can impede disease control efforts. For example, during the avian influenza outbreak, false claims circulated online suggesting that drinking raw milk containing the virus could provide immunity. Such misinformation can be particularly dangerous, leading to risky behaviors and undermining scientifically sound preventative measures. Addressing the challenge of misinformation requires a multi-pronged approach involving scientists, journalists, policymakers, and the public. Scientists and veterinarians play a crucial role in debunking false claims and providing accurate, evidence-based information.
Journalists must uphold ethical principles and prioritize accuracy over speed, especially when reporting on sensitive topics like disease outbreaks. Media literacy education for the public is also essential, empowering individuals to critically assess information sources and identify misinformation. Misinformation and disinformation are pervasive threats in today’s hyper-connected world, particularly in the context of disaster risk management. As noted in a recent statement by the United Nations Office for Disaster Risk Reduction (UNDRR), “a flood of false information raises disaster risks” (UNDRR, 2025). During crises, this misinformation can spread more rapidly than the disasters themselves, often resulting in chaotic responses, widespread panic, and ultimately, loss of life. The need for accurate information is therefore urgent: failure to manage misinformation is directly linked to the increased vulnerability of communities to disasters, whether natural or man-made.
The systemic implications of misinformation are profound, impacting global governance and resilience frameworks. During times of crisis, decision-makers rely heavily on data and intelligence to inform their actions. However, when misinformation infiltrates this process, it distorts situational awareness, leading to suboptimal decisions and resource allocation. The intersection of misinformation and disaster risk management necessitates a comprehensive understanding of both the operational challenges and strategic pathways to mitigate these risks. Operationally, governments and agencies must be equipped to counter misinformation proactively while also fostering a culture of transparency and public trust. The challenge of misinformation demands a strategic response, involving coordinated efforts across various governance levels.
Policymakers must engage in cross-sectoral collaboration that includes technology partners, public health agencies, and community organizations. By doing so, they can establish protocols to identify, counter, and diminish false narratives before they exacerbate disaster impacts. This requires investment in technological tools that facilitate real-time data verification as well as frameworks that encourage public engagement in sharing verified information. Operationally, the rapid dissemination of false information poses significant hurdles for disaster response teams. With better tools and frameworks, agencies can streamline their approach to information dissemination and ensure that citizens receive accurate, timely updates during emergencies. Addressing these operational challenges involves training personnel to recognize misinformation and developing communication strategies that leverage social media effectively.
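As one illustration of what such real-time verification tooling might look like, the sketch below compares incoming posts against a small registry of official advisories and queues topic matches for human review. The topics, advisories, and keyword-matching heuristic are hypothetical placeholders rather than an established agency workflow.

```python
# Illustrative sketch of a real-time verification step: incoming posts are compared
# against a registry of official advisories, and posts that touch on an official
# topic are queued for human review. Topics, advisories, and the simple keyword
# heuristic are hypothetical placeholders, not an established agency workflow.
from dataclasses import dataclass

@dataclass
class Advisory:
    topic: str
    statement: str

OFFICIAL_ADVISORIES = [
    Advisory("evacuation", "Evacuation via Route 12 remains open; Route 7 is closed."),
    Advisory("water", "Tap water in the affected district is safe after boiling."),
]

def related_advisory(post: str):
    """Return the first advisory whose topic keyword appears in the post, if any."""
    text = post.lower()
    for advisory in OFFICIAL_ADVISORIES:
        if advisory.topic in text:
            return advisory
    return None

def triage(posts):
    """Split posts into a review queue (matched an official topic) and the rest."""
    review_queue, unmatched = [], []
    for post in posts:
        advisory = related_advisory(post)
        (review_queue if advisory else unmatched).append(
            {"post": post, "official": advisory.statement if advisory else None}
        )
    return review_queue, unmatched

if __name__ == "__main__":
    queue, rest = triage([
        "Heard Route 7 is the only safe evacuation road, share fast!",
        "Stay strong everyone.",
    ])
    print(queue)
```

A real deployment would replace the keyword heuristic with more robust matching and route flagged items to trained communications staff, in line with the personnel-training and social-media strategies described above.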
Investing in capacity-building for local communities can empower individuals to identify and refute misinformation, thus enabling a collective resilience framework. By harnessing the collective insights of both global governance frameworks and local community knowledge, the risks associated with misinformation can be mitigated. Furthermore, fostering a resilient information ecosystem is not just a tactical response; it is fundamental to safeguarding lives and fortifying global systems against future disasters. The UN’s 2024 Global Risk Report ranks mis- and disinformation as a top global threat. UN Development Coordination Office Chief of Communications and Results Reporting, Carolina G. Azevedo, explores why youth-led, UN-backed efforts in Kenya and Costa Rica may hold lessons for building trust in the age of Artificial Intelligence (AI).
In a world shaken by conflict, climate shocks and inequality, some of the most dangerous threats may also be the least visible. Disinformation is one of them. According to the recently launched UN Global Risk Report 2024, mis- and disinformation is not only a top global threat—it’s the one countries feel least prepared to address. Over 1,100 experts from 136 countries ranked it among the gravest risks, and more than 80 per cent said it’s already happening. This isn’t just a communications issue—it’s a crisis of trust. Tackling it means protecting communities from harm while upholding freedom of expression and other human rights.
Disinformation can unravel the threads that hold societies together. In settings with increased instability, it can tip societies into violence. It can also corrode the norms of debate and science-backed evidence that societies take for granted. The technical note by Rosa Vicari, quoted above, frames this challenge for disaster risk management and sets out the eight-step methodology toolbox for assessing, preventing, and mitigating the risks and impacts of misinformation in disaster scenarios.