Artificial Intelligence Misinformation And Emergency Communication

Bonisiwe Shabane

Prepared to respond to nuclear and radiological emergencies: the IAEA's simulator trains countries to use social media effectively during nuclear or radiological emergencies, including how to counter misinformation. (Photo: IAEA)

From translation bots to deepfake detectors, artificial intelligence (AI) tools are transforming how authorities warn, inform and reassure people about emergencies. But these technologies can be risky if they fall into the wrong hands, or if they are deployed before facts are verified. In a world saturated with synthetic content, perception can overshadow reality.

This is especially critical in emergency communications, where speed matters but so does trust. In June 2025, the IAEA convened a group of leading experts on public communication in nuclear and radiological emergencies to examine how AI is changing the rules of engagement. The goal was to help countries adapt through evidence-based guidance, new research and practical capacity building. "As AI reshapes the information landscape, we want to support countries with effective guidance, connect them with leading experts, and help them navigate this fast-moving and constantly evolving field," said Nayana Jayarajan, Outreach Officer...

Ensuring accurate information during natural disasters is vital for effective emergency response and public safety.

Disasters like earthquakes and hurricanes often trigger misinformation, complicating response efforts and endangering lives. Historical events, such as Hurricane Katrina and the COVID-19 pandemic, illustrate the harmful impact of false information. Artificial intelligence (AI), with technologies like natural language processing and machine learning, offers promising solutions for detecting and mitigating misinformation. This paper explores AI's role in managing misinformation during disasters, highlighting its potential to improve disaster response, enhance public trust, and strengthen community resilience.

Natural disasters, such as earthquakes, hurricanes, floods, and wildfires, have profound impacts on societies worldwide.

These events can cause significant loss of life, displacement of populations, and extensive damage to infrastructure (Teh & Khan, 2021). In the immediate aftermath of such disasters, the need for accurate and timely information becomes critical for effective response and recovery efforts (National Research Council, 1999). However, the chaotic nature of these events often leads to the spread of misinformation, which can exacerbate the challenges faced by emergency responders and affected communities (Tran et al., 2020). Misinformation during natural disasters can take many forms, including rumors, false claims, and deliberate disinformation (see Table 1 for a selection of real-world examples). This misinformation can spread rapidly through various channels, particularly social media, and create confusion and panic among the public. For instance, during Hurricane Katrina in 2005, false reports of violence and chaos hampered rescue operations and contributed to a breakdown of trust between the public and authorities (Miller, 2016; Brezina & Phipps Jr., ...).

Similarly, during the COVID-19 pandemic, misinformation regarding safety measures and treatments spread widely and complicated public health efforts (Caceres et al., 2022).
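The NLP-based detection mentioned above typically relies on trained classifiers over large labeled corpora. As a heavily simplified, purely illustrative sketch of the idea, a lexical scorer can surface posts for human fact-checking; the cue phrases and threshold below are hypothetical examples, not a vetted lexicon or any system described in the studies cited here:

```python
# Illustrative only: a naive lexical scorer for surfacing disaster-related
# posts that may warrant fact-checking. Production systems use trained NLP
# models; the cue phrases and threshold here are hypothetical examples.
SUSPECT_CUES = (
    "share before it is deleted",
    "they are hiding",
    "dam has burst",
    "all alerts cancelled",
)

def suspicion_score(post: str) -> float:
    """Fraction of suspect cue phrases found in the post (0.0 to 1.0)."""
    lowered = post.lower()
    hits = sum(1 for cue in SUSPECT_CUES if cue in lowered)
    return hits / len(SUSPECT_CUES)

def flag_for_review(posts: list[str], threshold: float = 0.0) -> list[str]:
    """Return posts scoring above the threshold, queued for human review."""
    return [p for p in posts if suspicion_score(p) > threshold]
```

The point of the sketch is the workflow, not the scoring rule: automated triage narrows the stream, and a human verifier makes the final call, which matches the human-in-the-loop emphasis of the work discussed below.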

Artificial intelligence (AI) can support public health authorities in communicating risks and managing the spread of false information during public health emergencies, according to a new study published by the University of Zurich and... However, its use must be guided by strong ethical principles to protect public trust and safety. The study, published in BMJ Global Health, is the first international, multi- and transdisciplinary effort to comprehensively assess the impact of AI on risk communication, community engagement and infodemic management (RCCE-IM). Through a consensus-building process, a panel of 54 experts from 27 countries evaluated the opportunities and challenges of using AI in emergency management. A previous WHO/Europe study showed that confusion around health information, especially during outbreaks and disasters, when science is evolving, can negatively impact people’s health decisions and hinder protective action. Meanwhile, University of Zurich and WHO/Europe experts found that AI tools have the potential to significantly improve how health authorities tailor messages to specific populations, listen to public concerns in real time, and enhance...

However, the study also underscores the risks related to the use of AI, including algorithmic bias, privacy concerns and the potential to worsen health inequalities. If health messages are not well targeted or data is not used carefully, it can unintentionally harm vulnerable communities or contribute to spreading mis- and disinformation. “We have seen how rapidly false information can spread in emergencies and impact people’s lives. This is one of the major challenges of our times. AI has the potential to help address this effectively by identifying harmful narratives early and targeting relevant and accurate information to diverse audiences. But while the results of this study are encouraging, they are also a reminder to proceed with caution.

Innovation should never come at the cost of trust or safety," said Cristiana Salvi, Regional Adviser for Risk Communication, Community Engagement and Infodemic Management, Health Emergencies at WHO/Europe.

The AI Misinformation Tsunami: How Generated Content Threatens Emergency Response

The rise of readily accessible AI tools has ushered in a new era of misinformation, posing unprecedented challenges to emergency response efforts. No longer confined to the fringes of the internet, AI-generated fake videos, images, and audio clips can now be created and disseminated in minutes, often reaching millions before fact-checkers can intervene. This new reality was starkly illustrated during a 2025 tsunami alert, when fabricated videos of colossal waves inundating coastlines went viral while an AI chatbot spread false information about cancelled alerts. The incident underscores the urgent need to address the growing threat of AI-driven misinformation during crises.

The speed and sophistication of AI-generated content exacerbate the existing challenges of misinformation. Falsehoods, as a 2018 MIT study demonstrated, already spread faster and wider online than truth. User-friendly AI tools amplify this phenomenon, generating realistic deepfakes and fueling what is termed the "liar's dividend": skepticism towards genuine information caused by the prevalence of fakes. Such hesitation can be fatal in emergencies, where rapid, informed action is crucial. Beyond natural disasters, AI-generated misinformation has permeated various crisis scenarios. From exaggerated wildfire footage and fabricated flood reports to manipulated imagery during geopolitical conflicts, AI-generated content has muddied the waters of truth, making it increasingly difficult to discern fact from fiction.

The use of deepfakes in the Russia-Ukraine war and the dissemination of fake nuclear alerts during the India-Pakistan standoff highlight the potential for AI-driven misinformation to escalate tensions and undermine public trust. Even official government channels have been implicated in the spread of AI-generated misinformation, either intentionally for propaganda purposes or unintentionally due to lack of verification. The dangers of AI-generated misinformation extend to sensitive areas like nuclear safety and public health. During the Fukushima wastewater release, AI-generated images of mutated marine life were circulated, fueling public anxieties and undermining scientific consensus. Similarly, fabricated satellite images and audio clips during the India-Pakistan standoff demonstrated the potential for AI-driven misinformation to inflame international tensions and even contribute to escalation of conflict. This manipulation of information underscores the urgent need for strategies to combat the spread of AI-generated falsehoods in high-stakes situations.

In today’s digital age, the intersection of artificial intelligence (AI) and disinformation presents a complex and evolving challenge, particularly during emergency and crisis situations. While AI holds tremendous potential to enhance crisis management through predictive analytics, efficient resource allocation, and rapid information dissemination, it also serves as a double-edged sword. The same technologies that can provide lifesaving insights are also leveraged to spread disinformation, creating confusion, panic, and mistrust among the public. Understanding this dichotomy is crucial for developing effective strategies to mitigate the adverse effects while maximizing the benefits of AI in crisis scenarios. AI technologies have revolutionized emergency response efforts. Machine learning algorithms can analyze vast amounts of data in real-time to predict natural disasters, track disease outbreaks, and optimize emergency response logistics.

For instance, AI-driven models can forecast hurricane paths, allowing for timely evacuations and preparations. During the COVID-19 pandemic, AI was instrumental in identifying hotspots, predicting case surges, and allocating medical resources efficiently. Moreover, AI-powered communication tools facilitate swift dissemination of critical information to the public. Chatbots, social media monitoring tools, and automated alert systems ensure that accurate and timely information reaches those in need, potentially saving lives and minimizing chaos. Despite its advantages, AI also poses significant risks when used to propagate disinformation. Disinformation campaigns, often orchestrated using AI technologies, can undermine trust in authorities and disrupt crisis management efforts.
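The automated alert systems mentioned above only help if unverified content never reaches the public feed. A minimal sketch of that gating idea follows; the `Alert` fields and channel names are hypothetical illustrations, not any agency's actual alerting interface:

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    """Hypothetical alert record; 'verified' is set by a human review step."""
    message: str
    verified: bool
    channels: list = field(default_factory=lambda: ["sms", "social", "app"])

def dispatch(alert: Alert) -> list[str]:
    """Forward a verified alert to every channel; hold unverified ones.

    Returns the sends as "channel: message" strings for illustration.
    """
    if not alert.verified:
        return []  # unverified content is held back, never published
    return [f"{ch}: {alert.message}" for ch in alert.channels]
```

The design choice the sketch encodes is the one the surrounding text argues for: verification is a hard gate before dissemination, not a parallel step, so speed never bypasses trust.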

During emergencies, the spread of false information can lead to dangerous behaviors, such as ignoring evacuation orders, rejecting medical advice, or spreading fear and panic. AI-generated deepfakes and synthetic media further complicate the landscape. These sophisticated tools can create realistic but entirely fabricated audio and video content, making it increasingly difficult for individuals to discern truth from falsehood. In the heat of a crisis, such disinformation can escalate tensions, provoke irrational responses, and hinder coordinated efforts to address the situation.

A multidisciplinary team of researchers has developed a pioneering framework to combat misinformation during disasters, a growing challenge in the artificial intelligence (AI) era that threatens global resilience, public safety, and trust in institutions. The study, titled "A Toolbox to Deal with Misinformation in Disaster Risk Management" and published in AI & Society, presents an eight-step methodological framework for identifying, analyzing, and mitigating false or misleading information in...

Built upon the integration of artificial intelligence, communication science, and risk governance, the toolbox serves as a structured guide for policymakers, disaster managers, and researchers tasked with managing information flow during emergencies such as... The study acknowledges that misinformation has become a major operational and ethical challenge in disaster management. While real-time data and AI models have improved early warning and response systems, the same technologies have also amplified the spread of unverified or manipulative content. To address this duality, the authors present an eight-step toolbox: a modular framework that can be adapted by national agencies, local authorities, and humanitarian organizations. Each step builds on the previous, forming a sequential process that runs from situational understanding through mitigation and ethical management.
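The paper's eight steps are not enumerated in this excerpt, but the modular, sequential structure it describes can be sketched as a pipeline in which each stage consumes the previous stage's output. The three stage names and the credibility rule below are hypothetical placeholders, not the authors' actual steps:

```python
from typing import Callable

# Hypothetical stages; the published toolbox defines its own eight steps.
def collect_reports(state: dict) -> dict:
    """Stage 1 (placeholder): gather incoming reports into shared state."""
    state["reports"] = state.get("raw_feed", [])
    return state

def assess_credibility(state: dict) -> dict:
    """Stage 2 (placeholder): a toy rule trusting known official sources."""
    official = {"emergency_agency", "weather_service"}
    state["credible"] = [r for r in state["reports"] if r["source"] in official]
    return state

def plan_mitigation(state: dict) -> dict:
    """Stage 3 (placeholder): queue counter-messaging for the rest."""
    state["actions"] = [f"counter-message for: {r['claim']}"
                       for r in state["reports"] if r not in state["credible"]]
    return state

def run_pipeline(state: dict, stages: list[Callable[[dict], dict]]) -> dict:
    """Apply each stage in order, threading the shared state through."""
    for stage in stages:
        state = stage(state)
    return state
```

The value of this shape is the one the authors claim for modularity: an agency can swap any single stage (say, a stronger credibility model) without rewriting the rest of the process.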
