Disinformation, AI and the Fragmented Future of PR
April marks Disinformation Awareness Month, a global initiative led by the Institute for Public Relations (IPR) to spotlight the mounting risks of false and misleading information in our hyper-connected world. As communicators face the twin pressures of media disruption and information warfare, IPR’s latest study—“Navigating a Changing Media Landscape”—arrives as essential reading for global PR professionals. The research, based on interviews with 44 senior communications leaders across sectors, offers a nuanced view of how Chief Communications Officers (CCOs) and media relations professionals are adapting to an increasingly volatile media environment.

The contraction of the journalism industry is not confined to North America. Across the globe, local and regional news outlets are being shuttered at an alarming rate. In the UK, the Press Gazette has reported the closure of nearly 300 local publications over the past two decades.
In Africa and parts of Asia, news organisations face severe funding shortages and political pressures. In the Gulf and South Asia, state-linked or politically influenced media dominate, further distorting the information landscape. IPR’s report underscores that fewer journalists are covering broader beats with less depth, time, and industry knowledge—forcing communicators to step in as both educators and content providers. “We’re often dealing with someone covering a catch-all beat… It puts the onus on us to educate reporters,” said one insurance comms director.

Falsehoods, fabrications, fake news – disinformation is nothing new. For centuries, people have taken deliberate action to mislead the public.
In medieval Europe, Jewish communities were persecuted because people believed conspiracy theories suggesting that Jews spread the Black Death by poisoning wells. In 1937, Joseph Stalin doctored newspaper photographs to remove those who no longer aligned with him, altering the historical record to fit the political ambitions of the present. The advent of social media helped democratise access to information – giving (almost) anyone, (almost) anywhere, the ability to create and disseminate ideas, opinions, and make-up tutorials to millions of people all over the world. Bad actors, or just misinformed ones, can now share whatever they want with whomever they want at an unprecedented scale. Thanks to generative AI tools, it’s now even cheaper and easier to create misleading audio or visual content at scale. This new, more polluted information environment has real-world impact.
For our institutions (however imperfect they may be), a disordered information ecosystem results in everything from lower voter turnout to impeded emergency responses during natural disasters to mistrust in evidence-based health advice. Like any viral TikTok moment, trends in misinformation and disinformation will also evolve. New technologies create new opportunities for scale and impact; new platforms give access to new audiences. In the same way BBC Research & Development's Advisory team explored trends shaping the future of social media, we now look to the future of disinformation. We want to know how misinformation and disinformation are changing – and what technologies drive that change. Most importantly, we want to understand public service media’s role in enabling a healthier information ecosystem beyond our journalistic output.
R&D has already been developing new tools and standards for dealing with trust online. As a founding member of the Coalition for Content Provenance and Authenticity (C2PA), we recently trialled content credentials with BBC Verify. We’ve also built deepfake detection tools to help journalists assess whether a video or a photo has been altered by AI. But it’s important to understand where things are going, not just where they are today. Based on some preliminary expert interviews, a new picture is emerging.

Jason Crawforth is the Founder and CEO of Swear.com, a company working to restore confidence in digital media authenticity.
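To make the provenance checks mentioned above concrete, here is a minimal sketch of reading C2PA content credentials from a media file. It assumes the open-source c2patool CLI is installed and on the PATH; the JSON key names (such as "active_manifest") are assumptions about its report format, not guaranteed fields:

```python
import json
import subprocess
import sys

def read_content_credentials(path: str) -> dict | None:
    """Ask c2patool for the C2PA manifest store attached to a media file.

    Returns None when the file carries no content credentials (c2patool
    exits non-zero in that case) or when the output isn't valid JSON.
    """
    result = subprocess.run(
        ["c2patool", path],  # prints the manifest store as a JSON report
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return None
    try:
        return json.loads(result.stdout)
    except json.JSONDecodeError:
        return None

if __name__ == "__main__":
    store = read_content_credentials(sys.argv[1])
    if store is None:
        print("No content credentials found.")
    else:
        # "active_manifest" is assumed to name the most recent manifest
        # in c2patool's report; treat the key as illustrative.
        print("Active manifest:", store.get("active_manifest", "<unknown>"))
```

A file with no credentials is not necessarily fake; provenance signals absence of a claim, not presence of manipulation, which is why the BBC pairs credentials with separate deepfake detection tools.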
As GenAI tools surge in accessibility and sophistication, a new era of cyber risk is emerging—one not defined by ransomware or phishing but by synthetic realities. In its Top Strategic Technology Trends for 2025, Gartner Inc. named disinformation security as a critical discipline. This recognizes the profound impact AI-generated falsehoods could have on organizations across sectors. The message is clear: Disinformation is not a future concern but an urgent, evolving threat. Disinformation, the intentional spread of false or manipulated content, has evolved from a geopolitical tactic into a systemic risk for enterprises.
Today, anyone with access to GenAI can fabricate hyperrealistic video, audio or images. Deepfakes and synthetic voice clones are now attack vectors. According to Gartner, while only 5% of enterprises had implemented disinformation safeguards as of 2024, that number is expected to rise to 50% by 2028. This growth underscores the fact that digital trust is becoming a cornerstone of operational resilience. Organizations must now view content authenticity as seriously as they do malware detection or video surveillance. Disinformation presents risks that span reputational, legal, and operational domains.
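One way to treat content authenticity like malware detection is an integrity baseline: record a digest of each asset when it is approved for release, then periodically re-check. The sketch below is illustrative only; the published_assets.json manifest and its layout are hypothetical stand-ins for digests captured at publication time:

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a media file in 1 MiB chunks so large videos don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def find_tampered_assets(manifest_path: Path) -> list[str]:
    """Compare current file hashes against digests recorded at publication.

    The manifest is a hypothetical JSON file mapping relative paths to
    SHA-256 hex digests, written when each asset was approved for release.
    """
    manifest = json.loads(manifest_path.read_text())
    base = manifest_path.parent
    return [
        rel
        for rel, expected in manifest.items()
        if sha256_of(base / rel) != expected  # flag anything that changed
    ]

if __name__ == "__main__":
    tampered = find_tampered_assets(Path("published_assets.json"))
    print("Tampered or replaced assets:", tampered or "none")
```

A hash baseline catches substitution of an organization's own assets; it does nothing against deepfakes circulated off-platform, which is where provenance standards and detection tooling come in.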
Artificial intelligence offers enormous promise to advance progress, and powerful capabilities to disrupt it. This policy brief is the first installment of a two-part series that examines how advances in AI could be exploited to enhance operations that automate disinformation. Introducing the RICHDATA framework—a disinformation kill chain—this report describes the stages and techniques used by human operators to build disinformation campaigns.

The age of information has brought with it the age of disinformation.
Powered by the speed and data volume of the internet, disinformation has emerged as an insidious instrument of geopolitical power competition and domestic political warfare. It is used by both state and non-state actors to shape global public opinion, sow chaos, and chip away at trust. Artificial intelligence (AI), specifically machine learning (ML), is poised to amplify disinformation campaigns—influence operations that involve covert efforts to intentionally spread false or misleading information. In this series, we examine how these technologies could be used to spread disinformation. Part 1 considers disinformation campaigns and the set of stages or building blocks used by human operators. In many ways they resemble a digital marketing campaign, one with malicious intent to disrupt and deceive.
We offer a framework, RICHDATA, to describe the stages of disinformation campaigns and commonly used techniques. Part 2 of the series examines how AI/ML technologies may shape future disinformation campaigns. We break disinformation campaigns into multiple stages. Through reconnaissance, operators surveil the environment and understand the audience that they are trying to manipulate. They require infrastructure—messengers, believable personas, social media accounts, and groups—to carry their narratives. A ceaseless flow of content, from posts and long-reads to photos, memes, and videos, is a must to ensure their messages seed, root, and grow.
Once deployed into the stream of the internet, these units of disinformation are amplified by bots, platform algorithms, and social-engineering techniques to spread the campaign’s narratives. But blasting disinformation is not always enough: broad impact comes from sustained engagement with unwitting users through trolling—the disinformation equivalent of hand-to-hand combat. In its final stage, a disinformation operation is actualized by changing the minds of unwitting targets or even mobilizing them to action to sow chaos. Regardless of origin, disinformation campaigns that grow an organic following can become endemic to a society and indistinguishable from its authentic discourse. They can undermine a society’s ability to discern fact from fiction, creating a lasting trust deficit.
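To make the kill-chain framing easier to work with, the stages named in the preceding paragraphs can be modeled as an ordered enumeration. This is a minimal sketch, not code from the report: the stage list follows the prose description, and the triage helper is a hypothetical heuristic for mapping a live campaign:

```python
from enum import Enum

class CampaignStage(Enum):
    """Stages of a disinformation campaign, as described in the prose above."""
    RECONNAISSANCE = "surveil the environment and profile the target audience"
    INFRASTRUCTURE = "build personas, accounts, and groups to carry narratives"
    CONTENT = "produce a steady flow of posts, memes, photos, and videos"
    DEPLOYMENT = "push units of disinformation into the stream of the internet"
    AMPLIFICATION = "boost reach with bots, platform algorithms, and social engineering"
    TROLLING = "sustain engagement with unwitting users"
    ACTUALIZATION = "change minds, or mobilize targets to action"

def next_unobserved_stage(observed: set[CampaignStage]) -> CampaignStage | None:
    """Hypothetical triage helper: return the earliest stage not yet seen,
    i.e. where an analyst mapping a live campaign might look next.
    Enum members iterate in definition order, which matches the kill chain."""
    for stage in CampaignStage:
        if stage not in observed:
            return stage
    return None  # every stage already observed

if __name__ == "__main__":
    seen = {CampaignStage.RECONNAISSANCE, CampaignStage.INFRASTRUCTURE}
    print("Next stage to watch for:", next_unobserved_stage(seen))
```

Ordering the stages this way reflects the report's marketing-campaign analogy: each stage feeds the next, so disrupting an early stage (say, infrastructure takedowns) degrades everything downstream.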
Subodh Mishra is Global Head of Communications at ISS STOXX. This post is based on an ISS ESG memorandum by Avleen Kaur, Corporate Ratings Research Sector Head for Technology, Media, and Telecommunications, at ISS ESG. In an era of rapidly evolving digital technologies, information integrity has become a growing concern. Current threats include “misinformation,” defined as inaccurate information shared without the intent to cause harm; and “disinformation,” inaccurate information deliberately disseminated with the purpose of deceiving audiences and doing harm. According to the World Economic Forum’s Global Risks Report 2025, survey respondents identified misinformation and disinformation as leading global risks. Moreover, misinformation and disinformation can interact with and be exacerbated by other technological and societal factors, such as the rise of AI-generated content.
This post examines some contemporary online risks, including problems highlighted by ISS ESG Screening & Controversies data. Additional data from the ISS ESG Corporate Rating offer insight into how companies in the Interactive Media and Online Communications industry are responding to such risks. The post also reviews evolving regulation that is shaping the digital landscape and the response to misinformation, disinformation, and related threats. With an estimated two-thirds of the global population having an online presence, the majority of whom are also social media users, the number of people such content might reach has also expanded significantly.

In a new report, Freedom House documents the ways governments are now using AI to amplify censorship. Artificial intelligence has turbocharged state efforts to crack down on internet freedoms over the past year.
Governments and political actors around the world, in both democracies and autocracies, are using AI to generate texts, images, and video to manipulate public opinion in their favor and to automatically censor critical online content. In a new report released by Freedom House, a human rights advocacy group, researchers documented the use of generative AI in 16 countries “to sow doubt, smear opponents, or influence public debate.” The annual report, Freedom on the Net, scores and ranks countries according to their relative degree of internet freedom, as measured by a host of factors such as internet shutdowns and laws limiting online expression. The 2023 edition, released on October 4, found that global internet freedom declined for the 13th consecutive year, driven in part by the proliferation of artificial intelligence. “Internet freedom is at an all-time low, and advances in AI are actually making this crisis even worse,” says Allie Funk, a researcher on the report. Funk says one of their most important findings this year has to do with changes in the way governments use AI, though we are just beginning to learn how the technology is boosting digital repression.