The next wave of disinformation: AI, fact-checks, and the fight (BBC)
Falsehoods, fabrications, fake news – disinformation is nothing new. For centuries, people have taken deliberate action to mislead the public. In medieval Europe, Jewish communities were persecuted because people believed conspiracy theories suggesting that Jews spread the Black Death by poisoning wells. In 1937, Joseph Stalin doctored newspaper photographs to remove those who no longer aligned with him, altering the historical record to fit the political ambitions of the present. The advent of social media helped democratise access to information – giving (almost) anyone, (almost) anywhere, the ability to create and disseminate ideas, opinions, and make-up tutorials to millions of people all over the world. Bad actors, or just misinformed ones, can now share whatever they want with whomever they want at an unprecedented scale.
Thanks to generative AI tools, it’s now even cheaper and easier to create misleading audio or visual content at scale. This new, more polluted information environment has real-world impact. For our institutions (however imperfect they may be), a disordered information ecosystem results in everything from lower voter turnout and impeded emergency responses during natural disasters to mistrust in evidence-based health advice. Like any viral TikTok moment, trends in misinformation and disinformation will also evolve. New technologies create new opportunities for scale and impact; new platforms give access to new audiences. In the same way BBC Research & Development's Advisory team explored trends shaping the future of social media, we now look to the future of disinformation.
We want to know how misinformation and disinformation are changing – and what technologies drive that change. Most importantly, we want to understand public service media’s role in enabling a healthier information ecosystem beyond our journalistic output. R&D has already been developing new tools and standards for dealing with trust online. As a founding member of the Coalition for Content Provenance and Authenticity (C2PA), we recently trialled content credentials with BBC Verify. We’ve also built deepfake detection tools to help journalists assess whether a video or a photo has been altered by AI. But it’s important to understand where things are going, not just where they are today.
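The core idea behind content credentials is that a publisher cryptographically binds a signature to the exact bytes of a piece of media, so any later alteration is detectable. The sketch below is a deliberately simplified illustration of that principle using only a hash and an HMAC with a hypothetical shared key – real C2PA content credentials use signed manifests with X.509 certificate chains and embed far richer provenance data, none of which is modelled here.

```python
import hashlib
import hmac

# Hypothetical publisher key for illustration only; C2PA uses
# public-key certificates, not a shared secret like this.
SIGNING_KEY = b"publisher-secret-key"

def make_credential(media_bytes: bytes) -> dict:
    """Bind a signature to the exact bytes of a media file."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    signature = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "signature": signature}

def verify_credential(media_bytes: bytes, credential: dict) -> bool:
    """Check that the media is unaltered and the signature is genuine."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    if digest != credential["sha256"]:
        return False  # content was altered after signing
    expected = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

photo = b"...raw image bytes..."
cred = make_credential(photo)
print(verify_credential(photo, cred))         # True: bytes match the credential
print(verify_credential(photo + b"x", cred))  # False: any edit breaks the hash
```

The design point this toy captures is that verification fails on any byte-level change, which is what lets a newsroom distinguish an original, credentialed asset from a doctored copy.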
Based on some preliminary expert interviews, a new picture is emerging: A wave of disinformation has been unleashed online since Israel began strikes on Iran last week, with dozens of posts reviewed by BBC Verify seeking to exaggerate the effectiveness of Tehran's response. Our analysis found a number of videos - created using artificial intelligence - boasting of Iran's military capabilities, alongside fake clips showing the aftermath of strikes on Israeli targets. The three most viewed fake videos BBC Verify found have collectively amassed over 100 million views across multiple platforms. Pro-Israeli accounts have also shared disinformation online, mainly by recirculating old clips of protests and gatherings in Iran, falsely claiming that they show mounting dissent against the government and support among Iranians for Israel's... Israel launched strikes in Iran on 13 June, leading to several rounds of Iranian missile and drone attacks on Israel.
One organisation that analyses open-source imagery described the volume of disinformation online as "astonishing" and accused some "engagement farmers" of seeking to profit from the conflict by sharing misleading content designed to attract attention.
Disinformation. By now we are all aware of its polarising effects and real-world consequences. But how many of us are aware of the new and growing threat to trusted information that’s emerging from generative AI’s explosion on the scene? I’m talking about ‘distortion’. Distortion is what happens when an AI assistant ‘scrapes’ information to respond to a question and serves up an answer that’s factually incorrect, misleading, and potentially dangerous.
Don’t get me wrong - AI is the future and brings endless opportunities. Here at BBC News we are already forging ahead with AI tools that will help us deliver more trusted journalism to more consumers in more formats – and on platforms where they need it. And we are in discussions with tech companies around new AI applications that could further enhance and improve our output. But the price of AI’s extraordinary benefits must not be a world where people searching for answers are served distorted, defective content that presents itself as fact. In what can feel like a chaotic world, it surely cannot be right that consumers seeking clarity are met with yet more confusion.
In a new report, Freedom House documents the ways governments are now using AI to amplify censorship. Artificial intelligence has turbocharged state efforts to crack down on internet freedoms over the past year. Governments and political actors around the world, in both democracies and autocracies, are using AI to generate texts, images, and video to manipulate public opinion in their favor and to automatically censor critical online... In a new report released by Freedom House, a human rights advocacy group, researchers documented the use of generative AI in 16 countries “to sow doubt, smear opponents, or influence public debate.” The annual report, Freedom on the Net, scores and ranks countries according to their relative degree of internet freedom, as measured by a host of factors like internet shutdowns, laws limiting online expression, and... The 2023 edition, released on October 4, found that global internet freedom declined for the 13th consecutive year, driven in part by the proliferation of artificial intelligence.
“Internet freedom is at an all-time low, and advances in AI are actually making this crisis even worse,” says Allie Funk, a researcher on the report. Funk says one of their most important findings this year has to do with changes in the way governments use AI, though we are just beginning to learn how the technology is boosting digital...

At the 2025 Milton Wolf Seminar, panel discussions tackled one of the urgent questions of the digital age: How can truth be verified in a world where its boundaries are increasingly blurred? Against a backdrop of widespread disinformation, increasing polarisation and declining public trust, the very notion of truth itself has become contested. In such an environment, fact-checking faces a dual challenge: not only is it harder to agree on what qualifies as truth, but disinformation now spreads with unprecedented speed and scale, outpacing traditional methods of... Amid these challenges, large language models (LLMs) have emerged as both a source of the problem and, paradoxically, a potential part of the solution.
LLMs are advanced generative artificial intelligence (AI) systems trained on large amounts of internet data to generate human-like output. On the one hand, LLMs can produce convincing falsehoods rapidly and at scale, exacerbating the spread of disinformation. On the other hand, their advanced capabilities might also be harnessed to detect, counter and even stop disinformation. This raises a critical question: Could generative AI, despite its risks, become an ally in the fight for truth? The 2025 Milton Wolf Seminar placed a strong emphasis on the dangers posed by AI, such as how it can produce disinformation and mislead individuals. This blog post explores an alternative angle.
Instead of viewing AI only through the lens of risk, it asks whether this technology might also serve as part of the solution. In this blog post, I will first discuss the evolution of fact-checking and how it adapted to the changing information ecosystem. Next, I will examine the challenges fact-checkers face today, especially the scale and speed of disinformation. I will then turn to LLMs, to consider whether this technology can help support the fact-checking process. Finally, I will reflect on how LLMs might strengthen the ongoing fight for truth.

Disinformation itself is not new, but social media has profoundly transformed how quickly and widely it spreads.
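One building block of machine-assisted fact-checking is matching an incoming claim against a database of claims that human fact-checkers have already verified. The sketch below is a toy illustration of that step using simple token overlap (Jaccard similarity) rather than an LLM; production systems use embeddings or LLM-based semantic matching, and the fact-check entries here are invented examples, not real verdicts.

```python
def tokenize(text: str) -> set:
    """Naive whitespace tokenizer; real systems use semantic embeddings."""
    return set(text.lower().split())

def jaccard(a: set, b: set) -> float:
    """Overlap between two token sets, 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Invented example entries: (previously fact-checked claim, verdict).
fact_checks = [
    ("video shows a 2019 protest not current events", "false"),
    ("missile strike footage is taken from a video game", "false"),
]

def match_claim(claim: str, threshold: float = 0.3):
    """Return (score, matched claim, verdict) for the best match,
    or None if nothing clears the similarity threshold and the
    claim should go to a human fact-checker instead."""
    scored = [(jaccard(tokenize(claim), tokenize(text)), text, verdict)
              for text, verdict in fact_checks]
    best = max(scored)
    return best if best[0] >= threshold else None

hit = match_claim("this footage of a missile strike is from a video game")
print(hit[1], "->", hit[2])
```

The threshold is the key design choice: set too low, unrelated claims get mislabelled; set too high, paraphrased repeats of known falsehoods slip through to circulate unchecked – which is exactly the scale problem LLM-based matching aims to ease.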
Platforms such as Facebook and X (formerly Twitter) have redefined how citizens consume information. While democratising access, they have also created unregulated and minimally controlled spaces where disinformation can rapidly proliferate (Wittenberg & Berinsky, 2020). Unlike traditional journalism, where media professionals served as gatekeepers and information had to pass through institutional filters before reaching the public, social media platforms allow anyone to publish and share content instantly, without any... This has given rise to retroactive gatekeeping: a form of fact-checking that involves verifying the accuracy of claims after they have already begun circulating online (Singer, 2023).

Artificial intelligence (AI) threatens to "supercharge" disinformation and incite violence at elections, the US deputy attorney general has warned. Speaking exclusively to the BBC, Lisa Monaco described AI as the "ultimate double-edged sword".
It could deliver "profound benefits" to society but also be used by "malicious actors" to "sow chaos", she added. And she revealed plans to make the use of AI by criminals an aggravating factor in sentencing in US courts. The former federal prosecutor, who is in the UK to deliver a lecture on AI at the University of Oxford, said violent criminals who used guns were given longer sentences.