The power of the machine – harnessing AI to fight disinformation (BBC)
Falsehoods, fabrications, fake news – disinformation is nothing new. For centuries, people have taken deliberate action to mislead the public. In medieval Europe, Jewish communities were persecuted because people believed conspiracy theories suggesting that Jews spread the Black Death by poisoning wells. In 1937, Joseph Stalin doctored newspaper photographs to remove those who no longer aligned with him, altering the historical record to fit the political ambitions of the present. The advent of social media helped democratise access to information – giving (almost) anyone, (almost) anywhere, the ability to create and disseminate ideas, opinions, and make-up tutorials to millions of people all over the world.
Bad actors, or just misinformed ones, can now share whatever they want with whomever they want at an unprecedented scale. Thanks to generative AI tools, it’s now even cheaper and easier to create misleading audio or visual content. This new, more polluted information environment has real-world impact. For our institutions (however imperfect they may be), a disordered information ecosystem results in everything from lower voter turnout and impeded emergency responses during natural disasters to mistrust in evidence-based health advice. Like any viral TikTok moment, trends in misinformation and disinformation will also evolve: new technologies create new opportunities for scale and impact; new platforms give access to new audiences.
In the same way BBC Research & Development's Advisory team explored trends shaping the future of social media, we now look to the future of disinformation. We want to know how misinformation and disinformation are changing – and what technologies drive that change. Most importantly, we want to understand public service media’s role in enabling a healthier information ecosystem beyond our journalistic output. R&D has already been developing new tools and standards for dealing with trust online. As a founding member of the Coalition for Content Provenance and Authenticity (C2PA), we recently trialled content credentials with BBC Verify. We’ve also built deepfake detection tools to help journalists assess whether a video or a photo has been altered by AI.
But it’s important to understand where things are going, not just where they are today. Based on some preliminary expert interviews, a new picture is emerging: The Trusted News Initiative is a partnership, founded by the BBC, that includes organisations from around the globe, including AP, AFP, BBC, CBC/Radio-Canada, the European Broadcasting Union (EBU), the Financial Times, the Information Futures Lab, Google/YouTube, The... TNI members work together to build audience trust and to find solutions to tackle the challenges of disinformation. By including both media organisations and social media platforms, it is the only forum of its kind in the world designed to take on disinformation in real time. Our most recent conference took place in London and Delhi in March 2023 – you can watch all of the sessions again here.
Project Origin is part of a wide collaboration between media and tech organisations to develop signals that can be tied to media content to allow audiences to determine where content has come from and... The project was started in 2018 by the BBC with CBC/Radio-Canada, The New York Times and Microsoft, born of a conviction that media publishers, working in concert with technology and civil society organisations,... In 2020 we joined with partners in the Content Authenticity Initiative to establish the Coalition for Content Provenance and Authenticity (C2PA), an open standards body to develop and share our work, which has since been... A year on, things are even worse, with bad actors using a real-time war to mount their latest disinformation offensives and the anti-vaccination narrative consolidating its own community, to create an information ecosystem that... It's abundantly clear that audiences need help to identify trustworthy content. A report published in the UK by the regulator Ofcom on 30 March 2022 showed that a third of internet users are unaware that online content might be false or biased.
We believe that our work on Project Origin, alongside the promotion of media literacy and fact checking, offers a solution. In the last twelve months we have seen the C2PA release version 1.0 of its technical specification for digital provenance. We’ve built official and unofficial support for our work, with Sony one of the latest large organisations to join the C2PA. Media partners have been taking part in a range of activities examining workflows – the latest of these will welcome the IPTC (International Press Telecommunications Council) and its expertise on board.

US Deputy Attorney General Lisa Monaco says AI could be used to "incite violence". Artificial intelligence (AI) threatens to "supercharge" disinformation and incite violence at elections, the US deputy attorney general has warned.
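The core idea behind the C2PA provenance work can be illustrated with a much-simplified sketch: a publisher binds a signed "manifest" to a piece of content via its cryptographic hash, so that anyone can later check both who made the claim and whether the bytes have been altered since. The sketch below is a conceptual toy, not the C2PA specification – the real standard uses X.509 certificates and embedded JUMBF manifests rather than an HMAC shared key, and the key, producer name, and function names here are illustrative assumptions.

```python
import hashlib
import hmac
import json

# Stand-in for a real signing credential (C2PA uses certificate-based signatures).
SIGNING_KEY = b"newsroom-secret-key"

def create_manifest(content: bytes, producer: str) -> dict:
    """Bind a provenance claim to the content via its hash, then sign the claim."""
    claim = {
        "producer": producer,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check that the claim is authentically signed AND the content is unmodified."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # claim was forged or tampered with
    return manifest["claim"]["content_sha256"] == hashlib.sha256(content).hexdigest()

photo = b"...image bytes..."
manifest = create_manifest(photo, "Example Newsroom")
print(verify_manifest(photo, manifest))              # True: content intact
print(verify_manifest(photo + b"edit", manifest))    # False: content altered
```

The important property, shared with the real standard, is that the two checks fail independently: editing the content breaks the hash binding, while rewriting the claim breaks the signature.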
Speaking exclusively to the BBC, Lisa Monaco described AI as the "ultimate double-edged sword". It could deliver "profound benefits" to society but also be used by "malicious actors" to "sow chaos", she added. And she revealed plans to make the use of AI by criminals an aggravating factor in sentencing in US courts.

Misinformation is spreading online via videos that particularly appeal to children. YouTube channels that use AI to make videos containing false "scientific" information are being recommended to children as "educational content". Investigative BBC journalists working in a team that analyses disinformation – information that is deliberately misleading and false – found more than 50 channels in more than 20 languages spreading disinformation disguised as STEM (science, technology, engineering and maths) content.
These include pseudo-science – presenting information as scientific fact when it is not based on proper scientific methods – as well as outright false information and conspiracy theories: beliefs that certain groups are deliberately misleading the general public, typically for the benefit of a small, powerful group. Examples of conspiracy theories include claims of electricity-producing pyramids, the denial of human-caused climate change, and the existence of aliens.

The Escalating Threat of Synthetic Media and the Quest for Authenticity in News Reporting

In the rapidly evolving digital landscape, the lines between reality and fabrication are becoming increasingly blurred. The advent of sophisticated artificial intelligence (AI) has ushered in an era where synthetically generated content, including images and videos, is becoming indistinguishable from authentic material.
This poses a significant challenge to the integrity of information, particularly for news media organizations that serve as gatekeepers of truth and accuracy in an era already grappling with misinformation and disinformation. The proliferation of synthetic media, often referred to as “deepfakes,” has raised serious concerns about the potential for manipulation and deception. Deepfakes leverage AI algorithms to create realistic but fabricated content, often depicting individuals saying or doing things they never actually did. This technology has the potential to erode public trust, damage reputations, and even incite violence or social unrest. The ease with which such content can be created and disseminated online presents a formidable challenge to news organizations striving to maintain credibility and uphold journalistic ethics. Recognizing the urgency of this issue, the British Broadcasting Corporation (BBC), a global leader in news and information, has strengthened its collaboration with Japanese technology giant Sony.
This partnership aims to develop robust tools and methodologies for identifying synthetically produced images and videos, thereby bolstering the BBC’s ability to verify the authenticity of the content it disseminates. The BBC, which established a dedicated disinformation unit in 2018, is at the forefront of combating misinformation and views this collaboration as a crucial step in preserving the integrity of news reporting. The joint effort between the BBC and Sony falls under the umbrella of the Coalition for Content Provenance and Authenticity (C2PA), a collaborative industry initiative dedicated to establishing standardized verification workflows for digital media. The C2PA brings together key players from the technology, media, and academic sectors to develop open-source technical specifications for certifying the origin and authenticity of digital content. This collaborative approach is essential to creating a universal standard that can be adopted across the news industry and beyond.

Dr Christian Schroeder de Witt, a Senior Research Associate at the University of Oxford, appeared live in-studio on BBC News to discuss AI's role in combating disinformation. As an expert in AI and cybersecurity, he shared insights into the limitations of current AI models in detecting disinformation. The segment, titled "Russia and Iran Use Disinformation Against US Elections", explored the fundamental role of AI in detecting and counteracting disinformation campaigns, such as those orchestrated by nation-states. At Oxford, Dr Schroeder de Witt's research focuses on enhancing AI techniques for cybersecurity and disinformation detection.
He also serves as a Stipendiary Lecturer in Computer Science at St Catherine’s College. During the BBC News interview, he highlighted the challenges faced by current AI models, especially when confronted with new and unfamiliar events. He emphasised the importance of refining these models to better adapt to evolving disinformation tactics, and critiqued a recent US study on how targeted misinformation can influence public beliefs. Reflecting on his appearance, he also underscored his collaboration with BBC Verify, a BBC division dedicated to combating misinformation and verifying news stories.
This partnership, which includes a Big Tech company, reinforces the crucial role of AI in ensuring the accuracy and integrity of information in today’s media landscape.

In a new report, Freedom House documents the ways governments are now using the tech to amplify censorship. Artificial intelligence has turbocharged state efforts to crack down on internet freedoms over the past year. Governments and political actors around the world, in both democracies and autocracies, are using AI to generate text, images, and video to manipulate public opinion in their favor and to automatically censor critical online... In the report, released by Freedom House, a human rights advocacy group, researchers documented the use of generative AI in 16 countries “to sow doubt, smear opponents, or influence public debate.” The annual report, Freedom on the Net, scores and ranks countries according to their relative degree of internet freedom, as measured by a host of factors like internet shutdowns, laws limiting online expression, and...
The 2023 edition, released on October 4, found that global internet freedom declined for the 13th consecutive year, driven in part by the proliferation of artificial intelligence. “Internet freedom is at an all-time low, and advances in AI are actually making this crisis even worse,” says Allie Funk, a researcher on the report. Funk says one of their most important findings this year has to do with changes in the way governments use AI, though we are just beginning to learn how the technology is boosting digital...