The Threat of Deepfakes and AI-Generated Misinformation to Visual

Bonisiwe Shabane

Deepfakes and AI Misinformation: Can You Trust What You See?

In the digital age, where information spreads at lightning speed, the line between reality and fabrication is becoming increasingly blurred. This blurring is largely due to the rise of sophisticated technologies like deepfakes, which leverage artificial intelligence to create incredibly realistic yet entirely fake videos and audio recordings. Deepfakes have evolved from clunky, easily detectable manipulations to highly convincing impersonations, capable of mimicking a person’s facial expressions, voice, and even mannerisms with astonishing accuracy. This evolution presents a grave challenge to our ability to discern truth from falsehood, raising profound questions about the future of trust and the integrity of information consumed by the public. The potential consequences of deepfakes extend far beyond mere entertainment or harmless pranks.

These AI-powered fabrications can be weaponized to spread misinformation, manipulate public opinion, and damage reputations. Imagine a deepfake video of a political candidate making inflammatory remarks or engaging in illicit activities surfacing just before an election. Such a scenario could drastically alter public perception and potentially sway the outcome of the election, undermining the democratic process. Beyond the political sphere, deepfakes can be used to harass individuals, extort money, or incite violence. The ease with which these convincing fakes can be created and disseminated poses a significant threat to individuals, organizations, and even national security. The core technology underpinning deepfakes is the generative adversarial network (GAN).

GANs consist of two neural networks: a generator that creates the fake content and a discriminator that attempts to identify the fake. These two networks are pitted against each other in a continuous feedback loop, with the generator striving to create ever more realistic fakes and the discriminator working to become better at detecting them. This adversarial process drives the rapid improvement in deepfake quality, making them progressively more challenging to identify. As the technology becomes more accessible and user-friendly, the proliferation of deepfakes is expected to increase exponentially, exacerbating the already rampant problem of online misinformation. Combating the spread of deepfakes requires a multi-pronged approach. Tech companies are investing in developing sophisticated detection tools that can identify subtle inconsistencies in deepfake videos, such as unnatural blinking patterns, inconsistent lighting, or irregularities in lip movements.
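The adversarial feedback loop described above can be sketched end-to-end on toy data. The example below is an illustrative sketch, not a real deepfake model: a linear "generator" with two parameters competes against a logistic-regression "discriminator" on one-dimensional data, and the generator's output distribution is gradually pulled toward the real one. All distributions, learning rates, and step counts are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "real" data: scalars drawn from N(4, 0.5).
def sample_real(n):
    return rng.normal(4.0, 0.5, n)

# Generator: a linear map of noise z ~ N(0, 1); params (a, b) are learned.
g = {"a": 1.0, "b": 0.0}
# Discriminator: logistic regression on a scalar; params (w, c) are learned.
d = {"w": 0.0, "c": 0.0}

lr_d, lr_g, batch = 0.05, 0.02, 64

for step in range(3000):
    z = rng.normal(0.0, 1.0, batch)
    x_fake = g["a"] * z + g["b"]
    x_real = sample_real(batch)

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    p_real = sigmoid(d["w"] * x_real + d["c"])
    p_fake = sigmoid(d["w"] * x_fake + d["c"])
    d["w"] += lr_d * np.mean((1 - p_real) * x_real - p_fake * x_fake)
    d["c"] += lr_d * np.mean((1 - p_real) - p_fake)

    # Generator: gradient ascent on log D(fake) (non-saturating loss),
    # i.e. it learns to produce samples the discriminator calls "real".
    p_fake = sigmoid(d["w"] * x_fake + d["c"])
    g["a"] += lr_g * np.mean((1 - p_fake) * d["w"] * z)
    g["b"] += lr_g * np.mean((1 - p_fake) * d["w"])

fake = g["a"] * rng.normal(0.0, 1.0, 1000) + g["b"]
print(f"generator output mean: {fake.mean():.2f} (real mean: 4.0)")
```

The same alternating-update structure, scaled up to deep convolutional networks over images, is what drives the "arms race" quality improvement the paragraph describes.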

These detection tools leverage machine learning algorithms to analyze videos and flag potential deepfakes based on a variety of factors. However, as deepfake technology evolves, these detection methods must also adapt to keep pace. It is a constant arms race between the creators of deepfakes and those working to detect them.

“Deepfake” is a term for content generated by an artificial intelligence (AI) with the intention that it be perceived as real. Deepfakes have gained notoriety for their potential misuse in disinformation, propaganda, pornography, defamation, and financial fraud.
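The feature-based flagging described above can be sketched as a toy classifier. The per-video features, their distributions, and the labels below are all synthetic placeholders invented for illustration; no real detection tool or dataset is involved.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-video features a detector might extract:
#   blink_rate     - blinks per second (early deepfakes blinked too rarely)
#   lighting_error - inconsistency score between face and scene lighting
#   lip_sync_error - audio/visual mismatch score for lip movements
def synth_videos(n, fake):
    blink = rng.normal(0.10 if fake else 0.30, 0.05, n)
    light = rng.normal(0.60 if fake else 0.20, 0.10, n)
    lips = rng.normal(0.50 if fake else 0.15, 0.10, n)
    return np.column_stack([blink, light, lips])

X = np.vstack([synth_videos(200, fake=False), synth_videos(200, fake=True)])
y = np.concatenate([np.zeros(200), np.ones(200)])  # 1 = deepfake

# A logistic-regression "detector" trained by gradient descent.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

# Evaluate on fresh synthetic videos.
X_test = np.vstack([synth_videos(100, fake=False), synth_videos(100, fake=True)])
y_test = np.concatenate([np.zeros(100), np.ones(100)])
pred = (1.0 / (1.0 + np.exp(-(X_test @ w + b)))) > 0.5
accuracy = np.mean(pred == y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

Real detectors replace the hand-crafted feature columns with learned representations (e.g., frame-level CNN embeddings), but the pipeline shape, extract features then classify, is the same.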

Despite prominent discussions on the potential harms of deepfakes, empirical evidence on the harms of deepfakes to the human mind remains sparse, wide-ranging, and unstructured. This scoping review presents an overview of the research on how deepfakes can negatively affect the human mind and behavior. Out of an initial 1,143 papers, 28 were included in the scoping review. Several types of harm were identified: concerns and worries; deception consequences (including false memories, attitude shifts, sharing intention, and false investment choices); mental health harm (including distress, anxiety, reduced self-efficacy, and sexual deepfake victimization);... We conclude that deepfake harm ranges widely and is often hypothetical; hence, empirical investigation of potential harms to the human mind and behavior, and further methodological refinement to validate current findings, is warranted.

Digital content has long been prone to manipulation for various reasons, such as advertisement, art, and entertainment, as well as for nefarious motives such as deception, fraud, propaganda, and slander. Technological development accelerates the usability and quality of digital content creation and manipulation. Recent developments in artificial intelligence (AI) have led to easy-to-access tools which allow the generation of completely artificial digital media. Generative adversarial networks (GANs) are AI models consisting of a generating component and an adversarial discriminator component whose interplay continually refines the model’s ability to generate the desired output (Creswell et al. 2018). GAN-based models are able to produce synthetic content indistinguishable from real content, which has become colloquially known as “deepfake”, a portmanteau of deep (learning) and fake (Chadha et al. 2021; Lyu 2020; Westerlund 2019). Deepfakes are AI-generated content created to be perceived as real, and can appear as pictures, videos, text, or audio (Farid 2022; Khanjani et al. 2023). Across modalities, the human ability to detect deepfakes is at chance level (Diel et al. 2024a). Although deepfakes can hypothetically depict any type of content, they are renowned and notorious for their use in the recreation of real humans.
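"At chance level" is a testable statistical claim: a detection experiment can be compared against coin-flipping with an exact binomial test. The trial counts below are invented for illustration, not taken from any cited study.

```python
from math import comb

def binom_tail(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the one-sided p-value
    against the null hypothesis of chance-level guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical experiment: a participant labels 100 clips as real or fake
# and gets 60 right. Is that detectably better than coin-flipping?
n_trials, n_correct = 100, 60
p_value = binom_tail(n_correct, n_trials)
print(f"one-sided p-value vs. chance: {p_value:.4f}")
# 50 correct out of 100 gives p ≈ 0.54: indistinguishable from chance.
```

Findings of chance-level performance correspond to accuracy values whose binomial p-values stay far from significance, as in the 50-of-100 case in the final comment.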

Using AI-based technologies such as face-swapping, deepfakes can be created by projecting one person’s face onto another in the target material (e.g., a video). While the initial use of deepfakes was often humorous, severe misuse of deepfakes is found in pornography, political propaganda and disinformation, financial fraud and marketplace deception, and academic dishonesty (Albahar and Almalki 2019; ... 2020; Campbell et al. 2022; Plangger, Sands, and Kietzmann 2021; Fink 2019; Hamed et al. 2023; Ibrahim et al. 2023). Deepfakes first gained public awareness through their use in the creation of AI-generated pornography of celebrities in 2017, the context in which the term ‘deepfake’ first occurred (Fido et al. 2022; Popova 2020; Westerlund 2019). The vast majority of deepfakes on the internet are pornographic in nature (Ajder et al. 2019). Cases of revenge- or extortion-based deepfake pornography have been reported, including cases targeting minors (FBI 2024; Mania 2024). In South Korea, which accounts for about 99% of deepfake pornography content (Home Security Heroes 2023), recent legislation criminalized the possession or consumption of deepfake pornography (Jung-joo 2025). Similarly, the creation of deepfake pornography has been criminalized in the United Kingdom (Gov.uk 2025), and the publication of nonconsensual sexual deepfakes has been prohibited by the United States’ Take It Down Act (US Congress...

The use of a target person’s face for the creation of a sexual deepfake is typically done without their consent, which may lead to considerable harm to the person (Blanchard and Taddeo 2023). When used intentionally to damage a person or their reputation, sexual deepfakes can be considered a type of image-based sexual abuse (IBSA; McGlynn and Toparlak 2025; Rigotti et al. 2024). While conventional IBSA often involves the sharing of sexually explicit material of a target taken in a private real-life environment (e.g., sex footage), deepfakes enable the generation of sexual content involving situations the target... Consequences of such novel forms of IBSA have so far not been investigated thoroughly. Nevertheless, such information is highly relevant for the estimation of personal harm and potential compensatory measures for victimization.

Shaun Nolan, associate professor in English and sociolinguistics at Malmö University, does not work for, consult, own shares in, or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment. Malmö University provides funding as a member of The Conversation UK.

A fake photo of an explosion near the Pentagon once rattled the stock market.

A tearful video of a frightened young “Ukrainian conscript” went viral, until it was exposed as staged. We may be approaching a “synthetic media tipping point”, where AI-generated images and videos become so realistic that traditional markers of authenticity, such as visual flaws, rapidly disappear. In 2025, 70% of people struggle to trust online information, and 64% fear AI-generated content could influence elections. We are entering an era where seeing is no longer believing.

