Artificial Intelligence Deepfakes And Disinformation

Bonisiwe Shabane

Deepfake is a term for content generated by artificial intelligence (AI) with the intention that it be perceived as real. Deepfakes have gained notoriety for their potential misuse in disinformation, propaganda, pornography, defamation, and financial fraud. Despite prominent discussions of the potential harms of deepfakes, empirical evidence on the harms of deepfakes to the human mind remains sparse, scattered, and unstructured. This scoping review presents an overview of the research on how deepfakes can negatively affect the human mind and behavior. Of an initial 1,143 papers, 28 were included in the scoping review.

Several types of harm were identified: concerns and worries; deception consequences (including false memories, attitude shifts, sharing intention, and false investment choices); and mental health harm (including distress, anxiety, reduced self-efficacy, and sexual deepfake victimization)... We conclude that deepfake harm ranges widely and is often hypothetical; hence, empirical investigation of potential harms to the human mind and behavior, and further methodological refinement to validate current findings, is warranted.

Digital content has long been prone to manipulation for various purposes such as advertising, art, and entertainment, as well as for nefarious motives such as deception, fraud, propaganda, and slander. Technological development accelerates the usability and quality of digital content creation and manipulation. Recent developments in artificial intelligence (AI) have led to easy-to-access tools that allow the generation of completely artificial digital media.

Generative adversarial networks (GANs) are AI models consisting of a generating component and an adversarial discriminator component whose interplay continually refines the model’s ability to generate the desired output (Creswell et al. 2018). GAN-based models are able to produce synthetic content indistinguishable from real content; such content has become colloquially known as a “deepfake”, a portmanteau of deep (learning) and fake (Chadha et al. 2021; Lyu 2020; Westerlund 2019). Deepfakes are AI-generated content created to be recognized as real, and can appear as pictures, videos, text, or audio (Farid 2022; Khanjani et al. 2023).
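To make the generator–discriminator interplay concrete, here is a minimal toy sketch in PyTorch (a framework assumption on our part; the cited works are framework-agnostic). The “real” data is a simple 2-D Gaussian stand-in rather than images, so the adversarial training loop stays visible without the machinery of a full image GAN.

```python
# Toy GAN: a generator learns to mimic samples from a 2-D Gaussian while a
# discriminator learns to tell real samples from generated ones. Illustrative
# only; image deepfakes use convolutional models and far more data.
import torch
import torch.nn as nn

dim, noise_dim, batch = 2, 8, 64

G = nn.Sequential(nn.Linear(noise_dim, 32), nn.ReLU(), nn.Linear(32, dim))
D = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch():
    # Stand-in "real" content: points drawn from a Gaussian centered at (3, 3).
    return torch.randn(batch, dim) + 3.0

for step in range(2000):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    fake = G(torch.randn(batch, noise_dim)).detach()
    loss_d = bce(D(real_batch()), torch.ones(batch, 1)) + \
             bce(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: fool D into scoring generated samples as real.
    fake = G(torch.randn(batch, noise_dim))
    loss_g = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(5, noise_dim)))  # samples should drift toward (3, 3)
```

The continual refinement described above is exactly this loop: each update of the discriminator sharpens the feedback signal that the next update of the generator trains against.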

Across modalities, human ability to detect deepfakes is at chance level (Diel et al. 2024a). Although deepfakes can hypothetically depict any type of content, they are notorious for their use in the recreation of real humans. Using AI-based technologies such as face-swapping, deepfakes can be created by projecting one person’s face onto another in the target material (e.g., a video), as sketched below. While the initial use of deepfakes was often humorous, severe misuse of deepfakes is found in pornography, political propaganda and disinformation, financial fraud and marketplace deception, and academic dishonesty (Albahar and Almalki 2019; … 2020; Campbell et al. 2022; Plangger, Sands, and Kietzmann 2021; Fink 2019; Hamed et al. 2023; Ibrahim et al. 2023).
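As a rough intuition for the face-projection step, the sketch below cuts a detected face out of one image and blends it into another using classical OpenCV tools. This is our own illustration under stated assumptions, not an actual deepfake pipeline: real face-swaps rely on trained neural networks, and the file names here are hypothetical placeholders.

```python
# Crude cut-and-blend "face projection": NOT a real deepfake pipeline, just an
# illustration of mapping one person's face onto another person's image.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def first_face(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return faces[0]  # naive: assumes at least one face was detected

source = cv2.imread("person_a.jpg")  # face to project (hypothetical file)
target = cv2.imread("person_b.jpg")  # material to alter (hypothetical file)

sx, sy, sw, sh = first_face(source)
tx, ty, tw, th = first_face(target)

# Resize the source face region to fit the target face region.
face_patch = cv2.resize(source[sy:sy + sh, sx:sx + sw], (tw, th))

# Elliptical mask roughly covering the face, used for blending.
mask = np.zeros((th, tw), dtype=np.uint8)
cv2.ellipse(mask, (tw // 2, th // 2), (tw // 2, th // 2), 0, 0, 360, 255, -1)

# Poisson (seamless) cloning matches color and lighting at the seam.
center = (tx + tw // 2, ty + th // 2)
output = cv2.seamlessClone(face_patch, target, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("swapped.jpg", output)
```

A learned face-swap replaces the resize-and-blend steps with a model that reconstructs the source identity under the target’s pose, expression, and lighting, which is what makes the results so much harder to detect.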

Deepfakes first gained public awareness in 2017 through their use in the creation of AI-generated pornography of celebrities; the first use of the term ‘deepfake’ occurred in this context (Fido et al. 2022; Popova 2020; Westerlund 2019). The vast majority of deepfakes on the internet are pornographic in nature (Ajder et al. 2019).

Cases of revenge- or extortion-based deepfake pornography have been reported, including cases targeting minors (FBI 2024; Mania 2024). In South Korea, which accounts for about 99% of deepfake pornography content (Home Security Heroes 2023), recent legislation criminalized the possession or consumption of deepfake pornography (Jung-joo 2025). Similarly, the creation of deepfake pornography has been criminalized in the United Kingdom (Gov.uk 2025), and the publication of nonconsensual sexual deepfakes has been prohibited by the United States’ Take It Down Act (US Congress…). The use of a target person’s face for the creation of a sexual deepfake is typically done without their consent, which may lead to considerable harm to the person (Blanchard and Taddeo 2023). When used intentionally to damage a person or their reputation, sexual deepfakes can be considered a type of image-based sexual abuse (IBSA; McGlynn and Toparlak 2025; Rigotti et al. 2024).

While conventional IBSA often involves the sharing of sexually explicit material of a target taken in a private real-life environment (e.g., sex footage), deepfakes enable the generation of sexual content involving situations the target… Consequences of such novel forms of IBSA have not yet been investigated thoroughly. Nevertheless, such information is highly relevant for the estimation of personal harm and potential compensatory measures for victimization.

Artificial intelligence (AI) systems are playing an overarching role in the disinformation phenomenon our world is currently facing. Such systems boost the problem not only by increasing opportunities to create realistic AI-generated fake content, but also, and essentially, by facilitating the dissemination of disinformation to a targeted audience and at scale by…

This situation entails multiple ethical and human rights concerns, in particular regarding human dignity, autonomy, democracy, and peace. In reaction, other AI systems are being developed to detect and moderate disinformation online. Such systems do not escape ethical and human rights concerns either, especially regarding freedom of expression and information. Having originally started with ascending co-regulation, the European Union (EU) is now heading toward descending co-regulation of the phenomenon. In particular, the Digital Services Act proposal provides for transparency obligations and external audits for very large online platforms’ recommender systems and content moderation. While with this proposal the Commission focuses on the regulation of content considered problematic, the EU Parliament and the EU Council call for enhancing access to trustworthy content.

In light of our study, we stress that the disinformation problem is mainly caused by the business model of the web, which is based on advertising revenues, and that adapting this model would reduce… We also observe that while AI systems are inappropriate for moderating disinformation content online, and even for detecting such content, they may be more appropriate for countering the manipulation of the digital ecosystem. This study aims to identify the right approach to tackling the disinformation problem online with due consideration for ethical values, fundamental rights and freedoms, and democracy. While moderating content as such, and using AI systems to that end, may be particularly problematic with regard to freedom of expression and information, we recommend countering the malicious use of technologies online to manipulate individuals. Since addressing the main cause of the effective manipulation of individuals online is paramount, the business model of the web should be on the radar of public regulation more than content moderation. Furthermore, we support a vibrant, independent, and pluralistic media landscape with investigative journalists following ethical rules.

Manipulation of truth is a recurring phenomenon throughout history. Damnatio memoriae, namely the attempted erasure of people from history, is an example of purposive distortion of reality that was already practiced in Ancient… Nevertheless, owing to the rapid advances in information and communication technologies (ICT) as well as their increasing pervasiveness, disingenuous information can now be produced easily and in a realistic format, and its dissemination to… The consequences are serious, with far-reaching implications. For instance, the media ecosystem has been leveraged to influence citizens’ opinions and voting decisions related to the 2016 US presidential election and the 2016 UK referendum on leaving the European Union (EU)… In Myanmar, Facebook has been a useful instrument for those seeking to spread hate against Rohingya Muslims (Human Rights Council 2018, para. 74). In India, rumors on WhatsApp resulted in several murders (Dixit… In France, a virulent online campaign on social media against a professor ended with him being murdered (Bindner and Gluck 2020).

Conspiracy theories are currently prospering. And presently, in the context of Covid-19, we are facing what has been called an “infodemic” by the World Health Organization (WHO), with multiple adverse… As commonly understood, disinformation is false, inaccurate, or misleading information that is shared with the intent to deceive the recipient, as opposed to misinformation, which refers to false, inaccurate, or misleading information that… Whereas new digital technology and social media have amplified the creation and spread of both mis- and disinformation, only disinformation has been considered by the EU institutions as a threat that must be tackled… The disinformation problem is particular in the sense that, firstly, the shared information is intentionally deceptive in order to manipulate people and, secondly, to achieve his or her goal, its author takes advantage of the modern… For these reasons, our analysis stays on the beaten path, hence the title of this article referring solely to the disinformation problem. It is also worth specifying that unlike “fake news,” a term that has been used by politicians and their supporters to dismiss coverage they find disagreeable, the disinformation problem encompasses various fabricated information…

In a new report, Freedom House documents the ways governments are now using the tech to amplify censorship. Artificial intelligence has turbocharged state efforts to crack down on internet freedoms over the past year. Governments and political actors around the world, in both democracies and autocracies, are using AI to generate texts, images, and video to manipulate public opinion in their favor and to automatically censor critical online... In a new report released by Freedom House, a human rights advocacy group, researchers documented the use of generative AI in 16 countries “to sow doubt, smear opponents, or influence public debate.” The annual report, Freedom on the Net, scores and ranks countries according to their relative degree of internet freedom, as measured by a host of factors like internet shutdowns, laws limiting online expression, and... The 2023 edition, released on October 4, found that global internet freedom declined for the 13th consecutive year, driven in part by the proliferation of artificial intelligence.

“Internet freedom is at an all-time low, and advances in AI are actually making this crisis even worse,” says Allie Funk, a researcher on the report. Funk says one of their most important findings this year has to do with changes in the way governments use AI, though we are just beginning to learn how the technology is boosting digital…

Deepfakes and AI Misinformation: Can You Trust What You See?

In the digital age, where information spreads at lightning speed, the line between reality and fabrication is becoming increasingly blurred. This blurring is largely due to the rise of sophisticated technologies like deepfakes, which leverage artificial intelligence to create incredibly realistic yet entirely fake videos and audio recordings. Deepfakes have evolved from clunky, easily detectable manipulations to highly convincing impersonations, capable of mimicking a person’s facial expressions, voice, and even mannerisms with astonishing accuracy.

This evolution presents a grave challenge to our ability to discern truth from falsehood, raising profound questions about the future of trust and the integrity of information consumed by the public. The potential consequences of deepfakes extend far beyond mere entertainment or harmless pranks. These AI-powered fabrications can be weaponized to spread misinformation, manipulate public opinion, and damage reputations. Imagine a deepfake video of a political candidate making inflammatory remarks or engaging in illicit activities surfacing just before an election. Such a scenario could drastically alter public perception and potentially sway the outcome of the election, undermining the democratic process. Beyond the political sphere, deepfakes can be used to harass individuals, extort money, or incite violence.

The ease with which these convincing fakes can be created and disseminated poses a significant threat to individuals, organizations, and even national security. The technology underpinning deepfakes is known as Generative Adversarial Networks (GANs). GANs consist of two neural networks: a generator that creates the fake content and a discriminator that attempts to identify the fake. These two networks are pitted against each other in a continuous feedback loop, with the generator striving to create ever more realistic fakes and the discriminator working to become better at detecting them. This adversarial process drives the rapid improvement in deepfake quality, making them progressively more challenging to identify. As the technology becomes more accessible and user-friendly, the proliferation of deepfakes is expected to increase exponentially, exacerbating the already rampant problem of online misinformation.

Combating the spread of deepfakes requires a multi-pronged approach. Tech companies are investing in developing sophisticated detection tools that can identify subtle inconsistencies in deepfake videos, such as unnatural blinking patterns, inconsistent lighting, or irregularities in lip movements. These detection tools leverage machine learning algorithms to analyze videos and flag potential deepfakes based on a variety of factors. However, as deepfake technology evolves, these detection methods must also adapt to keep pace. It’s a constant arms race between the creators of deepfakes and those working to detect them.
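As a hypothetical sketch of what such a frame-level detector might look like (the architecture and threshold below are our own illustrative assumptions, not any vendor’s actual tool), a small PyTorch classifier can score individual frames and flag a video when enough frames look suspicious:

```python
# Hypothetical frame-level deepfake detector: scores each frame with a small
# CNN, then flags the video if many frames look fake. A real system would be
# trained on labeled real/fake footage and use far stronger features.
import torch
import torch.nn as nn

class FrameDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):  # x: (batch, 3, H, W) normalized frames
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

detector = FrameDetector()  # untrained here; weights would come from training
frames = torch.rand(8, 3, 224, 224)  # stand-in for preprocessed video frames
scores = detector(frames)  # per-frame probability of being fake

# Flag the video if a majority of frames score above an arbitrary threshold.
print("flag video:", bool((scores > 0.5).float().mean() > 0.5))
```

Cues like unnatural blinking or lip-sync irregularities would enter such a system as additional engineered features or as temporal models over frame sequences, rather than the single-frame scoring shown here.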
