The Dark Side Of AI: How Deepfakes And Disinformation Are ... - Forbes
Every week, I talk to business leaders who believe they're prepared for AI disruption. But when I ask them about their defense strategy against AI-generated deepfakes and disinformation, I'm usually met with blank stares. The truth is, we've entered an era where a single fake video or manipulated image can wipe millions off a company's market value in minutes. While we've all heard about the societal implications of AI-generated fakery, the specific risks to businesses are both more immediate and more devastating than many realize. Picture this: A convincing deepfake video shows your CEO announcing a major product recall that never happened, or AI-generated images suggest your headquarters is on fire when it isn't. It sounds like science fiction, but it's already happening.
In 2023, a single fake image of smoke rising from a building triggered a panic-driven stock market sell-off, demonstrating how quickly artificial content can impact real-world financials. The threat is particularly acute during sensitive periods like public offerings or mergers and acquisitions, as noted by PwC. During these critical junctures, even a small piece of manufactured misinformation can have outsized consequences. The reputational risks are equally concerning. Today's deepfake technology can clone your senior executives' voices with frightening accuracy, creating fake speeches or interviews that could destroy years of carefully built trust in minutes. We're seeing an increasing number of cases where fraudsters use synthetic voices and deepfake videos to convince employees to transfer substantial sums to fake accounts.
The dark side of AI — from bias, misinformation, and cybersecurity threats to legal, societal, and environmental risks. AI is a game-changer poised to impact businesses and individuals significantly in the years ahead. Fueled by investor ambitions, business interests, and consumer enthusiasm, the pace of AI innovation and adoption is set to accelerate. Its importance and influence will grow as AI finds novel and unforeseen applications that transform industries, society, and government operations, delivering immense economic and societal value. AI will revolutionize healthcare, finance, manufacturing, transportation, education, and more. By 2030, AI-enabled autonomous systems, humanoid robots, and AI-driven decision-making will be prevalent across industries and applications.
Amid this promise and excitement, we must not overlook AI’s dark side — the limitations, risks, and societal harms it brings. As Cutter Fellow Steve Andriole aptly described in his 2018 article, AI is “good, disruptive, and scary.” Its unintended consequences can be alarming and genuinely harmful when implemented without caution or ethics. This article offers a forward-looking, balanced perspective on AI’s darker dimensions and potential impact. We explore the technical barriers, risks, and limitations associated with AI while proposing practical remedies. Emphasizing the urgent need for action, we call on all stakeholders (developers, users, governments, and regulatory bodies) to engage responsibly. By addressing these challenges now, we can steer AI toward a future that maximizes its benefits while minimizing its harms.
Some of AI’s key challenges and risks include bias, misinformation, cybersecurity threats, and the deepfakes discussed below.

“Deepfake” is a term for content generated by artificial intelligence (AI) with the intention that it be perceived as real. Deepfakes have gained notoriety for their potential misuse in disinformation, propaganda, pornography, defamation, and financial fraud. Despite prominent discussion of the potential harms of deepfakes, empirical evidence on the harms of deepfakes to the human mind remains sparse, wide-ranging, and unstructured. This scoping review presents an overview of the research on how deepfakes can negatively affect the human mind and behavior.
Out of an initial 1,143 papers, 28 were included in the scoping review. Several types of harm were identified: concerns and worries; deception consequences (including false memories, attitude shifts, sharing intention, and false investment choices); and mental health harm (including distress, anxiety, reduced self-efficacy, and sexual deepfake victimization), ... We conclude that deepfake harm ranges widely and is often hypothetical; hence, empirical investigation of potential harms to the human mind and behavior, and further methodological refinement to validate current findings, are warranted. Digital content has long been prone to manipulation for reasons such as advertisement, art, and entertainment, as well as for nefarious motives such as deception, fraud, propaganda, and slander. Technological development accelerates the usability and quality of digital content creation and manipulation.
Recent developments in artificial intelligence (AI) have led to easy-to-access tools that allow the generation of completely artificial digital media. Generative adversarial networks (GANs) are AI models consisting of a generating component and an adversarial discriminator component whose interplay continually refines the model’s ability to generate the desired output (Creswell et al. 2018). GAN-based models are able to produce synthetic content indistinguishable from real content, which has colloquially become known as “deepfake”, a portmanteau of deep (learning) and fake (Chadha et al. 2021; Lyu 2020; Westerlund 2019). Deepfakes are AI-generated content created to be recognized as real, and can appear as pictures, videos, text, or audio (Farid 2022; Khanjani et al.
2023). Across modalities, the human ability to detect deepfakes is at chance level (Diel et al. 2024a). Although deepfakes can hypothetically depict any type of content, they are renowned and notorious for their use in the recreation of real humans. Using AI-based technologies such as face-swapping, deepfakes can be created by projecting one person’s face onto another person in the target material (e.g., a video). While the initial use of deepfakes was often humorous, severe misuse of deepfakes is found in pornography, political propaganda and disinformation, financial fraud and marketplace deception, and academic dishonesty (Albahar and Almalki 2019; ...
2020; Campbell et al. 2022; Plangger, Sands, and Kietzmann 2021; Fink 2019; Hamed et al. 2023; Ibrahim et al. 2023). Deepfakes first gained public awareness due to their use in the creation of AI-generated pornography of celebrities in 2017, as the first use of the term ‘deepfake’ occurred in this context (Fido et al. 2022; Popova 2020; Westerlund 2019). The vast majority of deepfakes on the internet are pornographic in nature (Ajder et al.
2019). Cases of revenge- or extortion-based deepfake pornography have been reported, including cases targeting minors (FBI 2024; Mania 2024). In South Korea, which accounts for about 99% of deepfake pornography content (Home Security Heroes 2023), recent legislation criminalized the possession or consumption of deepfake pornography (Jung-joo 2025). Similarly, the creation of deepfake pornography has been criminalized in the United Kingdom (Gov.uk 2025), and the publication of nonconsensual sexual deepfakes has been prohibited by the United States’ Take It Down Act (US Congress ...). The use of a target person’s face for the creation of a sexual deepfake is typically done without their consent, which may lead to considerable harm to the person (Blanchard and Taddeo 2023). When used intentionally to damage a person or their reputation, sexual deepfakes can be considered a type of image-based sexual abuse (IBSA; McGlynn and Toparlak 2025; Rigotti et al.
2024). While conventional IBSA often involves the sharing of sexually explicit material of a target taken in a private real-life environment (e.g., sex footage), deepfakes enable the generation of sexual content involving situations the target ... The consequences of such novel forms of IBSA have so far not been thoroughly investigated. Nevertheless, such information is highly relevant for estimating personal harm and potential compensatory measures for victimization.

Deepfakes and AI Misinformation: Can You Trust What You See?

In the digital age, where information spreads at lightning speed, the line between reality and fabrication is becoming increasingly blurred.
This blurring is largely due to the rise of sophisticated technologies like deepfakes, which leverage artificial intelligence to create incredibly realistic yet entirely fake videos and audio recordings. Deepfakes have evolved from clunky, easily detectable manipulations to highly convincing impersonations, capable of mimicking a person’s facial expressions, voice, and even mannerisms with astonishing accuracy. This evolution presents a grave challenge to our ability to discern truth from falsehood, raising profound questions about the future of trust and the integrity of information consumed by the public. The potential consequences of deepfakes extend far beyond mere entertainment or harmless pranks. These AI-powered fabrications can be weaponized to spread misinformation, manipulate public opinion, and damage reputations. Imagine a deepfake video of a political candidate making inflammatory remarks or engaging in illicit activities surfacing just before an election.
Such a scenario could drastically alter public perception and potentially sway the outcome of an election, undermining the democratic process. Beyond the political sphere, deepfakes can be used to harass individuals, extort money, or incite violence. The ease with which these convincing fakes can be created and disseminated poses a significant threat to individuals, organizations, and even national security. The technology underpinning deepfakes is a class of models known as Generative Adversarial Networks (GANs). GANs consist of two neural networks: a generator that creates the fake content and a discriminator that attempts to identify the fake. These two networks are pitted against each other in a continuous feedback loop, with the generator striving to create ever more realistic fakes and the discriminator working to become better at detecting them.
This adversarial process drives the rapid improvement in deepfake quality, making them progressively more challenging to identify. As the technology becomes more accessible and user-friendly, the proliferation of deepfakes is expected to increase exponentially, exacerbating the already rampant problem of online misinformation. Combating the spread of deepfakes requires a multi-pronged approach. Tech companies are investing in developing sophisticated detection tools that can identify subtle inconsistencies in deepfake videos, such as unnatural blinking patterns, inconsistent lighting, or irregularities in lip movements. These detection tools leverage machine learning algorithms to analyze videos and flag potential deepfakes based on a variety of factors. However, as deepfake technology evolves, these detection methods must also adapt to keep pace.
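To make that flagging idea concrete, here is a toy sketch in Python. The three cues (blink rate, lighting consistency, lip-sync score) and every numeric value are invented for illustration; a real detector would extract such signals from video frames, and would typically use deep networks over raw pixels rather than the simple logistic-regression classifier assumed here.

```python
# Hypothetical feature-based deepfake flagger (illustrative sketch only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: one row per video, columns =
# [blinks per minute, lighting consistency 0-1, lip-sync score 0-1].
real = np.column_stack([rng.normal(17, 3, 200),    # people blink ~15-20/min
                        rng.uniform(0.7, 1.0, 200),
                        rng.uniform(0.8, 1.0, 200)])
fake = np.column_stack([rng.normal(6, 3, 200),     # assumed: fakes under-blink
                        rng.uniform(0.3, 0.9, 200),
                        rng.uniform(0.4, 0.9, 200)])
X = np.vstack([real, fake])
y = np.array([0] * 200 + [1] * 200)                # 0 = real, 1 = deepfake

clf = LogisticRegression().fit(X, y)

# Score a new video from its extracted features: rare blinking, patchy
# lighting, and mediocre lip sync should push the probability up.
candidate = np.array([[4.0, 0.55, 0.60]])
print(f"P(deepfake) = {clf.predict_proba(candidate)[0, 1]:.2f}")
```

Each such cue stops being discriminative once generators learn to reproduce it, which is exactly why these detection methods must keep adapting.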
It’s a constant arms race between the creators of deepfakes and those working to detect them.

Artificial Intelligence (AI) has transformed our lives in ways we couldn’t have imagined a decade ago. From virtual assistants that answer our questions to algorithms that recommend what we should watch next, AI has made life faster and more convenient. But just like any powerful technology, AI has a darker side, one that poses serious challenges to society. Three of the biggest concerns today are deepfakes, privacy issues, and misinformation. While AI brings innovation, these problems remind us why responsible usage and strict regulations are so important.
Deepfakes are AI-generated videos or images that look extremely real but are completely fake. Using advanced machine learning techniques, AI can replace a person’s face or voice with someone else’s, making it seem like they said or did something they never actually did. At first, deepfakes were seen as harmless fun. People used them to swap faces in movies or create memes. But over time, they’ve been misused for dangerous purposes, from creating fake celebrity scandals to spreading political propaganda. The scary part? These videos are becoming so realistic that it’s getting harder for ordinary people to tell what’s real and what’s fake.
This raises serious concerns about trust, especially in the age of social media, where information spreads instantly.

Deepfakes represent a significant advancement in the realm of artificial intelligence, employing sophisticated technology that allows for the creation of highly realistic fake media. At their core, deepfakes utilize deep learning, a subset of machine learning, to generate convincing representations of people and events. This is primarily accomplished through a specific type of neural network known as Generative Adversarial Networks (GANs). GANs function by pitting two neural networks against each other: the generator, which creates synthetic data, and the discriminator, which evaluates that data’s authenticity. This adversarial process continues until the generator produces output indistinguishable from real images or videos.
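The GAN descriptions above all stay at the conceptual level, so a minimal training-loop sketch may help. This PyTorch version uses tiny fully connected networks and random stand-in data; every size, rate, and name is an illustrative assumption, and it is nowhere near what would be needed to produce convincing media.

```python
# Minimal GAN training loop (illustrative sketch, not a deepfake recipe).
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28          # assume flattened 28x28 images

# Generator: maps random noise to a synthetic image vector in [-1, 1].
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())

# Discriminator: outputs a raw logit scoring how "real" an input looks.
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    n = real_batch.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Discriminator step: push real toward 1, generated toward 0.
    #    detach() keeps this step from updating the generator.
    fake = G(torch.randn(n, latent_dim)).detach()
    d_loss = loss_fn(D(real_batch), ones) + loss_fn(D(fake), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator step: try to make D label fresh fakes as real (1).
    g_loss = loss_fn(D(G(torch.randn(n, latent_dim))), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Random stand-in "real" data; a real pipeline would load an image dataset.
for step in range(3):
    d_l, g_l = train_step(torch.rand(32, img_dim) * 2 - 1)
    print(f"step {step}: d_loss={d_l:.3f}  g_loss={g_l:.3f}")
```

The two optimizer steps are the feedback loop the articles describe: the discriminator’s loss rewards telling real from fake apart, the generator’s loss rewards fooling the discriminator, and each network’s progress raises the bar for the other.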