Disinformation Security: Securing Trust In The Age Of AI - Forbes
Jason Crawforth is the Founder and CEO of Swear.com, a company working to restore confidence in digital media authenticity. As GenAI tools surge in accessibility and sophistication, a new era of cyber risk is emerging—one not defined by ransomware or phishing but by synthetic realities. In its Top Strategic Technology Trends for 2025, Gartner Inc. named disinformation security as a critical discipline. This recognizes the profound impact AI-generated falsehoods could have on organizations across sectors. The message is clear: Disinformation is not a future concern but an urgent, evolving threat.
Disinformation, the intentional spread of false or manipulated content, has evolved from a geopolitical tactic into a systemic risk for enterprises. Today, anyone with access to GenAI can fabricate hyperrealistic video, audio or images. Deepfakes and synthetic voice clones are now attack vectors. According to Gartner, while only 5% of enterprises had implemented disinformation safeguards as of 2024, that number is expected to rise to 50% by 2028. This growth underscores the fact that digital trust is becoming a cornerstone of operational resilience. Organizations must now view content authenticity as seriously as they do malware detection or video surveillance.
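To make that comparison concrete, here is a minimal, hypothetical sketch of one ingredient of content authenticity: checking a downloaded media file against a digest its publisher has shared over a trusted channel. This is illustrative only and is not any particular vendor's approach; production provenance standards such as C2PA instead embed cryptographically signed manifests in the media itself. The file name and digest value below are placeholders.

```python
# Illustrative sketch: verify a media file against a publisher-supplied SHA-256
# digest. The file name and the expected digest are hypothetical placeholders.
import hashlib
import hmac

def sha256_of(path: str) -> str:
    """Hash the file in chunks so large video files need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# In practice the expected digest would be obtained from the publisher over a
# trusted, authenticated channel; this value is only a placeholder.
published_digest = "0" * 64

matches = hmac.compare_digest(sha256_of("ceo_statement.mp4"), published_digest)
print("File matches the published digest:", matches)
```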
Disinformation presents risks that span reputational, legal and operational domains.

Tamsin Gable is the head of PR at Municorn. The promise of the digital age was always a sense of permanence. And for over 30 years, things have looked that way. Email has been the backbone of business communication for anyone who entered the workforce since before 2000, but it seems that things are changing fast. There are many reasons for this, but most belong under the umbrella term of "cyber risks." Organizations of all stripes have to fend off exponential increases in phishing attacks, most of which began with...
These attacks not only undermine organizational effectiveness but can also be expensive, particularly if they lead to compromised security or regulatory noncompliance and the associated fines. Fortunately, there are solutions out there. I've closely followed the development of secure, AI-driven communication tools and see great potential in their adoption. Interestingly, when it comes to email, AI is now as much a part of the problem as it is of the solution. Email itself often lacks encryption, is susceptible to human error and can fall short of compliance requirements such as those in the European General Data Protection Regulation (GDPR). Failure to comply with the GDPR can result in "a fine of €20 million or 4% of global revenue, whichever is higher, plus compensation for damages."
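Regulations like these point toward the same baseline control: encrypt message content rather than sending it in the clear. As a rough illustration only, simpler than a real S/MIME or PGP deployment and not any vendor's product, the sketch below uses the Python cryptography package to encrypt a message body with a symmetric key; in practice the key would live in a managed key store rather than being generated inline.

```python
# A minimal illustration (not a production email pipeline): symmetric
# encryption of a message body using the "cryptography" package.
from cryptography.fernet import Fernet

# In a real deployment the key would come from a key management system;
# generating it inline here is for illustration only.
key = Fernet.generate_key()
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"Q3 payroll summary attached.")
print(ciphertext)  # safe to store or transmit without exposing the plaintext

plaintext = cipher.decrypt(ciphertext)  # only holders of the key can recover it
print(plaintext.decode())
```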
Just as with the GDPR, the U.S. Health Insurance Portability and Accountability Act (HIPAA) recommends that communications be encrypted and provides other security standards to help ensure email compliance.

Chad Angle, Head of ReputationDefender at Gen Digital | Expert in Growth Strategy, Online Reputation Management, & Executive Privacy. For years, executive security has focused on reputation management—mitigating negative press, countering misinformation and protecting personal brands. But a new threat is rapidly transforming the way digital security should be approached: AI. Every day you hear about companies around the globe discovering new ways to harness and maximize this powerful technology to improve services, products and even healthcare. The excitement surrounding AI is palpable, and there is no denying that it is transforming the world.
But for all the amazing opportunities that AI is opening up for visionaries everywhere, it’s not just the “good guys” benefiting. As AI becomes more sophisticated, so do the scams, fraud and other schemes perpetrated by bad actors. AI-driven threats now go far beyond reputation and pose a direct security risk to executives and high-profile individuals. From deepfake scams to AI-powered data harvesting, cybercriminals are exploiting AI to target executives at an unprecedented scale. The numbers tell a clear story:

• Deepfake fraud cases surged tenfold globally from 2022 to 2023, according to Sumsub.
• In 2022, 76% of threats were highly targeted spearphishing attacks focusing on credential theft.

In an era of disinformation, manipulated media isn’t just a tool of crime; it’s a weapon against democracy, trust and truth itself. The future of AI depends on our ability to harness its power for good while mitigating its potential harms.
Rapid advancements in artificial intelligence and quantum computing present unprecedented challenges to digital authenticity. "Trust only what you see" is no longer a principle to live by, considering the many tools that can manipulate what we read, hear or see. Last week, OpenAI introduced Sora, a groundbreaking AI system capable of transforming text descriptions into photorealistic videos. Sora builds on OpenAI's existing technologies, including the image generator DALL-E and the GPT large language models. It can produce videos up to 60 seconds long from pure text instructions or a combination of text and images. Yet the rise of systems as advanced as Sora also amplifies concerns that deepfake videos will exacerbate misinformation and disinformation, especially during crucial election years like...
Until now, text-to-video AI models have trailed behind in realism and accessibility. But security expert Rachel Tobac describes Sora’s outputs as an "order of magnitude more believable and less cartoonish" than those of its predecessors, and that brings significant cybersecurity risks for all of us. Even before Sora, the global incidence of deepfakes had already skyrocketed, with a tenfold surge worldwide from 2022 to 2023, including a 1740% increase in North America, 1530% in APAC and 780% in Europe (including the...

Every week, I talk to business leaders who believe they're prepared for AI disruption. But when I ask them about their defense strategy against AI-generated deepfakes and disinformation, I'm usually met with blank stares.
The truth is, we've entered an era where a single fake video or manipulated image can wipe millions off a company's market value in minutes. While we've all heard about the societal implications of AI-generated fakery, the specific risks to businesses are both more immediate and more devastating than many realize. Picture this: A convincing deepfake video shows your CEO announcing a major product recall that never happened, or AI-generated images suggest your headquarters is on fire when it isn't. It sounds like science fiction, but it's already happening. In 2023, a single fake image of smoke rising from a building triggered a panic-driven stock market sell-off, demonstrating how quickly artificial content can impact real-world financials. The threat is particularly acute during sensitive periods like public offerings or mergers and acquisitions, as noted by PwC.
During these critical junctures, even a small piece of manufactured misinformation can have outsized consequences. The reputational risks are equally concerning. Today's deepfake technology can clone your senior executives' voices with frightening accuracy, creating fake speeches or interviews that could destroy years of carefully built trust in minutes. We're seeing an increasing number of cases where fraudsters use synthetic voices and deepfake videos to convince employees to transfer substantial sums to fake accounts.

The hype around AI has reached a fever pitch, but skepticism is beginning to set in. As the conversation shifts from excitement to concern—“Is AI transforming the world?” to “Are we exposing ourselves to greater risk?”—I want to address the key challenges ahead.
Many organizations are now grappling with pressing questions: Could AI replace human ingenuity with soulless automation? Or even worse, is AI being weaponized against the enterprise? The dual narrative of AI—boundless optimism on one hand and undue fear on the other—leaves businesses caught in the middle, trying to separate hype from reality. As academic Kristina McElheran puts it: “The narrative is that AI is everywhere all at once, but the data shows it’s harder to do than people seem interested in discussing.” Delinea's latest research highlights the growing shift toward leveraging AI in organizations and the difficulties in implementing these technologies securely. An overwhelming 94% of respondents revealed they are already adopting or planning to adopt AI-driven identity technologies.
However, this surge exposes organizations to increased cyber threats, as modern attacks target both human and non-human identities to access critical systems. Research from EY also warns that rapid AI adoption can create vulnerabilities, emphasizing the need for proactive cybersecurity measures.

U.S. leadership in the AI century will depend on whether democracies can secure machine intelligence fast enough to preserve the trust and resilience their systems rely on. Artificial intelligence (AI) is likely to greatly shape twenty-first century prosperity and security—but only if it can be trusted. The defining question is not how advanced AI becomes, but whether its systems can be secured enough to sustain institutional and public confidence.
Security failures in 2025 revealed that most organizations remain unable to safeguard AI effectively, widening the gap between its technical promise and operational viability. Most AI-related breaches in 2025 resulted in data compromises, and nearly one-third caused operational disruption. The EU Agency for Cybersecurity found that more than 80 percent of social engineering attacks relied on AI, underscoring how adversaries now innovate faster than defenders can respond. For the United States and its allies, securing AI systems is not a narrow technical concern but a test of whether democratic governance can sustain trust and leadership at machine speed. Three converging dynamics threaten to stall AI adoption: (1) systemic vulnerabilities in AI models and infrastructure, (2) deployment that outpaces security controls, and (3) increasingly sophisticated adversaries weaponizing AI. Microsoft’s Digital Defense Report 2025 found that cyberattackers from China, Iran, North Korea, and Russia more than doubled their use of AI for cyberattacks and to spread disinformation.
Those actors achieved a 54 percent click-through rate with AI-automated phishing emails, compared with 12 percent for traditional methods, demonstrating that AI enhances adversary effectiveness as much as it augments defensive operations. Resolving those challenges is a prerequisite for accelerated and widespread adoption. The promise of AI-augmented development confronts a stark reality: the code and models enabling it are structurally insecure. A 2025 Veracode analysis found that nearly 45 percent of AI-generated code contained exploitable flaws. For enterprises evaluating adoption, such flaws turn productivity gains into liability risks. JFrog’s Software Supply Chain State of the Union 2025 report documented over twenty-five thousand exposed secrets and tokens in public repositories—a 64 percent year-over-year increase—of which 27 percent remained active and exploitable.
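The exposed-secrets figure is the most directly actionable of these findings. As a rough, hypothetical sketch (not JFrog's scanner or any specific product), the snippet below shows the basic idea behind pre-publication secret scanning: walk a repository and flag strings that match common credential patterns before the code reaches a public host. The two patterns are illustrative; real scanners ship hundreds of provider-specific rules.

```python
# A rough sketch of secret scanning: flag strings that look like hard-coded
# credentials in source files before they are pushed to a public repository.
import re
import sys
from pathlib import Path

# Illustrative patterns only; production tools maintain far larger rule sets.
PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic API key": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def scan(path: Path) -> list[str]:
    """Return findings as 'file:line: possible <label>' strings."""
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for label, pattern in PATTERNS.items():
            if pattern.search(line):
                findings.append(f"{path}:{lineno}: possible {label}")
    return findings

if __name__ == "__main__":
    # Scan Python files under the given directory (default: current directory).
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for file in root.rglob("*.py"):
        for finding in scan(file):
            print(finding)
```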
Nick Raziborsky, co-founder of Sonoma Security. Cybersecurity innovator transforming identity management. Artificial intelligence (AI) is no longer a futuristic promise—it’s a core driver of business transformation, and cybersecurity is emerging as its most critical battleground. For tech entrepreneurs, the post-AI era presents a dual reality: AI empowers defenses while simultaneously providing cybercriminals with advanced tools for attack.
Understanding this dynamic is essential for developing a resilient security strategy. AI-driven threats are growing at an unprecedented pace. Cybercriminals are now leveraging generative AI to create highly persuasive phishing emails and deepfake media. Recent reports indicate a staggering 1,265% surge in AI-generated phishing attacks, making it easier for fraudsters to impersonate trusted executives and push through fraudulent transactions. Deepfake technology further compounds these risks by enabling attackers to fabricate convincing video messages that can deceive even experienced professionals. The World Economic Forum’s Future of Jobs Report 2025 warns that nearly half of business leaders are increasingly concerned about adversarial AI techniques undermining trust in digital communications.