Zero-Trust Strategies In The Age Of AI-Generated Content - Forbes
"Trust only what you see" is no longer a principle to live by, given the many tools that can now manipulate what we read, hear or see. Last week, OpenAI introduced Sora, a groundbreaking AI system capable of transforming text descriptions into photorealistic videos. Sora builds on OpenAI's existing technologies, including the renowned image generator DALL-E and the sophisticated GPT large language models, and can produce videos up to 60 seconds long from pure text instructions or a combination of text and images. Yet the rise of such advanced systems also amplifies concerns that artificial deepfake videos will exacerbate misinformation and disinformation, especially during crucial election years like... Until now, text-to-video AI models have trailed behind in realism and widespread accessibility.
But Sora’s outputs are described by Rachel Tobac as an "order of magnitude more believable and less cartoonish" than its predecessors, and this brings significant cybersecurity risks for all of us. Even before Sora, the global incidence of deepfakes had skyrocketed, surging tenfold worldwide from 2022 to 2023, with a 1,740% increase in North America, 1,530% in APAC and 780% in Europe (including the... Cybersecurity leaders must focus on validating the content that circulates within and outside their enterprises, keeping an eye on shifts in the online landscape. Read more: https://hubs.li/Q02ps9nK0 Post written by Terence Jackson, CISM, CDPSE, GRCP, CMMC-RP, Forbes Councils Member. The rapid increase in digital content creation, sharing and publishing in the age of generative AI raises important questions for businesses and consumers.
Can people tell when they encounter AI-generated content and, more importantly, misinformation? How do organisations and content creators ensure they are innovating responsibly when harnessing the benefits of AI? The implications of both these questions are far-reaching. After all, the rise of AI and the threat of misinformation can have a bearing on political outcomes, brand reputation, and the nature of human creativity. Adobe’s Chandra Sinnathamby went to the heart of the issue at the recent MAKE IT event. “In the AI era, trust is the number one factor that you've got to drive,” Chandra said.
There is also some way to go to restore trust. Adobe’s State of Digital Customer Experience research recently found that 52% of consumers believe that due to AI, they will receive misleading or incorrect information. Encouragingly, the desire to increase the transparency of AI-generated content and stem the tide of misinformation is growing. This includes the development of open standards that help pinpoint where content has come from and provide tamper-proof provenance. Kevan Yalowitz, Accenture Software and Platforms Industry Lead. The digital advertising market is undergoing a seismic shift.
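The idea behind such open provenance standards is to bind a signed manifest (who created the content, and a cryptographic hash of it) to the content itself, so any subsequent edit is detectable. The sketch below is a toy illustration of that principle, not an implementation of any real standard such as C2PA; it uses a stdlib HMAC with a hypothetical shared key as a stand-in for the public-key signatures real provenance systems use.

```python
import hashlib
import hmac
import json

# Hypothetical signing key; real standards use asymmetric key pairs
# so that anyone can verify without being able to forge.
SIGNING_KEY = b"publisher-secret"

def attach_provenance(content: bytes, creator: str) -> dict:
    """Build a manifest binding the creator to a hash of the content."""
    manifest = {
        "creator": creator,
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return manifest

def verify_provenance(content: bytes, manifest: dict) -> bool:
    """Recompute the hash and signature; any tampering breaks the match."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    if hashlib.sha256(content).hexdigest() != claimed.get("sha256"):
        return False  # content was altered after signing
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

original = b"a photorealistic video frame"
manifest = attach_provenance(original, "newsroom@example.com")
print(verify_provenance(original, manifest))           # True
print(verify_provenance(b"tampered frame", manifest))  # False
```

The point is that trust shifts from the pixels themselves, which AI can now fabricate, to a verifiable chain of custody attached to them.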
As platforms compete for market share, AI-generated content has emerged as a key differentiator, revolutionizing how consumers engage with information and make decisions. The message is clear: Consumers not only appreciate AI-generated content—they also trust it and want more of it. For platforms, this isn’t just an opportunity; it’s a mandate to invest in AI-driven strategies to secure market leadership. Our research reveals that active users—those who engage with AI-generated content daily or weekly—are overwhelmingly positive about its impact. Per Ivey Business Journal, 77% of "active AI users trust AI-generated content," a signal of "a generational shift in trust, behaviour, and brand loyalty." Per our research, over 88% of users we surveyed "think... Consumers are positively responding to the speed, efficiency and personalization of AI content.
The impact of generative AI extends beyond convenience—it’s reshaping engagement and commerce. Younger generations, specifically Gen Z and Millennials, are all-in on AI. Per Ivey Business Journal's research cited above, 85% of users "access video and social media content regularly." Baby Boomers’ activity, unsurprisingly, is lower, but still impressive: 62% regularly access that content. Recent data from Accenture reveals that 50% of users who have adopted AI are very comfortable with it. From what I've seen in the industry, adoption is rapidly increasing, and this translates to revenue: Users are much more likely to make a purchase following interactions with AI-crafted material. AI content isn’t the future — it’s the now.
But as 78% of Americans say the internet has “never been worse” at helping them tell what’s real from what’s artificial, the rise of automation comes with new risks. That’s according to a Talker Research survey of 2,000 U.S. adults, as reported by StudyFinds. Businesses everywhere are using large language models (LLMs) like ChatGPT to generate blogs, product descriptions, emails, and social media posts. Since launching the AI-detection platform ZeroGPT in 2022, Baroud and his team have developed tools used by companies, educators, and institutions to identify AI-generated content, improve messaging, and promote transparency in digital communication. When OpenAI teased Sora, its latest text-to-video model, the internet was flooded with videos so realistic that even digital natives struggled to tell them apart from video recorded by humans behind a camera.
Sora’s public launch in December 2024 displayed a leap in quality that was a stark contrast to the grainy, glitchy AI videos from just two years prior. What once required professional equipment and big budgets can now be generated by anyone at home with just a text prompt. Today, with approximately 34 million new AI images generated daily and more than 15 billion since 2022, the boundary between real and artificial is becoming more and more difficult to distinguish. Alongside this change, the question increasingly shifts from “was this made by AI?” to “can I trust the source that shared it?”
Seventy-eight percent of Americans now admit it’s nearly impossible to separate real from machine-generated content online, and three-quarters say they “trust the internet less than ever.” As generative AI becomes increasingly embedded in everything...