Securing Identities: Harnessing AI Without Compromising Trust - Forbes
The hype around AI has reached a fever pitch, but skepticism is beginning to set in. As the conversation shifts from excitement to concern—“Is AI transforming the world?” to “Are we exposing ourselves to greater risk?”—I want to address the key challenges ahead. Many organizations are now grappling with pressing questions: Could AI replace human ingenuity with soulless automation? Or even worse, is AI being weaponized against the enterprise? The dual narrative of AI—boundless optimism on one hand and undue fear on the other—leaves businesses caught in the middle, trying to separate hype from reality. As academic Kristina McElheran puts it: “The narrative is that AI is everywhere all at once, but the data shows it’s harder to do than people seem interested in discussing.”
Delinea's latest research highlights the growing shift toward leveraging AI in organizations and the difficulties in implementing these technologies securely. An overwhelming 94% of respondents revealed they are already adopting or planning to adopt AI-driven identity technologies. However, this surge exposes organizations to increased cyber threats, as modern attacks target both human and non-human identities to access critical systems. Research from EY also warns that rapid AI adoption can create vulnerabilities, emphasizing the need for proactive cybersecurity measures.

Peter Barker, Chief Product Officer, Ping Identity: As digital experiences grow more distributed—spanning devices, apps, platforms and now autonomous agents—every interaction can either build or erode loyalty and trust.
Today’s users are increasingly discerning and security-aware, and far less tolerant of poor user experiences (UX). In fact, research shows that even when people love a product or company, 59% will abandon it after several bad experiences, and 17% after just one. AI is amplifying these expectations while simultaneously reshaping the threat landscape. Deepfakes, synthetic identities and AI-driven fraud have exposed the limits of traditional authentication methods, and consumer trust is at an all-time low as a result. Fewer than one in five (17%) consumers have full trust in the organizations that manage their identity data. For employers, AI has made it far too easy for bad actors to impersonate job candidates or help desk agents, gaining access to sensitive information and systems.
As the boundary between human and machine interactions weakens, organizations must evolve beyond static security checkpoints toward stronger systems that can better recognize and respond to risk. The future of authentication in this environment lies in the continuous, contextual assurance of identity, also known as verified trust. Passwords, often considered the foundation of traditional authentication, are no longer strong enough to protect user identity and trust. They are difficult for users to manage, easy for attackers to exploit and increasingly irrelevant as the sophistication of cyberthreats outpaces legacy defenses. Even with tools like password managers, complexity requirements and multifactor authentication (MFA), human error and phishing—along with other attack methods driven by AI—continue to expose organizations to risk. Rohit Shirwadkar is a cybersecurity strategy leader at Equinix.
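The continuous, contextual assurance described above is often implemented as risk-based authentication: each login is scored against contextual signals, and only risky attempts trigger step-up verification. The sketch below is illustrative only; the signal names, weights, and thresholds are assumptions for the example, not any vendor's product logic.

```python
# Minimal sketch of risk-based (contextual) authentication.
# Signals and weights are hypothetical; real systems tune these
# against observed fraud data.
from dataclasses import dataclass


@dataclass
class LoginContext:
    known_device: bool       # device previously seen for this user
    ip_reputation: float     # 0.0 (clean) .. 1.0 (known-malicious)
    geo_velocity_kmh: float  # implied travel speed since last login
    failed_attempts: int     # recent consecutive failures


def risk_score(ctx: LoginContext) -> float:
    """Combine contextual signals into a score clamped to [0, 1]."""
    score = 0.0
    if not ctx.known_device:
        score += 0.3
    score += 0.4 * ctx.ip_reputation
    if ctx.geo_velocity_kmh > 900:  # faster than a commercial flight
        score += 0.3
    score += min(ctx.failed_attempts * 0.1, 0.3)
    return min(score, 1.0)


def decide(ctx: LoginContext) -> str:
    """Allow low-risk logins, step up medium risk, deny high risk."""
    score = risk_score(ctx)
    if score < 0.3:
        return "allow"
    if score < 0.7:
        return "step-up-mfa"
    return "deny"
```

A familiar device on a clean network passes silently, while an unknown device on a bad IP with impossible travel is denied outright; everything in between gets a step-up MFA challenge rather than a flat password check.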
NIST SP 800-207, the zero-trust architecture framework, sets the tone of "never trust, always verify," emphasizing the concepts of least privilege and continuous monitoring. This becomes especially relevant in the AI landscape, where model integrity plays a big role and applications can be accessed from any device or network. CISA has defined five layers as important for zero trust: identity, device, network, workload and data. The data layer is the most critical for AI models and workloads that are spread across on-premises and cloud environments. Model training and inference rely on controlled and regulated datasets to guard against misinformation, disclosure of sensitive information, copyright infringement and other ethical issues. Model weights help businesses determine which models perform best with specific types of datasets.
Ensuring zero-trust principles and verifying that devices and GPUs are free from side-channel attacks is essential to maintaining model integrity. As AI models become more power- and computation-intensive, the scope of what needs to be secured has evolved, from CPUs to GPUs and now to entire AI farms that support these workloads. With this evolution, securing computational workloads is becoming more important, not just at rest but also during computation. This has driven demand for secure, or confidential, compute. Art Gilliland, CEO at Delinea.
As organizations begin deploying these systems at scale, the implications for identity security are both profound and urgent. As AI evolves at breakneck speed, businesses must balance innovation with security. This balance is crucial, as risks can quickly overshadow AI’s benefits. Securing AI starts with protecting its foundation: data. CISOs will need more than a polite request to justify increased security solutions and spend in their fiscal year 2025 IT budgets. Gaurav Aggarwal, Senior Vice President at Onix, Global Lead, Data & AI Solutions Engineering.
In a world increasingly shaped by generative AI, the metaverse and billions of connected devices, one thing remains constant: trust. Yet, as the attack surface expands, traditional identity and access management (IAM) systems struggle to keep pace. Static, rule-based frameworks can no longer protect dynamic, hyper-connected ecosystems. Enter adaptive identity—a transformative approach that redefines digital trust. By leveraging contextual intelligence, AI and real-time adaptability, adaptive identity helps organizations stay ahead of threats while ensuring seamless user experiences. In the age of generative AI and decentralized ecosystems, trust begins with securing identity at scale.
Adaptive identity is not just an upgrade—it’s a strategic imperative for the digital economy. But what makes adaptive identity revolutionary? And how can it prepare businesses to thrive in a rapidly changing digital landscape?

Rajat Bhargava is an entrepreneur, investor, author and currently CEO and cofounder of JumpCloud: The world is witnessing a technological transformation on a scale we haven't seen in decades. The rise of AI, particularly what we call "agentic" or autonomous AI, promises to revolutionize how businesses operate, from automating complex workflows to creating entirely new products.
This isn't a distant future; it's a present reality with near-universal adoption. This immense potential comes with an equal responsibility. As these AI agents become more autonomous and deeply integrated into our systems, they introduce a profound security challenge that our traditional IT models are not equipped to handle. The question is no longer if we will use AI, but how we will secure it. The answer lies not in a new tool, but in a foundational shift in our security mindset, placing the principles of Identity and Access Management (IAM) at the forefront of the AI revolution. For decades, IT and security have focused on protecting human users.
We’ve built robust systems to manage employee logins, enforce password policies and grant access based on roles and departments. But what happens when the "user" is a sophisticated, autonomous AI agent designed to schedule meetings, analyze financial data or manage corporate social media accounts? Legacy security models are blind to these non-human identities. They often treat AI agents as simple applications or service accounts, assigning them broad, static credentials like API keys or embedded user credentials. This approach creates a massive, unmonitored blind spot, not to mention a huge risk. Vincent Danen is the Vice President of Product Security at Red Hat.
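One concrete alternative to the broad, static credentials described above is to mint each AI agent a short-lived token scoped to the single task at hand. The sketch below is a toy illustration of that idea, not a real identity provider's API: the signing scheme, field names, and helper functions are all invented for the example (production systems would use an established standard such as signed JWTs issued by an IdP).

```python
# Toy illustration of short-lived, narrowly scoped agent credentials,
# replacing a long-lived static API key. All names are hypothetical.
import base64
import hashlib
import hmac
import json
import secrets
import time

SIGNING_KEY = secrets.token_bytes(32)  # held only by the token issuer


def mint_agent_token(agent_id: str, scopes: list, ttl_s: int = 300) -> str:
    """Issue a token bound to one agent, a narrow scope list, and a short TTL."""
    payload = {"sub": agent_id, "scopes": scopes, "exp": time.time() + ttl_s}
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig


def check_token(token: str, required_scope: str) -> bool:
    """Verify the signature, expiry, and that the scope was granted."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    payload = json.loads(base64.urlsafe_b64decode(body))
    return payload["exp"] > time.time() and required_scope in payload["scopes"]
```

An agent minted a `calendar:write` token can schedule meetings for five minutes and nothing else; a leaked token cannot be replayed against finance data or reused next week, which is exactly the blind spot static keys leave open.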
In the rapidly evolving world of artificial intelligence (AI), security is paramount. AI has the potential to transform industries, drive efficiencies and create competitive advantages. But as organizations increasingly rely on AI, the need for robust security mechanisms becomes more pressing. The foundation on which AI systems are built can make or break the effectiveness of AI security measures, highlighting the importance of addressing security from the ground up. AI security starts with a secure infrastructure. AI systems require vast amounts of data, significant computational power and complex algorithms to operate effectively.
This ecosystem, from cloud services and on-premise hardware to software environments, must be fortified against a growing spectrum of threats such as those on OWASP’s Top Ten list. Building AI applications on a secure foundation means addressing vulnerabilities at every layer: data integrity, secure computing environments and robust network defenses. A secure platform lays the groundwork for AI systems that can defend against threats, adapt to changing conditions and protect sensitive information. Infrastructure security helps ensure that vulnerabilities don't become weak entry points for adversaries. AI applications often rely on cloud providers that must follow standards like encryption, access controls and monitoring to safeguard data in transit and at rest. Industry reports indicate that cloud security remains a top concern for enterprises deploying AI solutions.
Research from FS Study shows that "cybersecurity is a principal concern for those tasked with delivering AI services. Factors such as AI-powered attacks, data privacy, data leakage and increased liability rank among the top AI security concerns."
New guidance includes 10 questions that can help organizations build secure-by-design artificial intelligence. The business benefits of artificial intelligence — like enhanced customer experience, greater efficiency, and better risk management — are now part of many digital strategies. But when it comes to securing AI systems, many organizations are still playing catch-up. “People are trying to figure out how best to use AI, but few are thinking about the security risks that come with it from day one,” said Keri Pearlson, a senior lecturer and principal... “That’s the big problem right now.”
To help close that gap, Pearlson and Nelson Novaes Neto, an MIT Sloan research affiliate and CTO of Brazil-based C6 Bank, developed a framework to help technical executives and their teams ask the right questions. Their report, “An Executive Guide to Secure-by-Design AI,” condenses hundreds of technical considerations into 10 strategic questions aimed at identifying risks early and aligning AI initiatives with business priorities, ethical standards, and cybersecurity requirements. “The idea was to give technical executives a structured way to ask important questions early in the AI systems design process to head off problems later,” said Pearlson, who teaches the MIT Sloan Executive...