Securing Identities: Harnessing AI Without Compromising Trust
The hype around AI has reached a fever pitch, but skepticism is beginning to set in. As the conversation shifts from excitement to concern—“Is AI transforming the world?” to “Are we exposing ourselves to greater risk?”—I want to address the key challenges ahead. Many organizations are now grappling with pressing questions: Could AI replace human ingenuity with soulless automation? Or, worse, is AI being weaponized against the enterprise? The dual narrative of AI—boundless optimism on one hand and undue fear on the other—leaves businesses caught in the middle, trying to separate hype from reality. As academic Kristina McElheran puts it: “The narrative is that AI is everywhere all at once, but the data shows it’s harder to do than people seem interested in discussing.”
Delinea's latest research highlights the growing shift toward leveraging AI in organizations and the difficulties in implementing these technologies securely. An overwhelming 94% of respondents revealed they are already adopting or planning to adopt AI-driven identity technologies. However, this surge exposes organizations to increased cyber threats, as modern attacks target both human and non-human identities to access critical systems. Research from EY also warns that rapid AI adoption can create vulnerabilities, emphasizing the need for proactive cybersecurity measures. U.S. leadership in the AI century will depend on whether democracies can secure machine intelligence fast enough to preserve the trust and resilience their systems rely on.
Artificial intelligence (AI) is likely to greatly shape twenty-first century prosperity and security—but only if it can be trusted. The defining question is not how advanced AI becomes, but whether its systems can be secured enough to sustain institutional and public confidence. Security failures in 2025 revealed that most organizations remain unable to safeguard AI effectively, widening the gap between its technical promise and operational viability. Most AI-related breaches in 2025 resulted in data compromises, and nearly one-third caused operational disruption. The EU Agency for Cybersecurity found that more than 80 percent of social engineering attacks relied on AI, underscoring how adversaries now innovate faster than defenders can respond. For the United States and its allies, securing AI systems is not a narrow technical concern but a test of whether democratic governance can sustain trust and leadership at machine speed.
Three converging dynamics threaten to stall AI adoption: (1) Systemic vulnerabilities in AI models and infrastructure, (2) deployment that outpaces security controls, and (3) increasingly sophisticated adversaries weaponizing AI. Microsoft’s Digital Defense Report 2025 found that cyberattackers from China, Iran, North Korea, and Russia more than doubled their use of AI for cyberattacks and to spread disinformation. Those actors achieved a 54 percent click-through rate with AI-automated phishing emails, compared with 12 percent for traditional methods, demonstrating that AI enhances adversary effectiveness as much as it augments defensive operations. Resolving those challenges is a prerequisite for accelerated and widespread adoption. The promise of AI-augmented development confronts a stark reality: the code and models enabling it are structurally insecure. A 2025 Veracode analysis found that nearly 45 percent of AI-generated code contained exploitable flaws.
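Part of the answer is mechanical. As one illustration (a sketch under stated assumptions, not any vendor's prescribed workflow), a CI job can run a static-analysis pass over AI-generated changes and fail the build on serious findings. The example below assumes a Python codebase and the open-source Bandit scanner; the severity policy is a placeholder to adapt.

```python
"""Minimal CI gate: scan changed Python files with the open-source Bandit
scanner and fail the build on medium/high-severity findings. A sketch, not a
complete policy; it assumes `bandit` is installed and that changed file paths
arrive as command-line arguments."""
import json
import subprocess
import sys

def scan(paths: list[str]) -> list[dict]:
    # Bandit's `-f json` flag emits a machine-readable report on stdout.
    proc = subprocess.run(
        ["bandit", "-f", "json", *paths],
        capture_output=True, text=True,
    )
    report = json.loads(proc.stdout)
    return [
        issue for issue in report.get("results", [])
        if issue["issue_severity"] in ("MEDIUM", "HIGH")
    ]

if __name__ == "__main__":
    files = [p for p in sys.argv[1:] if p.endswith(".py")]
    findings = scan(files) if files else []
    for f in findings:
        print(f"{f['filename']}:{f['line_number']}: {f['issue_text']}")
    sys.exit(1 if findings else 0)  # non-zero exit blocks the merge
```

Fed the output of `git diff --name-only` in CI, a gate like this turns "review the AI's code before merging" from a suggestion into an enforced step.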
For enterprises evaluating adoption, flaws like these turn productivity gains into liability risks. JFrog’s Software Supply Chain State of the Union 2025 report documented over twenty-five thousand exposed secrets and tokens in public repositories—a 64 percent year-over-year increase—of which 27 percent remained active and exploitable.
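Many such leaks are catchable before code ever reaches a public repository. Here is a minimal pre-commit sketch of regex-based secret detection; the patterns are illustrative examples only, and dedicated scanners such as gitleaks or trufflehog cover far more token formats.

```python
"""A minimal pre-commit secret scanner: a sketch, not a substitute for
dedicated tools. The patterns below are illustrative, not an exhaustive
ruleset."""
import re
import sys

# Example patterns for common token formats.
PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "GitHub token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "generic secret": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{16,}['\"]"
    ),
}

def scan_file(path: str) -> list[tuple[str, int, str]]:
    findings = []
    with open(path, encoding="utf-8", errors="ignore") as fh:
        for lineno, line in enumerate(fh, start=1):
            for name, pattern in PATTERNS.items():
                if pattern.search(line):
                    findings.append((path, lineno, name))
    return findings

if __name__ == "__main__":
    # Pass staged file paths, e.g. from `git diff --cached --name-only`.
    hits = [h for p in sys.argv[1:] for h in scan_file(p)]
    for path, lineno, name in hits:
        print(f"{path}:{lineno}: possible {name}")
    sys.exit(1 if hits else 0)  # non-zero exit aborts the commit
```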
In today’s digital world, security isn’t just a feature—it’s the foundation of trust. As developers, understanding customer expectations around identity is critical. We talked to 6,750 consumers and shared those findings in the Auth0 Customer Identity Trends Report 2025. This article details significant challenges impacting user experience and trust, explores underlying identity trends, and outlines actionable solutions you should consider for building secure and trustworthy systems as AI agents become more prevalent. As developers, you face a dual challenge: securing against escalating threats while meeting evolving user expectations. The Customer Identity Trends Report details key security issues for building trusted AI-powered applications as AI use increases. Identity attacks and user behaviors are actively eroding customer trust in digital platforms, and this challenge is only amplified by emerging AI technologies.
Pervasive fraudulent signups and account takeovers (ATOs) compromise user data, which only fuels security and privacy concerns for application users. Failing to secure digital identities is a threat to AI adoption itself. You are positioned to address these challenges by building the robust identity layers that are critical for securing trust in the AI era. These user trends highlight core requirements to consider when designing, building, and deploying AI-powered applications, and the response they point to is integrating identity solutions that are both secure and user-friendly.
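What does a robust identity layer look like at the code level? One small but representative building block is verifying a user's access token before an AI feature acts on that user's behalf. The sketch below assumes the PyJWT library and an OIDC-style issuer that publishes keys at a JWKS endpoint; the issuer and audience values are placeholders, not real endpoints.

```python
"""A sketch of access-token validation for an AI-powered API, assuming the
PyJWT library (`pip install pyjwt[crypto]`). ISSUER and AUDIENCE are
placeholders for a real tenant's values."""
import jwt

ISSUER = "https://tenant.example.com/"           # placeholder issuer URL
AUDIENCE = "https://api.example.com/assistant"   # placeholder API identifier
jwks_client = jwt.PyJWKClient(ISSUER + ".well-known/jwks.json")

def authenticate(token: str) -> dict:
    """Return verified claims, or raise jwt.InvalidTokenError on any failure."""
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],   # pin the algorithm; never accept "none"
        audience=AUDIENCE,
        issuer=ISSUER,
    )

# Usage: claims = authenticate(bearer_token)
# claims["sub"] then identifies the user for downstream authorization checks.
```

Pinning the algorithm and checking audience and issuer are the details most often skipped, and they are exactly what token-forgery attacks exploit.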
New guidance includes 10 questions that can help organizations build secure-by-design artificial intelligence. The business benefits of artificial intelligence — like enhanced customer experience, greater efficiency, and better risk management — are now part of many digital strategies. But when it comes to securing AI systems, many organizations are still playing catch-up. “People are trying to figure out how best to use AI, but few are thinking about the security risks that come with it from day one,” said Keri Pearlson, a senior lecturer and principal research scientist at MIT Sloan. “That’s the big problem right now.” To help close that gap, Pearlson and Nelson Novaes Neto, an MIT Sloan research affiliate and CTO of Brazil-based C6 Bank, developed a framework to help technical executives and their teams ask the right questions. Their report, “An Executive Guide to Secure-by-Design AI,” condenses hundreds of technical considerations into 10 strategic questions aimed at identifying risks early and aligning AI initiatives with business priorities, ethical standards, and cybersecurity requirements.
“The idea was to give technical executives a structured way to ask important questions early in the AI systems design process to head off problems later,” said Pearlson.

Artificial intelligence has arrived in financial services at full force, and unlike the slower adoption curve of cloud computing, banks and credit unions cannot afford to lag. Competitors are already embedding AI into their core operations, and cybercriminals are deploying it just as aggressively. The stakes are high: AI is transforming cybersecurity, fraud detection, and operational efficiency, yet adoption requires an equal commitment to governance and trust. For institutions that strike the right balance, AI will not just be a tool for efficiency but a foundation for long-term resilience. When it comes to security, AI is already delivering tangible value.
Institutions across the country are using AI to reinforce their defenses and protect customer assets. At Diebold Nixdorf’s recent annual Intersect Conference, we heard leaders from both large financial institutions and smaller community banks and credit unions speak about how they are implementing AI across their organizations. Thomaston Savings Bank, for example, has deployed AI to monitor risk, manage cybersecurity (like flagging fraudulent emails before they spread through employees’ inboxes), and is leveraging AI-driven tools to support vendor management. From my perspective at Diebold Nixdorf, AI is a natural extension of existing security infrastructure. It enables systems to identify anomalies faster, more consistently, and at a scale no human team can match. Whether flagging suspicious login attempts or monitoring ATM behavior, AI acts as a force multiplier for human teams, helping them detect and respond to threats before they cause real harm.
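To make the force-multiplier point concrete, here is a deliberately simple, hand-rolled sketch of login-anomaly flagging: each login is scored against a per-user baseline of observed hours and source networks. Production systems use far richer features and trained models; this toy version only shows the shape of the idea.

```python
"""A toy illustration of login-anomaly flagging. All names and thresholds
are illustrative assumptions, not any institution's actual rules."""
from collections import Counter
from dataclasses import dataclass

@dataclass
class Login:
    user: str
    hour: int        # 0-23, local time
    network: str     # e.g., a /24 prefix or an ASN

def build_baseline(history: list[Login]) -> dict:
    """Per-user distribution of previously observed hours and networks."""
    baseline: dict = {}
    for evt in history:
        prof = baseline.setdefault(evt.user, {"hours": Counter(), "nets": Counter()})
        prof["hours"][evt.hour] += 1
        prof["nets"][evt.network] += 1
    return baseline

def is_suspicious(evt: Login, baseline: dict) -> bool:
    prof = baseline.get(evt.user)
    if prof is None:
        return True  # no history at all: escalate to step-up authentication
    rare_hour = prof["hours"][evt.hour] == 0
    new_net = prof["nets"][evt.network] == 0
    return rare_hour and new_net  # both novel at once: flag for review

history = [Login("alice", 9, "10.0.1.0/24"), Login("alice", 10, "10.0.1.0/24")]
print(is_suspicious(Login("alice", 3, "203.0.113.0/24"), build_baseline(history)))  # True
```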
The deployments described above aren’t experiments on the fringe. They are live, proven systems showing how AI is actively safeguarding financial interests today.

With new tools, however, come new risks. Agentic AI systems, where AI agents act autonomously or interact with one another, create new layers of uncertainty around where sensitive data flows. Without strong visibility, institutions risk customer information being ingested into unintended or ungoverned systems. One basic control is an explicit allowlist on where an agent may send data, as sketched below.
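A minimal sketch of such an egress guard, with illustrative host names and data patterns rather than any specific agent framework's API:

```python
"""A minimal egress guard for an agentic AI system: before an agent's tool
call leaves the boundary, check the destination against an allowlist and
block obvious sensitive-data patterns. Hosts and patterns are illustrative."""
import re
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.internal.example.com", "vault.internal.example.com"}
# Illustrative pattern: US-SSN-like strings; real deployments use DLP tooling.
SSN_LIKE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

class EgressBlocked(Exception):
    pass

def guard_tool_call(url: str, payload: str) -> None:
    """Raise EgressBlocked instead of letting a risky call proceed."""
    host = urlparse(url).hostname or ""
    if host not in ALLOWED_HOSTS:
        raise EgressBlocked(f"destination not allowlisted: {host}")
    if SSN_LIKE.search(payload):
        raise EgressBlocked("payload matches a sensitive-data pattern")

# Usage inside an agent loop, before executing any outbound tool call:
# guard_tool_call(call.url, call.body)  # then log the decision either way
```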
Banks are right to emphasize cyber protections that ensure customer data never slips outside their control. Similarly, it’s essential to be mindful of potential UDAAP (unfair, deceptive, or abusive acts or practices) violations that can occur if AI systems inadvertently steer customers toward unsuitable products. These concerns reflect a broader truth: compliance and ethics cannot be an afterthought.

Leading data-driven organizations balance protection and access as AI powers ahead. Most organizations feel the imperative to keep pace with continuing advances in AI capabilities, as highlighted in a recent MIT Technology Review Insights report. That clearly has security implications, particularly as organizations navigate a surge in the volume, velocity, and variety of security data.
This explosion of data, coupled with fragmented toolchains, is making it increasingly difficult for security and data teams to maintain a proactive and unified security posture. Data and AI teams must move rapidly to deliver the desired business results, but they must do so without compromising security and governance. As they deploy more intelligent and powerful AI capabilities, proactive threat detection and response against the expanded attack surface, insider threats, and supply chain vulnerabilities must remain paramount. “I’m passionate about cybersecurity not slowing us down,” says Melody Hildebrandt, chief technology officer at Fox Corporation, “but I also own cybersecurity strategy. So I’m also passionate about us not introducing security vulnerabilities.” That’s getting more challenging, says Nithin Ramachandran, who is global vice president for data and AI at industrial and consumer products manufacturer 3M.
“Our experience with generative AI has shown that we need to be looking at security differently than before,” he says. “With every tool we deploy, we look not just at its functionality but also its security posture. The latter is now what we lead with.” Our survey of 800 technology executives (including 100 chief information security officers), conducted in June 2025, shows that many organizations struggle to strike this balance.

Adopting generative AI (GenAI) introduces big opportunities—and real risks. Security leaders must move quickly but carefully to make the most of GenAI without compromising trust, privacy, or compliance.
A smart, identity-first strategy built on governance, technology controls, and adaptive security measures is key. Securing GenAI isn’t a one-time fix—it’s a steady, evolving process. The right framework helps leaders stay focused, prioritize the right actions, and mature their security posture over time.

The first step in securing GenAI is strong governance. This means aligning AI use with company values, regulatory requirements, and ethical standards. Create a cross-functional governance group to guide projects, review tool usage, and track compliance with regional and global standards.
Set clear expectations for ethical AI use, and evaluate the broader impact of your AI systems—on your organization, your customers, and society. These insights can shape ongoing training to help teams use AI responsibly and stay ahead of emerging risks. Be ready to revise your governance model regularly to keep up with evolving technology and regulation.

Beyond governance, security leaders need smart technical safeguards in place to defend GenAI systems. This includes robust logging to track how users interact with AI, and strong access controls to protect models and data.
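As one concrete illustration of the logging piece, the sketch below wraps a model call in a structured audit record: who asked, which model, how long it took, and a hash of the prompt so sensitive text is not stored verbatim. The `call_model` parameter is a stand-in for whatever client an organization actually uses.

```python
"""Structured audit logging around a GenAI call: a sketch, not a complete
audit pipeline. `call_model` is a placeholder for a real model client."""
import hashlib
import json
import logging
import time

audit = logging.getLogger("genai.audit")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def logged_completion(user_id: str, model: str, prompt: str, call_model) -> str:
    start = time.time()
    response = call_model(prompt)   # stand-in for the real client call
    audit.info(json.dumps({
        "event": "genai.completion",
        "user": user_id,
        "model": model,
        # Hash rather than store the prompt, so logs don't become a new leak.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "latency_ms": round((time.time() - start) * 1000),
    }))
    return response

# Usage with a dummy client standing in for a real one:
print(logged_completion("u-123", "demo-model", "hello", lambda p: p.upper()))
```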