Securing Intelligence: Why AI Security Will Define the Future of Trust
U.S. leadership in the AI century will depend on whether democracies can secure machine intelligence fast enough to preserve the trust and resilience their systems rely on. Artificial intelligence (AI) is likely to greatly shape twenty-first century prosperity and security—but only if it can be trusted. The defining question is not how advanced AI becomes, but whether its systems can be secured enough to sustain institutional and public confidence. Security failures in 2025 revealed that most organizations remain unable to safeguard AI effectively, widening the gap between its technical promise and operational viability. Most AI-related breaches in 2025 resulted in data compromises, and nearly one-third caused operational disruption.
The EU Agency for Cybersecurity found that more than 80 percent of social engineering attacks relied on AI, underscoring how adversaries now innovate faster than defenders can respond. For the United States and its allies, securing AI systems is not a narrow technical concern but a test of whether democratic governance can sustain trust and leadership at machine speed. Three converging dynamics threaten to stall AI adoption: (1) systemic vulnerabilities in AI models and infrastructure, (2) deployment that outpaces security controls, and (3) increasingly sophisticated adversaries weaponizing AI. Microsoft’s Digital Defense Report 2025 found that cyberattackers from China, Iran, North Korea, and Russia more than doubled their use of AI for cyberattacks and to spread disinformation. Those actors achieved a 54 percent click-through rate with AI-automated phishing emails, compared with 12 percent for traditional methods, demonstrating that AI enhances adversary effectiveness as much as it augments defensive operations. Resolving those challenges is a prerequisite for accelerated and widespread adoption.
The promise of AI-augmented development confronts a stark reality: the code and models enabling it are structurally insecure. A 2025 Veracode analysis found that nearly 45 percent of AI-generated code contained exploitable flaws. For enterprises evaluating adoption, such flaws turn productivity gains into liability risks. JFrog’s Software Supply Chain State of the Union 2025 report documented over twenty-five thousand exposed secrets and tokens in public repositories—a 64 percent year-over-year increase—of which 27 percent remained active and exploitable.
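The kind of exposed-secret problem JFrog describes can be probed with basic pattern matching. The sketch below is a hypothetical illustration, not JFrog's scanner: the token patterns and the `find_secrets` helper are illustrative assumptions, and real scanners combine far richer rule sets with entropy analysis.

```python
import re

# Illustrative patterns for common credential formats; production scanners
# use hundreds of rules plus entropy checks to cut false negatives.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key['\"]?\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"
    ),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs for likely leaked credentials."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

sample = 'config = {"api_key": "abcd1234efgh5678ijkl9012"}'
print(find_secrets(sample))
```

The harder operational finding in the JFrog report is not detection but the 27 percent of leaked credentials that stayed active: scanning is only useful when paired with immediate revocation.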
Amid profound geopolitical and geo-economic realignments, and as rapid advances in technology reshape the global security landscape, economic resilience, societal trust, sustainability, and cybersecurity are now part of the same equation. To confront these intertwined challenges, the World Economic Forum convened a joint session of the Annual Meeting of the Global Future Councils and the Annual Meeting on Cybersecurity, bringing together more than 500 experts. The gathering underscored the growing recognition that cyber resilience is no longer a narrow technical concern but a shared societal imperative. “Cybersecurity touches every facet of modern life, and this meeting brought together the Councils to see how it is reshaping each of our fields, from quantum transition to financial regulation,” said Akshay Joshi, head of the World Economic Forum’s Centre for Cybersecurity. Cybersecurity has always been asymmetric: attackers need only find a single overlooked vulnerability, while defenders must secure every possible point of entry across sprawling networks and complex supply chains.
The digital attack surface – encompassing every connected device, platform, and line of code – has expanded exponentially, creating a terrain so vast that even minor oversights can have cascading consequences. Leading data-driven organizations balance protection and access as AI powers ahead. Most organizations feel the imperative to keep pace with continuing advances in AI capabilities, as highlighted in a recent MIT Technology Review Insights report. That clearly has security implications, particularly as organizations navigate a surge in the volume, velocity, and variety of security data. This explosion of data, coupled with fragmented toolchains, is making it increasingly difficult for security and data teams to maintain a proactive and unified security posture. Data and AI teams must move rapidly to deliver the desired business results, but they must do so without compromising security and governance.
As they deploy more intelligent and powerful AI capabilities, proactive threat detection and response against the expanded attack surface, insider threats, and supply chain vulnerabilities must remain paramount. “I’m passionate about cybersecurity not slowing us down,” says Melody Hildebrandt, chief technology officer at Fox Corporation, “but I also own cybersecurity strategy. So I’m also passionate about us not introducing security vulnerabilities.” That’s getting more challenging, says Nithin Ramachandran, who is global vice president for data and AI at industrial and consumer products manufacturer 3M. “Our experience with generative AI has shown that we need to be looking at security differently than before,” he says. “With every tool we deploy, we look not just at its functionality but also its security posture.
The latter is now what we lead with.” Our survey of 800 technology executives (including 100 chief information security officers), conducted in June 2025, shows that many organizations struggle to strike this balance. As enterprises migrate more of their infrastructure to the cloud, the stakes for security and compliance have never been higher. Traditional models of perimeter defense are no longer sufficient in a world where remote work, hybrid cloud, and AI-driven automation define daily operations. Organizations today are embracing Zero Trust architectures and AI-powered threat detection as the new standard for resilience. Sulakshana Singh, an IEEE Senior Member and an editorial board member at the International Journal of Emerging Trends in Computer Science and Information Technology, has been at the forefront of this shift.
With expertise spanning enterprise security, compliance automation, and scalable cloud services, she has consistently demonstrated how technical innovation can align with business impact. Across industries, the adoption of Zero Trust has accelerated, driven by the realization that legacy, perimeter-based approaches cannot contain modern cyber threats. Gartner estimates that by 2027, more than half of enterprises will have adopted Zero Trust strategies as the backbone of their security programs. Singh’s work reflects this transformation. At IBM, she contributed to the Security and Compliance Center (SCC)—a product designed to centralize and automate compliance across the IBM Cloud platform. Her contributions included developing new features for microservices, deploying them across disaster recovery regions, and addressing vulnerabilities to align with OWASP standards.
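The Zero Trust principle discussed above, "never trust, always verify," can be sketched minimally: rather than trusting any caller inside a network perimeter, every request carries a short-lived signed token that is checked before any work is done. The HMAC scheme and helper names below are illustrative assumptions for this sketch, not the design of IBM's Security and Compliance Center or any specific product.

```python
import hashlib
import hmac
import time

SIGNING_KEY = b"demo-key"  # illustrative; real deployments use managed key material

def issue_token(subject: str, ttl_seconds: int = 300) -> str:
    """Mint a short-lived token binding a subject identity to an expiry time."""
    expires = str(int(time.time()) + ttl_seconds)
    payload = f"{subject}|{expires}"
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_token(token: str):
    """Return the subject if the token is authentic and unexpired, else None.

    In a Zero Trust design this check runs on every request, regardless of
    where on the network the request originated.
    """
    try:
        subject, expires, sig = token.rsplit("|", 2)
    except ValueError:
        return None
    payload = f"{subject}|{expires}"
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or forged signature
    if int(expires) < time.time():
        return None  # expired; short TTLs force continual re-verification
    return subject

tok = issue_token("service-a")
print(verify_token(tok))        # "service-a"
print(verify_token(tok + "x"))  # None (signature mismatch)
```

Production Zero Trust stacks layer far more onto this idea, such as mutual TLS, device posture checks, and per-resource policy, but the core inversion is the same: identity and freshness are proven on each call instead of being inferred from network location.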
By upgrading services from JDK8 to JDK11 and optimizing infrastructure, Singh helped reduce operational cloud usage costs by an estimated $50 million annually, while simultaneously strengthening resilience. “This wasn’t just a cost-saving exercise,” she explains. “It was about building a platform that could protect enterprises at scale—without slowing them down.” The credibility of U.S. and allied leadership in the digital order will rest on whether they can embed trust into the architecture of machine intelligence itself, writes Vinh Nguyen.
tl;dr: As AI becomes central to enterprise operations, cybersecurity must evolve to protect massive, sensitive datasets and ensure trust. Dell’s John Roese and John Scimone discuss how organizations must take a holistic approach to risk management to unlock innovation while staying secure.
It’s impossible to have a conversation about technology today without talking about artificial intelligence. A question on many of our minds is: does security enhance AI, or does AI enhance security? As we navigate the third year of the generative AI cycle, the intersection of security and trust has become one of the most complex areas for organizations to master. To explore this vital topic during Cybersecurity Awareness Month, I had the pleasure of sitting down with my colleague, John Scimone, president and chief security officer, who leads all things security and resilience at Dell Technologies. We discussed the evolving relationship between AI and security, how organizations can rethink their infrastructure for the AI era, and why this moment represents a tremendous opportunity for innovation and progress. The following has been edited for length and readability.