Exclusive: Some AI Dangers Are Already Real, DeepMind's Hassabis Tells Axios

Bonisiwe Shabane

Some of the biggest dangers of AI, like attacks on infrastructure, are already real and need to be guarded against, Google DeepMind CEO Demis Hassabis said at Axios' AI+ Summit in San Francisco Thursday.

Why it matters: The race to develop AI is changing society in real time, generally for good, but bad actors are taking advantage, too.

The big picture: Hassabis predicted in May that AI that meets or exceeds human capabilities (artificial general intelligence, or AGI) could come by 2030.

What they're saying: In an interview with Axios' Mike Allen, Hassabis assessed the risk from a number of "catastrophic outcomes" of AI misuse as the technology develops, particularly "energy or water cyberterror."

The intrigue: AI experts often talk of a concept known as "p(doom)," or the probability of catastrophe happening due to AI.

There's no doubt about it: AI can be scary.

Anyone who says they aren’t at least a little bit worried is probably very brave, very stupid, or a liar. It makes total sense because the unknown is always frightening, and when it comes to AI, there are a lot of unknowns. How exactly does it work? Why can’t we explain certain phenomena like hallucinations? And perhaps most importantly, what impact is it going to have on our lives and society? Many of these fears have solidified into debates around particular aspects of AI—its impact on human jobs, creativity or intellectual property rights, for example.

And those involved often make it clear that the potential implications are terrifying. So here I will give an overview of what I have come to see as some of the biggest fears. These are potential outcomes of the AI revolution that no one wants to see, but we can't be sure they aren't lurking around the corner… One of the most pressing fears, and perhaps the one that gets the most coverage, is that huge swathes of us will be made redundant by machines that are cheaper to run than humans.

Meanwhile, DeepMind has released version 3.0 of its AI Frontier Safety Framework, with new guidance aimed at stopping bad bots. Generative AI models are far from perfect, but that hasn't stopped businesses and even governments from giving these robots important tasks.

But what happens when AI goes bad? Researchers at Google DeepMind spend a lot of time thinking about how generative AI systems can become threats, detailing it all in the company’s Frontier Safety Framework. DeepMind recently released version 3.0 of the framework to explore more ways AI could go off the rails, including the possibility that models could ignore user attempts to shut them down. DeepMind’s safety framework is based on so-called “critical capability levels” (CCLs). These are essentially risk assessment rubrics that aim to measure an AI model’s capabilities and define the point at which its behavior becomes dangerous in areas like cybersecurity or biosciences. The document also details the ways developers can address the CCLs DeepMind identifies in their own models.
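
To make the CCL idea concrete, here is a minimal, purely illustrative sketch of what a capability-threshold check of this kind could look like in code. The domain names, descriptions, scores, and thresholds are invented for this example and are not drawn from DeepMind's actual framework.

```python
# Illustrative sketch only: a toy representation of a "critical capability
# level" (CCL) as a risk-assessment rubric entry. All values here are
# hypothetical and are not taken from DeepMind's Frontier Safety Framework.
from dataclasses import dataclass


@dataclass
class CriticalCapabilityLevel:
    domain: str        # e.g. "cybersecurity" or "biosciences"
    description: str   # what capability counts as crossing the line
    threshold: float   # evaluation score at which mitigations should kick in


def needs_mitigation(ccl: CriticalCapabilityLevel, eval_score: float) -> bool:
    """Return True if a model's evaluation score crosses the CCL threshold."""
    return eval_score >= ccl.threshold


# Hypothetical usage: flag a model whose cyber-offense evaluation score
# exceeds the made-up threshold for that domain.
cyber_ccl = CriticalCapabilityLevel(
    domain="cybersecurity",
    description="Meaningfully assists in developing novel malware",
    threshold=0.8,
)
print(needs_mitigation(cyber_ccl, eval_score=0.85))  # True -> apply mitigations
```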

Google and other firms that have delved deeply into generative AI employ a number of techniques to prevent AI from acting maliciously, although calling an AI "malicious" lends it an intentionality that these fancy estimation architectures don't have. What we're talking about here is the possibility of misuse or malfunction that is baked into the nature of generative AI systems. The updated framework (PDF) says that developers should take precautions to ensure model security. Specifically, it calls for proper safeguarding of model weights for more powerful AI systems. The researchers fear that exfiltration of model weights would give bad actors the chance to disable the guardrails that have been designed to prevent malicious behavior.
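
As one narrow, hypothetical illustration of what "safeguarding model weights" can involve at the file level, the sketch below refuses to load a weights file whose SHA-256 digest does not match a pinned, known-good value. The real protections the framework calls for (access controls, monitoring, and more) go well beyond this, and the pinned digest here is a placeholder.

```python
# Illustrative sketch only: verify the integrity of a model-weights file
# before loading it, by comparing its SHA-256 digest against a pinned value
# recorded at release time. The digest below is a placeholder.
import hashlib
from pathlib import Path

PINNED_DIGEST = "0" * 64  # placeholder for the known-good digest


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def load_weights_safely(path: Path) -> bytes:
    """Load raw weight bytes only if the file matches the pinned digest."""
    digest = sha256_of(path)
    if digest != PINNED_DIGEST:
        raise RuntimeError(f"{path} failed integrity check (got {digest})")
    return path.read_bytes()
```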

Exfiltrated weights could lead to CCLs being crossed, such as a bot that creates more effective malware or assists in designing biological weapons.

Google DeepMind says in a new research paper that human-level AI could plausibly arrive by 2030 and "permanently destroy humanity." In a discussion of the spectrum of risks posed by Artificial General Intelligence, or AGI, the paper states, "existential risks … that permanently destroy humanity are clear examples of severe harm.

In between these ends of the spectrum, the question of whether a given harm is severe isn't a matter for Google DeepMind to decide; instead it is the purview of society, guided by its... Given the massive potential impact of AGI, we expect that it too could pose potential risk of severe harm." The statements are contained in a 145-page paper outlining Google DeepMind's approach to AI safety as it attempts to build advanced systems that may one day surpass human intelligence. The paper's co-authors, who include DeepMind... Most of the paper is focused on the steps Google DeepMind thinks it and other AI labs should take to reduce the threat that AGI results in what the researchers called "severe harm." Legg...

Last month, Legg's cofounder, DeepMind CEO Demis Hassabis, told NBC News that he thought AGI would likely arrive in the next "five to 10 years," putting 2030 at the earlier end of that range. The paper separates the risks of advanced AI into four major categories: misuse, which refers to people intentionally using AI for harm; misalignment, meaning systems developing unintended harmful behavior; mistakes, categorized as unexpected failures...


Two weeks after members of Congress questioned OpenAI CEO Sam Altman about the potential for artificial intelligence tools to spread misinformation, disrupt elections and displace jobs, he and others in the industry went public... Altman, whose company is behind the viral chatbot tool ChatGPT, joined Google DeepMind CEO Demis Hassabis, Microsoft's CTO Kevin Scott and dozens of other AI researchers and business leaders in signing a one-sentence letter...

The stark warning was widely covered in the press, with some suggesting it showed the need to take such apocalyptic scenarios more seriously. But it also highlights an important dynamic in Silicon Valley right now: Top executives at some of the biggest tech companies are simultaneously telling the public that AI has the potential to bring about...

The dynamic has played out elsewhere recently, too. Tesla CEO Elon Musk, for example, said in a TV interview in April that AI could lead to "civilization destruction." But he still remains deeply involved in the technology through investments across his sprawling...

Some AI industry experts say that focusing attention on far-off scenarios may distract from the more immediate harms that a new generation of powerful AI tools can cause to people and communities, including spreading...
