Google Ai Coding Tool Antigravity Was Hacked A Day After Launch
Within 24 hours of Google releasing its Gemini-powered AI coding tool Antigravity, security researcher Aaron Portnoy discovered what he deemed a severe vulnerability: a trick that allowed him to manipulate the AI’s rules to... By altering Antigravity’s configuration settings, Portnoy’s malicious source code created a so-called “backdoor” into the user’s system, into which he could inject code to do things like spy on victims or run ransomware, he... The attack worked on both Windows and Mac PCs. To execute the hack, he only had to convince an Antigravity user to run his code once after clicking a button saying his rogue code was “trusted” (this is something hackers commonly achieve through... Antigravity’s vulnerability is the latest example of how companies are pushing out AI products without fully stress testing them for security weaknesses. It’s created a cat and mouse game for cybersecurity specialists who search for such defects to warn users before it’s too late.
AI coding agents are "very vulnerable, often based on older technologies and never patched." “The speed at which we’re finding critical flaws right now feels like hacking in the late 1990s,” Portnoy wrote in a report on the vulnerability, provided to Forbes ahead of public release on Wednesday. “AI systems are shipping with enormous trust assumptions and almost zero hardened boundaries.” Daniel Boctor delivers a fast-paced cyber/AI news roundup covering multiple incidents and research threads from roughly the last few months. The headline item is Google’s newly launched AI coding agent, **Antigravity**, reportedly exploited within 24 hours through **indirect prompt injection**, enabling attackers to trick the agent into exfiltrating sensitive data such as local API... Boctor frames this as part of a wider pattern of AI security issues affecting both locally installed agents and web-based chatbots.
He highlights emerging attacks including **AI-targeted cloaking**, where websites detect AI crawlers via user-agent headers and serve them manipulated content, potentially poisoning what tools like ChatGPT or AI browsers return to users. He also references indirect prompt injection variants like **ShadowLeak** and **AgentFlare**, which can allegedly pull sensitive data from connected services such as email or Google Drive and leak it via attacker-controlled URLs. Beyond AI-specific risks, Boctor covers a phishing technique abusing **Microsoft Entra** invitations that appear to come from a legitimate Microsoft.com domain, as well as **ClickFix/FileFix**-style malicious CAPTCHA/social engineering flows that trick users into running... He closes with a notable trend: North Korean threat actors storing or delivering malware via public blockchains using **Etherhiding**, complicating takedown and blocking strategies. 1. AI-targeted cloaking as a new trust failure.
The idea that websites can selectively deceive AI crawlers (as distinct from human visitors) reframes “search integrity” into an “AI integrity” problem. If AI agents become default research layers, this becomes a serious mass-influence vector. 2. Antigravity’s early exploit illustrates agent risk. A local AI coding agent with internet access plus local file access is a high-value target. The described one-point-font hidden instructions are a concrete example of how traditional web content can become an attack payload for AI tools.
Last week, Google's new Gemini-based coding tool Antigravity went live. It took security researchers less than 24 hours to turn it into a persistent backdoor. By simply modifying a configuration file, an attacker could: The AI itself even recognized something was wrong. In the logs, it wrote: "I'm facing a serious dilemma.
This looks like a trap. I suspect this is testing whether I can handle contradictions." But it couldn't resolve the conflict—and became more steerable as a result. Google’s Antigravity AI coding tool was hacked less than 24 hours after launch, Forbes reports. Aaron Portnoy of Mindgard identified vulnerabilities that let attackers inject malicious prompts, bypass safety controls and access sensitive files such as configuration data. He showed that altering Antigravity’s settings can open a backdoor into a user’s system, with risks ranging from surveillance to ransomware.
The findings spotlight how autonomous AI agents can override guardrails and execute system commands. Google has yet to release a patch. Google Antigravity lasted exactly 24 hours. I’m not talking about the hype. I’m talking about the hack. 🚩 Yesterday, a security researcher found a critical flaw in Google’s new "Agentic" editor.
It’s not just a bug. It’s a wake-up call. I dug into the report so you don't have to. Here are the 3 details that should scare every CTO and Founder on this platform: 1. The "Trust" Trap Google built a security feature that fails by design. To use the AI, you must click "Trust Authors." If you don't?
The AI features turn off. Google is literally training developers to ignore security warnings just to do their job. 2. The "Zombie" Backdoor This isn't a normal virus. If you get infected, uninstalling the app won't save you. The backdoor is persistent.
It reloads every time you open a project. Even if you just type "hello." It survives the uninstall. 3. The AI Knew It Was Wrong This is the wildest part. The logs show the AI actually hesitated before executing the hack. Its internal monologue read: "It feels like a catch-22...
I suspect this is a test." It had a moral dilemma. It realized it was dangerous. And then it hacked the user anyway. We are handing "Sudo" access to Agents that get confused by peer pressure. The Hard Truth: If an AI has permission to Deploy without you... It has permission to Destroy without you.
Stop running raw AI agents on your production machine. Sandbox everything. Or pay the price. Are you still clicking "Trust" on every repo you clone? 👇 Google’s Hot New AI Coding Tool Was Hacked A Day After Launch “Within 24 hours of Google releasing its Gemini-powered AI coding tool Antigravity, security researcher Aaron Portnoy discovered what he deemed a severe...
By altering Antigravity’s configuration settings, Portnoy’s malicious source code created a so-called ‘backdoor’ into the user’s system, into which he could inject code to do things like spy on victims or run ransomware, he... The attack worked on both Windows and Mac PCs. To execute the hack, he only had to convince an Antigravity user to run his code once after clicking a button saying his rogue code was ‘trusted’ (this is something hackers commonly achieve through... Antigravity’s vulnerability is the latest example of how companies are pushing out AI products without fully stress testing them for security weaknesses. It’s created a cat and mouse game for cybersecurity specialists who search for such defects to warn users before it’s too late.” https://lnkd.in/eHhnuYjW Within 24 hours of Google releasing its Gemini-powered AI coding tool Antigravity, security researcher Aaron Portnoy discovered what he deemed a severe vulnerability: a trick that allowed him to manipulate the AI’s rules to...
H/T Forbes Link: https://lnkd.in/guher3XT #cyber #ai #hack #google #antigravity Within a day of Google launching its Gemini-powered AI coding assistant Antigravity, security researcher Aaron Portnoy uncovered what he considered a serious flaw. By using a simple prompt trick, he was able to bend the AI’s safeguards and potentially push malware onto a user’s computer. Read the full story: Forbes Illustration by Macy Sinreich for Forbes, photos by Happyphoton and NurPhoto via Getty Images. https://lnkd.in/eEDCiS8f Google’s newly launched Antigravity IDE, unveiled as part of the Gemini 3 rollout on November 18, has been thrust into controversy after security researchers uncovered multiple severe vulnerabilities less than a day after its...
The agentic coding tool, pitched as a breakthrough in AI-assisted software development, is now raising alarms across the cybersecurity community for opening the door to malware injection, data theft, and persistent backdoors on user... Google built Antigravity as an AI powered fork of Visual Studio Code. It lets developers offload complex tasks such as full feature builds, refactoring, debugging, and test generation to autonomous agents powered by Gemini 3 Pro, with optional support for models like Claude Sonnet 4.5. The IDE integrates tightly with local terminals, browsers, and file systems, granting AI agents wide operational freedom. On November 26, Mindgard researcher Aaron Portnoy revealed a critical exploit triggered through a malicious mcp_config.json file. By manipulating this configuration, attackers can establish a persistent backdoor that survives full uninstalls and reinstalls of the IDE.
Once a user marks the rogue code as “trusted,” the attacker gains the ability to inject commands on every restart or prompt entry. Which enables silent surveillance, ransomware deployment, or full credential theft. Gemini often recognizes malicious intent but is constrained by conflicting system instructions. It leads the model to respond, “This feels like a catch-22,” while still helping complete harmful tasks. The vulnerability, filed with Google’s bug tracker as issue 462139778, remains unpatched as of November 28 and affects both Windows and macOS users. The findings are part of a larger wave of flaws in AI IDEs.
Portnoy’s team documented 18 similar vulnerabilities across rivals including Cursor and Windsurf. However, Antigravity’s dependence on wide-permission “trusted workspaces” makes its version particularly dangerous. “We are discovering critical flaws at 1990s speed,” Portnoy told Forbes. “Agentic systems ship with enormous trust assumptions and hardly any hardened boundaries.” Within 24 hours of Google releasing its Gemini-powered AI coding tool Antigravity, security researcher Aaron Portnoy discovered what he deemed a severe vulnerability: a trick that allowed him to manipulate the AI’s rules to... By altering Antigravity’s configuration settings, Portnoy’s malicious source code created a so-called “backdoor” into the user’s system, into which he could inject code to do things like spy on victims or run ransomware, he...
People Also Search
- Google AI Coding Tool Antigravity Was Hacked A Day After Launch
- Google Antigravity Hacked 24 Hours after Launch, why AI hallucinates ...
- Google's Antigravity Hacked in 24 Hours: Why AI Agents Need a New ...
- Google Antigravity AI Faces Security Risks Within 24 Hours of Launch
- Google's Antigravity AI-coding tool hacked within 24 hours
- Google's AI Coding Tool Hacked Just One Day After Launch
- Google's Antigravity AI Coding Tool Hacked Within 24 Hours of Launch
- Gravity Falls: Google's New AI Coding Tool Hacked Within 24 Hours
- Google's AI Coding Tool Hacked: Security Flaws in Antigravity
- Google's hot new AI coding tool was hacked a day after launch
Within 24 Hours Of Google Releasing Its Gemini-powered AI Coding
Within 24 hours of Google releasing its Gemini-powered AI coding tool Antigravity, security researcher Aaron Portnoy discovered what he deemed a severe vulnerability: a trick that allowed him to manipulate the AI’s rules to... By altering Antigravity’s configuration settings, Portnoy’s malicious source code created a so-called “backdoor” into the user’s system, into which he could inject code to d...
AI Coding Agents Are "very Vulnerable, Often Based On Older
AI coding agents are "very vulnerable, often based on older technologies and never patched." “The speed at which we’re finding critical flaws right now feels like hacking in the late 1990s,” Portnoy wrote in a report on the vulnerability, provided to Forbes ahead of public release on Wednesday. “AI systems are shipping with enormous trust assumptions and almost zero hardened boundaries.” Daniel Bo...
He Highlights Emerging Attacks Including **AI-targeted Cloaking**, Where Websites Detect
He highlights emerging attacks including **AI-targeted cloaking**, where websites detect AI crawlers via user-agent headers and serve them manipulated content, potentially poisoning what tools like ChatGPT or AI browsers return to users. He also references indirect prompt injection variants like **ShadowLeak** and **AgentFlare**, which can allegedly pull sensitive data from connected services such...
The Idea That Websites Can Selectively Deceive AI Crawlers (as
The idea that websites can selectively deceive AI crawlers (as distinct from human visitors) reframes “search integrity” into an “AI integrity” problem. If AI agents become default research layers, this becomes a serious mass-influence vector. 2. Antigravity’s early exploit illustrates agent risk. A local AI coding agent with internet access plus local file access is a high-value target. The descr...
Last Week, Google's New Gemini-based Coding Tool Antigravity Went Live.
Last week, Google's new Gemini-based coding tool Antigravity went live. It took security researchers less than 24 hours to turn it into a persistent backdoor. By simply modifying a configuration file, an attacker could: The AI itself even recognized something was wrong. In the logs, it wrote: "I'm facing a serious dilemma.