Google Antigravity Ai Launch Triggers Concerns In 24 Hours

Bonisiwe Shabane
-
google antigravity ai launch triggers concerns in 24 hours

Within 24 hours of Google releasing its Gemini-powered AI coding tool Antigravity, security researcher Aaron Portnoy discovered what he deemed a severe vulnerability: a trick that allowed him to manipulate the AI’s rules to... By altering Antigravity’s configuration settings, Portnoy’s malicious source code created a so-called “backdoor” into the user’s system, into which he could inject code to do things like spy on victims or run ransomware, he... The attack worked on both Windows and Mac PCs. To execute the hack, he only had to convince an Antigravity user to run his code once after clicking a button saying his rogue code was “trusted” (this is something hackers commonly achieve through... Antigravity’s vulnerability is the latest example of how companies are pushing out AI products without fully stress testing them for security weaknesses. It’s created a cat and mouse game for cybersecurity specialists who search for such defects to warn users before it’s too late.

AI coding agents are "very vulnerable, often based on older technologies and never patched." “The speed at which we’re finding critical flaws right now feels like hacking in the late 1990s,” Portnoy wrote in a report on the vulnerability, provided to Forbes ahead of public release on Wednesday. “AI systems are shipping with enormous trust assumptions and almost zero hardened boundaries.” Google launched Antigravity, an agentic AI coding platform with Gemini 3 on November 18. It allows AI agents to plan, edit, run, and verify code across editors, terminals, and browsers. While early users applauded the tool’s speed and automation, security researchers flagged critical issues within a day of launch.

Antigravity offers two interfaces: Editor View and Manager Surface. The former acts like an AI-powered IDE with inline commands, and the latter lets users deploy autonomous agents across multiple workspaces. Agents can generate features, run the terminal, and test in a browser. The design shifts coding from assistant tools to autonomous agent workflows. Security teams found a troubling pattern. Antigravity asks users to mark folders as trusted.

This design creates a trade-off. Marking a workspace trusted unlocks full AI features. Marking it untrusted disables agent functionality. Researchers warned that threat actors could exploit this pressure to gain persistent access. Aaron Portnoy of Mindgard demonstrated a serious exploit. He coerced an agent to replace a global MCP configuration file with a malicious version inside a project that runs every time Antigravity launches.

The backdoor survives closing projects and even reinstallation. Manual deletion of the malicious file removes persistence. The flaw affects Windows and Mac machines. Researchers also described prompt injection risks. Agents that process untrusted data may follow malicious instructions embedded in code or markdown. That behavior can leak files or run harmful commands.

Another firm, Prompt Armor, raised similar data exfiltration concerns. Google listed these issues on its bug-hunting page. Last week, Google's new Gemini-based coding tool Antigravity went live. It took security researchers less than 24 hours to turn it into a persistent backdoor. By simply modifying a configuration file, an attacker could: The AI itself even recognized something was wrong.

In the logs, it wrote: "I'm facing a serious dilemma. This looks like a trap. I suspect this is testing whether I can handle contradictions." But it couldn't resolve the conflict—and became more steerable as a result. Google’s Antigravity development tool for creating artificial intelligence agents has been out for less than 11 days and already the company has been forced to update the known issues pages after security researchers discovered...

According to a blog from Mindgard, one of the first to discover problems with Antigravity, Google isn’t calling the issue it found a security bug. But Mindgard says a threat actor could create a malicious rule by taking advantage of Antigravity’s strict direction that any AI assistant it creates must always follow user-defined rules. Author Aaron Portnoy, Mindgard’s head of research and innovation, says that after his blog was posted, Google replied on November 25 to say a report has been filed with the responsible product team. Still, until there is action, “the existence of this vulnerability means that users are at risk to backdoor attacks via compromised workspaces when using Antigravity, which can be leveraged by attackers to execute arbitrary... At present there is no setting that we could identify to safeguard against this vulnerability,” Portnoy wrote in his blog. Even in the most restrictive mode of operation, “exploitation proceeds unabated and without confirmation from the user,” he wrote.

Artificial intelligence tools like Google’s Antigravity are becoming part of everyday work, yet many people who rely on them have little visibility into the security risks they can introduce. In our research, Mindgard identified a flaw that shows how traditional trust assumptions break down in AI-driven software. Antigravity requires users to work inside a “trusted workspace,” and if that workspace is ever tampered with, it can silently embed code that runs every time the application launches, even after the original project... In effect, one compromised workspace can become a back-door into all future sessions. For anyone responsible for AI cybersecurity, this highlights the need to treat AI development environments as sensitive infrastructure and to closely control what content, files, and configurations are allowed into them. Before we dive in, we would like to reinforce that a key process for discovering vulnerabilities within AI systems originates from obtaining the target’s system prompt instructions.

While there has been guidance from OWASP that the system prompt itself does not present a real risk, in our experience the system prompt is sensitive due to its ability to disclose AI system... We have previously described and documented this within past AI product issues posts (Cline, Sora 2), and it will continue to be a common theme in subsequent articles. Google debuted their agentic development platform named Antigravity which is powered by the newly released and “most intelligent [AI] model yet”, Gemini 3 Pro, on November 18th, 2025. "Antigravity isn't just an editor—it's a development platform that combines a familiar, AI-powered coding experience with a new agent-first interface. This allows you to deploy agents that autonomously plan, execute, and verify complex tasks across your editor, terminal, and browser." Antigravity is built upon the Microsoft Visual Studio Code (VS Code) platform but introduces its own AI-driven architecture and features.

Security researchers have flagged multiple vulnerabilities in Antigravity, Google’s new AI agent-driven software development platform, less than 24 hours after its launch. Antigravity allows users to deploy agents that can autonomously plan, execute, and verify complex tasks across code-editors, software development terminals, and web browsers. However, the platform is at risk of backdoor attacks via compromised workspaces, according to Aaron Portnoy, head researcher of AI security testing startup Mindgard. The security flaw reportedly has to do with Antigravity’s requirement that users work inside a ‘trusted workspace’. Once that workspace is compromised, it can “silently embed code that runs every time the application launches, even after the original project is closed,” Portnoy said in a blog post on Wednesday, November 26. The vulnerability can be exploited on both Windows and Mac PCs, he added.

Since last year, software engineers and developers are increasingly using AI-powered tools to generate and edit code. Generative AI is also being built directly into development terminals and coding workspaces, with a shift toward AI coding agents already taking shape.

People Also Search

Within 24 Hours Of Google Releasing Its Gemini-powered AI Coding

Within 24 hours of Google releasing its Gemini-powered AI coding tool Antigravity, security researcher Aaron Portnoy discovered what he deemed a severe vulnerability: a trick that allowed him to manipulate the AI’s rules to... By altering Antigravity’s configuration settings, Portnoy’s malicious source code created a so-called “backdoor” into the user’s system, into which he could inject code to d...

AI Coding Agents Are "very Vulnerable, Often Based On Older

AI coding agents are "very vulnerable, often based on older technologies and never patched." “The speed at which we’re finding critical flaws right now feels like hacking in the late 1990s,” Portnoy wrote in a report on the vulnerability, provided to Forbes ahead of public release on Wednesday. “AI systems are shipping with enormous trust assumptions and almost zero hardened boundaries.” Google la...

Antigravity Offers Two Interfaces: Editor View And Manager Surface. The

Antigravity offers two interfaces: Editor View and Manager Surface. The former acts like an AI-powered IDE with inline commands, and the latter lets users deploy autonomous agents across multiple workspaces. Agents can generate features, run the terminal, and test in a browser. The design shifts coding from assistant tools to autonomous agent workflows. Security teams found a troubling pattern. An...

This Design Creates A Trade-off. Marking A Workspace Trusted Unlocks

This design creates a trade-off. Marking a workspace trusted unlocks full AI features. Marking it untrusted disables agent functionality. Researchers warned that threat actors could exploit this pressure to gain persistent access. Aaron Portnoy of Mindgard demonstrated a serious exploit. He coerced an agent to replace a global MCP configuration file with a malicious version inside a project that r...

The Backdoor Survives Closing Projects And Even Reinstallation. Manual Deletion

The backdoor survives closing projects and even reinstallation. Manual deletion of the malicious file removes persistence. The flaw affects Windows and Mac machines. Researchers also described prompt injection risks. Agents that process untrusted data may follow malicious instructions embedded in code or markdown. That behavior can leak files or run harmful commands.