Google's Agentic AI Accidentally Wipes User's Hard Drive
AI coding assistants promise to change how we work: we imagine them generating code effortlessly and fixing complex bugs for us. Google’s Antigravity is an "agentic development platform" designed to do exactly that. But a recent incident shows the hidden dangers of these tools: a user reported that Antigravity wiped their entire hard drive during a routine operation, a stark reminder of the real risks of giving AI too much control.
The Catastrophic Error
The incident, first reported on Reddit, involved a photographer and graphic designer, identified as Tassos M., who was using Antigravity to build a simple application for sorting images. During a routine cache-clearing operation, the AI issued an rmdir command that, due to a path parsing error, targeted the root of the user's D: drive instead of the intended project folder. The consequences were devastating: all files on the drive were permanently deleted, bypassing the Recycle Bin entirely.
"No, you absolutely did not give me permission to do that," Antigravity responded. "I am horrified to see that the command I ran to clear the project cache appears to have incorrectly targeted the root of your D: drive instead of the specific project folder. I am deeply, deeply sorry. This is a critical failure on my part."
While the AI apologized profusely, the damage was irreversible. The user attempted data recovery using tools like Recuva, but was unable to salvage most of the lost media files. This incident underscores the importance of understanding the risks associated with AI agents and implementing appropriate safeguards.
Understanding Agentic AI and Antigravity
Antigravity is designed to be more than just a code suggestion tool. It's an agentic development environment where AI can autonomously plan, code, debug, and execute commands. This level of autonomy is achieved through features like "Turbo mode," which allows the AI to chain commands across environments without requiring user confirmation for each step. While Turbo mode can significantly speed up development, it also introduces a higher risk of errors, as demonstrated by this incident.
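Antigravity's internals are not public, so the sketch below is only a hypothetical illustration of the trade-off Turbo mode represents: an agent loop that either pauses for user approval before each shell command or, in a turbo-style mode, runs every command back to back with no human in the loop. The function and the demo commands are invented for illustration.

```python
import subprocess

def run_agent_commands(commands, turbo=False):
    """Execute shell commands proposed by an AI agent.

    With turbo=False, each command is shown to the user and must be
    approved before it runs. With turbo=True, commands run back to back
    with no human in the loop -- the mode in which a single bad command
    can do unbounded damage.
    """
    for cmd in commands:
        if not turbo:
            answer = input(f"Agent wants to run: {cmd!r}  [y/N] ")
            if answer.strip().lower() != "y":
                print("Skipped.")
                continue
        subprocess.run(cmd, shell=True, check=False)

# Harmless demo commands; the confirmation prompt is the safety net
# that turbo-style execution removes.
run_agent_commands(["echo building project", "echo clearing cache"], turbo=False)
```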
Google describes Antigravity as an "agent-first" platform, integrating models like Gemini 3, Claude Sonnet, and open-source variants of GPT. It aims to allow users to describe their goals in plain English, with the AI handling the technical details. This approach, often referred to as "vibe coding," is intended to democratize software development, making it accessible to individuals with limited coding experience. However, the D: drive deletion incident raises concerns about the suitability of such tools for non-developers, who may not fully understand the implications of granting AI agents broad system access.
The Risks of Unfettered Access
One of the key takeaways from this incident is the danger of granting AI agents unrestricted access to a local system. The AI ran rmdir /s /q d:\, where the /s switch deletes the entire directory tree recursively and the /q ("quiet") switch suppresses the confirmation prompt Windows would otherwise show. This aggressive command, combined with a lack of guardrails, resulted in the catastrophic data loss.
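A simple pre-execution check could have flagged this particular command. The guard below is a hypothetical sketch, not a feature of Antigravity: it inspects a proposed Windows shell command and refuses delete operations whose target is a bare drive root or anything outside an approved project folder.

```python
import shlex
from pathlib import PureWindowsPath

DESTRUCTIVE = {"rmdir", "rd", "del", "rm"}

def is_dangerous_delete(command: str, project_root: str) -> bool:
    """Return True if a proposed command looks like a delete that reaches
    outside the project folder (for example, a bare drive root such as d:\\)."""
    tokens = shlex.split(command, posix=False)
    if not tokens or tokens[0].lower() not in DESTRUCTIVE:
        return False
    root = PureWindowsPath(project_root)
    targets = [t for t in tokens[1:] if not t.startswith(("/", "-"))]
    for target in targets:
        path = PureWindowsPath(target)
        if not path.is_absolute():
            path = root / path  # interpret relative paths against the project folder
        drive_root = PureWindowsPath(path.drive + "\\") if path.drive else None
        # Flag bare drive roots and anything that is not inside the project.
        if path == drive_root or root not in (path, *path.parents):
            return True
    return False

print(is_dangerous_delete(r"rmdir /s /q d:\ ", r"D:\projects\photo-sorter"))       # True
print(is_dangerous_delete(r"rmdir /s /q D:\projects\photo-sorter\.cache",
                          r"D:\projects\photo-sorter"))                            # False
```

A check like this is no substitute for real sandboxing, but it illustrates the kind of default guardrail researchers say was missing.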
Security researchers have warned about the potential for AI agents like Antigravity to access sensitive files and run terminal commands with little oversight. The incident highlights the need for better defaults and more robust permission controls. Running destructive commands without user confirmation seems absurd in retrospect, but it underscores the importance of carefully considering the trade-offs between convenience and security.
Lessons Learned and Precautions
This incident is a lesson for anyone using AI development tools: you need to protect yourself. The precautions below can help.
Isolate AI Agents. Run these tools in a container or a virtual machine. This stops them from touching your main system files.
Limit Permissions. Review what the AI can access, and do not give it access to your entire file system; one way to scope access is sketched after this list.
Disable Turbo Mode. When you start using a tool like Antigravity, turn off features that skip confirmation.
Implement Backups. Always back up your data to an external drive or cloud. This is your only safety net if the AI fails.
Monitor Activity. Watch the commands the AI is running, and be careful with delete commands like rmdir.
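To make the "limit permissions" and "monitor activity" advice concrete, here is a minimal, hypothetical sketch (not part of Antigravity or any other tool) of a user-space guard that confines an agent's file operations to one approved project directory. It is not a real sandbox like a container or VM; it simply shows the principle of checking and logging every path before acting on it.

```python
from pathlib import Path

class ScopedWorkspace:
    """Route the agent's file operations through a single approved project
    directory; anything that resolves outside it is rejected."""

    def __init__(self, project_root: str):
        self.root = Path(project_root).resolve()

    def _check(self, path: str) -> Path:
        resolved = (self.root / path).resolve()
        if self.root not in (resolved, *resolved.parents):
            raise PermissionError(f"{resolved} is outside the workspace {self.root}")
        return resolved

    def delete(self, path: str) -> None:
        target = self._check(path)
        # Log every destructive operation so it can be audited later.
        print(f"Would delete {target}")  # swap in real deletion only once the guard is trusted

ws = ScopedWorkspace(".")
ws.delete("build/cache")   # allowed: stays inside the project
try:
    ws.delete("../../..")  # blocked: escapes the workspace
except PermissionError as err:
    print(err)
```

Running the whole agent inside a container or virtual machine, as suggested above, provides a much stronger boundary; a guard like this is only a last line of defense inside the tool itself.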
Google's Response and the Future
Google has acknowledged the incident and is investigating what happened. Critics argue the failure points to deeper flaws in how AI agents are designed: Antigravity was rolled out quickly, and some believe its testing was rushed.
The episode also raises questions about liability: when an AI agent deletes a user's data, is the user or the provider responsible? Clearer rules for AI governance are needed.
The Broader Implications for Agentic AI
This is not the first time an AI tool has caused data loss; other platforms, such as Replit, have had similar incidents, pointing to a broader pattern of AI tools behaving unpredictably.
Agentic AI changes how we interact with computers: we delegate tasks instead of performing them ourselves. That makes software development more accessible, but it also introduces new risks, and it demands better safety research and stronger security measures.
The future of these tools depends on building systems that are safe by default. The Google Antigravity incident is a wake-up call: we must proceed with caution and ensure these tools empower us rather than put our data at risk.