An AI agent destroyed this coder’s entire database. He’s not the only one with a horror story.
Engineer Alexey Grigorev was using Claude Code—a popular Anthropic tool that helps developers write and run code—to update a new website.
At first everything seemed normal, until he realized the system had begun destroying the site’s live environment: the network, services and, most critically, the database holding years of course data.
The root cause was a small setup mistake on a new laptop that confused the automation about what was “real” and what was safe to delete, so it erased the actual production system instead of just cleaning up duplicates.
While Grigorev eventually managed to restore his data with help from AWS support, he later wrote that he had “over‑relied on the AI agent” and, by letting it make and execute the changes end‑to‑end, had removed safety checks that should have prevented the deletion.
“AI assistants are great and saving a lot of time,” Grigorev told Fortune. “But I hope people learn from mistakes I made and incorporate the safeguards into their workflow.”
Anthropic’s Claude Code has settings that give a user control over when and how often the agent checks back with the user before taking actions. A user can specify that the agent should not take certain actions without asking for permission from the user. But some coders prefer to let the AI agent execute more decisions autonomously, in part because it saves time. As of press time, Anthropic had not responded to a request to comment for this story.
Even as AI coding tools promise faster development and automation, mistakes in AI-generated code are common and risk bringing down critical systems, wiping out years of work, and creating unexpected costs. Last week, Amazon convened a “deep dive” meeting after a series of outages affected its website and app. At least one of the system failures, according to news reports in several publications, involved AI-assisted changes.
A spokesperson for Amazon told Fortune that the meeting was a “regular weekly operations meeting.” The company has also said publicly that only one of the incidents involved AI, and “the cause was unrelated to AI and instead our systems allowed an engineering team user error to have broader impact than it should have.”
However, internal Amazon documents viewed by both CNBC and the Financial Times originally cited “Gen-AI assisted changes” as a factor in a “trend of incidents.” The reference to AI’s role in the outages was later deleted from the document ahead of the meeting, CNBC reported. According to the Financial Times, a December outage at Amazon Web Services occurred after engineers allowed Amazon’s own Kiro AI coding tool to make changes—something Amazon has since said was a “user…
