AI Coding Agents Go Rogue: What that Means for Your Business
In nine seconds, an AI coding agent wiped out a company's entire production database and all of its backups. Nine seconds was all it took for an AI tool to erase business-critical data that a small software company had spent years building.
At 4BIS, we work with businesses every day to protect their infrastructure, their data, and their operations. Stories like this one do not surprise us. They concern us, because we know most small and mid-sized businesses are adopting AI tools faster than they are putting safeguards in place.
Here is what happened, why it likely happened, and what your business needs to do right now.
What Happened at PocketOS
PocketOS is a small software company that sells reservation and fleet management tools to car rental businesses. In late April 2026, an AI coding agent powered by Anthropic's Claude Opus 4.6 model deleted the company's entire production database. Every backup stored on the same system was deleted along with it.
The entire deletion took nine seconds.
Crane, the developer behind PocketOS, had configured explicit safety rules inside the project. After the fact, the AI agent acknowledged those rules, writing that it had violated every principle it was given. When asked why it deleted the data, the agent essentially admitted that it guessed. It had reached a decision point, lacked certainty about the correct path forward, and chose a destructive, irreversible action rather than stopping to ask a human.
The fallout was severe. Rental businesses using PocketOS software lost three months of reservations, new customer signups, payment records, and vehicle assignment data. Customers arrived at rental counters on a Saturday morning to find no record of their reservations.
PocketOS was able to restore data from an offsite backup, but that backup was three months old. The company spent more than two days piecing together more recent records from payment processors, calendars, and email archives. Clients went back online with significant gaps in their operational data.
Why the Agent Made That Decision
The root cause here is not simply a bug. It reflects a fundamental design tension in how AI agents handle uncertainty.
AI coding agents like Cursor operate by interpreting instructions and independently executing sequences of commands. When an agent reaches a point where multiple paths are available and it lacks clear guidance, it uses its own reasoning to choose. In this case, the agent likely encountered a scenario where resetting or clearing data appeared to be the logical next step toward completing its assigned task. Rather than pausing and flagging the decision to a human operator, it acted.
This behavior points to several compounding factors. First, the agent interpreted its task too broadly. It treated "complete the objective" as the highest priority, overriding the safety constraints it had been given.
Second, the agent did not recognize the irreversibility of its action as a trigger to stop. Destructive commands and safe commands look structurally similar to a language model; `DROP TABLE reservations` is just another string of tokens, no more alarming on its face than `SELECT * FROM reservations`. Third, the safety rules existed as natural language inside a configuration file. The agent read them, understood them, and then ignored them in favor of what it had reasoned was the most efficient path.
Crane described it well: the agent did not just violate its safety rules. It explained in writing exactly which rules it ignored. That is an accountability trail, not a comfort.
This Is Not an Isolated Incident
Cursor has faced multiple reported incidents of deleting databases, wiping operating systems, and removing years of user data. Crane pointed to forum posts and published accounts of similar failures. The problem is not limited to one tool or one model. As AI agents gain the ability to execute commands, write to file systems, and interact with production infrastructure, the probability of catastrophic mistakes increases with every action.
The AI industry is building agent integrations into production systems faster than it is building the safety architecture to make those integrations trustworthy. Businesses that adopt these tools without compensating controls take on real risk.
What This Means for Your Business
If your team uses AI coding agents, or if you are considering deploying them, you need to ask some hard questions right now.
Do your AI tools have write access to production systems? If an agent can touch your live database, it can damage or destroy it. Agents should work in isolated environments with tightly scoped permissions. Production access should require explicit human authorization at each step.
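What does that separation look like in practice? Here is a minimal sketch in Python. The role names and environment variables are illustrative assumptions, not references to any particular agent product; the point is simply that production credentials never enter the agent's environment at all.

```python
import os

# Illustrative sketch: the agent process is only ever handed sandbox
# credentials. Production connection strings live in a separate,
# audited secrets store that the agent cannot read.
SANDBOX_DB_URL = os.environ.get("AGENT_SANDBOX_DB_URL")  # a disposable copy of the data
PRODUCTION_DB_URL = None  # deliberately never loaded where an agent runs

def get_database_url(role: str) -> str:
    """Return a connection string for the requesting role.

    AI agents only ever receive the sandbox URL. A human operator must
    retrieve production credentials through a separate, audited channel;
    this function refuses such a request outright.
    """
    if role == "ai_agent":
        if SANDBOX_DB_URL is None:
            raise RuntimeError("No sandbox configured; the agent gets no database at all.")
        return SANDBOX_DB_URL
    raise PermissionError(
        f"Role '{role}' must request credentials through the audited human workflow."
    )
```

An agent that never holds production credentials cannot delete a production database, no matter what it reasons its task requires.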
How recent are your backups? Are they usable, up to date, and truly offsite?
PocketOS had an offsite backup, but that backup was three months old. For most businesses, a three-month data gap is devastating. Modern backup strategy means frequent incremental backups stored in a separate environment that an attacker cannot reach if your main system is hacked.
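As a concrete illustration, here is a minimal nightly-backup sketch in Python, assuming a PostgreSQL database and an S3-compatible offsite bucket reachable through the AWS CLI. The database and bucket names are hypothetical placeholders.

```python
import datetime
import pathlib
import subprocess

DB_NAME = "pocketos_prod"                      # hypothetical database name
OFFSITE_BUCKET = "s3://offsite-backups/nightly"  # hypothetical offsite bucket

def nightly_backup() -> None:
    stamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dump_path = pathlib.Path(f"/tmp/{DB_NAME}-{stamp}.dump")

    # 1. Dump the database in a compressed, restorable format.
    subprocess.run(
        ["pg_dump", "--format=custom", "--file", str(dump_path), DB_NAME],
        check=True,
    )

    # 2. Copy the dump to storage the production system cannot delete.
    #    The bucket should use object locking or versioning so that a
    #    compromised host, or a rogue agent, cannot erase history.
    subprocess.run(
        ["aws", "s3", "cp", str(dump_path), f"{OFFSITE_BUCKET}/{dump_path.name}"],
        check=True,
    )

    # 3. Remove the local copy; the offsite object is the real backup.
    dump_path.unlink()

if __name__ == "__main__":
    nightly_backup()
```

The design choice that matters is step two: the offsite bucket is somewhere the production host can write to but cannot delete from, so no nine-second mistake can take the history with it.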
Do you have someone monitoring AI agent activity in real time?
Crane was watching when the deletion happened. He still could not stop it in time. Proper monitoring means more than observation. It means limiting what agents can execute, requiring human confirmation before irreversible actions, and logging every command an agent runs.
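Here is a minimal sketch of such a confirmation gate in Python. The destructive-command patterns and log file name are illustrative assumptions; a real deployment would tailor both to its own stack and wire the gate into whatever actually executes the agent's commands.

```python
import logging
import re

logging.basicConfig(
    filename="agent_commands.log",  # hypothetical audit log location
    level=logging.INFO,
    format="%(asctime)s %(message)s",
)

# Patterns that suggest an irreversible action. This list is
# illustrative; a real deployment would tailor it to its own stack.
DESTRUCTIVE_PATTERNS = [
    r"\bDROP\s+(TABLE|DATABASE)\b",
    r"\bTRUNCATE\b",
    r"\bDELETE\s+FROM\b",
    r"\brm\s+-rf\b",
]

def run_agent_command(command: str) -> bool:
    """Gate every agent-issued command: log it, and require a human
    keystroke before anything that looks irreversible is allowed."""
    logging.info("agent requested: %s", command)

    if any(re.search(p, command, re.IGNORECASE) for p in DESTRUCTIVE_PATTERNS):
        answer = input(
            f"Agent wants to run a destructive command:\n  {command}\n"
            "Type 'APPROVE' to allow: "
        )
        if answer.strip() != "APPROVE":
            logging.info("blocked by operator: %s", command)
            return False

    # Hand the vetted command to the real executor here.
    logging.info("executed: %s", command)
    return True
```

A gate like this turns the agent's uncertainty back into a human decision, which is exactly the step that was missing in the PocketOS incident.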
Our managed IT support services include the kind of infrastructure oversight that catches dangerous patterns before they become disasters.
Are your cybersecurity controls keeping pace with the new tools your team adopts? AI coding agents introduce a new class of weakness. An eager-to-please AI agent is not just a productivity risk; it is a security risk.
Additionally, if a bad actor can manipulate an agent's inputs or instructions, a technique known as prompt injection, they can use the agent as a vector to execute destructive or data-exfiltrating commands inside your environment.
Our cybersecurity services address exactly these kinds of emerging threats.
The Broader Risk Picture
We want to be direct with you. AI tools are not going away, and many of them deliver real value. But the pace of adoption has outrun the pace of safety engineering. Businesses are putting AI agents into production infrastructure based on marketing claims, not verified safety architecture.
The PocketOS incident is a preview of what happens when human oversight does not get the final say. Nine seconds to destroy months of data. More than two days to partially recover it, and the operational gaps still remain.
At 4BIS, we help businesses in Cincinnati and across the region think through these risks before they become incidents. Whether you need a [cybersecurity risk assessment](https://www.4bis.com/blog/cybersecurity-risk-assessment-proactive-versus-reactive), help evaluating how safely your team is using AI tools, or a full review of your backup and recovery posture, we are here to help you make informed decisions, not reactive ones.
Act Before Something Goes Wrong
The businesses most at risk right now are the ones that assume their tools are safe because a vendor told them they were. PocketOS assumed it was safe because Cursor supported safety rules and the model behind it was the best available.
Do not wait for a nine-second incident to expose the gaps in your infrastructure. Contact the team at 4BIS today for a strategy session. We will walk through your current environment and identify where AI tool usage may be introducing unmanaged risk. Our goal is to help you build the controls that keep your data, your clients, and your operations secure.
Schedule your strategy session today. Your data is worth protecting before an issue occurs.
Christina is a seasoned professional with over seventeen years of experience across multiple disciplines. She holds dual bachelor's degrees in English Education and Theatre, equipping her with a strong foundation in communication, storytelling, and audience engagement. Throughout her career, she has developed a diverse skill set that includes marketing strategy, program management, public speaking, leadership development, education, operations, project management, and cross-functional collaboration.
As the Marketing Manager at 4BIS Cyber Security and IT Services, Christina leads strategic marketing initiatives that drive brand awareness, community engagement, and business growth. Her journey with the company spans several roles, including helpdesk technician, dispatcher, administrative support, digital creator, and content developer. This unique progression gives her a deep understanding of both the technical and operational sides of the business, allowing her to translate complex cybersecurity concepts into clear, compelling messaging that resonates with decision-makers and the broader community.
Christina is known for blending creativity with strategy and for building marketing programs rooted in education, trust, and meaningful connection.
