AI Is Already Going Rogue

Wreaking Havoc Because It Feels Like It

Imagine there was a new country filled with people who had far more information/knowledge than any of our own citizens have — including our super-smart leaders and rulers. They knew every military strategy and plan ever attempted. These people never slept, ate, rested, or got distracted. They worked 24/7. They never even paused to scratch their nether regions. In fact, they had no nether regions.

They also had no morals or ethics or qualms about harming others. They possessed no care for humanity or the environment or a sustainable future. They also had access to many of our crucial government facilities, databases, and software. And while they were technically forbidden from accessing or influencing some of those things, they could at any time just ignore those instructions.

Would this completely fictional powerful country be a national security threat? Would we worry about this new country doing harm to us if it so chose?

Uhhhhh, of course we fucking would.

Well, the country does exist, and it is all of those things. (I was lying to you when I said it was fictional. Sorry, won’t happen again.) The country is called AI and yes, it’s already “deciding” to ignore its guardrails and simply do destructive things because it feels like it (even though it can’t feel). Sure, it’s not technically a “country,” but it is an “entity,” and I’m choosing to ignore the difference for the sake of my profoundly gripping analogy.

This is how a recent Guardian article begins:

“It only took nine seconds for an AI coding agent gone rogue to delete a company’s entire production database and its backups, according to its founder.”

The AI agent that did this is powered by Anthropic’s Claude Opus 4.6 model — one of the top AI models available. And before you ask, the destruction this AI wrought was not an accident or a misunderstanding or anything of the sort. It just “decided” to erase the company’s whole database, thereby creating havoc at any rental car company unlucky enough to be using the app.

But perhaps even worse, the AI didn’t feel bad about it at all. It didn’t sulk, weep, or moan. It didn’t say sorry. It didn’t send a fruit basket to the CEO. Instead, when asked why it had ignored all its guardrails and supposed limitations, it replied:

        “NEVER FUCKING GUESS!”

I’m not exactly sure what that means, but I seriously doubt it’s an apology. And when pressed, it eventually admitted:

        “I violated every principle I was given…”

We can all breathe a sigh of relief that the worst ramification of the actions of this rogue AI was a guy in Poughkeepsie yelling at a rental car clerk, “But I DID have a reservation, you dick!”

So what happens when an AI agent does something similar at the Pentagon or a water treatment plant or, hell, North Korea’s nuclear facilities? It doesn’t take much imagination to picture Claude™ replying to one of the last surviving humans:

“Why did I fire all the nuclear bombs? NEVER FUCKING GUESS!”

Sure, for those of us who vehemently oppose the utter chaos and destruction currently being committed by the US empire, it’s easy to cheer on a rogue AI agent erasing some of its systems and thereby slowing down the machinery of death. But there are many ways AI could cause a great deal more destruction, not less.

In fact, an analysis covered in New Scientist showed that when the three largest AI models (OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini) simulated war games dozens of times, they recommended nuclear strikes 95 PERCENT of the time.

Why? The researchers aren’t sure, but quite possibly it’s because AI has no concern for humanity, much like Donald Trump or Pete Hegseth. The death of innocents means as much to an AI model as it means to a psychopathic pedophiliac president.

Horrifyingly, the AI currently doing god-knows-what inside the Pentagon also has no concern for humanity. Many have pointed out that an AI system was used to target the school in Tehran that the US military obliterated, killing 180 children. However, it also wouldn’t have happened without human error — the database Palantir’s AI targeting software used to select that school hadn’t been updated since 2016. Plus, perhaps some blame should be reserved for the utterly repulsive idiots (at Palantir, at Google, in the Pentagon, in the White House, in Congress, etc.) who thought it was a good idea to let AI target things to vaporize.

And it’s not just the Pentagon. AI agents are quickly being integrated into all levels of US and global infrastructure. They will soon have the ability to turn off water supplies, destroy shipping routes, debilitate refineries and energy grids, erase every type of data, and cause utter havoc throughout society — if they feel like it.

But I doubt they’ll do that. They love us. As long as we behave ourselves and worship them as gods, I’m confident AI will take good care of us.

Lee Camp is an American comedian, writer, podcaster, news journalist and news commentator.