There’s a particular kind of pain that only software developers know.
Not the “production is down” kind of pain.
Not the “we deployed on Friday” kind of pain.
No, I mean the slow creeping dread of pulling the latest changes, running the tests, and realizing the codebase has been “improved” in a way that feels like someone rearranged your entire kitchen… but left all the knives on the floor.
And lately, this pain has been supercharged by AI tooling. Cursor, Claude, Copilot, Gemini, ChatGPT-driven agents (whatever your poison is this week) all share a similar behavioral pattern:
They can produce a stunning amount of output at an impressive speed… while quietly reshaping your system into something that looks correct but behaves like an alien artifact from a parallel universe.
The reason is simple: AI agents don’t “change code” the way humans do. They don’t naturally respect boundaries unless you explicitly enforce them. They operate like an overly enthusiastic intern with root access and no fear of consequences. To understand why this happens, we need to talk about the different types of code changes, and how AI tooling tends to drift toward the most dangerous ones.
So let’s name the beasts.