AI Does Not Work
Dierk König

Unsupervised AI does not work

Recent experiences with using AI agents as replacements for software engineers have backfired spectacularly, and the infamous Amazon outage is only the tip of the iceberg.

No matter how much we blame the agent, it feels no shame. It doesn't get fired. It takes no responsibility, and it never will be able to.

The suggested solution is to make a person responsible for the agent's mistakes: senior engineers must sign off on all agent work. Eh - really?

Supervised AI does not work

Have you ever tried to make developers responsible for other people's code? If so, you know how they react.

And while this reaction might sound strange to non-technical personnel, it is absolutely justified. Writing the code is a totally different mental process from any after-the-fact analysis: you know all the ins and outs, the hidden assumptions, and the covered-up weak points.

So, if you want to make someone responsible, you had better let them write the code themselves.

But we have someone to blame

A pull request for 5 lines of code gets many suggestions for improvement. A 500-line pull request gets none. But the whole purpose of AI agents is to produce more code, which means more and larger pull requests.

Wading through lots of code is tiring, and a tired developer is more likely to miss subtle defects.

The whole activity has no prospect of being rewarding. There is no opportunity to shine. There is nothing to be proud of. Everybody just wants to be done with it.

Supervising a junior's code at least has the benefit that one can watch the junior learn and grow. No such prospect comforts the AI supervisor.

The expected net outcome is that AI code will at best be "shrugged off" into production. It is still essentially unsupervised, but now you have someone to blame: the poor engineer whom you first deprived of his profession, then saddled with a burdensome task, only to throw under the bus when convenient.