AI Coding Got Boring

Arlo Gilbert

I've been writing software for forty-five years. The last three months are the first time the work has been boring.

Not bad. Not broken. Boring.

The arc is predictable now. You start using AI for autocomplete, then for whole functions, then for entire features. Each step gives you a dopamine hit because you watch a thing happen and you didn't have to do all the typing. Then you graduate to fully agentic loops. You write a spec, the agents ideate and plan, you approve, you go make coffee, and a few hours later there's a tested, documented, ready-to-ship feature waiting on your desk.

The first time that happens it feels like a small miracle. The fifth time it feels like an email arriving. By the twentieth it just feels like work, except you're not doing the part that used to make work feel like making things.

I've been talking to other builders about this for a few months. Founders, senior engineers, people who got into this because they liked the way it felt to type code and watch a machine do what they told it to. The same observation comes back almost every time: the agents got good, and the job got duller.

What changed

You used to choose to leave the keyboard. Now the keyboard leaves you.

That's worth sitting with for a second. Going from individual contributor to engineering manager has always carried a satisfaction tax. Plenty of senior engineers will tell you they took a meaning hit the year they got promoted. The thing that's different now is that we didn't sign up for it. AI agentic pipelines flipped every IC into a manager whether or not we wanted that job. The tradeoff used to be a career decision. Now it's a tooling decision.

The hours of contact with the material are mostly gone. The thing you actually liked, the part where you sat with a problem and slowly turned it into a working thing, has been quietly carved out. What you have left is reviewing PRs the agent generated, approving plans, asking the agent to try again, and occasionally diving in when something is sufficiently weird that the agent gives up. Most of those moments are not the moments that gave the work its texture.

This already happened to translators

Translators went through this on a five-year clock between 2016 and 2020.

Neural machine translation went from research curiosity to production-grade in roughly five years. By 2020, most commercial translation in major language pairs ran through an MT engine first. Most translators kept their jobs. The work shifted from rendering an idea in a new language to fixing the things a machine got slightly wrong. They went from authors to post-editors.

Joss Moorkens at Dublin City University wrote a paper in 2020 called "A tiny cog in a large machine" about what this did to professional translators. The findings are not subtle. Post-editors report lower job satisfaction, faster fatigue, and compressed rates. A bifurcation has formed between elite specialists doing transcreation and literary work, and a much larger commodity layer post-editing machine output. The European Language Industry Survey shows roughly 40% of freelance translators reported declining income year over year by 2023. The profession survived. The craft mostly didn't.

Bainbridge saw this coming in 1983

Lisanne Bainbridge published a paper called "Ironies of Automation" in Automatica in 1983. She was writing about aviation autopilots and process control. Forty-three years later, the paper applies cleanly to AI coding.

Her argument is that automation creates a paradox. Take away the easy parts of a job, and you make the remaining hard parts harder. The human who has to handle the hard parts has lost recent practice on the easy parts. The pilot who hand-flies for three minutes per flight is the one who has to take over when the autopilot disengages at altitude in a thunderstorm. They've lost the muscle memory that made them good at the rare hard moment.

That's the developer reviewing a 2,000-line PR the agent produced for a feature they didn't write a single line of. They have to evaluate logic they didn't author, in a part of the codebase they're seeing fresh. The requirements came from the agent's interpretation of a spec written at 9am. The hard part of the work has gotten harder. The easy parts that built the skill to handle the hard part are gone.

Pilots have been managing this paradox for decades with mixed results. Air France 447 and Asiana 214 are the canonical case studies of what happens when the residual skill atrophies and the system needs you. We get to learn the same lessons now, on a faster timeline, with less institutional memory.

Where this goes

I don't have a tidy answer here, and I'm wary of any post that pretends it does.

The honest options are limited. You can pivot to harder problems where the agents still struggle and you still get to make things, knowing that window is closing. You can accept that the work has changed shape and find the joy somewhere else, in mentoring or founding or building the discipline. Or you can stay in oversight, and be honest with yourself that it isn't the same job you signed up for.

Sysadmins went through something comparable when the cloud showed up in the early 2010s. The tedious craft of tending servers got abstracted away. What saved that profession from becoming the next translator story was that a coherent new discipline emerged with a name and a canonical text: site reliability engineering, the O'Reilly book, a real job ladder. Sysadmins came out the other side because the industry built a new craft to absorb the displaced one.

We don't have that yet for AI orchestration. There's no canonical text. There's no SRE-style discipline. There are pundit threads and hot takes and a few thoughtful builders trying to figure out what good looks like. We're early enough that the shape is still up for grabs.

I think the joy comes back if we build the new discipline. Not all the way back. But enough. Translators didn't get that and the data shows what happened. Pilots got something like it through crew resource management and human factors training, and they're still arguing about it forty years on.

For now I'll keep watching the agents do the work that used to feel like mine. And I'm paying attention to which engineers come out of this period still loving what they do, because they're the ones who'll figure out what the next thing looks like.

It will probably not feel like typing.
