When cameras were first introduced, French painter Paul Delaroche reportedly opined “From today, painting is dead.” That was nearly two hundred years ago, and painting is doing just fine. But the nature of painting changed. Photography took over the work of faithful reproduction, and painting became something else: more expressive, more conceptual, more human. I think something similar is happening with code.

LLMs Change Software Engineering But Do Not Replace It

I’ve been using coding agents like Claude Code to do most of my development these days, with overall great success. This is a stark contrast to even a year ago, when LLM output was still prone to hallucinations, formatting errors, and typos that broke the code it generated. Now I can’t remember the last time Claude Code output didn’t at least run.

One of the things I’ve come to appreciate about coding with LLMs is that I have greater control over how the code works and how it is designed. Before LLMs, implementing a design decision could take weeks or more, and refactoring one could take weeks or months. Now I can just tell the LLM to develop using web components, or via a classless framework, and it will happily go off and build these out. And if a decision was too ambitious, or not ambitious enough, or I changed my mind about something, changing the code is generally just a prompt away. The mechanical cost of trying an idea has collapsed. The thinking behind which idea to try has not.

And while I think this greater control over the design of software is a good thing, I recognize it is not automatic. When you prompt an LLM to generate a codebase, it will produce something. But what it produces is often an amateur’s version of what you wanted. It may throw in features you didn’t ask for, make assumptions about your architecture, or solve problems you don’t have. These additions can occasionally be helpful, but more often they’re overkill or simply noise.

As the recent Claude Code system prompt leak showed, LLMs are happy to create functions that are hundreds or thousands of lines long. And without guidance, they will keep growing these functions larger and larger. Few experienced engineers would ever write a single function that long; they would feel the weight of it, the unreadability, and break it apart. LLMs don’t have that instinct. They just keep going.

LLMs still work best when there is a skilled human in the loop who can evaluate, critique, and guide what the LLM is doing. The time and cost of code generation have been dramatically reduced, but the skill required to produce quality code has not gone away. Even if you prompt an LLM for “good design” or have it roleplay as a senior engineer, assessing whether its decisions are reasonable or over-engineered for what you’re building still requires taste.

The Future Of Software Engineering

Maybe that won’t always be the case. Maybe a year from now models will be that much better, and all the decisions and thinking that engineers do when producing code will be a more natural part of the LLM output.

But the fact that we all carry cameras in our smartphones does not make everyone a photographer, even though it means you no longer need a photographer to take a photo. Photography didn’t kill painting. It changed what painting was for. I suspect coding agents will do the same thing to software engineering.