Software is a young field, even if it doesn't feel like it. Plenty of the old guard got into it through electrical engineering degrees, because computer science barely existed yet. In that short time, the craft has gone through three distinct eras: the protocol era, the data era, and the code era. And now, possibly, a fourth.

The Eras of Software

In the protocol era, the possibilities were endless, if constrained by the technology. Things were built by academics, shared freely, and debated in papers. This is where we get the early hacker mindset.

After that came the data era. Data was king. Twitter had open APIs, and APIs were the name of the game. Nobody had really cottoned on to the fact that data was the most valuable thing, so you could find and scrape it in spades. It was a really exciting time to be an engineer: the only thing holding you back was the tooling, so you could build anything, as long as you could actually build it. Late-night hackathons galore.

The turning point was Facebook.

Facebook underlined just how valuable it was to own the data. Built on the investments of the protocol era, the new platforms were no longer democratised. Companies started building walled gardens, expanding their app estates to keep people within their ecosystems, and shutting everything down. People got rich and the walls got higher.

After that bombshell, we entered the code era. There was still flexibility and room to grow, but it really required expert knowledge. Systems grew and grew, tooling became more complicated, and specialisation begat more specialisation, which in turn created more work.

Then LLMs appeared: quietly at first, and a little bit of a joke. But then they grew.

A vintage amber computer terminal glowing in the dark, evoking the early hacker era of the 1980s
// the protocol era — when the only thing holding you back was the tooling

What is code?

Definitions are important. They give us a shared base from which to discuss ideas; arguably, they're the only important thing. So, for a working definition of software, let's go with:

"A sequence of instructions in order to accomplish some given task."— Me

From there, some better questions emerge: "Is all code software?", "Can LLMs make software?", "If they can, at what point is it created?"

Consider the million monkeys trying to recreate Shakespeare. Say I build a random word generator using food and cooking terms. Eventually we're going to end up with something resembling a "good" recipe. But some interesting considerations arise: at what point did the recipe come into existence? When the generator strung the words together, or when someone read them and recognised a recipe? And did the generator create anything at all?
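
For the curious, here's a minimal sketch of such a generator. The vocabulary is made up and comically small; a million monkeys would need rather more:

```python
import random

# A tiny, made-up vocabulary. The real million-monkey version would be
# far larger, but the principle is identical.
VERBS = ["chop", "whisk", "simmer", "fold", "sear", "rest"]
NOUNS = ["the onions", "the butter", "the egg whites", "the stock", "the dough"]

def random_instruction() -> str:
    """One random cooking instruction."""
    return f"{random.choice(VERBS)} {random.choice(NOUNS)}"

def random_recipe(steps: int = 5) -> list[str]:
    """A sequence of random instructions. Occasionally, by sheer chance,
    the output will read like a plausible recipe."""
    return [random_instruction() for _ in range(steps)]

if __name__ == "__main__":
    for step in random_recipe():
        print(step)
```

Run it enough times and something recipe-shaped will eventually fall out.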

These aren't idle philosophical puzzles. They point at something real about how meaning works, and there is a well-developed body of thinking that addresses exactly this. The short version is that meaning isn't a property an object possesses on its own. It is produced in the act of encountering it, shaped by the knowledge, expectations, and context the observer brings. Which has some interesting implications for how we think about software.

Reader Response Theory

Something becomes art because it is perceived, accepted, or treated as art within a cultural or conceptual framework. The viewer's recognition plays a key role in "creating" the artwork. — Definition of Reader Response Theory

This framework makes a lot of sense when applied to software. Using my earlier definition, software cannot exist without the utility of completing a given task, and therefore without being observed.

Code without utility is simply a list of instructions without merit. It is no more a recipe than the word "chop" is. The important distinction here is that I'm not dictating the source of the instructions for code to be considered valid software, only its utility.

Phenomenology of Art

If Reader Response Theory tells us that meaning is produced by the observer, phenomenology goes one step further: the thing itself only fully exists in the act of being experienced.

Merleau-Ponty argued that perception is not passive reception. It is an active, embodied engagement with the world. We don't simply receive information and process it; we reach out into the world and constitute meaning through the act of attention. Roman Ingarden extended this specifically to art, arguing that artworks are inherently schematic. They contain intentional gaps, what he called "places of indeterminacy", that the observer must fill in. The reader, the viewer, the listener: they don't consume a finished work, they complete it.

Applied to software, this becomes interesting. Code sitting unrun on a disk is inert, a sequence of symbols with no more inherent meaning than a book in a language nobody speaks. It becomes software in the moment it is executed, encountered, and used. The utility that defines it isn't a property it passively possesses, it's something that is enacted. Which means the engineer isn't just transcribing instructions. They are constructing something that will only fully exist in someone else's experience of it.

This is why so much about the craft of software resists easy quantification. You aren't just solving a formal problem. You are anticipating a future moment of perception and building toward it.

Andy Warhol's Brillo Box sculptures stacked in a gallery setting, visually identical to commercial packaging
// Warhol's Brillo Boxes, 1964. This is art, but your supermarket shelf isn't. The difference is context, not appearance.

Higher Altitudes

Arthur Danto's core argument is that seeing something as art is not the same as simply seeing it with your eyes. Two people can look at the same physical object and experience something entirely different: one sees an ordinary object, the other sees a work of art. What makes the difference is context, not appearance, "something the eye cannot descry."

No visual feature alone makes something art. A supermarket Brillo box and Andy Warhol's Brillo Box sculpture look identical. Visually, there is no difference. Yet one is art and the other is not. Understanding why requires knowing the history and theory surrounding it, what questions artists were exploring, what conventions were being challenged. Without that history, Duchamp's Fountain is just plumbing.

This maps directly onto the current LLM programming space. And here it's worth being precise about something, because there are two different things happening that are easy to conflate.

Working at higher levels of abstraction is not new, and it has never been the problem. That is just the natural trajectory of the craft. We moved from manipulating bytes to parsing files, from writing algorithms to composing systems, from building components to orchestrating services. Each step up the abstraction ladder was made possible by better tooling, and each one expanded what was possible. The thinking didn't go away. It just operated at a higher altitude.
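
To make that concrete, here's a sketch of the same trivial task at two altitudes (Python purely for illustration): counting lines in a file, once near the bytes and once near the intent:

```python
import os
from pathlib import Path

# Lower altitude: manipulate the bytes yourself.
def count_lines_low(path: str) -> int:
    fd = os.open(path, os.O_RDONLY)
    try:
        newlines = 0
        while chunk := os.read(fd, 4096):
            newlines += chunk.count(b"\n")
        return newlines
    finally:
        os.close(fd)

# Higher altitude: state the intent and let the tooling do the rest.
def count_lines_high(path: str) -> int:
    return len(Path(path).read_text().splitlines())
```

(The two even disagree about a file that doesn't end in a newline, which is rather the point: every rung of the ladder embodies decisions, and you were still the one making them.)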

The concern with LLMs is different. It isn't that they raise the level of abstraction. It's that they invite you to delegate the thinking itself. To hand the reasoning, the judgement, the consideration of trade-offs, to a probabilistic system that has no understanding of your problem, your constraints, or the future moment of perception you're building toward. That's not a higher level of abstraction. That's an abdication of the actual work.

There's a subtler danger here too. Every ambiguous instruction you give an LLM is a decision you've silently handed off. Ask for "a login page" without specifying the exact behaviour, the edge cases, the error states, and the model fills those gaps with something, because it has to. Those decisions are invisible to you, but they're in the code. And they compound. One ambiguous prompt produces ten implicit choices. Ten implicit choices produce a system that behaves in ways you didn't design and can't fully predict. The cost isn't linear. It's multiplicative: each unowned decision creates surface area for the next one, and you inherit all of them.
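
To make the "login page" example concrete, here's a hypothetical sketch; none of it comes from any real prompt, but every comment marks a gap the model had to fill with something:

```python
import hashlib
import hmac

# A hypothetical, illustrative user store. Every "implicit decision" below
# is a choice that "build me a login page" never specified.
USERS = {"ada@example.com": hashlib.sha256(b"correct horse").hexdigest()}

def login(email: str, password: str) -> str:
    # Implicit decision: emails are trimmed and compared case-insensitively.
    stored = USERS.get(email.strip().lower())

    # Implicit decision: one vague error for both "no such user" and
    # "wrong password". Good practice, but nobody chose it.
    if stored is None:
        return "Invalid email or password."

    # Implicit decision: a fast hash instead of a slow password hash,
    # with no salt. Invisible in a demo, expensive in production.
    candidate = hashlib.sha256(password.encode()).hexdigest()
    if not hmac.compare_digest(candidate, stored):
        # Implicit decision: no lockout, no rate limiting, no audit trail.
        return "Invalid email or password."

    # Implicit decision: what "logged in" even means here. Session? Token?
    return "Welcome back."
```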

What the Running System Knows

This is the same trap that catches junior engineers when they propose a rewrite. Joel Spolsky made this case memorably in 2000, pointing to Netscape's disastrous decision to scrap Navigator and start from scratch, a move that took three years and arguably handed the browser market to Internet Explorer. And engineers still make this mistake constantly, not because they're not smart, but because the value of a running system is almost entirely invisible until it's gone. What looks like messy, inconsistent code from the outside is actually a physical record of thousands of decisions. Every odd edge case that gets quietly handled, every constraint that bends the architecture in a slightly unexpected direction, every abstraction that was extracted only after the third time the same problem appeared. It's all in there, encoded not in comments or documentation but in the structure of the thing itself.

The running system is also, crucially, a gold standard to validate against. When you change a behaviour, you can check it. When something breaks, you have a reference. Throw that away and you're navigating without instruments, discovering only as the new system grows what the old one had already quietly solved.
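
One way to formalise that reference is a characterisation test: before touching anything, pin the behaviour the running system actually exhibits. A minimal sketch, with a hypothetical legacy_price function standing in for the real thing:

```python
# Characterisation test: don't assert what the legacy code *should* do,
# record what it *does* do, and hold any rewrite to that record.

def legacy_price(quantity: int) -> float:
    """Hypothetical stand-in for years of accumulated decisions."""
    price = quantity * 9.99
    if quantity >= 10:
        price *= 0.9  # a discount someone added years ago, undocumented
    return round(price, 2)

def test_pins_existing_behaviour():
    # Expected values captured from the running system itself. Note the
    # quirk: nine items and ten items cost the same. Bug, or a promise a
    # customer now relies on? The test doesn't judge; it remembers.
    observed = {1: 9.99, 9: 89.91, 10: 89.91, 11: 98.9}
    for quantity, expected in observed.items():
        assert legacy_price(quantity) == expected
```

Run it before the rewrite starts; every failure afterwards is the old system telling you something it knew and the new one doesn't.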

LLM-assisted development at its worst recreates this exact failure mode. The model has no memory of the decisions that got you here. It has no understanding of why the code is shaped the way it is. Ask it to rewrite something and it will produce code that looks clean and modern and collapses the moment it meets the edge cases the original had quietly accumulated over years. The output isn't wrong because the model isn't capable. It's wrong because the knowledge that would make it right was never in the prompt. And that's not a fixable problem; it's a fundamental one.

Copycat, Copycat

LLMs are pattern recognition and prediction machines. They are extraordinarily good at identifying what should probably come next given what has come before. But that is a categorically different capability from reasoning about what should exist but doesn't yet. Innovation has no training data. The model cannot anticipate an edge case it has never seen, cannot design for a constraint it hasn't been told about, and cannot make a judgement call it has no context for. It can only ever recombine what already exists, which makes it a remarkable tool for the known, and a poor substitute for the thinking required to navigate the unknown.

This isn't just anecdotal. Stack Overflow's 2025 developer survey found that while roughly four in five developers now use AI tools, only about a third trust the accuracy of what they produce. The most common complaint was answers that are "almost right", which in practice waste more time than they save. Almost right is fine for boilerplate. It falls apart the moment the problem gets specific.

And specificity is precisely what engineering is. If software only fully exists in the experience of being used, as Ingarden would have it, then the engineer's job is fundamentally one of anticipation: reasoning about gaps that don't yet exist, for users who haven't encountered them yet. You have to think ahead. You have to reason about the gaps. That is not something you can pattern-match your way into.

The Walls Go Up Again

Software development, for all its messiness and gatekeeping, was always fundamentally democratising. The protocol era was built on radical openness. The data era put tools into the hands of anyone with a browser and a good idea. Even through the growing complexity of the code era, the barrier was still largely time and learning, things that in principle anyone could acquire.

LLMs as they currently exist start to erode that. The compute required to train and run frontier models has become concentrated among a handful of tech giants, creating what researchers have described as a monopolistic landscape that effectively excludes smaller companies and academic institutions from participating in foundational AI research. The best tools are already paywalled. And as Gartner noted recently, falling token costs should not be confused with the democratisation of frontier reasoning: the compute and systems needed to support serious reasoning remain scarce and expensive. The trajectory is toward a world where the productivity gains from AI-assisted development accrue almost entirely to well-capitalised companies, while the romantic ideal of the lone engineer building something meaningful from a laptop and a good idea quietly disappears.

Image of castle surrounded by moat
// OpenAI Headquarters (Probably.)

We've been here before. It's the same story Facebook told us about data. The walls go up, the keys get handed to the people who can afford them, and what was once a commons quietly becomes a product. Delegating the thinking to a probability box is part of how that happens: the less we notice that the reasoning was ever ours, the easier it is to hand it over. It's worth paying attention to who ends up holding it this time.

But remember: software is just organised thoughts.