It's interesting that as engineers, people who genuinely love reasoning about systems, we are still surprisingly susceptible to a particular logical trap. It goes something like this:
- A large company has a very specific, complex problem
- They build a tool to solve it and open source it
- People adopt that tool, assuming it must be the right way and that by using it they too will become that large company
It's an understandable mistake, especially in an industry this fixated on new things and "best practices" (which are often just opinions that got popular). But the most important engineering question you can ask is a boring one: is this the right tool for the job?
With that, let's talk about SPAs.
Where did it all come from?
A little history. Originally there was only raw JavaScript, and one of the big problems was browser compatibility. Different browsers implemented things differently, things broke constantly, and a lot of time went into hacks just to make basic interactions work consistently.
jQuery solved a lot of this. It was a unified layer you could drop into any page and do things reliably. It had its quirks, but it fixed enough problems that people started building far more complicated things on top of it. Server-side rendering via PHP handled the heavy lifting, but the pages it produced were static once delivered, so doing anything dynamic meant a full page reload, which people didn't want.
Around this time, JavaScript escaped the browser entirely. Node.js hit the scene along with proper templating engines: fast, lightweight, structured. We could finally build real server-rendered applications in JavaScript without the spaghetti of jQuery event handlers holding everything together.
So what happened next?
Angular, and a few years later, React.
Why? Application-level complexity. The big players had long since moved from websites to full-blown interactive applications like Google Maps, Google Docs and Gmail. These products had enormous amounts of state to manage across the page, and templates were no longer sufficient. Angular gave the front end a proper application framework. React came later as a direct critique of Angular's two-way binding model. Facebook wanted something where state changes were predictable and traceable, partly driven by the need to manage infinite scroll and keep users on a single page indefinitely.
Both solved real problems at the companies that built them.
The frenzy
People loved these frameworks. Engineers begged to use them, made bold promises, and delivered on them. The front end finally felt like proper software development rather than visual hacks stapled onto HTML. Over time the skills became ubiquitous, and using anything else started to feel peculiar, like suggesting someone use CGI scripts in 2015.
The hangover
Much like Marlon Brando, SPAs got big.
The node_modules folder became a running joke: hundreds of megabytes for a web project, as large libraries depended on other large libraries, sometimes at conflicting versions, all of them needing to be bundled and shipped. Security compliance became a nightmare; the npm ecosystem was still maturing and the dependency graph was a spider's nest of potential issues. The left-pad incident became a cautionary tale about how fragile it all was.
The shipped JavaScript itself could easily reach 10MB or more before you'd added any of the analytics and tracking tools that are now industry standard. Advocates would correctly point out that this is only a first-load problem; once cached, navigating around is fast. But that first load hit the people most sensitive to load times the hardest: first time users, on slower connections, often on lower-end hardware.
The Slow User Paradox
Here's a pattern that should make you uncomfortable.
You've built something. Average load times look fine in your analytics. You're not worried. But what you're not seeing is the users who gave up before your page finished loading. They're not in your analytics, because they never loaded it. Your metrics look healthy because you've quietly filtered out everyone your product failed.
This is what Chris Zacharias discovered when he worked on YouTube's watch page. The page had ballooned to 1.2MB, and a colleague famously pointed out that entire Quake clones were being written in under 100KB and they had no excuse. Zacharias spent three days hand-optimising everything, swapped the Flash player for his newly-written HTML5 player, and got the page down to 98KB across just 14 requests. He called the project Feather.
After a week of data collection, the numbers were baffling. Average page load time had increased.
He was about to roll it back when a colleague noticed the geography. When plotted regionally, every single region showed faster load times under Feather. The global average went up because an entirely new population of users across Southeast Asia, South America, Africa, and Siberia could now actually load YouTube at all. The 1.2MB page had been taking over twenty minutes to load in those places. Even at two minutes, Feather made watching a video a real possibility. Word spread, usage surged, and the aggregate metric masked what was actually a significant improvement for millions of people.
Entire populations had been invisibly excluded by page weight. They never complained. They just didn't show up.
If you want global reach, this matters.
Everything old is new again
So how did the industry respond to the first-load problem?
First, minification and code-splitting: bundling JavaScript into logical chunks and compressing what goes over the wire, so the initial payload is smaller. Better than nothing, but it adds build complexity and doesn't fundamentally change the model.
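To make that concrete, here is a minimal sketch of code-splitting with a dynamic import. The charts.js module and element IDs are made up for illustration, and exact chunking behaviour depends on your bundler:

```js
// Hypothetical example: the heavy charting module is split into its own
// chunk and only fetched when the user actually asks for the report,
// keeping it out of the initial payload.
document.getElementById("show-report").addEventListener("click", async () => {
  const { renderChart } = await import("./charts.js"); // fetched on first click
  renderChart(document.getElementById("report"));
});
```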
Second, moving rendering back to the server by pre-rendering pages server-side and hydrating them on the client, so users see something immediately rather than a blank screen while the JavaScript bootstraps. React Server Components took this further, running entire components on the server and sending only HTML to the client, with no component JavaScript shipped at all.
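For illustration, this is roughly what the render-then-hydrate handshake looks like using React's own APIs; the App component is a stand-in and this is a minimal sketch rather than a full framework setup:

```jsx
// server.jsx (sketch): render the component tree to HTML and send it in
// the response, so the user sees content before any JavaScript runs.
import { renderToString } from "react-dom/server";
import App from "./App";

export function renderPage() {
  return `<div id="root">${renderToString(<App />)}</div>`;
}
```

```jsx
// client.jsx (sketch): attach React to the HTML that is already there,
// wiring up event handlers instead of re-rendering from scratch.
import { hydrateRoot } from "react-dom/client";
import App from "./App";

hydrateRoot(document.getElementById("root"), <App />);
```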
Is this sounding familiar yet?
We spent fifteen years moving rendering to the client, discovered the costs, and have spent the last five years carefully moving it back to the server. The wheel has turned.
Lessons learned
Most websites are not applications. They're content, forms, product pages, documentation. They don't have the state complexity that motivated React and Angular in the first place.
The browser itself has matured enormously. Native form validation, smooth CSS transitions, fetch, IntersectionObserver, custom elements. Things that required libraries a decade ago are now standard and consistent across every modern browser. The baseline you're building on is far stronger than it was.
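As a small illustration of how much of that is now built in (the field names and image path below are arbitrary), native form validation and IntersectionObserver each replace a job that used to mean pulling in a plugin:

```html
<!-- Native validation: the browser blocks submission and shows its own
     error messages; no validation library needed. -->
<form>
  <input type="email" required placeholder="you@example.com">
  <button>Subscribe</button>
</form>

<img data-src="/photos/large.jpg" alt="A photo, loaded lazily">

<script>
  // IntersectionObserver: load the image only when it scrolls into view,
  // work that once required a scroll-handler plugin.
  const img = document.querySelector("img[data-src]");
  new IntersectionObserver((entries, observer) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        entry.target.src = entry.target.dataset.src;
        observer.unobserve(entry.target);
      }
    }
  }).observe(img);
</script>
```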
Lighter means faster, more accessible, better SEO, and a smaller attack surface. These aren't soft benefits. They compound.
What I'm doing about it
The main thing I've changed is reaching for lighter tools by default. In particular, I really like HTMX. The idea is simple. HTML already has a model for updating the page: links navigate, forms submit. HTMX just extends those natural controls so that any element can make HTTP requests and swap parts of the page. You write server-rendered HTML, sprinkle in a few hx- attributes, and get 80% of the SPA experience at a fraction of the complexity and weight. There's no build step, no state management library, no hydration puzzle. The server stays in charge of what the page contains.
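A minimal sketch of what that looks like (the /search endpoint and element IDs are invented for the example):

```html
<script src="https://unpkg.com/htmx.org@1.9.12"></script>

<!-- Clicking the button issues a GET request; the HTML fragment the
     server returns replaces the contents of #results. -->
<button hx-get="/search?q=spas" hx-target="#results" hx-swap="innerHTML">
  Search
</button>

<div id="results"><!-- server-rendered fragment lands here --></div>
```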
This blog is pure HTML and CSS. It loads fast, it's accessible, it has essentially no attack surface, and it works for anyone on any connection. That isn't a compromise; it's the whole point.
I'm not against SPAs. Google Docs should be an SPA. Figma should be an SPA. If you're building something with the complexity of a desktop application in the browser, use the tools designed for that complexity.
But if you're building a website, build a website.