SPA vs. Hypermedia: Real-World Performance Under Load (zweiundeins.gmbh)
44 points by todsacerdoti 1 day ago | 10 comments



The point about Brotli compression being extremely efficient for SSE is a great insight.

It means applications that send “dumb” HTML snapshot updates instead of “optimized” payloads can actually be more efficient while massively simplifying the architecture.
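A rough way to see why: successive HTML snapshots over one SSE connection are highly repetitive, and a streaming compressor keeps its window across messages, so each new snapshot costs only its delta. A minimal sketch using Python's stdlib zlib as a stand-in for Brotli (both behave this way on repetitive streams; the snapshot HTML here is made up for illustration):

```python
import zlib

# Two "dumb" full-HTML snapshots that differ in a single value,
# as an SSE stream might send them one after another.
snapshot_a = "<ul>" + "".join(f"<li>item {i:03d}</li>" for i in range(200)) + "</ul>"
snapshot_b = snapshot_a.replace("item 007", "item seven")

# Compressing each snapshot independently (no shared context).
independent = len(zlib.compress(snapshot_a.encode())) + len(zlib.compress(snapshot_b.encode()))

# Compressing both over one persistent connection: the compressor's
# window already contains snapshot_a, so snapshot_b costs almost nothing.
c = zlib.compressobj()
streamed = len(c.compress(snapshot_a.encode())) + len(c.flush(zlib.Z_SYNC_FLUSH))
streamed += len(c.compress(snapshot_b.encode())) + len(c.flush(zlib.Z_SYNC_FLUSH))

print(independent, streamed)  # the streamed total is far smaller
```

This also shows the flip side the next comment raises: the saving depends on the connection (and its compression context) staying alive.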


An optimized, fine-grained payload with compression can outperform the coarse approach. Coarse payloads add cost when swapped into the DOM, and fine-grained payloads don't require a complex architecture.

Aside from rendering, I have additional concerns about coarse SSE payloads. You basically lose any caching capability (though this is common to all update-streaming approaches). Uncompressed, the payloads are quite large, and the browser may not be able to dispose of that memory: Response objects, for example, need to hold all body data in memory for the lifetime of the Response, because they expose methods that return the whole body as combined views of the buffer. Also, the benefits of compression for SSE payloads are drastically reduced in situations where connections are easily dropped, since each reconnect starts with a fresh compression context.


I feel like the SPA vs. SSR debate misses the point: SPAs are most often web applications (as opposed to informational websites). I created SPAs as a contractor for 10+ years, and it has always been B2B web apps for large corporations. The users are always professionals who work with the app on a near-daily basis.

Since .js, .css and assets are immutable between releases (and modern tooling like NextJS appends hashes to the filenames so they can be served with 'Cache-Control: immutable'), the app is always served from the browser cache until there is a new release - which is usually weeks, not days, away. And if the browser cache happens to be empty, you are comparing a one-time 500ms-1s wait against an app you will use for hours that day. If, however, every link click, every route change, every interaction triggers an SSR server roundtrip, the app will not feel snappy during usage.
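The hashed-filename pattern described here can be sketched in a few lines (a simplified illustration of what bundlers like NextJS do, not their actual implementation): the build embeds a content hash in the filename, so the file's contents can never change under a given name, which is what makes 'Cache-Control: immutable' safe.

```python
import hashlib

def hashed_asset_name(path: str, content: bytes, digest_len: int = 8) -> str:
    """Embed a truncated content hash in the filename, bundler-style."""
    stem, _, ext = path.rpartition(".")
    digest = hashlib.sha256(content).hexdigest()[:digest_len]
    return f"{stem}.{digest}.{ext}"

name = hashed_asset_name("app.js", b"console.log('v1');")
# Unchanged content -> same name -> permanent cache hit.
# Changed content -> new name -> browser fetches the new file.
headers = {"Cache-Control": "public, max-age=31536000, immutable"}
print(name, headers)
```

A new release changes the bundle bytes, hence the hash, hence the URL - so no cache invalidation logic is ever needed.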

Now, if people chose the wrong tool for the job and use a 1MB SPA to serve a landing page, that is where things go wrong. But for me, metrics that include the download time of the .js/.css assets are pointless, as that download occurs once - relative to the total time of app usage. After the initial load, the snappiness of your SPA will mostly depend on your database queries and API performance, which is equally the case in an SSR solution. YMMV of course.


> if people chose the wrong tool for the job and use a 1MB SPA to serve a landing page, that is where things go wrong

That is exactly the case. Can't really blame people when every learning resource, React evangelist, tweet and post points them towards that.

> If however, every link click, every route change, every interaction triggers a SSR server roundtrip, the app will not feel snappy during usage.

SPAs still make the same roundtrips - often more of them, for API requests; we all know how endemic loading spinners have become. Rendering HTML on the server does not meaningfully affect response times. And frameworks like Datastar (used in this benchmark), htmx, and Alpine let you avoid full page loads.
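For illustration, the partial updates these libraries receive over SSE are just event frames carrying HTML. A minimal sketch of formatting one (generic SSE framing per the spec, not Datastar's specific event schema):

```python
def sse_frame(event: str, html: str) -> str:
    """Format an HTML fragment as a Server-Sent Events frame.

    Multi-line payloads must be split into one 'data:' line per line;
    a blank line terminates the frame.
    """
    data_lines = "".join(f"data: {line}\n" for line in html.splitlines())
    return f"event: {event}\n{data_lines}\n"

frame = sse_frame("update", "<div id='cart'>\n  3 items\n</div>")
print(frame)
```

The client-side library then swaps the received fragment into the matching DOM element - no full page load, and no client-side templating of JSON.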


I have almost never encountered an SPA where anything happened without a very perceptible delay. My Windows 95 PC on a Pentium literally felt snappier. On 100 MHz and 8 MiB of RAM...

Yeah, but the problem is that people don't just use a single webapp all the time. We all browse many different websites, each with payloads it wants us to download and run. So in practice we end up re-downloading bundles constantly - many containing the exact same libraries - but because they're bundled and minified per site, they're not cacheable across sites, so we fetch them over and over again.

Don't believe me? Check this out: https://tonsky.me/blog/js-bloat/


It's a nice article about a topic I care a lot about (the need for rediscovering hypermedia, and about the magnificent Datastar library/framework).

But I fear that the comparison is less SPA vs. Hypermedia than a specific NextJS app on Vercel versus a specific PHP app on a VPS. And that this will give people justifiable reason to dismiss the whole thing outright.

If anyone is inclined to do that, I really urge you to look at Datastar's website and examples to see what it is capable of.

Also, read infrequently.org for far more about SPA (especially React) vs. hypermedia + progressive enhancement.


The underlying architecture matters little when the SPA requires several seconds of main-thread work after downloading various "blobs" of minified JavaScript.

Hence the NextJS app on Vercel (a bloated mess). Fun fact: plain React or Svelte is performant - it's all the cruft added on top that makes it bloated and slow.

Was expecting to see HTMX, cool to see Datastar in the wild.

Seems like people are finally rediscovering how much they can do with a lot less. Hope the push for simplicity continues



