We’ve been adding mountains of JS to “feel” fast, while making everything slower.
I disagree that that is a bad thing. I’m not providing evidence here, so disagree if you wish. In my opinion, users don’t care if something is fast. They don’t take out stopwatches to time the page transitions. They care if something feels fast. They want the web page to react immediately when they issue an action, and have the impression that they’re not waiting for too long. Feeling fast is way more important than being fast, even if the feeling comes at a performance hit.
It takes about 120ms for a human to detect (not react to) a stimulus [1] (for the gamers: that’s roughly 8 FPS). So if your page responds to a user action within that time frame, it feels instantaneous.
If you want to try it yourself, paste this code into your browser console. Then click anywhere on the page and see if the delay of 125ms feels annoying to you.
```js
let isBlack = true;
document.body.addEventListener('mousedown', () => {
  setTimeout(() => {
    document.body.style.backgroundColor = isBlack ? 'red' : 'black';
    isBlack = !isBlack;
  }, 125);
});
```
I dunno. As a user, if you hand me code that takes 120ms for my CPU to execute and tell me there’s an alternative that takes a half to a third of that time, I’d look for someone who provides that instead.
No, I want it to actually be fast, and to stop using unholy amounts of RAM for basic tasks. I’m not pulling out a stopwatch, you’re correct, but when I try to use one of my still perfectly functional but older systems, I can feel the effects of all this JavaScript bullshit.
What would happen on unsupported browsers?
Hopefully the page would load just as well, but the transition would be less smooth.
If that allows the website to render fast, without JS, on all browsers, then it’s more than worth it.
Yes, Firefox users would just miss the transitions until it’s added to the browser. It’s absolutely worth it; JavaScript framework use has gotten out of hand, as the article says…
If only “fullstack” or “TypeScript” devs weren’t so scared of CSS. They can optimize a weird join, they know the Big O notation of a function that operates on a list that’ll never ever exceed a size of like 1,000 items (so who the fuck cares), but as soon as you ask them about grid, layers, container queries, or even what things like Houdini can hopefully do one day, they just collectively shit themselves.
What corporations and enterprise software development practices have done to the web makes me crash out, sorry. Very few people who actually care about CSS are making a living wage making things with it as their job.
Maybe I missed it, but how do I share state between page transitions? For instance, I’m currently displaying a user list. How do I carry the user over to the details page without re-fetching it, and more interestingly, without re-instantiating the User instance from the data?
I imagine (though I’m the first to admit that I don’t know every modern API by heart) I would have to move the user to local storage before allowing the browser to navigate. That sounds annoying, since I’m either persisting the whole user list, or I need to check which link the user clicked, prevent the navigation, store the relevant user, and then navigate manually.
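For what it’s worth, the manual version could look roughly like this (a sketch; the `users` array, the link markup, and the storage key are all assumptions for illustration):

```js
// Sketch: stash the clicked user, then navigate for real.
document.querySelectorAll('a.user-link').forEach((link) => {
  link.addEventListener('click', (event) => {
    event.preventDefault(); // stop the normal navigation
    const user = users.find((u) => String(u.id) === link.dataset.userId);
    sessionStorage.setItem('selectedUser', JSON.stringify(user));
    window.location.href = link.href; // now navigate manually
  });
});

// On the details page, read it back. Note that it arrives as plain data,
// not as an instance of the User class.
const user = JSON.parse(sessionStorage.getItem('selectedUser'));
```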
With an SPA the user list just lives in a JS variable. When I’m on the user’s details page, I just find the relevant user in that list without even making another HTTP request.
> How do I carry the user over to the details page without re-fetching it

Why are you making this restriction?
> without re-instantiating the User instance from the data

Same here: why are you making this restriction? You’re arbitrarily limiting your design to make sure that an SPA is the right choice. If you designed with a static page in mind, re-retrieving the user list would be instantaneous. You’d have the list partially or fully cached on the server, and a retrieval would be instant, faster than any SPA would switch contexts. Why does re-instantiating the User matter at all? Don’t store the entire state in the browser and you don’t need to re-instantiate so much.
Talking about users as the data that is displayed as well as the user who visits the site gets confusing. So I’m changing the viewed data to be customers for this discussion.
> How do I carry the [customer] over to the details page without re-fetching it
> why are you making this restriction?
Because re-fetching data that the client already has is wasteful.
> re-retrieving the [customer] list would be instantaneous
Nothing that goes over the wire is ever instantaneous. Even if I hit the cache, then I’d still have a round-trip to confirm that.
> faster than any SPA would switch contexts
For the apps I develop, latency is usually about 20ms. So you’re assuming that (given a 1GHz processor, i.e. a billion instructions per second, which is on the very low end) an SPA would need more than 20 million instructions to switch contexts?
> Why does re-instantiating the [customer] matter at all?
Because it is the frontend’s responsibility to display data, not the backend’s. The backend will, for instance, only store the customer’s birthday, but users might be interested in their age. It is not the backend’s responsibility to mangle the data and swap out the birthday for their age.
This is why my customers aren’t just data objects, but have methods to get, for instance, their age. They might also have a method to check how much they should pay for a product, given their other data.
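A sketch of what such a “smart” customer object might look like; the field names and the pricing rule are made up for illustration:

```js
class Customer {
  constructor({ name, birthday, discountRate }) {
    this.name = name;
    this.birthday = new Date(birthday); // the backend only stores this
    this.discountRate = discountRate;   // hypothetical field
  }

  // Derived in the frontend; the backend never computes this.
  get age() {
    const now = new Date();
    const years = now.getFullYear() - this.birthday.getFullYear();
    const hadBirthdayThisYear =
      now.getMonth() > this.birthday.getMonth() ||
      (now.getMonth() === this.birthday.getMonth() &&
        now.getDate() >= this.birthday.getDate());
    return hadBirthdayThisYear ? years : years - 1;
  }

  // Hypothetical pricing rule, just to show a method using other data.
  priceFor(product) {
    return product.basePrice * (1 - this.discountRate);
  }
}
```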
If I weren’t writing an SPA, then showing the expected cost for buying a product would require displaying a form (that was always there, but hidden via CSS) and having the user enter the product. Then they’d send off that form (refreshing the page in the process, which means downloading 90% unchanged HTML again for no reason). This refresh cannot even be sensibly cached or prefetched, because there are over 200 products to choose from. Confirming the booking would refresh the page again (though this time prefetching is an option).
If the user wants to go back to the customer list, pick a different customer, and do the same process again, we’re at 4 more requests that only download data the client should already have.
Also notice that the backend had to render a <select> with 200 options just in case the user wanted to book a product. How would you facilitate searching this on a static page? Refresh the page again each time the user presses a button?
Compare this to an SPA:
The client downloads the instructions on how to draw the customer list, and the customer data. Then the client downloads the instructions on how to draw the customer details (without all the forms, because most of them won’t be needed, which saves bandwidth).
Then the user clicks the ‘buy product’ form, which triggers two requests: one for the instructions on how to render the form, one for the products list. The client can then compute the price themselves (using the smart customer object).
If the user confirms, all that needs to be sent is a tiny “book this” message. The server doesn’t even need to respond with data because the client can compute all the changes itself.
If the user wants to do this process on another customer, or 100 customers, then it only needs to re-send the ‘book this’ messages for each customer, because all the rest is cached.
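Sketched in code, that flow might look like this (the endpoint paths and payload shape are assumptions, and the snippet is assumed to run inside an async function):

```js
// Fetched once, then reused for every customer (the cached part).
const products = await fetch('/api/products').then((r) => r.json());

// The price is computed locally by the smart customer object; no request.
const price = customer.priceFor(products[0]);

// Confirming only sends a tiny "book this" message. The client updates
// its own state, so the response doesn't need to carry any data.
await fetch('/api/bookings', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ customerId: customer.id, productId: products[0].id }),
});
```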
The customer list is a table with 2,000 entries, sortable by 10 different columns. How would sorting work? Because the client doesn’t have access to the ‘raw’ data, only the HTML, it couldn’t sort that itself. Each sorting operation would need to be a request to the server and a re-render of the whole page. Do you expect the client to pre-load all 20 single-column sorting schemes, let alone every multi-column combination?
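For contrast, here is roughly what sorting costs when the raw data already lives in a JS variable (`customers` and `renderTable` are assumptions):

```js
// Any of the 10 columns sorts locally: no request, no full-page re-render.
function sortBy(customers, column, ascending = true) {
  const sign = ascending ? 1 : -1;
  return [...customers].sort((a, b) =>
    a[column] < b[column] ? -sign : a[column] > b[column] ? sign : 0
  );
}

renderTable(sortBy(customers, 'lastName')); // renderTable is assumed
```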
> Because re-fetching data that the client already has is wasteful.
From your point of view, sure. From the point of view of a user with a device with <2GB of memory, I guarantee they think your app is being incredibly wasteful. For users that have a bunch of tabs open, I guarantee they think your app is being incredibly wasteful. Multiply that by how many users you have and I guarantee you’re wasting much, much more energy than you’re saving, because you’ve now distributed your wasteful habits across all your users rather than your singular server or Kubernetes cluster or whatever you’re running.
> Nothing that goes over the wire is ever instantaneous. Even if I hit the cache, then I’d still have a round-trip to confirm that.
You can cache with localStorage, or cookies, or at the CDN layer, or a thousand different solutions. You’re thinking very narrowly.
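As one example among those thousand, a localStorage-backed cache is a few lines (the key, TTL, and endpoint are placeholders):

```js
// Sketch: serve the customer list from localStorage while it's fresh.
async function getCustomers(maxAgeMs = 60_000) {
  const cached = JSON.parse(localStorage.getItem('customers') ?? 'null');
  if (cached && Date.now() - cached.storedAt < maxAgeMs) {
    return cached.data; // no round-trip at all
  }
  const data = await fetch('/api/customers').then((r) => r.json());
  localStorage.setItem(
    'customers',
    JSON.stringify({ storedAt: Date.now(), data })
  );
  return data;
}
```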
> For the apps I develop, latency is usually about 20ms. So you’re assuming that (given a 1GHz processor, i.e. a billion instructions per second, which is on the very low end) an SPA would need more than 20 million instructions to switch contexts?
Latency is 20ms on your machine. A developer machine, most likely running on a fiber connection, maybe Ethernet, not mobile or a mobile hotspot.

It’s also probably hitting that latency with next to zero tabs open, likely measured with a profiler turned on that frees up memory from the rest of your tabs to ensure a clean measurement.
> Because it is the frontend’s responsibility to display data, not the backend’s. The backend will, for instance, only store the customer’s birthday, but users might be interested in their age. It is not the backend’s responsibility to mangle the data and swap out the birthday for their age. …
This has nothing to do with being an SPA. You can do the same with simple JavaScript.
> If I weren’t writing an SPA, then showing the expected cost for buying a product would require displaying a form (that was always there, but hidden via CSS) and having the user enter the product. Then they’d send off that form (refreshing the page in the process, which means downloading 90% unchanged HTML again for no reason). This refresh cannot even be sensibly cached or prefetched, because there are over 200 products to choose from. Confirming the booking would refresh the page again (though this time prefetching is an option).
No, it wouldn’t. You’re confusing “SPA” with “no JavaScript”. You have plenty of options that do not require 50MB of JavaScript and still allow all of the features you’re listing here.
> Also notice that the backend had to render a <select> with 200 options just in case the user wanted to book a product. How would you facilitate searching this on a static page? Refresh the page again each time the user presses a button?
Yeah, it’s very clear you have confused “static” with “no JavaScript”. My own personal website is static. It has JavaScript. “Static” != “no JavaScript”, and “no SPA” != “static”.
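As an example of “static but with JavaScript”: filtering 200 server-rendered options takes a handful of lines, no SPA required (the markup with `#products` and `#product-search` is an assumption):

```js
// Assumes the server rendered <li> entries inside <ul id="products">
// and a text input with id="product-search".
const items = document.querySelectorAll('#products li');
document.getElementById('product-search').addEventListener('input', (e) => {
  const query = e.target.value.toLowerCase();
  items.forEach((li) => {
    li.hidden = !li.textContent.toLowerCase().includes(query);
  });
});
```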
> If the user wants to go back to the customer list, pick a different customer, and do the same process again, we’re at 4 more requests that only download data the client should already have.
No, it wouldn’t. Your browser already knows how to cache this; it happens on almost every request you make every day on non-SPA websites.
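For instance, the server only has to send ordinary caching headers and the browser does the rest. A sketch using Node’s built-in http module (the max-age and ETag value are arbitrary):

```js
import { createServer } from 'node:http';

createServer((req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/html',
    // Reuse for 5 minutes without asking; afterwards the browser
    // revalidates cheaply (a full version would answer If-None-Match
    // requests with 304 Not Modified).
    'Cache-Control': 'max-age=300',
    ETag: '"customer-list-v1"', // normally derived from the content
  });
  res.end('<html><!-- customer list --></html>');
}).listen(8080);
```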
> The client downloads the instructions on how to draw the customer list, and the customer data. Then the client downloads the instructions on how to draw the customer details (without all the forms, because most of them won’t be needed, which saves bandwidth).
Except you’ve now downloaded 50MB of data that the user isn’t using. They simply needed the customer data, and for it to be displayed and selectable.
I’m not really gonna respond to the rest of your message, because I read over it and it’s more misunderstanding of what an SPA is and what a ‘normal’ website might do (hint: you can use all the normal web technologies and still not be an SPA, including things like WS, SSE, AJAX, whatever you want).
Thanks for the response though, it really did help me understand why so many developers continually choose SPAs over just building a normal website.
SPAs are just fine.
Web dev is a never-ending cycle of people “well actuallying” constantly to invent new shit, which will be “well actuallyied” 2 weeks later.
In your case, the user list would be rendered by the server and the client wouldn’t care; it would receive a freshly rendered copy when you changed pages.
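A sketch of that server-side rendering with plain template strings (no framework; the data source is assumed, and real code would HTML-escape the values):

```js
// The server turns the user list into HTML; the client just displays it.
function renderUserList(users) {
  const rows = users
    .map((u) => `<li><a href="/users/${u.id}">${u.name}</a></li>`)
    .join('\n');
  return `<ul>${rows}</ul>`;
}
```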
It seems like their argument was all just sites that should have been fully static to begin with, and for some reason they’ve made it sound like that’s the main use of SPAs. It’s a silly article and I wouldn’t change anything I’m doing based on it. If your site is a content-based site (showing docs/articles/etc.), then you shouldn’t be using an SPA, especially just for page transitions. Otherwise you have a valid use for an SPA and should continue regardless of these new APIs.
I get what you are asking for but I don’t think it is even necessary to have a list of users on the client in the first place using the MPA approach. I would guess the list of users is just available to the server, which pre-renders HTML out of it and provides it to the client.
So we’re back to fully static pages and rendering HTML on the server? That sounds like a terrible idea. I don’t want to preload 10 different pages (for opening various filtering forms, creation forms, more pages of the user list, different lengths of the user list, different orderings of the list, and all combinations of the above) just in case a user needs one of them, which they mostly don’t.
Huh, wouldn’t plain SQL be perfectly fine for that use case? Make a GET request with your filter/sort params, server-cache the result, return the data with a client-cache header. I’ve been serving up customized SVG files like that in a personal project of mine, and it’s been so much faster and cleaner than styling my SVGs within the JSX.
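That handler could be quite small. A sketch, where `db` stands in for whatever SQL client is in use, and the column whitelist exists because ORDER BY can’t take bound parameters:

```js
// Sketch of the suggested endpoint, e.g. GET /customers?sortBy=name&dir=desc
const SORTABLE = ['name', 'birthday', 'city']; // hypothetical columns

async function handleList(req, res, db) {
  const url = new URL(req.url, 'http://localhost');
  const requested = url.searchParams.get('sortBy');
  const column = SORTABLE.includes(requested) ? requested : 'name';
  const dir = url.searchParams.get('dir') === 'desc' ? 'DESC' : 'ASC';

  const rows = await db.query(
    `SELECT id, name, birthday, city FROM customers ORDER BY ${column} ${dir}`
  );
  res.writeHead(200, {
    'Content-Type': 'application/json',
    'Cache-Control': 'max-age=60', // the client-cache header mentioned above
  });
  res.end(JSON.stringify(rows));
}
```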
That would work, but it demands cooperation from the backend for the additional validation. It also means re-transmitting 8MB of customer data just so it can be arranged differently in the frontend. I don’t really want to move presentation logic that far back. If you want to display more data, you need to touch both the table-drawing logic and the sort-validation logic.
…I mean, I get it. I’ve written some very scuffed JS in my time, because using .filter() right before displaying my data felt easier than getting the right data to begin with. Especially if the backend is in a different git repo and uses some whacky ORM library I’m not familiar with, while the Product Owner n e e d s everything deployed today.
But you can’t tell me that applying filter/sort to 8MB of data in the frontend is anything but mega scuffed. Imagine you need to debug that and don’t even have an intermediate step to see whether the wrong data is arriving, whether the filter/sort logic is wrong for a specific case, or whether it’s just the rendering. You’d always need a person who understands all three just to debug that one part.
Not to even mention how that would run on low-power devices with a bad internet connection. Or what that does to your SEO.

> using .filter() right before displaying my data felt easier than getting the right data to begin with
The issue here is that you don’t know what the right data is to begin with. SAP does what you’re suggesting. They demand you set the filters correctly before requesting any data, which is a terrible user experience.
> Imagine you need to debug that and don’t even have an intermediate step to see whether the wrong data is arriving, whether the filter/sort logic is wrong for a specific case, or whether it’s just the rendering.
That’s a strawman. Why would I not know what data arrives in the frontend? That’s what the network debugger is for. That’s what a breakpoint before the filter is for.
> But you can’t tell me that applying filter/sort to 8MB of data in the frontend is anything but mega scuffed.
Personally, I find re-transmitting those 8MB of data for every different sorting request way worse. Remember that this isn’t even cacheable, because the data is different for different sorting requests.
Maybe we have different types of frontend and different levels of user proficiency in mind. In my case, I cannot possibly ask the user to know how they want a list sorted and filtered before seeing the list and the options; they’d throw the frontend in my face. If you have very knowledgeable users who know what they want from the get-go, then it might be possible to show them a form to sort and filter and only request the data when the user sends the form.
> Not to even mention how that would run on low-power devices with a bad internet connection.
I don’t see how ‘bad connection’ is an argument in favor of re-requesting data just for it to be displayed in a different order. I’ve made this back-of-the-envelope calculation in another comment: for a good connection, latency is about 20ms. In that time, a 1GHz processor can do 20 million operations. Allow 10 operations per comparison (to account for more complicated comparisons), and you can spend 2 million comparisons sorting the list in the time it takes to re-fetch it. (Keep in mind that rendering the HTML itself is also computationally expensive for tables.) 2 million comparisons sorts a list of about 120,000 entries.
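If you want to sanity-check that estimate, sorting a list of that size is easy to time in the browser console (numbers will vary by machine):

```js
// Rough check of the back-of-the-envelope math: sort 120,000 entries.
const list = Array.from({ length: 120_000 }, () => Math.random());
const t0 = performance.now();
list.sort((a, b) => a - b);
console.log(`sorted in ${(performance.now() - t0).toFixed(1)}ms`);
```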
This would’ve been a much more exciting article if it were actually supported across the web and not just in Chromium…
It’s already on Safari and Firefox Nightly too.