Data shows that, by default, frameworks are quite expensive:
Of course it is possible to make SPAs fast, but they aren't fast out of the box, so we need to account for the time and effort required to make and keep them fast. It will probably be easier if you commit to a lightweight baseline performance cost early on. So how do we choose a framework? It's a good idea to consider at least the total cost of size + initial execution time before choosing an option; lightweight options such as Preact, Inferno, Vue, Svelte, Alpine or Polymer can get the job done just fine. The size of your baseline will define the constraints for your application's code.
As noted by Seb Markbåge, a good way to measure start-up costs for frameworks is to first render a view, then delete it, and then render it again, as this can tell you how the framework scales. The first render tends to warm up a bunch of lazily compiled code, which a larger tree can benefit from as it scales. The second render is basically an emulation of how code reuse on a page affects the performance characteristics as the page grows in complexity.
You could also evaluate options on a 12-point scale scoring system by exploring features, accessibility, stability, performance, package ecosystem, community, learning curve, documentation, tooling, track record, team, compatibility and security, for example.
You can also rely on data collected on the web over a longer period of time. For example,
Perf Track tracks framework performance at scale, showing origin-aggregated Core Web Vitals scores for websites built in Angular, React, Vue, Polymer, Preact, Ember, Svelte and AMP. You can even specify and compare websites built with Gatsby, Next.js or Create React App, as well as websites built with Nuxt.js (Vue) or Sapper (Svelte).
A good starting point is to choose a
good default stack for your application. Gatsby (React), Next.js (React), VuePress (Vue), Preact CLI, and PWA Starter Kit provide reasonable defaults for fast loading out of the box on average mobile hardware. Also, take a look at web.dev's framework-specific performance guidance for React and Angular (thanks, Phillip!).
And perhaps you could take a slightly more refreshing approach to building single-page applications altogether — Hotwire's Turbo, which intercepts link clicks, fetches the new page, swaps in its <body>, and merges its <head>, all without incurring the cost of a full page load. You can check
quick details and full documentation about the stack (Hotwire). Also check Jason's and Addy's write-up on Modern Front-End Architectures. The overview below is based on their stellar work.
Full Server-Side Rendering (SSR)
In classic SSR, such as WordPress, all requests are handled entirely on the server. The requested content is returned as a finished HTML page, and browsers can render it right away. Since everything happens on the server, SSR apps can't really make use of DOM APIs, for example. The gap between First Contentful Paint and Time To Interactive is usually small, and the page can be rendered right away as HTML is being streamed to the browser.
This avoids additional round-trips for data fetching and templating on the client, since it’s handled before the browser gets a response.
However, we end up with a longer server think time and, consequently, a slower Time To First Byte, and we don't make use of the responsive and rich features of modern applications.
Static SSR
Alternatively, we can build out the product as a single-page application but prerender all pages to static HTML with minimal JavaScript as a build step. Thus, we can display a landing page quickly and then prefetch a SPA framework for subsequent pages. Netflix has adopted this approach, decreasing loading and Time To Interactive by 50%.
Server-Side Rendering With (Re)Hydration (Universal Rendering, SSR + CSR)
We can try to use the best of both worlds — the SSR and the CSR approaches. With hydration in the mix, the HTML page returned from the server also contains a script that loads a fully-fledged client-side application. Ideally, we achieve a fast First Contentful Paint (like SSR) and then continue rendering with (re)hydration. Unfortunately, that's rarely the case. More often, the page does look ready but can't respond to the user's input, producing rage clicks and abandonments.
With React, we can use the ReactDOMServer module on a Node server like Express, and then call the renderToString method to render the top-level components as a static HTML string.
With Vue.js, we can use vue-server-renderer to render a Vue instance into HTML using renderToString. In Angular, we can use Angular Universal to turn client requests into fully server-rendered HTML pages. A fully server-rendered experience can also be achieved out of the box with Next.js (React) or Nuxt.js (Vue).
The approach has its downsides: we do gain the full flexibility of client-side apps while providing faster server-side rendering, but we also end up with a longer gap between First Contentful Paint and Time To Interactive, and increased First Input Delay. Rehydration is very expensive, and usually this strategy alone will not be good enough as it heavily delays Time To Interactive.
Streaming Server-Side Rendering With Progressive Hydration (SSR + CSR)
To minimize the gap between Time To Interactive and First Contentful Paint, we render multiple requests at once and send down content in chunks as it gets generated. That way we don't have to wait for the full string of HTML before sending content to the browser, and hence improve Time To First Byte.
In React, instead of renderToString(), we can use renderToNodeStream() to pipe the response and send the HTML down in chunks. In Vue, we can use renderToStream(), which can be piped and streamed. With React Suspense, we might use asynchronous rendering for that purpose, too.
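To illustrate the idea in a framework-agnostic way (this is not React's actual implementation), here is a sketch of chunked HTML delivery; a real app would instead pipe renderToNodeStream(...) — or renderToPipeableStream in newer React versions — into the response:

```javascript
// Conceptual sketch of streaming SSR: emit HTML in chunks as sections are
// rendered, instead of buffering one big string. `renderSection` stands in
// for a framework's per-component render.
function renderSection(section) {
  return `<section><h2>${section.title}</h2><p>${section.body}</p></section>`;
}

function* streamPage(sections) {
  // The browser can start parsing as soon as the first chunk arrives,
  // improving Time To First Byte and First Contentful Paint.
  yield '<!doctype html><html><head><title>Streamed</title></head><body>';
  for (const section of sections) {
    yield renderSection(section);
  }
  yield '<script src="/client.js"></script></body></html>';
}

// With Node's http module, each chunk would be written as it is produced:
//   for (const chunk of streamPage(sections)) res.write(chunk);
//   res.end();
const chunks = [...streamPage([{ title: 'Intro', body: 'Hello' }])];
console.log(chunks.length); // head chunk, one section, closing chunk
```

The key design point is that each `res.write` flushes markup the browser can parse immediately, rather than waiting for the whole document.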
On the client-side, rather than booting the entire application at once, we
boot up components progressively. Sections of the applications are first broken down into standalone scripts with code splitting, and then hydrated gradually (in order of our priorities). In fact, we can hydrate critical components first, while the rest could be hydrated later. The role of client-side and server-side rendering can then be defined differently per component. We can then also defer hydration of some components until they come into view, or are needed for user interaction, or when the browser is idle.
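One way to sketch that prioritization: partition components by priority, hydrate the critical ones immediately, and defer the rest to visibility or idle time in the browser. The component shape and priority labels here are made up for illustration:

```javascript
// Sketch of a progressive-hydration schedule. The priority labels and the
// component shape are hypothetical; a real setup would call the framework's
// hydrate() for each entry in each bucket.
function partitionForHydration(components) {
  const byPriority = (p) => components.filter((c) => c.priority === p);
  return {
    now: byPriority('critical'),      // hydrate immediately on boot
    onVisible: byPriority('visible'), // hydrate via IntersectionObserver
    onIdle: byPriority('idle'),       // hydrate via requestIdleCallback
  };
}

// In the browser, the deferred buckets would be wired up roughly like this:
//   new IntersectionObserver(cb).observe(el)      -> hydrate onVisible entries
//   requestIdleCallback(() => hydrate(onIdle))    -> hydrate during idle time

const schedule = partitionForHydration([
  { name: 'Header', priority: 'critical' },
  { name: 'Comments', priority: 'visible' },
  { name: 'Footer', priority: 'idle' },
]);
console.log(schedule.now.map((c) => c.name)); // [ 'Header' ]
```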
For Vue, Markus Oberlehner has published a guide on reducing the Time To Interactive of SSR apps.
CSR With Prerendering
With Next.js, we can use static HTML export by prerendering an app to static HTML. Gatsby, an open-source static site generator that uses React, uses the renderToStaticMarkup method instead of renderToString during builds, with the main JS chunk being preloaded and future routes prefetched, without DOM attributes that aren't needed for simple static pages.
For Vue, we can use VuePress to achieve the same goal. You can also use prerender-loader with Webpack. Navi provides static rendering as well.
The result is a better Time To First Byte and First Contentful Paint, and we reduce the gap between Time To Interactive and First Contentful Paint. We can’t use the approach if the content is expected to change much. Plus, all URLs have to be known ahead of time to generate all the pages. So some components might be rendered using prerendering, but if we need something dynamic, we have to rely on the app to fetch the content.
Full Client-Side Rendering (CSR)
All logic, rendering and booting are done on the client. The result is usually a huge gap between Time To Interactive and First Contentful Paint. As a result, applications often feel sluggish as the entire app has to be booted on the client to render anything.
Server-side rendering will usually be a better approach if not much interactivity is required. If that's not an option, consider using the App Shell Model.
In general, SSR tends to be faster than CSR; yet full client-side rendering is still a quite frequent implementation for many apps out there.
So, client-side or server-side? In general, it’s a good idea to
limit the use of fully client-side frameworks to pages that absolutely require them. For advanced applications, it’s not a good idea to rely on server-side rendering alone either. Both server-rendering and client-rendering are a disaster if done poorly.
Whether you are leaning towards CSR or SSR, make sure that you are rendering important pixels as soon as possible and minimize the gap between that rendering and Time To Interactive. Consider prerendering if your pages don’t change much, and defer the booting of frameworks if you can.
Stream HTML in chunks with server-side rendering, and implement progressive hydration for client-side rendering — and hydrate on visibility, interaction or during idle time to get the best of both worlds. The spectrum of options for client-side versus server-side rendering (image source: Jason Miller). Also, check Jason's and Houssein's talk at Google I/O. AirBnB has been experimenting with progressive hydration: they defer unneeded components, load on user interaction (scroll) or during idle time, and testing shows that it can improve TTI.
How much can we serve statically?
Whether you're working on a large application or a small site, it's worth considering what content could be served statically from a CDN (i.e. JAMStack), rather than being generated dynamically on the fly. Even if you have thousands of products and hundreds of filters with plenty of personalization options, you might still want to serve your critical landing pages statically, and decouple these pages from the framework of your choice.
There are plenty of
static site generators, and the pages they generate are often very fast. The more content we can pre-build ahead of time instead of generating page views on a server or client at request time, the better performance we will achieve.
With JAMStack used on large sites these days, a new performance consideration appeared: the build time. In fact, building out even thousands of pages with every new deploy can take minutes, so it's promising to see incremental builds in Gatsby which improve build times by 60 times, with an integration into popular CMS solutions like WordPress, Contentful, Drupal, Netlify CMS and others. Next.js offers incremental static regeneration as well.
Nicola Goutay explains the differences between CSR, SSR and everything in-between, and shows how to use a more lightweight approach — along with a GitHub repo that shows the approach in practice.
Consider using the PRPL pattern and app shell architecture.
Different frameworks will have different effects on performance and will require different strategies of optimization, so you have to clearly understand all of the nuts and bolts of the framework you'll be relying on. When building a web app, look into the PRPL pattern and application shell architecture. The idea is quite straightforward: push the minimal code needed to get interactive for the initial route to render quickly, then use a service worker for caching and pre-caching resources, and then lazy-load the routes that you need, asynchronously. PRPL stands for Pushing critical resources, Rendering the initial route, Pre-caching remaining routes and Lazy-loading remaining routes on demand.
You can also move parts of page assembly to the CDN via edge-side includes, which combine static and dynamic parts of pages at the CDN's edge (i.e. the server closest to the user).
When choosing a CDN, you can use these comparison sites with a detailed overview of their features:
CDN Comparison, a CDN comparison matrix for CloudFront, Azure, KeyCDN, Fastly, Verizon, StackPath, Akamai and many others.
CDNPerf measures query speed for CDNs by gathering and analyzing 300 million tests every day, with all results based on RUM data from users all over the world. Also check DNS Performance comparison and Cloud Performance Comparison.
CDN Planet Guides provides an overview of CDNs for specific topics, such as Serve Stale, Purge, Origin Shield, Prefetch and Compression.
Web Almanac: CDN Adoption and Usage provides insights on top CDN providers, their RTT and TLS management, TLS negotiation time, HTTP/2 adoption and others. (Unfortunately, the data is only from 2019.)