5 steps to making a Node.js frontend app 10x faster

Over the last couple of months we’ve done a huge amount to make Dashboard (the node.js application that powers the Now, Trends and Ecommerce front-ends) much faster. Here’s a brief summary of our story in making that happen.

What it used to be like

Back in November, loading any dashboard would take anywhere upwards of 30 seconds. Simply loading the HTML page itself would take a minimum of 10 seconds, then the application would request several other JavaScript and CSS files, each with a response time averaging 5 seconds.

Clearly this was not acceptable, so we set about doing everything we could think of to make things faster.

Step 1: Parallelize Everything

In order to render the HTML page for any dashboard, the node.js application needs to retrieve a lot of data for the dashboard in question.

At minimum, it needs to retrieve the user’s current browsing session to check they’re logged in, pull in data about the user (e.g. the user’s name, which sites they have access to, their API key and the parameters of their GoSquared subscription), and fetch data about the site the dashboard is displaying (site name, unique token, etc.).

In order to retrieve this data, the application needed to make several calls to internal API functions, many of which could take up to 2 seconds to complete. Each request was made by a separate Express middleware, which meant they ran in series: each request waited for the previous one to complete before starting.

Since node.js is perfectly suited to running multiple asynchronous functions in parallel, and since a lot of these internal API requests didn’t depend on each other, it made sense to parallelize them — fire off all the requests at once and then continue once they’ve all completed. We achieved this with the aid of the (incredibly useful) async module:

So instead of:

app.use(getUser);
app.use(getSiteList);
app.use(getCurrentSite);
app.use(getSubscription);

… we could do something like this:

var async = require('async');

// Wrap a set of middlewares so they all run at once; next() fires when the
// last one finishes (or immediately on the first error). This is only safe
// when the middlewares don't depend on each other's results.
function parallel(middlewares) {
  return function (req, res, next) {
    async.each(middlewares, function (mw, cb) {
      mw(req, res, cb);
    }, next);
  };
}

app.use(parallel([
  getUser,
  getSiteList,
  getCurrentSite,
  getSubscription
]));

Straight away this cut our average response time down from 10 seconds to roughly 1.5 seconds. But we knew we could still do better.

Step 2: Cache, Cache, Cache

Even once we’d parallelized all of our internal data-fetching, loading a dashboard was still pretty slow. This was because the application was fetching all this data not only for the initial page load, but also for a lot of subsequent JavaScript requests (at this point we were still limiting widgets in the dashboard based on GoSquared plan, so we needed to restrict who had access to which resources). And every one of these subsequent requests also had an average response time of about 1.5 seconds.

The solution to this was to cache any fetched data that wasn’t likely to change. A user isn’t going to upgrade or downgrade their GoSquared subscription in the 2 seconds it takes for the dashboard to load its JS, so there’s no point fetching subscription data again if we’ve already fetched it once.

So, we went ahead and cached all the data we could, cutting response times down from 1.5 seconds to about 500ms on any requests which already had the data cached.
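As an illustration of the approach (the `cached` wrapper and the TTL value here are a sketch, not GoSquared’s actual implementation), data that’s unlikely to change within a page load can be memoized in memory:

```javascript
// In-memory cache with a short TTL; a sketch of the idea, not the real code.
var cache = {};
var CACHE_TTL = 60 * 1000; // expire entries after a minute

// Wrap an async fetch function so repeated calls with the same key within
// the TTL are served from the cache instead of hitting the backend again.
function cached(fetch) {
  return function (key, callback) {
    var entry = cache[key];
    if (entry && Date.now() - entry.time < CACHE_TTL) {
      // Stay async even on a cache hit, so callers behave consistently.
      return process.nextTick(function () {
        callback(null, entry.value);
      });
    }
    fetch(key, function (err, value) {
      if (err) return callback(err);
      cache[key] = { time: Date.now(), value: value };
      callback(null, value);
    });
  };
}
```

Wrapping something like the subscription lookup this way means the flurry of JS requests just after page load reuses the data already fetched for the HTML page, instead of each request paying the ~1.5 second cost again.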

Step 3: Intelligent JS and CSS loading on the front-end

The front-end of the dashboard application has a lot of interconnected components. The JavaScript for the application falls into three main parts: libraries (such as jQuery, D3 etc.), the main application core, and widgets (each widget in the application is modularised and has its own code). Code in each of these parts is edited in very different ways: libraries are barely touched and are updated at most once a month; the core is changed several times a day; widgets can vary from receiving several changes in a day to not being touched in weeks.

Originally we bundled all our libraries into the core application bundle (which was included via a script tag on the page), and all of the widgets into a secondary bundle which was loaded dynamically. This meant that even with good cache control, any tiny change we made to the core code would mean browsers would have to download all of the (unchanged) library code, or any change to one widget would require downloading all of the widgets again.

One way around this problem would be to break each individual component into its own file and include them all individually; that way, any files that don’t change frequently can sit in the browser’s HTTP cache and never be requested. The problem, though, is that this would mean a lot of files, some of them incredibly small. And (especially on mobile browsers) the overhead of loading that many individual resources vastly outweighs the saving from not re-downloading unchanged content.

We eventually came up with a compromise based on Addy Osmani’s basket.js, using a combination of server-side script concatenation and localStorage for caching. In a nutshell, the page includes a lightweight loader script which figures out which JS and CSS it has already cached and which needs to be fetched. The loader then requests all the resources it needs from the server in a single request, and saves each resource into localStorage under its own key. This cuts down the number of HTTP requests while preserving cacheability, so code is never re-downloaded when it hasn’t changed. Additionally, after running a few benchmarks, we found that localStorage is (sometimes) actually faster than the native HTTP cache, especially on mobile browsers.
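A minimal sketch of the idea (the manifest format and the `asset:` key prefix are assumptions for illustration; basket.js itself differs in detail): the server sends a manifest of resource names and content digests, and the client only requests what localStorage doesn’t already hold.

```javascript
// basket.js-style loading: cache each resource in localStorage under a key
// derived from a digest of its contents. `storage` is injected so the logic
// is testable; in the browser it would be window.localStorage.
function neededResources(manifest, storage) {
  // manifest: [{ name: 'core', digest: 'abc123' }, ...], served with the page.
  return manifest.filter(function (r) {
    return storage.getItem('asset:' + r.digest) === null;
  });
}

// After fetching the missing resources in one combined request, store each
// one under its digest key. Identical content (e.g. a widget shared between
// dashboards) maps to the same key, so it is only stored and fetched once.
function storeResources(resources, storage) {
  resources.forEach(function (r) {
    storage.setItem('asset:' + r.digest, r.content);
  });
}
```

Keying by content digest is also what makes the de-duplication between the Now, Trends and Ecommerce widgets fall out naturally.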

Along with this, we also switched all of our static assets (JS and CSS) to be served through CloudFront, Amazon Web Services’ content delivery network. This means content is served from the geographic location nearest to the user, cutting request latency down from as high as 2500ms (in places such as Singapore) to tens of milliseconds.

We also introduced some optimizations to prevent loading or storing duplicate code. For example, the Languages widget uses exactly the same code in Now, Trends and Ecommerce. By de-duplicating the caching and requests based on a digest of each resource’s contents, we were able to cut out unnecessary requests and storage.

With these intelligent changes to resource loading we were able to cut down the total number of HTTP requests necessary to render the dashboard to one (just the page itself), which meant that for users quickly switching between dashboards for different sites, each dashboard would load within a few seconds.

But we could do even better.

Step 4: Cut out the middle-man for fetching data

All the user, site and subscription data described in the first two steps was being fetched via a secure internal HTTP API to our internal account system, which at the time was written in some old, clunky, slow PHP. As part of our extensive rewrite of that whole system from PHP to Node, we were also able to cut out the internal HTTP component completely, instead including a node module directly in the dashboard application and requesting our databases directly. This allowed us much finer-grained control over exactly what data we were fetching, as well as eliminating a huge amount of overhead.

With this significant change, we were able to reduce our average response time (even without the caching described in Step 2), to 25ms.
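Schematically, the change was from an internal HTTP round-trip to a direct module call (the `accounts` module and `getUser` signature below are illustrative stand-ins, not the real GoSquared code):

```javascript
// Before: an HTTP round-trip to the internal account API, with network
// latency and serialization overhead on every call:
// request('https://accounts.internal/user/' + id, handleResponse);

// After: require the account system as a module and query the database
// directly, asking only for the fields this page needs. This `accounts`
// object is a hypothetical stand-in for the real data layer.
var accounts = {
  getUser: function (id, fields, callback) {
    // Imagine this wraps a real database query selecting only `fields`.
    var row = { id: id, name: 'Jane', apiKey: 'k123', plan: 'pro' };
    var result = {};
    fields.forEach(function (f) { result[f] = row[f]; });
    process.nextTick(function () { callback(null, result); });
  }
};
```

The fine-grained `fields` parameter is the point: instead of an endpoint returning a fixed blob, each page fetches exactly the data it needs and nothing more.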

Step 5: Do More on the Client

Thanks to all the changes we’d made up to this point, the only thing that differed between dashboards for different sites was a config object passed to the loader on initialization. It therefore didn’t make sense to reload the entire page when simply switching between sites, or between Now and Trends, if all of the important resources had already been loaded. With a little bit of rearranging of the config object, we were able to include all of the data necessary to load any of the dashboards accessible to the user. Throw in some HTML5 History API with pushState and the popstate event, and we’re now able to switch between sites or dashboards without making a single HTTP request or even re-fetching scripts from the localStorage cache. This means switching between dashboards now takes a couple of hundred milliseconds rather than several seconds.
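The switching logic can be sketched like this (the config shape, route and render callback are hypothetical, and `history` is injected here so the idea is easy to test; in the browser it would be window.history):

```javascript
// Client-side dashboard switching: update the URL with pushState and
// re-render from data already delivered with the initial page, so no
// HTTP request is needed at all.
function switchDashboard(config, siteToken, history, render) {
  var site = config.sites[siteToken];
  if (!site) throw new Error('Unknown site: ' + siteToken);
  history.pushState({ site: siteToken }, '', '/dashboard/' + siteToken);
  return render(site);
}
```

A popstate listener would route back through the same render path, so the browser’s back button works across dashboards too.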

What else?

So far all this has been about reducing load times and getting to a usable dashboard in the shortest time possible. But we’ve also done a lot to optimise the application itself to make sure it’s as fast as possible. In summary:

  • Don’t use big complex libraries if you don’t have to — for example, jQuery UI is great for flexibility and working around all manner of browser quirks, but we don’t support a lot of the older browsers so the code bloat is unnecessary. We were able to replace our entire usage of jQuery UI with some clever thinking and 100-or-so lines of concise JS (we also take advantage of things like HTML5’s native drag-and-drop).

  • Even respectable libraries have their weak spots — for example we use moment with moment-timezone for a lot of our date and time handling. However moment-timezone is woefully inefficient (especially on mobile) if you’re using it a lot. With a little bit of hacking we added a few optimizations of our own and made it much better for our use-case.

  • Slow animations make everything feel slow — a lot of studies have been posted about this in the past, and it really makes a difference. Simply reducing some CSS transition times from 500ms to 250ms, and cutting others out entirely, made the whole dashboard feel snappier and more responsive.

  • Instant visual feedback — one of the big things we found when using Trends was that switching between time frames just felt slow. It took under a second, but because there was a noticeable delay between clicking the timeframe selector and anything visibly happening, things felt broken. Fetching new data from our API is always going to take some time; it’s not going to be instant. So instead we introduced a loading spinner on each widget. Nothing is actually any faster, but the whole experience feels more responsive: there is immediate visual feedback when you click the button, so you know it’s working.

  • Flat design is actually really handy for performance — it may well just be a design trend, but cutting out superficial CSS gradients and box shadows does wonders for render performance. If the browser doesn’t have to use CPU power to render all these fancy CSS effects, you get an instant boost to render performance.

[The Now dashboard in action]

What next?

Even after all these optimizations and tweaks, we’re well aware that there’s still plenty of room for improvement. Especially on mobile, where CPU power, memory, rendering performance, latency and bandwidth are all significantly more limited than they are on the desktop. We’ll continue improving everything we can, so stay tuned for further updates!

  • Özgür

    Thanks for great post. Helped to me so much. Thanks.

  • Punith DG

    Great post!! Really boosted my confidence and helped me a lot..

  • Jehandad Kamal

    Guys, Awesome Post!

    I am really interested in knowing how clunky was PHP opposed to NodeJS. Based on what u have written it seems like the page loading decreased from 5s to 1s because of caching, parallelising, not download cached files. Is that correct?

  • Anand Nadar

    Great article and really useful for me, I was here for getting some clue to design home automation dashboard, this gave me a idea and confidence.
    Aanand
    Independent IT consultant on AWS (Certified)
    Skype: anand7007

  • Awesome and very helpful. Thanks for writing this post.

  • Gonchar Denys

    According to my knowledge, if we use app.use(), those functions run every time user made request. Based on that, step one don’t feel so useful. Correct me if i am wrong.

    • Matthew Merkes

      It’s true that app.use() will run every time you make a request, but what he does in step 1 is turn 4 app.use() into 1 by making the 4 requests at the same time in parallel instead of synchronously.

  • I have just implemented the async middleware stack on a client’s website and WOW what a performance boost that gave. Thanks a lot. http://chrisrich.io

  • Gustavo Mickiewicz

    tell me if i wrong but i think it would be better if you explain async/promise concepts outside app.use because its like you are fixing express when really you are fixing your syncronic business logic. its that right? thanks for sharing.

  • Awesome article thank you JT !

  • שחר טייט

    I think this is really misleading. This is a specific improvement when going through multiple middlewares for one client http call. So for example an “authenticate” endpoint could go through multiple middlewares (getting user, unhashing password and whatever else). But in the case of multiple calls to multiple endpoints, which is usually the case in dashboards (get me info about X, Y, Z) these will all be async http calls to different endpoints which are already paralleled by the node V8 if you implemented correct async code use on your node app, and so these cannot be improved in this manner.

    • JT

      I think it’s a combination of things, and perhaps I could’ve worded it a bit better. As you say, the main benefit comes from correctly and efficiently managing the different async middlewares to make sure you’re not doing any unnecessary waiting. And since some of these middlewares involve external HTTP requests, it really doesn’t matter what system you have serving those requests (ours just happened to be PHP and it happened to be slow and clunky).

  • Axmed

    really great article. I liked it. Is concise JS really like jquery-ui, i’m working with it and it feels very slow. But whenever i try other libraries like yahooUI i feel jqueryUI is better.

  • Glen Birkbeck

    Great post, very interesting and filed away for future reference