How to save precious bytes on a Node.js server

Micro-optimisations can sometimes make a (relatively) big difference

Every day GoSquared’s tracking infrastructure processes billions of HTTP requests. We’ve spent a lot of time making the servers that handle these requests as efficient as possible, and we try to keep the payload of each response as small as possible to keep bandwidth down.

A typical tracking request

The average response in one of our tracking requests looks something like this:

_gs('_4');

That’s ten bytes. Now, you might think that’s pretty damn small but there’s a problem. Here are the actual bytes that get sent over the wire from one of our servers to serve that request:

HTTP/1.1 200 OK\r\n
Connection: keep-alive\r\n
Transfer-Encoding: chunked\r\n
Content-Type: text/javascript\r\n
Date: Wed, 18 Feb 2015 17:44:12 GMT\r\n
\r\n
a\r\n
_gs('_4');\r\n
0\r\n
\r\n

Ouch! That’s quite a lot of extra stuff added around the actual message body, and those headers add up. If you’re unfamiliar with the behaviour of the Transfer-Encoding header, you may also be wondering what’s up with the a\r\n...\r\n0\r\n\r\n framing wrapped around our response payload. It’s called chunked encoding, and it’s a way of sending data over HTTP when you don’t know in advance exactly how big the payload will be.

Content-Length vs. Transfer-Encoding: chunked

When a browser (or any other HTTP client) processes the response from a server, it needs to know when that response has finished. Essentially, the client must be able to tell the difference between the server having finished sending its response data and there merely being a brief pause in bytes arriving over the wire. There are two ways this can happen:

The Content-Length header

This is the simplest way for a server to tell the browser when it’s done sending data. When the response is sent, the header section will include a line of the format Content-Length: XYZ. The browser then knows that after the header section, there are XYZ bytes of body to read, after which the response is complete.

However, there’s a problem with this. The server might want to start sending the response before knowing how big it’s going to be. For example, if it’s reading a large file from disk, or proxying a request from elsewhere, it doesn’t want to have to load the whole response into memory in order to read its byte length and then send it out over the wire. The solution to this problem is chunked encoding.

Transfer-Encoding: chunked

Chunked encoding is signified by the presence of a Transfer-Encoding: chunked header in the response. The server then sends the response in “chunks” of the form LENGTH\r\nPAYLOAD\r\n, where LENGTH is the payload’s byte length written in hexadecimal. Finally, it finishes the response with a zero-length chunk, as seen above. This is great for servers that don’t need to know the payload size ahead of time. It’s not so great for situations like ours: we send billions of teeny-tiny responses every day and we’re counting every single byte.
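To make the framing concrete, here’s a toy encoder (illustrative only; Node does this internally) that wraps a single payload the way chunked encoding does:

```javascript
// Toy illustration of chunked framing, not Node's actual internals.
// The chunk length is the payload's byte length written in hexadecimal,
// followed by CRLF, the payload, CRLF, and a terminating zero-length chunk.
function chunkEncode(payload) {
  const length = Buffer.byteLength(payload).toString(16);
  return length + '\r\n' + payload + '\r\n' + '0\r\n\r\n';
}

console.log(JSON.stringify(chunkEncode("_gs('_4')")));
// → "9\r\n_gs('_4')\r\n0\r\n\r\n"
```

Note that for a 9-byte payload the framing alone adds 10 bytes, before the Transfer-Encoding header itself is counted.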

What we did

Until a few days ago, all responses sent from our tracking endpoints used chunked encoding, because that’s the default behaviour for an HTTP server created in Node.js. By explicitly setting a Content-Length header, we effectively disabled chunked encoding and saved a bunch of bytes in the process.

Here’s the line that we added:

response.setHeader('Content-Length', Buffer.byteLength(body));

This changed the response payload from what you saw above, to this:

HTTP/1.1 200 OK\r\n
Connection: keep-alive\r\n
Content-Length: 9\r\n
Content-Type: text/javascript\r\n
Date: Wed, 18 Feb 2015 17:44:12 GMT\r\n
\r\n
_gs('_4')

Wahey, that saved us a whole pile of bytes. You’ll notice we also removed the trailing semicolon from the payload. That saves a further two bytes: one in the body itself, and another because the Content-Length value drops from 10 to 9. I call that a win!

All in all, this one line of code reduced our outbound data transfer by 20 bytes per request. That may not seem like much, but it’s more than 10% when you’re saving 20 bytes on a 159-byte response.

So there you have it. We saved more than 10% on our data throughput by adding one line of code. I think it’s safe to say we’re a bit of an edge case as far as this optimisation is concerned – not many other people are sending billions of teeny-tiny responses where a difference of one or two bytes makes a meaningful percentage difference. But still, not bad.

If you like reading about Node.js dev ops, please join our engineering mailing list. Expect occasional engineering posts and news for developers using GoSquared.
