HTTP protocol compression in WebServer 5.0

2009-02-09, 08:53 by Martin Karlgren

In Roxen WebServer 5.0, a new feature will be introduced in the HTTP protocol module that should both improve web site visitors' perceived performance and reduce the bandwidth needed by the site's servers: gzip compression. The code is still considered experimental, although it's been briefly tested with some pretty good results.

What it does

HTTP protocol compression is what it sounds like: compression of response data at the protocol level. The HTTP/1.1 RFC defines a few content encodings that can be used for compression: gzip, deflate and compress. The client announces that it supports one or more of these encodings by setting the "Accept-Encoding" request header, and the server may then decide to encode the response data before returning it to the client -- the client is told that the content is encoded via the "Content-Encoding" response header. We'll use the gzip encoding since it's the most widely supported.
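
If you want to see this header exchange for yourself, a few lines of Python are enough. This is just an illustrative sketch; www.example.com is a placeholder host, and the Content-Encoding header only shows up if that particular server chooses to compress.

    import http.client

    # Ask the server for gzip-encoded content and inspect the response headers.
    conn = http.client.HTTPConnection("www.example.com")
    conn.request("GET", "/", headers={"Accept-Encoding": "gzip"})
    resp = conn.getresponse()
    body = resp.read()

    # "Content-Encoding: gzip" appears only if the server chose to compress.
    print("Content-Encoding:", resp.getheader("Content-Encoding"))
    print("Bytes on the wire:", len(body))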

Text-based file formats used on web sites that aren't compressed in themselves, such as HTML, CSS and JavaScript, usually have quite high potential for protocol-level compression. Yahoo states, for example, that gzip compression reduces the HTTP response size for such file types by about 70% on average.

In my own tests with the Roxen CMS Content Editor, I've seen the HTML compress by as much as a staggering factor of 20 in some cases. That's probably an extreme scenario, though, and such impressive ratios won't be seen on "real" web sites very often. If you or your users work in the Content Editor a lot over a slow connection to the server, however, you may find that figure interesting.
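
If you'd like to estimate the potential for your own pages, something like the following Python sketch gives a rough figure; "page.html" is just a placeholder for any locally saved HTML, CSS or JavaScript file.

    import gzip

    # Rough estimate of what protocol-level gzip would save for a given page.
    with open("page.html", "rb") as f:
        raw = f.read()

    packed = gzip.compress(raw, compresslevel=6)   # 6 is a typical "medium" level
    print(f"{len(raw)} -> {len(packed)} bytes "
          f"(factor {len(raw) / len(packed):.1f})")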

How it works

Since the compression takes place in the HTTP protocol layer, higher-level modules won't notice any difference at all between compressed and uncompressed responses.

"Well, I'm responsible for a high-traffic web site where the servers can't take the increased load from compressing each and every request" you might say. The good news is that not every request needs to be compressed -- thanks to the HTTP protocol cache.

The compression is implemented in such a way that responses that are cacheable in the protocol cache are compressed before being cached, provided they meet the criteria, and are then stored in compressed form. Whenever an incoming request can be served from the protocol cache, the compressed data is served as-is, without any recoding, as long as the client announces that it supports gzip encoding. If it doesn't, the data is decompressed on the fly.
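
As a rough illustration of the idea -- this is a minimal Python sketch, not the actual WebServer code -- the cache logic looks something like this:

    import gzip

    # Cacheable responses are stored gzip-compressed; clients that accept gzip
    # get the stored bytes as-is, others get them decompressed on the fly.
    protocol_cache = {}

    def store(path, body, compresslevel=6):
        protocol_cache[path] = gzip.compress(body, compresslevel)

    def serve(path, accept_encoding=""):
        data = protocol_cache[path]
        if "gzip" in accept_encoding:
            return data, {"Content-Encoding": "gzip"}   # served as-is, no recoding
        return gzip.decompress(data), {}                # decompressed on the fly

    store("/index.html", b"<html>" + b"hello " * 2000 + b"</html>")
    print(serve("/index.html", "gzip, deflate")[1])     # {'Content-Encoding': 'gzip'}
    print(len(serve("/index.html")[0]))                 # full, decompressed size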

Once again according to Yahoo, approximately 90% of Internet traffic is handled by browsers that support gzip compression. That means that in nine cases out of ten, once the compressed data is stored in the protocol cache, it can be served directly to the client without any performance hit on the server. Some requests need to be decompressed on the fly, but decompression is usually much lighter-weight than compression anyway.

Another upside of all this is that since data will be cached in the compressed form, you can fit more entries into your protocol cache with the same amount of RAM.

Configuration

Since the compression code is considered experimental, you need to start your 5.0 server with the -DHTTP_COMPRESSION define to enable it. Once you've done that, a new tab will appear in the main section of each site in the Administration Interface.

There are a few configuration options available, among them a master switch (enable/disable compression per site), a MIME type setting (which content types to compress), a file size restriction, and a setting to choose the gzip compression level (a trade-off between CPU power and compression ratio).

There's also a setting to enable or disable the compression of dynamic requests. When enabled, even requests that don't make it to the protocol cache (i.e. dynamic ones) will be compressed. Compressing each and every request adds a slight performance overhead, although visitors may still gain performance if the available bandwidth is the limiting factor.
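
To give an idea of how these settings fit together, here's a hypothetical sketch in Python; the option names are made up and do not correspond to the actual variables in the Administration Interface.

    # Hypothetical illustration of how settings like those above could combine
    # to decide whether a given response gets compressed.
    settings = {
        "enabled": True,                       # master switch, per site
        "mime_types": {"text/html", "text/css", "application/javascript"},
        "min_size": 1024,                      # skip tiny responses
        "max_size": 1024 * 1024,               # skip very large ones
        "compress_dynamic": False,             # also compress uncacheable requests?
    }

    def should_compress(content_type, length, cacheable):
        if not settings["enabled"]:
            return False
        if not cacheable and not settings["compress_dynamic"]:
            return False
        return (content_type in settings["mime_types"]
                and settings["min_size"] <= length <= settings["max_size"])

    print(should_compress("text/html", 50_000, cacheable=True))    # True
    print(should_compress("image/png", 50_000, cacheable=True))    # False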

When it pays off

Let's try a calculation example: I downloaded a real-world web page consisting of 141128 bytes of HTML. Compressing it using medium-level compression reduced it to 28812 bytes -- saving 112316 bytes. The compression takes about 20 ms on my machine. That means that if you have enough bandwidth to transfer roughly 112 kB in less than 20 ms, the compression will actually slow your request down. So, how fast is that? 112316/0.02 equals 5615800 bytes per second, so our break-even point (where an uncompressed request outperforms a compressed one because transfer speed beats the compression overhead) is at roughly 5.6 MB/s, or about 45 Mbit/s. Keep in mind that this applies to dynamic requests -- if the response can be served from the protocol cache the compression will be "free".
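
The same arithmetic, spelled out as a small Python snippet:

    # Break-even logic: if the link can move the saved bytes faster than the
    # CPU can compress them, compression slows the request down.
    original = 141128        # bytes of HTML
    compressed = 28812       # bytes after medium-level gzip
    compress_time = 0.020    # seconds spent compressing on the test machine

    saved = original - compressed            # 112316 bytes
    break_even = saved / compress_time       # bytes per second
    print(f"Break-even at {break_even:,.0f} B/s, "
          f"about {break_even * 8 / 1e6:.0f} Mbit/s")   # roughly 45 Mbit/s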

Also keep in mind that a TCP flow won't get up to speed until quite a few packets have passed through, especially if there's latency, so enabling compression together with a max file size limit (compressing average-sized responses while leaving very large ones untouched) probably makes sense even when clients are on connections faster than the theoretical break-even point. Compressing very small files, on the other hand, doesn't make much sense, since the overhead becomes a substantial factor compared to the time it takes to transfer those few extra bytes, especially if the HTTP response fits into a single TCP/IP packet.

All this is, of course, very theoretical. To draw really useful conclusions about the benefits of compression for your specific installation, you probably need to perform some tests of your own. Also, if your servers are teetering on the edge of overload, you might want to think twice. In most cases, though, the compression should give good results with the default settings.

 

1   Matthew Hardy

2009-02-10 16:36

Is this quite different from the auto_gzip.pike module in previous versions?

2   Martin Jonsson

2009-02-10 16:59

Yep, auto_gzip.pike assumes that there is a gzipped file next to the original file in the filesystem already. This extension works on the fly and is also protocol cache aware.

3   Matthew Hardy

2009-02-11 13:40

of course, yes, true. Has there been any trouble with IE? I have seen IE simply say 'no' to gzip delivery using the previous auto_gzip.pike module. thx.

4   Peter Jönsson

2009-02-11 22:11

Excellent! Sounds great to have this in the prot-cache. Will there be any way to see the ratio for when the gz-files need to be decompressed or not before being served? Very well written article!

5   Martin Jonsson

2009-02-12 09:01

Thanks Peter! No, there are currently no statistics implemented at all. Good that there's room for improvement so we don't run out of things to do. ;-)

6   Martin Jonsson

2009-12-09 09:34

Update: The HTTP compression has been in use on the http://www.roxen.com servers for almost half a year now and seems to work well.
