GlassFish Response GZIP Compression in Production

A lot has been written about this, and it should basically be common knowledge by now. But talking to different people out there, and looking at the effort Google puts into improving page speed, it seems to me that the topic is worth a second, up-to-date look.

The basics

HTTP compression, otherwise known as content encoding, is a publicly defined way to compress textual content transferred from web servers to browsers. HTTP compression uses public domain compression algorithms, like gzip and compress, to compress XHTML, JavaScript, CSS, and other text files at the server.

This standards-based method of delivering compressed content is built into HTTP 1.1, and most modern browsers that support HTTP 1.1 support ZLIB inflation of deflated documents. In other words, they can decompress compressed files automatically, which saves time and bandwidth.
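To get a feel for the savings, here is a minimal, self-contained sketch using the JDK's own GZIPOutputStream to compress a repetitive text payload, a stand-in for a typical CSS or JS response (the sample content is made up for illustration):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class GzipDemo {
    public static void main(String[] args) throws IOException {
        // Build a repetitive text payload, similar in character to CSS/JS/HTML
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 200; i++) {
            sb.append(".nav-item { color: #333; margin: 0 4px; }\n");
        }
        byte[] plain = sb.toString().getBytes(StandardCharsets.UTF_8);

        // Compress it with gzip, exactly what the server does on the wire
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write(plain);
        }
        byte[] compressed = buf.toByteArray();

        System.out.println("plain bytes: " + plain.length);
        System.out.println("gzip bytes: " + compressed.length);
        System.out.println("gzip smaller than plain: " + (compressed.length < plain.length));
    }
}
```

Textual content compresses extremely well because it is highly repetitive; binary formats like JPEG or ZIP gain little and are usually excluded from compression.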

But that’s simple. What are the problems?

In order to get your content compressed, you have to do it somewhere between the responding server and the client. Looking into this a little deeper, you find a couple of things to take care of.

It should:

1) …be fast
2) …be proven in production
3) …not slow down your appserver
4) …be portable and not bound to an appserver

Let’s go and have a more detailed look at what you could do in order to speed up your GlassFish a bit.


I ran this against a simple test page: the “Edit Network Listener” page in GlassFish’s Admin Console (http://localhost:4848/web/grizzly/networkListenerEdit.jsf?name=admin-listener&configName=server-config). The basic response times (uncompressed) for this page on my little machine, captured with Firebug:

Type   # Requests   Size (kB)   Time (ms)
css        11         120.0        125
js         12         460.7        130
html        3         324.3        727
all        52        1126.4       1380

GlassFish built-in compression

If you are running a GlassFish 3.x server, the most obvious thing is to look at what it has to offer. You can simply “Enable HTTP/1.1 GZIP compression to save server bandwidth” (“Edit Network Listener” => HTTP => middle). You add the compressible mime types you would like (defaults plus: text/css,text/javascript,application/javascript) and set a compression minimum size (in this case 1024 bytes). You do have to restart your instance for the changes to take effect.
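If you prefer scripting over the Admin Console, the same settings can also be applied with asadmin. The following is a sketch; the listener name http-listener-1 and the exact attribute names are assumptions, so check `asadmin get "server-config.network-config.*"` against your own instance first:

```
# Enable gzip compression on the listener's HTTP protocol (attribute names assumed)
asadmin set server-config.network-config.protocols.protocol.http-listener-1.http.compression=on
asadmin set server-config.network-config.protocols.protocol.http-listener-1.http.compression-min-size-bytes=1024
asadmin set "server-config.network-config.protocols.protocol.http-listener-1.http.compressable-mime-type=text/html,text/xml,text/plain,text/css,text/javascript,application/javascript"

# Restart for the changes to take effect
asadmin restart-domain
```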

Type   # Requests   Size (kB)   Time (ms)   % size   % time
css        11          24.9        185      -79.25    48.00
js         12         122.2         55      -73.48   -57.69
html        3          22.6       1470      -93.03   102.20
all        52         272.4       2350      -75.82    70.29
avg                                         -80.39    40.70

Looking at the results, you see that you save an average of 80% on bandwidth using compression, but you also see that serving compressed content generally takes longer. What I also realized is that you have to play around with the settings for your mime types. It is helpful to check, for single files, which mime type they actually have.
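One way to check the actual mime type of a single file is to look at the Content-Type response header, for example with curl (the URL here is just an example, substitute one of your own resources; Content-Encoding: gzip in the response tells you compression kicked in):

```
curl -s -I -H "Accept-Encoding: gzip" http://localhost:8080/theme/css/style.css
```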

Apache mod_deflate

If you are not willing to put additional load on your application server (which is quite common), you can dispatch this to someone who knows how to handle HTTP. That is true of Apache’s httpd. The module you are looking for is called mod_deflate, and you can simply load it along with your configuration. I assume you have something like mod_proxy in place to route all requests to GlassFish through your httpd. Comparing starts getting a bit tricky here: having mod_proxy in place changes your response times considerably, so it would not be valid to compare against a direct request to GlassFish. What I did instead is compare the average response times against an uncompressed response served via Apache, while the sizes are compared against GlassFish’s compression.
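A minimal httpd configuration for this kind of setup might look like the following sketch. The module paths and the proxied GlassFish port are assumptions and vary by distribution:

```
# Load the modules (paths vary by distribution)
LoadModule deflate_module modules/mod_deflate.so
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so

# Proxy everything to the GlassFish instance (port assumed)
ProxyPass        / http://localhost:8080/
ProxyPassReverse / http://localhost:8080/

# Compress the textual content types on the way out
AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css text/javascript application/javascript
```

Using AddOutputFilterByType keeps images and other binary content out of the filter, which would gain nothing from compression anyway.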

Type   # Requests   Size (kB)   Time (ms)   % size   % time
css        11          24.9        551      -79.25    -5.97
js         12         122.2         55      -73.48     0.76
html        3          22.6       1470      -93.62    -1.29
all        52         272.4       2350      -75.97    -5.65
avg                                         -80.58    -3.04

Not a big surprise, right? Both use gzip compression, which is a common and well-known algorithm, so I did not expect any change in compression effectiveness. But what you do see is that compression is considerably faster than running it on GlassFish. With an average overhead of roughly 3%, you can hardly feel any change. That’s a plus! Another plus is that you can change the compression level with mod_deflate. Raising it from zlib’s default to the highest level (9) gives you an extra bit of compression, but it is unlikely to amount to more than 1% overall, which could also be measuring inaccuracy.
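Tuning the level is a one-line directive in the httpd configuration:

```
# Trade more CPU for a slightly better ratio; 9 is the highest level
DeflateCompressionLevel 9
```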

Google mod_pagespeed

Yeah, that would have been a good additional test. But I only have a Windows box running, and the binaries are still only supported on some flavors of Linux, so I have to skip it today.

Compression Filter

There are a lot of compression servlet filters out there. Back in the day, even BEA shipped one with WebLogic. As of today, I would not use anything like this in production, for stability reasons. I strongly believe there is not a single reason to let your appserver do any compression at all. Compressing content on the fly uses CPU time, and on an application server that time is better spent on other workload, especially because you usually don’t have a bandwidth problem between your appserver and your DMZ httpd.

Reference: Response GZIP Compression with GlassFish in Production from our JCG partner Markus Eisele at Enterprise Software Development with Java.



Markus Eisele

Markus is a Developer Advocate at Red Hat, focusing on JBoss Middleware. He has been working with Java EE servers from different vendors for more than 14 years and talks about his favorite Java EE topics at conferences all over the world. He has been a principal consultant and has worked with different customers on all kinds of Java EE related applications and solutions. Besides that, he has always been a prolific blogger, writer, and tech editor for various Java EE related books. He is an active member of the German DOAG e.V. and its representative on the iJUG e.V. As a Java Champion and former ACE Director, he is well known in the community. Follow him on Twitter @myfear.