
Starting out with jHiccup

After writing my post on ‘How to detect and diagnose slow code in production’ I was encouraged by a reader to try out jHiccup from Azul Systems.

Last year I went to a talk by jHiccup’s creator Gil Tene on the correct way to measure latency, where, amongst other things, he introduced us to jHiccup. It had been on my to-do list of products to investigate, and this gave me the impetus to finally get on with my investigation.

jHiccup measures the latency your system adds over and above your actual program code: GC pauses, for example, and other OS and hardware events that add latency spikes to the smooth running of your program. It is critical to understand these because your program can never have better latencies than the underlying environment on which it is running.

To cut a long story short, I love the tool and think it’s going to be really useful to me now that I’ve started using it. This post is not about teaching you everything about jHiccup; I refer you to the documentation for that. This post is a ‘starting out with jHiccup’ guide, to show you how I got it running and hopefully whet your appetite to try it out on your own code.

Step 1: Download product

Download the code; it’s open source and you can get it from here. Unzip the file and you will see jHiccup.jar, which we will use in the next step.

Step 2: Run your program with jHiccup

There are a number of ways to run jHiccup, but this is how I did it. All you need to do is add this VM option to your command line:

-javaagent:jHiccup.jar="-d 0 -i 1000 -l hiccuplog -c"

There are lots of configuration options; the ones chosen here mean:

  • -d The delay before recording latencies starts – this allows you to ignore any code warm-up period. (default: 30s)
  • -i The interval at which data is recorded. (default: every 5s)
  • -l The name of the log file to which data is recorded.
  • -c Launches a control JVM and records its data to logFile.c. Super useful for comparing against your actual program to see if there was a global event on the machine.
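Putting that together, a full command line looks something like the sketch below. Note that MyApp and myapp.jar are placeholders for your own program, and jHiccup.jar is assumed to sit in the current directory:

```shell
# Sketch only: MyApp and myapp.jar stand in for your own program.
# The quoted string passes the jHiccup options to the agent.
java -javaagent:jHiccup.jar="-d 0 -i 1000 -l hiccuplog -c" \
     -cp myapp.jar MyApp
```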

Step 3: Process the log file

Run this command on your log file (you can process both the program log file and the .c control log file).

jHiccupLogProcessor -i hiccuplog -o myhlog

This produces two files; we are interested in the one called myhlog (not myhlog.hgram), which we will use in the final step.
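If you ran with -c, the control JVM’s log can be processed the same way; the file names here are assumptions following the naming convention above:

```shell
# Process the control JVM's log (written as hiccuplog.c when -c is used)
jHiccupLogProcessor -i hiccuplog.c -o myhlog-control
```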

Step 4: Produce graph in Excel

Now for the really nice bit.  Open the spreadsheet jHiccupPlotter.xls and make sure you enable the macros.

You will see a sheet like this:

[Screenshot: the jHiccupPlotter input sheet]

Just select your file from step 3, choose a chart title (a really useful feature when you come to compare your graphs), and in a matter of seconds you will have your latency distribution graph.

Example

I had a program (not particularly latency sensitive) and wanted to understand the effect that different garbage collectors had on latency.

All I had to do was run my program with different garbage collector settings and compare the graphs. Of course this was a slightly manufactured example that I happened to have to hand, but you get the point: it’s easy to change JVM settings or code and get comparable results. The control program is also critical for understanding what else is happening on your machine that might be affecting your program’s latency.
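To give a feel for the kind of workload involved, here is a minimal sketch of an allocation-churning program you could run under jHiccup. The class name, block size, and retention policy are my own invention for illustration, not the program from my tests:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical workload: churns allocations so the collector has work
// to do, retaining some objects so a few survive into the old generation.
public class GcChurn {
    static final List<byte[]> retained = new ArrayList<>();

    // Allocate 'iterations' 1 KB arrays, keeping every tenth one,
    // and return the total number of bytes allocated.
    static long churn(int iterations) {
        long allocated = 0;
        for (int i = 0; i < iterations; i++) {
            byte[] block = new byte[1024];
            allocated += block.length;
            if (i % 10 == 0) {
                retained.add(block);
            }
            if (retained.size() > 10_000) {
                retained.clear(); // let the old generation be collected too
            }
        }
        return allocated;
    }

    public static void main(String[] args) {
        System.out.println("allocated bytes: " + churn(100_000));
    }
}
```

Run it with the -javaagent option from step 2 and jHiccup records the latency hiccups alongside it.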

These are my results. It’s interesting to see how the different GCs affect latency, and this is demonstrated beautifully by jHiccup.

Using the serial collector:

[Graph: latency distribution with the serial collector]

Using the G1 collector:

[Graph: latency distribution with the G1 collector]

Using the G1 collector – max pause set to 1ms:

[Graph: latency distribution with the G1 collector, max pause 1ms]

Using the CMS collector:

[Graph: latency distribution with the CMS collector]

Using the parallel GC:

[Graph: latency distribution with the parallel collector]
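For reference, the collectors above are selected with standard HotSpot flags. A sketch of the invocations (MyApp is a placeholder for your own program; combine each with the jHiccup javaagent option from step 2, and note that CMS was still available on JVMs of this era):

```shell
# GC selection flags only; add the jHiccup -javaagent option to each run.
java -XX:+UseSerialGC MyApp                      # serial collector
java -XX:+UseG1GC MyApp                          # G1
java -XX:+UseG1GC -XX:MaxGCPauseMillis=1 MyApp   # G1 with 1 ms pause target
java -XX:+UseConcMarkSweepGC MyApp               # CMS
java -XX:+UseParallelGC MyApp                    # parallel
```

In practice you would also give each run its own -l log file name so the results don’t overwrite each other.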

Reference: Starting out with jHiccup from our JCG partner Daniel Shaya at the Rational Java blog.

Daniel Shaya

Daniel has been programming in Java since it was in beta. Working predominantly in the finance industry, he has created real time trading and margin risk applications. He is currently a director at OpenHFT, where he is building next generation Java low latency products.