JLBH Examples 2 – Accounting for Coordinated Omission

In this post:

  • Running JLBH with and without accounting for coordinated omission
  • An example showing, in numbers, the effects of coordinated omission
  • A discussion about flow control

This is the analogy I used to describe what it would be like if you measured without accounting for coordinated omission:

Let’s imagine you are waiting for a train and get delayed in the station for an hour because the train in front of you was late. Let’s then imagine you get on the train an hour late and the train takes its usual half an hour to reach its destination. If you don’t account for coordinated omission you will not consider yourself to have suffered any delay, as your journey took exactly the correct amount of time, even though you waited for an hour at the station before departing!

But that’s exactly what you do when you run a micro benchmark. You time each ‘journey’ and not the waiting time.

And the truth is, that’s absolutely fine for a micro benchmark. But it is not fine when you want to measure the latency of an application.
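To make the distinction concrete, here is a minimal sketch (not JLBH itself; the class and variable names are just for illustration) of the two ways of timing an operation that is due every 10us:

public class TimingSketch {

    public static void main(String[] args) {
        final long intervalNS = 10_000;      // intended rate: one operation every 10us
        long intendedStartNS = System.nanoTime();
        long worstServiceNS = 0, worstResponseNS = 0;

        for (int i = 0; i < 100_000; i++) {
            // busy-wait until the operation is due
            while (System.nanoTime() < intendedStartNS) { }

            long actualStartNS = System.nanoTime();
            doWork();
            long endNS = System.nanoTime();

            // the 'journey' only - what a naive micro benchmark records
            long serviceTimeNS = endNS - actualStartNS;
            // journey plus any wait at the 'station' - what JLBH records by default
            long responseTimeNS = endNS - intendedStartNS;

            worstServiceNS = Math.max(worstServiceNS, serviceTimeNS);
            worstResponseNS = Math.max(worstResponseNS, responseTimeNS);

            // the next operation was due 10us later,
            // whether or not we are running behind
            intendedStartNS += intervalNS;
        }

        System.out.printf("worst service time %,dns, worst response time %,dns%n",
                worstServiceNS, worstResponseNS);
    }

    private static void doWork() {
        // stand-in for the operation under test
    }
}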

By default JLBH measures end to end times accounting for coordinated omission, although it also has a setting to measure without accounting for it.

I wrote this simple benchmark to show just how dramatic an effect accounting for coordinated omission can have.

In this example, once every 10,000 iterations, we add in a millisecond of latency:

package org.latency.spike;

import net.openhft.chronicle.core.Jvm;
import net.openhft.chronicle.core.jlbh.JLBH;
import net.openhft.chronicle.core.jlbh.JLBHOptions;
import net.openhft.chronicle.core.jlbh.JLBHTask;

/**
 * A simple JLBH example to show the effects of accounting for co-ordinated omission.
 * Toggle the accountForCoordinatedOmission to see results.
 */
public class SimpleSpikeJLBHTask implements JLBHTask {
    private int count = 0;
    private JLBH lth;

    public static void main(String[] args){
        JLBHOptions lth = new JLBHOptions()
                .warmUpIterations(40_000)
                .iterations(1_100_000)
                .throughput(100_000)
                .runs(3)
                .recordOSJitter(true)
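                // note the double-m spelling: this is the method name as it appears in the JLBH API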
                .accountForCoordinatedOmmission(true)
                .jlbhTask(new SimpleSpikeJLBHTask());
        new JLBH(lth).start();
    }

    @Override
    public void run(long startTimeNS) {
        if((count++)%10_000==0){
            //pause a while
            Jvm.busyWaitMicros(1000);
        }
        lth.sample(System.nanoTime() - startTimeNS);
    }

    @Override
    public void init(JLBH lth) {
        this.lth = lth;
    }
}

If you set accountForCoordinatedOmmission(false) you get this profile. As expected, the millisecond delay only shows up at the highest percentiles, from the 99.99th percentile upwards. Put another way: it affects only one in every 10,000 iterations, and one in 10,000 is exactly the 99.99th percentile, so this is not really surprising.

Warm up complete (40000 iterations took 0.046s)
-------------------------------- BENCHMARK RESULTS (RUN 1) -----------
Run time: 11.593s
Correcting for co-ordinated:false
Target throughput:100000/s = 1 message every 10us
End to End: (1,100,000)                         50/90 99/99.9 99.99/99.999 - worst was 0.11 / 0.13  0.20 / 0.33  999 / 999 - 1,930
OS Jitter (14,986)                              50/90 99/99.9 99.99 - worst was 8.4 / 15  68 / 1,080  3,210 - 4,330
----------------------------------------------------------------------
-------------------------------- BENCHMARK RESULTS (RUN 2) -----------
Run time: 11.49s
Correcting for co-ordinated:false
Target throughput:100000/s = 1 message every 10us
End to End: (1,100,000)                         50/90 99/99.9 99.99/99.999 - worst was 0.11 / 0.13  0.16 / 0.28  999 / 999 - 999
OS Jitter (13,181)                              50/90 99/99.9 99.99 - worst was 8.4 / 12  36 / 62  270 - 573
----------------------------------------------------------------------
-------------------------------- BENCHMARK RESULTS (RUN 3) -----------
Run time: 11.494s
Correcting for co-ordinated:false
Target throughput:100000/s = 1 message every 10us
End to End: (1,100,000)                         50/90 99/99.9 99.99/99.999 - worst was 0.11 / 0.13  0.16 / 0.26  999 / 999 - 1,030
OS Jitter (13,899)                              50/90 99/99.9 99.99 - worst was 8.4 / 13  42 / 76  160 - 541
----------------------------------------------------------------------
-------------------------------- SUMMARY (end to end)-----------------
Percentile   run1         run2         run3      % Variation
50:             0.11         0.11         0.11         0.00  
90:             0.13         0.13         0.13         0.00  
99:             0.20         0.16         0.16         3.31    
99.9:           0.33         0.28         0.26         3.88   
99.99:        999.42       999.42       999.42         0.00  
99.999:       999.42       999.42       999.42         0.00    
worst:       1933.31       999.42      1032.19         2.14   

----------------------------------------------------------------------

But if you set accountForCoordinatedOmmission(true) you see the true effect of this delay.

Warm up complete (40000 iterations took 0.044s)
-------------------------------- BENCHMARK RESULTS (RUN 1) -----------
Run time: 11.0s
Correcting for co-ordinated:true
Target throughput:100000/s = 1 message every 10us
End to End: (1,100,000)                         50/90 99/99.9 99.99/99.999 - worst was 0.11 / 0.17  385 / 1,930  4,590 / 5,370 - 5,370
OS Jitter (13,605)                              50/90 99/99.9 99.99 - worst was 8.4 / 15  68 / 1,080  5,110 - 5,900
----------------------------------------------------------------------
-------------------------------- BENCHMARK RESULTS (RUN 2) -----------
Run time: 11.0s
Correcting for co-ordinated:true
Target throughput:100000/s = 1 message every 10us
End to End: (1,100,000)                         50/90 99/99.9 99.99/99.999 - worst was 0.12 / 0.18  42 / 901  999 / 999 - 1,030
OS Jitter (13,156)                              50/90 99/99.9 99.99 - worst was 8.4 / 13  38 / 68  209 - 467
----------------------------------------------------------------------
-------------------------------- BENCHMARK RESULTS (RUN 3) -----------
Run time: 11.0s
Correcting for co-ordinated:true
Target throughput:100000/s = 1 message every 10us
End to End: (1,100,000)                         50/90 99/99.9 99.99/99.999 - worst was 0.12 / 0.18  46 / 901  999 / 999 - 999
OS Jitter (13,890)                              50/90 99/99.9 99.99 - worst was 8.4 / 14  44 / 80  250 - 1,870
----------------------------------------------------------------------
-------------------------------- SUMMARY (end to end)-----------------
Percentile   run1         run2         run3      % Variation   
50:             0.11         0.12         0.12         0.00       
90:             0.17         0.18         0.18         0.00       
99:           385.02        41.98        46.08         6.11       
99.9:        1933.31       901.12       901.12         0.00       
99.99:       4587.52       999.42       999.42         0.00       
99.999:      5373.95       999.42       999.42         0.00       
worst:       5373.95      1032.19       999.42         2.14       

----------------------------------------------------------------------

In fact one in a hundred (not one in 10,000) iterations is affected to some degree. You can also see the progressive effect of the latency as you head up the percentiles.
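The arithmetic behind that figure, taken from the benchmark's own settings (the class name below is just for illustration):

public class SpikeArithmetic {
    public static void main(String[] args) {
        long intervalNS = 10_000;            // 100,000/s target = one message every 10us
        long stallNS = 1_000_000;            // the injected 1ms pause
        long delayed = stallNS / intervalNS; // ~100 intended operations fall inside each stall
        // 100 delayed operations per 10,000 iterations = 1 in 100,
        // which is why the delay is visible from the 99th percentile upwards
        System.out.println(delayed + " operations delayed per stall");
    }
}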

This clearly demonstrates, in numbers, why accounting for coordinated omission must be a significant part of your benchmarking, especially if you can't exert flow control in your program. Flow control is the ability to stop consuming if you are not keeping up, e.g. bumping users off your site if you are too busy. FIX engines cannot exert flow control: you can't tell the market to slow down because you can't keep up! Programs that exert flow control are consumer centric whilst programs that don't exert flow control are producer centric.
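As a minimal sketch of that distinction (illustrative only, not from the JLBH examples): a consumer-centric program can push back through something like a bounded queue, while a producer-centric one has to absorb whatever arrives:

import java.util.Queue;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class FlowControlSketch {
    public static void main(String[] args) throws InterruptedException {
        // Consumer centric: a bounded queue exerts flow control.
        // When the consumer falls behind, put() blocks and the producer is slowed down.
        BlockingQueue<Long> bounded = new ArrayBlockingQueue<>(1024);
        bounded.put(System.nanoTime());

        // Producer centric: the producer (e.g. the market) never waits for you.
        // Events arrive at their own rate; if you stall, they back up in the queue
        // and every queued event's end-to-end latency grows - exactly the latency
        // that accounting for coordinated omission makes visible.
        Queue<Long> unbounded = new ConcurrentLinkedQueue<>();
        unbounded.offer(System.nanoTime());
    }
}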

Accounting for coordinated omission goes hand in hand with being able to measure latency at a defined throughput, which is something we will look at in the next example.

Daniel Shaya

Daniel has been programming in Java since it was in beta. Working predominantly in the finance industry, he has created real-time trading and margin risk applications. He is currently a director at OpenHFT, where he is building next generation Java low latency products.