Do you get Just-in-time compilation?

Remember the last time you were laughed at by C developers? The claim that Java is so slooooow that they would never even consider using a language like it?

In many ways, the stereotype still holds. But for Java's typical usage – in the backbone of a large enterprise – its performance can definitely stand up against many contenders. And this is possible mostly thanks to the magical JIT.

Before jumping into the Just-In-Time compilation tricks, let's dig into the background a bit.
 
 
 
As you might remember, Java is an interpreted language. The Java compiler most users know, javac, does not compile Java source files directly into processor instructions the way C compilers do. Instead, it produces bytecode – a machine-independent binary format governed by a specification. This bytecode is interpreted at runtime by the JVM. This is the main reason why Java is so successful at cross-platform portability: you can write and build a program on one platform and run it on several others.
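As a quick, hypothetical illustration (the class name Adder and the snippet below are mine, not from the original article), compiling the following source with javac produces an Adder.class file full of bytecode rather than native instructions:

    // Adder.java – compiled by javac into platform-independent bytecode
    class Adder {
       int add(int a, int b) {
          return a + b;
       }
    }

Running javap -c Adder afterwards disassembles that bytecode, showing instructions such as iload and iadd that the JVM interprets at runtime.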
On the other hand, it does introduce some drawbacks, one of the most severe being that interpreted code is usually slower than code compiled directly into platform-specific native binaries. Sun realized the severity of the problem already at the end of the nineties, when it hired Dr. Cliff Click to provide a solution.

Welcome – HotSpot. The name derives from the JVM's ability to identify "hot spots" in your application – chunks of bytecode that are frequently executed. These are then targeted for extensive optimization and compilation into processor-specific instructions. The optimizations lead to high-performance execution with a minimum of overhead for the less performance-critical code. In some cases, the adaptive optimization of the JVM can even exceed the performance of hand-coded C or C++.

The component in the JVM responsible for these optimizations is called the Just-In-Time compiler (JIT). It takes advantage of an interesting program property: virtually all programs spend the majority of their time executing a minority of their code. Rather than compiling all of your code just in time, the Java HotSpot VM immediately runs the program using an interpreter and analyzes the code as it runs to detect the critical hot spots in the program. It then focuses the attention of a global native-code optimizer on those hot spots. By avoiding compilation of infrequently executed code, the HotSpot compiler can devote more attention to the performance-critical parts of the program, so overall compilation time does not increase. This hot spot monitoring continues dynamically as the program runs, so that performance adapts on the fly to the usage patterns of your application.
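If you want to see the JIT kick in, the tiny program below (a hypothetical example, not part of the original article) hammers a single method in a loop. Run it with the standard -XX:+PrintCompilation flag and the JVM will log the methods it decides to compile; the hot method shows up in that output once the interpreter has gathered enough profile data.

    // JitDemo.java – hypothetical example; run with: java -XX:+PrintCompilation JitDemo
    public class JitDemo {

       // Called millions of times, so HotSpot treats it as a hot spot
       static long square(long x) {
          return x * x;
       }

       public static void main(String[] args) {
          long sum = 0;
          for (int i = 0; i < 10_000_000; i++) {
             sum += square(i);
          }
          // Print the result so the loop cannot be optimized away entirely
          System.out.println(sum);
       }
    }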

The JIT achieves these performance benefits through several techniques, such as eliminating dead code, bypassing boundary-condition checks, removing redundant loads, inlining methods, and so on. A sketch of the boundary-check case follows below; the rest are walked through in the sample after it.
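Bounds-check elimination ("bypassing boundary condition checks") is easy to sketch. The method below is a hypothetical example of mine, not from the original article: every array access in Java is normally guarded by an index check, but when the JIT can prove from the loop bounds that the index is always valid, it can drop those checks from the compiled code.

    // Hypothetical illustration of bounds-check elimination
    static int sum(int[] data) {
       int total = 0;
       // The loop condition guarantees 0 <= i < data.length,
       // so the JIT can prove the per-element bounds check is redundant and remove it
       for (int i = 0; i < data.length; i++) {
          total += data[i];
       }
       return total;
    }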

The following samples illustrate the techniques the JIT uses to achieve better performance. The first snippet is the code written by the developer; the second snippet is what is effectively executed after HotSpot has detected the code to be "hot" and applied its optimization magic:

  1. Unoptimized code.
  2. class Calculator {
       Wrapper wrapper;
       int y, z, sum;
       public void calculate() {
          y = wrapper.get();
          z = wrapper.get();
          sum = y + z;
       }
     }
      
     class Wrapper {
        final int value;
        Wrapper(int value) { this.value = value; }
        final int get() {
           return value;
        }
     }
  3. Optimized code.
  4. class Calculator {
       Wrapper wrapper;
       int y, z, sum;
       public void calculate() {
          y = wrapper.value;
          sum = y + y;
       }
     }
      
     class Wrapper {
        final int value;
        Wrapper(int value) { this.value = value; }
        final int get() {
           return value;
        }
     }

The first snippet in the sample above is the code the developer has written; the second is what it effectively becomes after the JIT has finished its work. Several optimization techniques are applied along the way. Let's look at how the final result is achieved:

  1. Unoptimized code. This is the code being run before it is detected as a hot spot:
  2. public void calculate() {
       y = wrapper.get();
       z = wrapper.get();
       sum = y + z;
    }
  3. Inlining a method. wrapper.get() has been replaced with wrapper.value, as latencies are reduced by accessing wrapper.value directly instead of going through a method call.
  4. public void calculate() {
       y = wrapper.value;
       z = wrapper.value;
       sum = y + z;
    }
  5. Removing redundant loads. z = wrapper.value has been replaced with z = y, so the value already loaded into y is reused instead of loading wrapper.value a second time.
  6. public void calculate() {
       y = wrapper.value;
       z = y;
       sum = y + z;
    }
  7. Copy propagation. Since z and y hold the same value, uses of z are replaced with y; the assignment z = y thereby becomes the pointless y = y.
  8. public void calculate() {
       y = wrapper.value;
       y = y;
       sum = y + y;
    }
  9. Eliminating dead code. y = y is unnecessary and can be eliminated.
  10. public void calculate() {
       y = wrapper.value;
       sum = y + y;
    }

The small sample walks through several powerful techniques the JIT uses to increase the performance of your code. Hopefully it has proved helpful in understanding this concept.
Enjoyed the post? We have a lot more under our belt. Subscribe to either our RSS feed or Twitter stream and enjoy.


Reference: Do you get Just-in-time compilation? from our JCG partner Nikita Salnikov Tarnovski at the Plumbr Blog.
