
OpenHFT Java Lang project

OpenHFT/Java Lang started as an Apache 2.0 licensed library to provide the low-level functionality used by Java Chronicle without the need to persist to a file. It allows serialization and deserialization of data and random access to memory in native space (off heap). It supports writing and reading enumerable types with object pooling, e.g. writing and reading a String without creating an object (if it has been pooled). It also supports writing and reading primitive types in binary and text without creating any garbage. Small messages can be serialized and deserialized in under a microsecond.
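
To make the no-garbage claim concrete, here is a minimal sketch of writing and reading primitives through an off-heap slice. It only uses calls that appear in the test further down (allocate, createSlice, writeInt/readInt, free); the import package names and the writeLong/readLong calls are my assumptions based on the library's conventions, not taken from this article.

import net.openhft.lang.io.DirectBytes;   // assumed package name
import net.openhft.lang.io.DirectStore;   // assumed package name

public class OffHeapPrimitives {
    public static void main(String[] args) {
        // allocate 4 KB of native memory; the reads and writes below create no objects
        DirectStore store = DirectStore.allocate(1L << 12);
        DirectBytes bytes = store.createSlice();

        bytes.writeInt(0L, 42);             // 4-byte int at offset 0
        bytes.writeLong(8L, 123456789L);    // assumed API: 8-byte long at offset 8

        int i = bytes.readInt(0L);          // read back without allocation
        long l = bytes.readLong(8L);        // assumed API
        System.out.println(i + " " + l);

        store.free();                       // native memory is freed explicitly, not by the GC
    }
}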

Recent additions

Java Lang supports a DirectStore, which is like a ByteBuffer but can be any size (up to the 40 to 48 bits of address space most systems support). It supports 64-bit sizes and offsets. It supports compacted types and object serialization. It also supports thread-safety features such as volatile reads, ordered (lazy) writes, CAS operations, and using an int (4 bytes) as a lock in native memory.
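
A minimal sketch of those thread-safety features follows. Only DirectStore.allocate/createSlice/free and the int-lock calls appear in the test below; readVolatileInt, writeOrderedInt and compareAndSwapInt are my assumptions about the Bytes API and may differ in name or signature.

DirectStore store = DirectStore.allocate(1L << 20);      // 1 MB of native memory
DirectBytes bytes = store.createSlice();

bytes.writeOrderedInt(0L, 1);                            // assumed API: ordered (lazy) write
int v = bytes.readVolatileInt(0L);                       // assumed API: volatile read
boolean swapped = bytes.compareAndSwapInt(0L, 1, 2);     // assumed API: CAS from 1 to 2

// any int in native memory can also act as a lock, as in the test below
if (bytes.tryLockNanosInt(4L, 1000 * 1000)) {            // try for up to 1 ms
    try {
        bytes.writeInt(8L, 42);                          // guarded update
    } finally {
        bytes.unlockInt(4L);
    }
}
store.free();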

Testing a native memory lock in Java

This test has one lock and a value which is toggled. One thread changes the value from 0 to 1 and the other changes it from 1 to 0. This repeats around 20 million times, though it has been run for longer.

final DirectStore store1 = DirectStore.allocate(1L << 12);
final int lockCount = 20 * 1000 * 1000;

new Thread(new Runnable() {
    @Override
    public void run() {
        manyToggles(store1, lockCount, 1, 0);
    }
}).start();

manyToggles(store1, lockCount, 0, 1);

store1.free();

The manyToggles method is more interesting. Note that it uses the 4 bytes at offset 0 as a lock. You can arrange any number of locks in native space this way. For example, you might have fixed-length records and want to be able to lock them before updating or accessing them; you can place a lock at the “head” of each record (a sketch of this layout follows the listing).

private void manyToggles(DirectStore store1, int lockCount, int from, int to) {
    long id = Thread.currentThread().getId();
    assertEquals(0, id >>> 24);
    System.out.println("Thread " + id);

    DirectBytes slice1 = store1.createSlice();

    for (int i = 0; i < lockCount; i++) {
        assertTrue(slice1.tryLockNanosInt(0L, 10 * 1000 * 1000));
        int toggle1 = slice1.readInt(4);
        if (toggle1 == from) {
            slice1.writeInt(4L, to);
        } else {
            i--;
        }
        slice1.unlockInt(0L);
    }
}
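
As a concrete illustration of the lock-at-the-head-of-a-record idea mentioned above, here is a minimal sketch that uses only the calls from the test; the 16-byte record layout and the incrementCounter helper are hypothetical, chosen for illustration.

// hypothetical layout: int lock at offset 0, int counter at offset 4, 8 bytes of payload
private static final int RECORD_SIZE = 16;

static void incrementCounter(DirectBytes bytes, int record) {
    long base = (long) record * RECORD_SIZE;
    // lock the int at the head of the record, waiting up to 10 ms
    if (!bytes.tryLockNanosInt(base, 10 * 1000 * 1000))
        throw new IllegalStateException("could not lock record " + record);
    try {
        int count = bytes.readInt(base + 4);
        bytes.writeInt(base + 4, count + 1);
    } finally {
        bytes.unlockInt(base);
    }
}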

The size of the DirectStore and the offsets within it are long values, allowing you to allocate a contiguous block of native memory many GB in size and access it as you require. On my 2.6 GHz i5 laptop I get the following output for this test:

Contended lock rate was 9,096,824 per second

This looks great, but under heavy contention one thread can be starved out. The approach is more useful when there are many locks and lower contention per lock. Note: if I drop the timeout from 10 ms to 1 ms, the test eventually fails, meaning it sometimes takes more than 1 ms to get a lock!
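
If a shorter timeout is needed, the boolean returned by tryLockNanosInt can be handled rather than asserted. A minimal sketch, reusing slice1 from the test above; the retry policy and fallback are my own, not part of the library:

boolean locked = false;
for (int attempt = 0; attempt < 3 && !locked; attempt++)
    locked = slice1.tryLockNanosInt(0L, 1000 * 1000);    // wait up to 1 ms per attempt
if (locked) {
    try {
        slice1.writeInt(4L, 1);
    } finally {
        slice1.unlockInt(0L);
    }
} else {
    // under heavy contention a thread can be starved; fall back or report instead of failing
    System.err.println("could not acquire the lock within 3 ms");
}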

Conclusion

The Java Lang library takes the step of making it easier to use native memory with the same functionality available on the heap. The language support is not as good, but if you need to store, say, 128 GB of data you will get much better GC behaviour using off-heap memory.

Reference: OpenHFT Java Lang project from our JCG partner Peter Lawrey at the Vanilla Java blog.


3 comments

  1. Hi Peter

    I had a look at the source and I love the idea of how to implement explicit read- / write-barriers.

    Chris

  2. This approach is used for objects on the heap. The main thing that is different here is that it is being used in native memory.

  3. There is an interesting article about off-heap memory at http://cheremin.blogspot.ru/2012/10/70.html, but it is written in Russian.

    The author read http://mechanical-sympathy.blogspot.ru/2012/10/compact-off-heap-structurestuples-in.html and explains why off-heap really gives about a 10-20% advantage.
