Using direct memory is no guarantee of improving performance. Given it adds complexity, it should be avoided unless you have a compelling reason to use it.
This excellent article by Sergio Oliveira Jr, Which one is faster: Java heap or native memory?, shows it's not simply a matter of using direct memory to improve performance.
Where direct memory and memory mapped files can help is when you have large amounts of data and/or you have to perform some IO with that data.
Time series data.
Time series data tends to have both a large number of entries and involve IO to load and store the data. This makes it a good candidate for memory mapped files and direct memory.
I have provided an example here; main and tests, where the same operations are performed on regular objects and on memory mapped files. Note: I am not suggesting that access to the objects is slow; it is the overhead of using objects which is the issue, e.g. loading, creating, the size of the object headers, garbage collection and saving the objects.
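As a minimal sketch of the memory-mapped approach (not the article's actual example code; the file name, row layout and values here are illustrative assumptions), each row can be written straight into a `MappedByteBuffer` obtained from `FileChannel.map`, with no object created per row:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedSeries {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("prices", ".dat");
        final int rows = 1_000;
        final int rowSize = 16; // 8-byte time + 4-byte bid + 4-byte ask
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            MappedByteBuffer buf =
                    ch.map(FileChannel.MapMode.READ_WRITE, 0, (long) rows * rowSize);
            // write rows straight into the mapped region - no object per row
            for (int i = 0; i < rows; i++) {
                buf.putLong(1_000_000L + i); // time
                buf.putInt(10_000 + i);      // bid (normalised int)
                buf.putInt(10_002 + i);      // ask (normalised int)
            }
            // read back the first row
            buf.position(0);
            System.out.println("time=" + buf.getLong()
                    + " bid=" + buf.getInt() + " ask=" + buf.getInt());
        }
        Files.deleteIfExists(file);
    }
}
```

Because the buffer is mapped, the operating system persists the data for you; there is no explicit serialisation or save step.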
The test loads time series data with a time and two columns, bid and ask prices (normalised as int values). This is used to calculate and save a simple mid price basis point movement. The test performs one GC to include the overhead of managing the objects involved.
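The calculation itself is simple arithmetic. One plausible formulation (an assumption; the article's test may define the movement slightly differently) is the mid as the average of bid and ask, and the movement as the change in mid scaled to basis points:

```java
public class MidBp {
    // mid price from normalised int bid/ask prices
    static int mid(int bid, int ask) {
        return (bid + ask) / 2;
    }

    // movement of the mid price in basis points (1 bp = 1/10,000)
    // relative to the previous mid; long arithmetic avoids int overflow
    static int midMoveBp(int prevMid, int mid) {
        return (int) (((long) (mid - prevMid) * 10_000) / prevMid);
    }

    public static void main(String[] args) {
        int mid1 = mid(10_000, 10_002); // 10001
        int mid2 = mid(10_010, 10_012); // 10011
        System.out.println(midMoveBp(mid1, mid2)); // prints 9
    }
}
```

Staying in normalised int values keeps each row a fixed, compact size, which is what makes the flat memory layout below possible.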
|Storage|1 million|10 million|30 million|100 million|250 million|
The full results
Not only is memory mapped data 10x faster for smaller data sets, it scales better for large data sizes because:
- Memory mapped data is available as soon as it is mapped into memory
- It creates only a small number of objects, which have almost no heap footprint, reducing GC times.
- It can be arranged in memory as you desire, reducing the overhead per row as it doesn't need an object per row.
- It doesn’t have to do anything extra to save the data.
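To illustrate the per-row layout point, here is a small sketch (layout and values are my own assumptions) using a direct `ByteBuffer` with a fixed 16-byte row: any field of any row can be read by offset, with no per-row object and no per-row header:

```java
import java.nio.ByteBuffer;

public class RowLayout {
    // fixed-width row: 8-byte time, 4-byte bid, 4-byte ask = 16 bytes,
    // no object header per row
    static final int ROW = 16;

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocateDirect(100 * ROW); // off-heap
        for (int i = 0; i < 100; i++) {
            buf.putLong(i * ROW, i);             // time
            buf.putInt(i * ROW + 8, 10_000 + i); // bid
            buf.putInt(i * ROW + 12, 10_002 + i);// ask
        }
        // random access to row 42's bid field, creating no objects
        int bid42 = buf.getInt(42 * ROW + 8);
        System.out.println(bid42); // prints 10042
    }
}
```

A plain Java object with the same three fields would carry an object header and a reference per row; a flat buffer carries only the 16 bytes of data.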
Using direct memory and memory mapped files is not as simple as using Java objects, but if you have big data requirements it can make a big difference.
Using direct memory and memory mapped files can also make a big difference for low latency requirements, something I have discussed in previous articles.