Applications receive enormous volumes of data every second, from mobile devices, the web, and connected gadgets. More and more applications have to process this data, and to preserve performance they need fast access to the data tier.
RAM prices have fallen sharply over the past few years, and hardware with a terabyte of RAM is now affordable. So we have the hardware; now what? We generally use virtualization to carve out smaller virtual machines that meet applications' scale-out requirements, because running a Java application with a terabyte of heap is impractical: JVM garbage collection would cripple the application immediately. Have you ever considered how long a single full garbage collection of a terabyte heap would take? It can pause an application for hours, making it unusable.
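The way around those pauses is to keep the bulk of the data outside the garbage-collected heap. BigMemory's storage engine is proprietary, but the underlying idea can be sketched with plain JDK direct buffers (the class and method names below are illustrative, not BigMemory's API): data held in direct memory is never scanned by the collector, so cache size stops driving GC pause times.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Illustrative sketch only: shows data living in direct (off-heap) memory,
// which the garbage collector does not scan. BigMemory manages such memory
// at far larger scale, with its own allocator and serialization.
public class OffHeapSketch {

    // Serialize a value into a freshly allocated direct buffer (off-heap).
    static ByteBuffer store(String value) {
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocateDirect(bytes.length); // outside the GC heap
        buf.put(bytes);
        buf.flip();
        return buf;
    }

    // Read the value back out of off-heap memory.
    static String load(ByteBuffer buf) {
        byte[] bytes = new byte[buf.remaining()];
        buf.duplicate().get(bytes); // duplicate() leaves the original position intact
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        ByteBuffer buf = store("hello off-heap");
        System.out.println(load(buf));      // prints "hello off-heap"
        System.out.println(buf.isDirect()); // true: backed by native memory
    }
}
```

Because the payload bytes sit in native memory, only the small `ByteBuffer` wrapper object is visible to the collector, no matter how large the stored data grows.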
BigMemory is the key to accessing terabytes of data with millisecond latency, with no disks, RAID configurations, or databases to maintain.
BigMemory = Big Data + In-memory
BigMemory can use your hardware down to the last byte of RAM, storing up to a terabyte of data in a single Java process.
BigMemory provides fast, predictable, and highly available data at 1 terabyte per node.
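BigMemory plugs into Ehcache as an off-heap store. A minimal configuration sketch is below; the cache name and sizes are illustrative, and attribute names vary between Ehcache/BigMemory versions, so check the documentation for yours:

```xml
<!-- ehcache.xml sketch: "bigCache" and the sizes are placeholders -->
<ehcache>
  <cache name="bigCache"
         maxEntriesLocalHeap="10000"
         overflowToOffHeap="true"
         maxBytesLocalOffHeap="200g"/>
</ehcache>
```

The JVM must also be allowed to allocate that much direct memory, e.g. by starting it with a `-XX:MaxDirectMemorySize` value somewhat larger than the configured off-heap store.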
The following test uses two boxes, each with a terabyte of RAM. Leaving enough room for the OS, we were able to allocate 2 x 960 GB of BigMemory, for a total of more than 1.8 TB of data, with no high latencies and no sprawling scale-out architecture: just the hardware as it is.
Test results: 23K read-only transactions per second at 20 ms latency.
Graphs of test throughput and periodic latency over time:
|Readonly Periodic Throughput Graph|
|Readonly Periodic Latency Graph|