About Rafael Winterhalter

Rafael is a software engineer based in Oslo. He is a Java enthusiast with particular interests in byte code engineering, functional programming, multi-threaded applications and the Scala language.

Extending Guava caches to overflow to disk

Caching lets you significantly speed up applications with little effort. Two great cache implementations for the Java platform are the Guava caches and Ehcache. While Ehcache is much richer in features (such as its Searchable API, the possibility of persisting caches to disk or overflowing to BigMemory), it also comes with quite some overhead compared to Guava. In a recent project, I needed to overflow a comprehensive cache to disk, but at the same time I regularly needed to invalidate particular values of this cache. Because Ehcache’s Searchable API is only accessible for in-memory caches, this put me in quite a dilemma. However, it was quite easy to extend a Guava cache to overflow to disk in a structured manner, which gave me both the overflow behavior and the required invalidation feature. In this article, I want to show how this can be achieved.

I will implement this file-persisting cache, FilePersistingCache, as a wrapper around an actual Guava Cache instance. This is of course not the most elegant solution (more elegant would be to implement an actual Guava Cache with this behavior), but it will do for most cases.

To begin with, I will define a protected method that creates the backing cache I mentioned before:

private LoadingCache<K, V> makeCache() {
  return customCacheBuild(CacheBuilder.newBuilder())
    .removalListener(new PersistingRemovalListener())
    .build(new PersistedStateCacheLoader());
}

protected CacheBuilder<K, V> customCacheBuild(CacheBuilder<K, V> cacheBuilder) {
  return cacheBuilder;
}
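As a hypothetical sketch (assuming a subclass of the FilePersistingCache described in this article, with Guava on the class path), an override of customCacheBuild could add an eviction strategy:

```java
// Hypothetical subclass override: cap the in-memory cache at 1,000 entries
// and hold values in soft references. Size-based evictions arrive at the
// removal listener with RemovalCause.SIZE and are persisted to disk; values
// collected under memory pressure arrive with RemovalCause.COLLECTED and
// cannot be persisted, because the value is already gone.
@Override
protected CacheBuilder<K, V> customCacheBuild(CacheBuilder<K, V> cacheBuilder) {
  return cacheBuilder
      .maximumSize(1000)
      .softValues();
}
```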

The makeCache() method will be used internally to build the backing cache. The customCacheBuild method is meant to be overridden in order to apply custom requirements to the cache, such as an expiration strategy, a maximum number of entries, or soft references. This cache will be used just like any other Guava cache. The key to the cache’s functionality are the RemovalListener and the CacheLoader that are used for this cache. We will define these two implementations as inner classes of the FilePersistingCache:

private class PersistingRemovalListener implements RemovalListener<K, V> {
  @Override
  public void onRemoval(RemovalNotification<K, V> notification) {
    if (notification.getCause() != RemovalCause.COLLECTED) {
      try {
        persistValue(notification.getKey(), notification.getValue());
      } catch (IOException e) {
        LOGGER.error(String.format("Could not persist key-value: %s, %s",
          notification.getKey(), notification.getValue()), e);
      }
    }
  }
}

private class PersistedStateCacheLoader extends CacheLoader<K, V> {
  @Override
  public V load(K key) {
    V value = null;
    try {
      value = findValueOnDisk(key);
    } catch (Exception e) {
      LOGGER.error(String.format("Error on finding disk value to key: %s",
        key), e);
    }
    if (value != null) {
      return value;
    } else {
      return makeValue(key);
    }
  }
}

As is obvious from the code, these inner classes call methods of FilePersistingCache that we did not yet define. Keeping these methods on the outer class allows us to define custom serialization behavior by overriding it. The removal listener checks the reason for a cache entry being removed: if the RemovalCause is COLLECTED, the value was already reclaimed by the garbage collector (as happens, for example, with soft or weak values) and is no longer available, so there is nothing left to persist. For any other cause, the evicted entry is written to disk. The CacheLoader will first attempt to restore an existing value from disk and will create a new value only if no persisted value could be found.

The missing methods are defined as follows:

private V findValueOnDisk(K key) throws IOException {
  if (!isPersist(key)) return null;
  File persistenceFile = makePathToFile(persistenceDirectory, directoryFor(key));
  if (!persistenceFile.exists()) return null;
  FileInputStream fileInputStream = new FileInputStream(persistenceFile);
  try {
    // A channel obtained from a FileInputStream only supports shared locks.
    FileLock fileLock = fileInputStream.getChannel().lock(0L, Long.MAX_VALUE, true);
    try {
      return readPersisted(key, fileInputStream);
    } finally {
      fileLock.release();
    }
  } finally {
    fileInputStream.close();
  }
}

private void persistValue(K key, V value) throws IOException {
  if (!isPersist(key)) return;
  File persistenceFile = makePathToFile(persistenceDirectory, directoryFor(key));
  persistenceFile.getParentFile().mkdirs(); // make sure the target directory exists
  FileOutputStream fileOutputStream = new FileOutputStream(persistenceFile);
  try {
    FileLock fileLock = fileOutputStream.getChannel().lock();
    try {
      persist(key, value, fileOutputStream);
    } finally {
      fileLock.release();
    }
  } finally {
    fileOutputStream.close();
  }
}

private File makePathToFile(@Nonnull File rootDir, List<String> pathSegments) {
  File persistenceFile = rootDir;
  for (String pathSegment : pathSegments) {
    persistenceFile = new File(persistenceFile, pathSegment);
  }
  if (rootDir.equals(persistenceFile) || persistenceFile.isDirectory()) {
    throw new IllegalArgumentException();
  }
  return persistenceFile;
}

protected abstract List<String> directoryFor(K key);

protected abstract V makeValue(K key);

protected abstract void persist(K key, V value, OutputStream outputStream)
  throws IOException;

protected abstract V readPersisted(K key, InputStream inputStream)
  throws IOException;

protected abstract boolean isPersist(K key);

The implemented methods take care of serializing and deserializing values while synchronizing file access and guaranteeing that streams are closed appropriately. The remaining methods are abstract and are up to the cache’s user to implement. The directoryFor(K) method should identify a unique file name for each key; in the easiest case, the toString method of the key’s class K is implemented in such a way. Additionally, I made the persist, readPersisted and isPersist methods abstract in order to allow for a custom serialization strategy such as using Kryo. In the easiest scenario, you would use the built-in Java serialization, which uses ObjectInputStream and ObjectOutputStream. For isPersist, you would simply return true, assuming that you would only use this implementation if you need serialization; I added this hook to support mixed caches where only the values of some keys can be serialized. Be sure not to close the streams within the persist and readPersisted methods, since the file system locks rely on the streams being open. The above implementation will take care of closing the streams for you.
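For illustration, here is what the serialization-related methods could look like using the built-in Java serialization mentioned above. The class, the key type String and the value type ArrayList<String> are hypothetical choices for this sketch; note that the streams are flushed but never closed, as the article requires:

```java
import java.io.*;
import java.util.*;

class BuiltInSerialization {

  // persist(K, V, OutputStream): write the value with an ObjectOutputStream.
  // The stream is flushed but deliberately not closed, since the surrounding
  // cache code holds a file lock on it and closes it afterwards.
  static void persist(String key, ArrayList<String> value, OutputStream out) throws IOException {
    ObjectOutputStream objectOut = new ObjectOutputStream(out);
    objectOut.writeObject(value);
    objectOut.flush();
  }

  // readPersisted(K, InputStream): read the value back from the stream.
  @SuppressWarnings("unchecked")
  static ArrayList<String> readPersisted(String key, InputStream in) throws IOException {
    try {
      return (ArrayList<String>) new ObjectInputStream(in).readObject();
    } catch (ClassNotFoundException e) {
      throw new IOException(e);
    }
  }

  // directoryFor(K): derive a relative path from the key's string form.
  static List<String> directoryFor(String key) {
    return Arrays.asList("cache", key + ".ser");
  }

  public static void main(String[] args) throws IOException {
    ArrayList<String> value = new ArrayList<>(Arrays.asList("a", "b", "c"));
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    persist("myKey", value, buffer);
    ArrayList<String> restored =
        readPersisted("myKey", new ByteArrayInputStream(buffer.toByteArray()));
    System.out.println(restored.equals(value)); // round-trip preserves the value
    System.out.println(directoryFor("myKey"));  // [cache, myKey.ser]
  }
}
```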

Finally, I added some service methods to access the cache. Implementing Guava’s Cache interface would of course be a more elegant solution:

public V get(K key) {
  return underlyingCache.getUnchecked(key);
}

public void put(K key, V value) {
  underlyingCache.put(key, value);
}

public void remove(K key) {
  underlyingCache.invalidate(key);
}

protected Cache<K, V> getUnderlyingCache() {
  return underlyingCache;
}

Of course, this solution can be further improved. If you use the cache in a concurrent scenario, be aware that the RemovalListener is, unlike most Guava cache methods, executed asynchronously. As the code shows, I added file locks to avoid read/write conflicts on the file system. This asynchronicity does, however, imply a small chance that a value gets recreated even though there is still a value in memory. If you need to avoid this, be sure to call the underlying cache’s cleanUp method within the wrapper’s get method. Finally, remember to clean up the file system when you expire your cache. Ideally, you will store your cache entries in a temporary folder of your system in order to avoid this problem altogether. In the example code, the directory is represented by an instance field named persistenceDirectory, which could, for example, be initialized in the constructor.
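A minimal sketch of such a constructor-time initialization, using the JDK's standard temporary-folder facility (the class and field names are hypothetical):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

// Sketch: place the persistence directory in a fresh temporary folder so that
// stale cache files do not accumulate across application runs.
class TempCacheDirectory {
  public static void main(String[] args) throws IOException {
    File persistenceDirectory = Files.createTempDirectory("file-persisting-cache").toFile();
    persistenceDirectory.deleteOnExit(); // best-effort cleanup on JVM shutdown
    System.out.println(persistenceDirectory.isDirectory());
  }
}
```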

Update: I wrote a clean implementation of what I described above, which you can find on my GitHub page and on Maven Central. Feel free to use it if you need to store your cache objects on disk.

Reference: Extending Guava caches to overflow to disk from our JCG partner Rafael Winterhalter at the My daily Java blog.




  1. Nice one! Now I won’t need EhCache on my classpath.

  2. tl;dr Don’t do this; use something like MapDB configured to make a NavigableMap backed by a memory-mapped file with its own in-memory cache. MapDB is as easy to use as a Guava Cache, as its builder API looks very similar.

    Long Analysis:

    Reading the code on GitHub, you are always looking for the item on disk before looking in memory; that’s going to be quicker than a remote call to a database but nowhere near as quick as if the cache were in-memory. If your app has a hot dataset which is mostly in-memory with lots of cache hits, always going to disk is going to be terrible for performance and unnecessary. Anyone who had a cache that worked well in-memory and added such code to overflow to disk may see an extreme performance drop even when there are few entries in the cache, all of which are always found in memory, as you are always polling the disk first. It would be better to have a secondary index cache of (key -> boolean) where the boolean indicates whether the value is in memory or on disk.

    Streaming values to and from their own files is going to be very inefficient for small to medium objects, even with an SSD. For very large objects (hundreds of KB or a few MB) it will be efficient enough. To deal with storage of small values you could pack them into a memory-mapped file. That is off-heap and paged, so it is fast. If objects that are written around the same time are close together in the file, and objects that are written around the same time tend to be read at the same time, then this will be a dramatic performance boost, as the operating system will cache blocks of disk in unused memory.

    To pack objects into a memory-mapped file you need your own index of where objects are, plus space-management code to keep objects packed densely in the file. Doing that properly is quite a bit of code. The best solution is a B-tree where the small values are written into the tree sorted by key, with fast reads and fast writes. Fortunately, that has already been written many times, as it is the fundamental database data structure. All you need is a modern library that does that, can do memory-mapped files, and has an in-memory cache in front of it, with a builder API as simple to use as a Guava cache: like MapDB.

  3. Rafael Winterhalter

    Hi Simbo,
    Of course you are right that the implementation is not the most efficient. A blog post is of course not the place for developing a full-blown library; it was rather intended to demonstrate how Guava caches can be extended, e.g. in an application that already relies on Guava. From there, it is up to you what you make of it. But yes, there is a lot of room for improvement.
    Also, an item is only looked up on disk in case of a cache miss, not, as you say, on every lookup.
