Enterprise Java

Things to consider before jumping to enterprise caching

Introduction

Relational database transactions are ACID, and the strong consistency model simplifies application development. Because enabling Hibernate caching is only one configuration away, it’s very appealing to turn to caching whenever the data access layer starts showing performance issues. Adding a caching layer can indeed improve application performance, but it comes at a price, and you need to be aware of it.
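As a reference point, enabling the Hibernate second-level cache for an entity typically only takes a couple of annotations and configuration properties. The following is a minimal sketch, assuming Hibernate 5.x (javax.persistence) with a second-level cache provider on the classpath; the Product entity is illustrative, not taken from the article.

    import javax.persistence.Cacheable;
    import javax.persistence.Entity;
    import javax.persistence.Id;

    import org.hibernate.annotations.Cache;
    import org.hibernate.annotations.CacheConcurrencyStrategy;

    // Illustrative entity: marking it cacheable is all the mapping side needs
    @Entity
    @Cacheable
    @Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
    public class Product {

        @Id
        private Long id;

        private String name;

        // getters and setters omitted for brevity
    }

On the configuration side, the second-level cache is switched on with properties such as hibernate.cache.use_second_level_cache=true and a hibernate.cache.region.factory_class pointing to the chosen cache provider.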

Database performance tuning

The database is the central part of any enterprise application, holding valuable business assets. A database server has limited resources, so it can only serve a finite number of connections. The shorter the database transactions, the more transactions it can accommodate. The first performance tuning step is therefore to reduce query execution times by indexing properly and optimizing queries.
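To illustrate this first step, a slow lookup can often be fixed with a plain index rather than a cache. Below is a minimal JDBC sketch; the order_lines table, the order_id column, and the PostgreSQL connection details are hypothetical examples, not taken from the article.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class IndexTuningSketch {

        public static void main(String[] args) throws Exception {
            try (Connection connection = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/app", "app", "secret")) {

                // Hypothetical index speeding up a frequent foreign-key lookup
                try (Statement statement = connection.createStatement()) {
                    statement.executeUpdate(
                        "CREATE INDEX IF NOT EXISTS idx_order_lines_order_id " +
                        "ON order_lines (order_id)");
                }

                // With the index in place, this query no longer needs a full table scan
                try (PreparedStatement ps = connection.prepareStatement(
                        "SELECT id, product_id, quantity FROM order_lines WHERE order_id = ?")) {
                    ps.setLong(1, 42L);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getLong("id"));
                        }
                    }
                }
            }
        }
    }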

When all queries and statements are optimized, we can either add more resources (scale up) or add more database nodes (scale out). Horizontal scaling requires database replication, which implies synchronizing nodes. Synchronous replication preserves strong consistency, while asynchronous master-slave replication leads to eventual consistency.

Analogous to database replication challenges, cache nodes induce data synchronization issues, especially for distributed enterprise applications.

Caching

Even if the database access patterns are properly optimized, higher loads might increase latency. To provide predictable and constant response times, we need to turn to caching. Caching allows us to reuse a database response across multiple user requests, as the cache-aside sketch after the following list illustrates.

The cache can therefore:

  • reduce CPU/Memory/IO resource consumption on the database side
  • reduce network traffic between application nodes and the database tier
  • provide constant result fetch time, insensitive to traffic bursts
  • provide a read-only view when the application is in maintenance mode (e.g. when upgrading the database schema)
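To make the reuse idea concrete, here is a minimal cache-aside sketch built on a plain ConcurrentHashMap; the Product type and the ProductRepository interface are assumed placeholders for whatever data access layer the application already has.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class CachingProductReader {

        // Assumed abstraction over the actual data access layer
        public interface ProductRepository {
            Product findById(Long id);
        }

        private final Map<Long, Product> cache = new ConcurrentHashMap<>();
        private final ProductRepository repository;

        public CachingProductReader(ProductRepository repository) {
            this.repository = repository;
        }

        // Cache-aside read: only the first request for a given id hits the database
        public Product getProduct(Long id) {
            return cache.computeIfAbsent(id, repository::findById);
        }

        // Must be called whenever the underlying row changes, or the entry goes stale
        public void evict(Long id) {
            cache.remove(id);
        }
    }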

The downside of introducing a caching solution is that data is duplicated in two separate technologies that may easily desynchronise.

In the simplest use case you have one database server and one cache node:

[Figure: a single database server with a single cache node]

The caching abstraction layer is aware of the database server, but the database knows nothing of the application-level cache. If some external process updates the database without touching the cache, the two data sources will get out of sync. Because few database servers support application-level notifications, the cache may break the strong consistency guarantees.

To avoid eventual consistency, both the database and the cache need to be enrolled in a distributed XA transaction, so the affected cache entries are either updated or invalidated synchronously.
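Short of a full XA setup, a common approximation is to tie cache eviction to the database transaction boundary, so the cached entry is dropped in the same unit of work that modifies the row. The sketch below assumes Spring's declarative transaction and cache abstractions, plus a hypothetical ProductRepository with a save method; the eviction and the database update only become truly atomic when both resources participate in the same JTA/XA transaction.

    import org.springframework.cache.annotation.CacheEvict;
    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Transactional;

    @Service
    public class ProductService {

        private final ProductRepository productRepository;

        public ProductService(ProductRepository productRepository) {
            this.productRepository = productRepository;
        }

        // The cached entry is evicted as part of the same logical unit of work
        // that updates the database row
        @Transactional
        @CacheEvict(cacheNames = "products", key = "#product.id")
        public void updateProduct(Product product) {
            productRepository.save(product);
        }
    }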

Most often, there are multiple application nodes or several distinct applications (web front-ends, batch processors, schedulers) comprising the whole enterprise system:

[Figure: multiple application nodes, each with its own isolated cache node]

If each node has its own isolated cache node, we need to be aware of possible data synchronisation issues. If one node updates the database and its own cache without notifying the rest, then other cache nodes get out of sync.
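Without a shared cache, one mitigation is to broadcast invalidation events after a write, so that peer nodes can drop their stale entries. The sketch below is purely hypothetical: InvalidationPublisher stands in for whatever messaging channel (JMS, Kafka, etc.) the nodes already share, and ProductRepository is the same assumed repository as in the earlier sketches.

    public class InvalidatingProductWriter {

        // Hypothetical messaging abstraction, backed in practice by JMS, Kafka,
        // or any other broadcast channel shared by the application nodes
        public interface InvalidationPublisher {
            void publishEviction(String cacheName, Object key);
        }

        private final ProductRepository repository;
        private final InvalidationPublisher publisher;

        public InvalidatingProductWriter(ProductRepository repository,
                                         InvalidationPublisher publisher) {
            this.repository = repository;
            this.publisher = publisher;
        }

        public void updateProduct(Product product) {
            repository.save(product);
            // Tell every other node to evict its local copy; publishing outside the
            // database transaction still leaves a short window of inconsistency
            publisher.publishEviction("products", product.getId());
        }
    }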

In a distributed environment, when multiple applications or application nodes use caching, we need a distributed caching solution in which either:

  • cache nodes communicate in a peer-to-peer topology, or
  • cache nodes communicate in a client-server topology, where a central cache server takes care of data synchronization (see the sketch after this list)
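As one concrete example of the client-server topology, the sketch below connects to a standalone cache server through the Hazelcast Java client (assuming Hazelcast 4.x or later); the server address and map name are illustrative, and any other distributed cache (Redis, Infinispan, etc.) would play the same role.

    import com.hazelcast.client.HazelcastClient;
    import com.hazelcast.client.config.ClientConfig;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.map.IMap;

    public class DistributedCacheClient {

        public static void main(String[] args) {
            // Point the client at the central cache server (hypothetical address)
            ClientConfig config = new ClientConfig();
            config.getNetworkConfig().addAddress("cache-server:5701");

            HazelcastInstance client = HazelcastClient.newHazelcastClient(config);

            // Every application node sees the same "products" map, so an entry
            // updated by one node is immediately visible to the others
            IMap<Long, String> products = client.getMap("products");
            products.put(1L, "Chocolate");
            System.out.println(products.get(1L));

            client.shutdown();
        }
    }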

[Figure: application nodes sharing a distributed cache]

Conclusion

Caching is a fine scaling technique, but you have to be aware of possible consistency issues. Taking your project’s data integrity requirements into consideration, you need to design your application to take advantage of caching without compromising critical data.

Caching is not a simple cross-cutting concern; it leaks into your application architecture and requires a well-thought-out plan for compensating data integrity anomalies.

Vlad Mihalcea

Vlad Mihalcea is a software architect passionate about software integration, high scalability and concurrency challenges.

Comments
Amit (9 years ago)

Nice article. Distributed caching is easy to set up but hard to make it work!

Vlad Mihalcea (9 years ago)

Thanks for appreciating it. Enterprise caching requires careful design, as otherwise we end up with all sorts of data inconsistencies.
