Performance is critical to the success of any microservice. Overall performance is the result of applying performance-friendly techniques at various points in the design, development, and delivery of microservices. In many cases, however, you can make vast performance improvements through basic techniques like implementing and optimizing caching at various points between the consumers of data (users and applications) and the servers that store it. Caches can return data much faster than the disk-based databases that originate the data because caches hold data in memory, providing lower-latency access. Caches are also usually located much closer to the consumers of data from a network topology perspective.
A cache can be inserted anywhere in the infrastructure where data delivery is congested. In this post, we’ll focus on look-aside caching, which serves as a highly performant alternative to accessing data from a microservice’s backing store. We will also clarify the meaning of various terms associated with caching patterns - such as read-aside, read-through, write-through, and write-behind caches - and when to choose each pattern.
Look-Aside Cache vs. Inline Cache
The two main caching patterns are the look-aside caching pattern and the inline caching pattern. The descriptions and differences between these patterns are shown in the table below.
|Pattern|How it reads|How it writes|
|---|---|---|
|Look-aside caching|The application checks the cache first. On a miss, the application reads the data from the backing store and puts it into the cache for subsequent reads.|The application writes to the backing store and is responsible for updating or invalidating the corresponding cache entry.|
|Inline caching|The application reads from the cache. On a miss, the cache itself reads the data from the backing store (read-through).|The application writes to the cache, and the cache pushes the write into the backing store, either synchronously (write-through) or asynchronously (write-behind).|
Look-Aside Caching 101
In the look-aside caching pattern, if the data is not cached, the application gets the data from the backing store and puts it into the cache for subsequent reads.
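The look-aside read path described above can be sketched in plain Java. This is a minimal illustration, not a real cache client: `ProductCatalog`, the in-memory maps standing in for a cache server and a database, and the `storeReads` counter are all hypothetical names introduced for this example.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Look-aside sketch: the application owns both the cache check and the
// backing-store fallback. The maps stand in for a cache server and a
// disk-based database.
public class ProductCatalog {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Map<String, String> backingStore;
    private int storeReads = 0; // counts trips to the backing store

    public ProductCatalog(Map<String, String> backingStore) {
        this.backingStore = backingStore;
    }

    public String getDescription(String productId) {
        String cached = cache.get(productId);
        if (cached != null) {
            return cached;                          // cache hit: no store access
        }
        storeReads++;
        String value = backingStore.get(productId); // cache miss: read the store
        if (value != null) {
            cache.put(productId, value);            // populate for subsequent reads
        }
        return value;
    }

    public int getStoreReads() { return storeReads; }
}
```

Note that the application code, not the cache, decides when to read the backing store and when to populate the cache - which is exactly the control (and the burden) discussed below.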
The upside of this pattern is that it doesn’t require the developer to deploy any code to the cache servers. Instead, the look-aside pattern puts the developer and the application code in charge of managing the cache. This control, however, comes with the burden of managing the cache yourself. Coding frameworks, like the Spring Framework, can mitigate this burden via a caching abstraction, which provides a uniform mechanism for developers to work with a cache, regardless of which specific caching technology is being used.
The abstraction provides a set of Java annotations, like the @Cacheable annotation on a method: on a cache hit the method is skipped and the cached result is returned, while on a cache miss the method executes and its result is cached. Developers can learn and use Spring’s cache abstraction rather than the specifics of each caching technology. Time-based expiration of data, built into most caching products, can further reduce the cache management burden.
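The miss-then-cache mechanics that an annotation like @Cacheable automates can be sketched in plain Java without any Spring dependency. This is an illustrative analogy only - `CachingWrapper`, the `loader` function, and the `invocations` counter are hypothetical names, not part of Spring's API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Plain-Java sketch of what a cacheable-method abstraction automates:
// on a miss the underlying function runs and its result is cached;
// on a hit the function is skipped entirely.
public class CachingWrapper<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> loader;
    private int invocations = 0; // how often the expensive function actually ran

    public CachingWrapper(Function<K, V> loader) {
        this.loader = loader;
    }

    public V get(K key) {
        return cache.computeIfAbsent(key, k -> {
            invocations++;
            return loader.apply(k); // cache miss: execute the real method
        });
    }

    public int getInvocations() { return invocations; }
}
```

With Spring, the wrapper disappears: annotating the method with @Cacheable lets the framework perform the same check-then-execute logic against whichever cache provider is configured.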
Look-aside caching is primarily used for data that does not change often. If the data in the backing store changes frequently, the volume of notifications required to invalidate cache entries can erode the benefits of caching.
More Control in the Application Layer
In contrast to inline caching, look-aside caching is declarative - the developer tells the application what to cache, not how to do it. With inline caching, on the other hand, the developer must deploy code to the cache server and imperatively handle cache misses there. The developer optionally deploys code that pushes writes to the cache into the backing store, either synchronously or asynchronously.
So, a key difference between inline and look-aside caching patterns is what the application code does versus what the cache does. In the look-aside caching pattern, there is more control in the application layer. In the inline caching pattern, code is deployed into the cache servers, and then the cache takes control of reading from and writing to the backing store.
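The inline pattern can be sketched in Java as well. In a real product this loader and writer code would be deployed onto the cache servers; here everything runs in one process purely for illustration, and `InlineCache`, `flush`, and the queue are hypothetical names. The read path is read-through; the write path is write-behind (queued, then propagated later).

```java
import java.util.ArrayDeque;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;

// Inline-cache sketch: the application only ever talks to the cache.
// The cache itself loads from, and writes to, the backing store.
public class InlineCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Map<String, String> backingStore;
    private final Queue<String> writeBehindQueue = new ArrayDeque<>();

    public InlineCache(Map<String, String> backingStore) {
        this.backingStore = backingStore;
    }

    // Read-through: on a miss the cache itself loads from the backing store.
    public String get(String key) {
        return cache.computeIfAbsent(key, backingStore::get);
    }

    // Write-behind: the write lands in the cache immediately and is queued
    // for later propagation to the backing store. A write-through cache
    // would instead write to the store synchronously, before returning.
    public void put(String key, String value) {
        cache.put(key, value);
        writeBehindQueue.add(key);
    }

    // Drains queued writes to the store. A real cache server would do this
    // on a background thread or on a flush interval, not on demand.
    public void flush() {
        String key;
        while ((key = writeBehindQueue.poll()) != null) {
            backingStore.put(key, cache.get(key));
        }
    }
}
```

The application code above never touches the backing store directly - that is the defining trait of the inline pattern, and the trade-off for the control the look-aside pattern keeps in the application layer.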
Rapid Self-Service Platform
Caching and cache invalidation are among the harder problems in computer science. The patterns we discussed in this article only scratch the surface of caching techniques. Still, understanding the terminology around caching patterns provides a good grounding for approaching deeper, more advanced topics.
As cloud-native platforms and microservices continue to rise in popularity, developers are turning to tools like Pivotal Cloud Foundry to provision caching infrastructure on-demand as a backing service to their application deployments. Providing developers with a platform to rapidly self-service their infrastructure needs is just one of the ways Pivotal is helping customers transform how they build software.
About the Author
Jagdish Mirani is an enterprise software executive with extensive experience in Product Management and Product Marketing. Currently he is the Principal Product Marketing Manager for Pivotal’s in-memory data grid product called GemFire. Prior to Pivotal, Jagdish spent 10 years at Oracle in their Data Warehousing and Business Intelligence groups. More recently, Jag was at AgilOne, a startup in the predictive marketing cloud space. Prior to AgilOne, Jag was at Business Objects (now part of SAP), Actuate (now part of OpenText), and NetSuite (now part of Oracle). Jag holds a B.S. in Electrical Engineering and Computer Science from Santa Clara University and an MBA from the U.C. Berkeley Haas School of Business.