7 Essential Ways to Optimize Caching in Spring Boot

As the performance demands of modern applications grow, caching has become one of the key tools for meeting them.

In this translation from the Spring AIO team, you will learn about 7 essential caching optimization techniques in Spring Boot that can significantly improve performance: from choosing ideal caching candidates to implementing an asynchronous cache and monitoring cache metrics.


Identifying ideal candidates for optimal performance

First of all, we need to choose candidates for caching. The first thing that comes to mind is to cache expensive and time-consuming operations, such as database queries, web service calls, or heavy calculations. These are all valid options, but it can be useful to define some general characteristics of an ideal caching candidate. This will help us identify such characteristics for our application.

  • Frequently accessed data: data that the application reads often is a good candidate for caching.

  • Expensive data loading or computation: data that requires significant time or computing resources to obtain or process.

  • Static or rarely changing data: data that changes infrequently, which gives us the guarantee that the cached data will remain valid for a longer period of time.

  • High read to write ratio: Data that is read more frequently than it is changed or updated can be cached efficiently. This ensures that the benefits of fast read access outweigh the cost of updating that data.

  • Predictable patterns: data that follows predictable read patterns, allowing for more efficient caching management.

These characteristics can help us effectively identify and cache data that will give our application the most significant performance improvements.

Now that we know how to find ideal candidates for caching, we can start enabling caching in our Spring Boot application and using its annotations (or doing it programmatically).
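As a quick illustration, once caching is enabled with @EnableCaching, a single annotation is enough to cache an expensive lookup. This is a minimal sketch; the ProductService, ProductRepository, and cache name here are hypothetical, not taken from the article:

```java
@Service
public class ProductService {

  private final ProductRepository productRepository;

  public ProductService(ProductRepository productRepository) {
    this.productRepository = productRepository;
  }

  // The first call for a given id executes the database query;
  // subsequent calls with the same id are served from the "products" cache.
  @Cacheable("products")
  public Product findById(long id) {
    return productRepository.findById(id).orElseThrow();
  }
}
```

Spring wraps the bean in a caching proxy, so the method body simply contains the expensive operation and no caching logic at all.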

Cache invalidation

Setting the right invalidation policies ensures that our cached data remains valid, up-to-date, and memory-efficient. This optimizes performance and stability in your Spring Boot applications.

I recommend the following approaches to managing cache invalidation in a Spring Boot application:

1. Eviction policies

The most common eviction policies are:

  • Least Recently Used – LRU: removes the least recently used items first.

  • Least Frequently Used – LFU: removes the least frequently used items first.

  • First In, First Out – FIFO: removes the items that were added to the cache earliest first.

Spring's cache abstraction does not implement these eviction policies itself, but you can configure them through your chosen cache provider. By carefully choosing and configuring eviction policies, you can ensure that your caching mechanism remains effective, efficient, and consistent with your application's performance and resource usage goals.
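For example, with Caffeine as the cache provider, size-based eviction (Caffeine uses Window TinyLFU, an LFU/LRU hybrid) is configured on the builder. A minimal sketch; the cache name and size are illustrative:

```java
// Evict entries once the cache grows beyond 1000 items;
// recordStats() is needed later if you want hit/miss metrics.
Caffeine<Object, Object> caffeine = Caffeine.newBuilder()
    .maximumSize(1_000)
    .recordStats();

CaffeineCacheManager cacheManager = new CaffeineCacheManager("employees");
cacheManager.setCaffeine(caffeine);
```

Other providers expose equivalent knobs (e.g. Redis via maxmemory-policy), so the eviction behavior lives entirely in provider configuration, not in Spring annotations.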

2. Time-based eviction

How you configure the time-to-live (TTL) interval, after which entries are removed from the cache, differs per cache provider. For example, when using Redis to back the cache of our Spring Boot application, we can set the TTL with the following config:

spring.cache.redis.time-to-live=10m

If your cache provider does not support a time-to-live parameter, you can implement it using the @CacheEvict annotation and a scheduler, as shown below:

@CacheEvict(value = "cache1", allEntries = true)
@Scheduled(fixedRateString = "${your.config.key.for.ttl.in.milli}")
public void emptyCache1() {
  // Intentionally empty: the annotations evict all entries on the schedule.
  // A descriptive log statement here can help with troubleshooting.
}

3. Custom eviction policies

By defining custom cache eviction policies, triggered by events or conditions and applied to a single cached entry or to all entries, we can avoid cache pollution and keep the cache consistent. Spring Boot provides several annotations and interfaces to support custom eviction:

  • @CacheEvict: removes one or all entries from the cache.

  • @CachePut: updates cached entries with new values.

  • CacheManager: we can implement a custom eviction policy using Spring's CacheManager and Cache interfaces, with methods such as evict(), put(), or clear(). We can also access the cache provider directly via getNativeCache() to use more of its native functionality.

The most important point about custom eviction policies is finding the right places and conditions for evicting data.
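As a sketch of the CacheManager approach, an event listener might evict a single entry when the underlying data changes. The cache name, event type, and key here are hypothetical:

```java
@Component
public class EmployeeCacheEvictor {

  private final CacheManager cacheManager;

  public EmployeeCacheEvictor(CacheManager cacheManager) {
    this.cacheManager = cacheManager;
  }

  // Evict a single entry when an employee is updated elsewhere in the system.
  @EventListener
  public void onEmployeeUpdated(EmployeeUpdatedEvent event) {
    Cache cache = cacheManager.getCache("employees");
    if (cache != null) {
      cache.evict(event.getEmployeeId()); // remove just this entry
      // cache.clear() would instead drop every entry in this cache
    }
  }
}
```

Evicting a single key like this is much cheaper than clearing the whole cache, which is why finding the precise condition and key matters.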

Conditional caching

Conditional caching, along with eviction policies, plays an important role in optimizing our caching strategies. In some cases, we don't want to store all the data related to an entity in the cache. Conditional caching ensures that only data that meets specific criteria is stored, preventing unnecessary entries and thereby optimizing resource usage.

The @Cacheable and @CachePut annotations provide the condition and unless attributes, which allow us to set conditions for caching data:

  • condition: a SpEL (Spring Expression Language) expression that must evaluate to true for the data to be cached or updated. It is evaluated before the method runs, so it can only reference method arguments, not the result.

  • unless: a SpEL expression that must evaluate to false for the data to be cached or updated. It is evaluated after the method runs, so it can reference the return value via #result.

To better understand what is being said, let's look at the following code:

@Cacheable(value = "employeeByName", condition = "#name.length() > 3",
           unless = "#result.size() >= 1000")
public List<Employee> employeesByName(String name) {
  // Method logic to retrieve data
  return someEmployeeList;
}

In this code, the result is cached only if the name argument is longer than 3 characters (condition, checked before the method runs) and the resulting list has fewer than 1000 entries (unless, checked after the method runs). Note that condition cannot reference #result here, because it is evaluated before the method executes; result-based checks belong in unless.

Finally, as in the previous section, we can implement conditional caching programmatically using the CacheManager and Cache interfaces. This approach gives more flexibility and control over caching behavior.
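A minimal sketch of that programmatic approach could decide at runtime whether a result is worth caching; the repository and field names are hypothetical:

```java
public List<Employee> employeesByName(String name) {
  Cache cache = cacheManager.getCache("employeeByName");
  if (cache != null) {
    List<Employee> cached = cache.get(name, List.class);
    if (cached != null) {
      return cached; // cache hit
    }
  }
  List<Employee> result = employeeRepository.findByName(name);
  // The caching condition is expressed in plain Java instead of SpEL,
  // so it can use any logic or collaborator available to the class.
  if (cache != null && result.size() > 10 && result.size() < 1000) {
    cache.put(name, result);
  }
  return result;
}
```

The trade-off is boilerplate: the annotation-based version keeps the method body free of caching concerns, while this version can express conditions SpEL cannot.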

Distributed cache and local cache

When we talk about caching, we usually think of distributed caching with Redis, Memcached, or Hazelcast. But in the era of microservices architecture, local caching also plays a big role in improving application performance.

Understanding the differences between a local cache and a distributed cache can help you choose the right strategy to optimize caching in your Spring Boot application. Each type has its own advantages and disadvantages that are important to consider based on your application needs.

What is local cache?

Local cache is a caching mechanism in which data is stored in RAM on the same machine or instance where the application is running. Some well-known libraries for local caching are Ehcache, Caffeine, and Guava Cache.

Local caching provides very fast access to cached data, because it avoids the network latency and overhead of retrieving data remotely (as with a distributed cache). A local cache is also usually easier to set up and manage than a distributed one, since it does not require additional infrastructure.

When should you use a local cache and when a distributed cache?

A local cache suits small applications or microservices whose dataset comfortably fits in the RAM of one machine. It is also applicable in scenarios where low latency is critical and data consistency across all instances is not a big concern.

On the other hand, a distributed cache suits large applications with high caching demands, where scalability, fault tolerance, and data consistency across multiple instances are critical.

Implementing Local Caching in Spring Boot

Spring Boot supports local caching through various in-memory cache providers such as Ehcache, Caffeine, or ConcurrentHashMap. All we need to do is add the required dependencies and enable caching in our Spring Boot application. For example, to implement local caching with Caffeine, we add these dependencies:

<dependency> 
  <groupId>org.springframework.boot</groupId> 
  <artifactId>spring-boot-starter-cache</artifactId> 
</dependency>

<dependency> 
  <groupId>com.github.ben-manes.caffeine</groupId> 
  <artifactId>caffeine</artifactId>
</dependency> 

Then you need to enable caching using the annotation @EnableCaching:

@SpringBootApplication 
@EnableCaching 
public class Application { 
  public static void main(String[] args) { 
    SpringApplication.run(Application.class, args); 
  } 
} 

In addition to the regular Spring Cache configs, we can also configure the Caffeine cache:

spring:
  cache:
    caffeine:
      spec: maximumSize=500,expireAfterAccess=10m

Customized key generation strategies

The default key generation algorithm in Spring Cache works like this:

  • If no parameters are given, return SimpleKey.EMPTY.

  • If only one parameter is given, return that instance.

  • If more than one parameter is given, return a SimpleKey containing all parameters.

This approach works well for objects with natural keys, as long as their hashCode() and equals() implementations reflect that.

But in some scenarios, the default key generation strategy does not work well:

  • We need meaningful keys.

  • Methods contain multiple parameters of the same type.

  • Methods contain optional or nullable parameters.

  • We need to include context-specific data, such as locale, tenant ID, or user role, in the key to make it unique.
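The point about multiple methods sharing parameter types is easy to hit in practice: two cached methods that take a single parameter of the same type will silently share keys. The methods and repository below are hypothetical:

```java
// Both methods use the "lookups" cache, and the default strategy
// uses the single String argument itself as the key.
@Cacheable("lookups")
public Customer byName(String name) {
  return customerRepository.findByName(name);
}

@Cacheable("lookups")
public Customer byEmail(String email) {
  return customerRepository.findByEmail(email);
}

// byName("smith") and byEmail("smith") both map to the key "smith",
// so one method can return a stale value cached by the other.
```

Using separate cache names or a custom key (e.g. prefixed with the method name) avoids the collision.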

Spring Cache offers two approaches for specifying a custom key generation strategy:

  • Set a SpEL (Spring Expression Language) expression in the key attribute, which is evaluated to produce the key:

@CachePut(value = "phonebook", key = "#phoneNumber.name")
PhoneNumber create(PhoneNumber phoneNumber) {
  return phonebookRepository.insert(phoneNumber);
}

  • Implement the KeyGenerator interface and reference the bean in the keyGenerator attribute:

@Component("customKeyGenerator")
public class CustomKeyGenerator implements KeyGenerator {
  @Override
  public Object generate(Object target, Method method, Object... params) {
    // Build a readable, unique key from the method name and its parameters
    return method.getName() + "_" + StringUtils.arrayToDelimitedString(params, "_");
  }
}

@CachePut(value = "phonebook", keyGenerator = "customKeyGenerator")
PhoneNumber create(PhoneNumber phoneNumber) {
  return phonebookRepository.insert(phoneNumber);
}

Using custom key generation strategies can significantly increase the efficiency of both the cache itself and the whole application. A well-designed key generation strategy ensures that all cached entries are identified correctly and uniquely, cache misses are minimized, and cache hits are maximized.

Asynchronous cache

As you may have noticed, the Spring cache abstraction API is blocking and synchronous, so if you use the WebFlux stack with Spring Cache, annotations such as @Cacheable or @CachePut will cache the reactive wrapper objects themselves (Mono or Flux) rather than the values they emit. In this case, you have three approaches:

  • Call the cache() method on the reactive type and annotate the method with Spring Cache annotations.

  • Use the asynchronous API provided by the cache provider (if supported) and manage the cache programmatically.

  • Implement an asynchronous wrapper around the caching API yourself (if your cache provider doesn't support one).
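A sketch of the first approach, with a hypothetical reactive repository: Spring Cache still stores the Mono itself, but cache() makes the stored Mono replay its resolved value, so the underlying query runs only once:

```java
@Cacheable("users")
public Mono<User> findUser(String id) {
  // cache() replays the resolved value to later subscribers,
  // so the cached Mono does not re-execute the query on each lookup.
  return userRepository.findById(id).cache();
}
```

Note that this caches the wrapper, so errors and empty results need extra care; the Spring Framework 6.2 support described next is the cleaner solution when your provider offers it.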

However, since the Spring Framework 6.2 release, if the cache provider supports asynchronous caching (e.g. Caffeine), Spring's declarative caching infrastructure detects reactive method signatures, such as methods returning a Reactor Mono or Flux, and treats them specially: it asynchronously caches the values they produce rather than attempting to cache the Reactive Streams Publisher instances themselves. This requires configuration in the target cache provider, e.g. CaffeineCacheManager must be set to setAsyncCacheMode(true). The config is very simple:

@Configuration
@EnableCaching
public class CacheConfig {
  @Bean
  public CacheManager cacheManager() {
    final CaffeineCacheManager cacheManager = new CaffeineCacheManager();
    cacheManager.setCaffeine(Caffeine.newBuilder()
        .maximumSize(500)
        .expireAfterAccess(Duration.ofMinutes(10)));
    cacheManager.setAsyncCacheMode(true); // enable async caching for reactive methods
    return cacheManager;
  }
}

Monitoring cache to find bottlenecks

Monitoring cache metrics is critical to detecting bottlenecks and optimizing caching strategies in your application.

The most important metrics to monitor are:

  • Cache Hit Rate: the ratio of cache hits to total cache requests. A high hit rate indicates effective caching, while a low one indicates that the cache is not being used efficiently.

  • Cache Miss Rate: the ratio of cache misses to total cache requests. A high miss rate means the cache is often unable to provide the requested data, possibly due to a small cache size or poor key management.

  • Cache Eviction Rate: The frequency of evicting blocks of information from the cache. If this value is high, it means that the cache size is too small or the eviction policy does not fit the existing access pattern very well.

  • Memory Usage: The amount of memory used by the cache.

  • Latency: The time it takes to retrieve data from the cache.

  • Error Rate and Load: metrics such as failed cache operations and the load on the cache servers, for example requests per second.

How to Monitor Cache Metrics in Spring Boot

Spring Boot Actuator automatically configures Micrometer metrics for all available cache instances at startup. Caches created on the fly or programmatically after the startup phase need to be registered manually. You can see the list of supported providers in the documentation.
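For a cache created programmatically after startup, manual registration might look like this sketch, assuming Caffeine and the CaffeineCacheMetrics binder from micrometer-core; the class and cache names are hypothetical:

```java
@Component
public class DynamicCacheRegistrar {

  private final MeterRegistry meterRegistry;

  public DynamicCacheRegistrar(MeterRegistry meterRegistry) {
    this.meterRegistry = meterRegistry;
  }

  public Cache<String, Object> createMonitoredCache(String cacheName) {
    Cache<String, Object> cache = Caffeine.newBuilder()
        .maximumSize(500)
        .recordStats() // required, otherwise hit/miss counters stay at zero
        .build();
    // Bind the cache's statistics to Micrometer under the cache.* meter names
    return CaffeineCacheMetrics.monitor(meterRegistry, cache, cacheName);
  }
}
```

After registration, the cache shows up under the same /actuator/metrics/cache.* endpoints as the auto-configured ones.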

First of all, we need to add the actuator and micrometer dependencies:

<dependency> 
  <groupId>org.springframework.boot</groupId> 
  <artifactId>spring-boot-starter-actuator</artifactId> 
</dependency> 

<dependency> 
  <groupId>io.micrometer</groupId> 
  <artifactId>micrometer-registry-prometheus</artifactId>
</dependency> 

Then expose the actuator endpoints (exposing everything with * is convenient for a demo, but should be restricted in production):

management.endpoints.web.exposure.include=*

Now we can see the list of configured caches using the /actuator/caches endpoint, and for cache metrics we can use the following:

/actuator/metrics/cache.gets 
/actuator/metrics/cache.puts 
/actuator/metrics/cache.evictions 
/actuator/metrics/cache.removals 

Conclusion: Optimize Caching in Spring Boot

In this article, we have explored 7 techniques to optimize caching in Spring Boot applications. Optimizing caching is extremely important as it directly improves performance and scalability by reducing the load on back-end systems and speeding up data retrieval. Effective caching strategies minimize latency, provide faster response times, and generally improve user experience.

