In high-traffic applications and microservices architectures, performance is crucial. When every request queries the database directly, load increases and response times suffer.
Redis distributed caching solves this by adding a centralized, in-memory cache layer between services and the database. Applications first check Redis for data, reducing database queries and improving speed.
This approach ensures faster responses, lower database load, and better scalability across multiple services.
Redis is an open-source, in-memory data store designed for high performance and low latency. It is commonly used as a cache, a session store, and a message broker.
Unlike traditional databases that store data on disk, Redis keeps data in RAM. This allows it to deliver extremely fast read and write operations, often in sub-millisecond time.
Because of its speed and simplicity, Redis is widely used in modern applications to improve performance, handle high traffic, and reduce database load.
Distributed caching is a technique where cached data is stored in a shared external system (like Redis) instead of inside individual application instances.
Without distributed caching:
Service A → Database
Service B → Database
Service C → Database
Each service directly queries the database, increasing load and slowing performance.
With distributed caching:
Service A → Redis → Database
Service B → Redis → Database
Service C → Redis → Database
Here, all services first check Redis before querying the database. Since they share the same cache layer, data remains consistent across multiple instances and unnecessary database calls are reduced.
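To make this concrete, here is a minimal hand-written sketch of that check-Redis-first logic using Spring's StringRedisTemplate. The ProductRepository, Product class, and ObjectMapper JSON mapping are assumed placeholders; the annotation-based approach shown later removes most of this boilerplate.
@Service
public class CachedProductLookup {

    @Autowired
    private StringRedisTemplate redisTemplate;
    @Autowired
    private ProductRepository productRepository;
    @Autowired
    private ObjectMapper objectMapper;

    public Product findProduct(Long id) throws JsonProcessingException {
        String key = "products::" + id;

        // 1. Check Redis first.
        String cached = redisTemplate.opsForValue().get(key);
        if (cached != null) {
            return objectMapper.readValue(cached, Product.class); // cache hit
        }

        // 2. Cache miss: fall back to the database.
        Product product = productRepository.findById(id).orElse(null);

        // 3. Store the result so the next caller gets a cache hit.
        if (product != null) {
            redisTemplate.opsForValue()
                    .set(key, objectMapper.writeValueAsString(product), Duration.ofMinutes(10));
        }
        return product;
    }
}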
Redis is widely adopted in production systems because it offers:
- Sub-millisecond read and write latency
- Rich data structures (strings, hashes, lists, sets, sorted sets)
- Replication and failover for high availability
- Clustering for horizontal scalability
A typical request flow in Redis distributed caching works like this:
1. The application receives a request for data.
2. It checks Redis for a cached entry first.
3. On a cache hit, the value is returned immediately.
4. On a cache miss, the application queries the database, stores the result in Redis, and then returns it.
This approach significantly reduces database pressure and speeds up response times.
When implementing Redis distributed caching, different patterns are used depending on performance and consistency requirements.
Cache-Aside (Lazy Loading): the application reads from the cache first and loads data from the database only on a miss. This pattern is simple, flexible, and widely used.
Example:
Scenario: Product details API
GET /products/101
Spring Boot Example:
@Cacheable(value = "products", key = "#id")
public Product getProductById(Long id) {
    return productRepository.findById(id).orElse(null);
}
Write-Through: every update writes to the cache and the database together. It ensures better data consistency but may slightly increase write latency.
Practical Example:
Scenario: Updating product price
PUT /products/101
Spring Boot Example:
@CachePut(value = "products", key = "#product.id")
public Product updateProduct(Product product) {
    return productRepository.save(product);
}
Write-Behind (Write-Back): writes land in Redis first and are persisted to the database asynchronously. This improves write performance but introduces slight eventual consistency.
Concept Flow:
User Action → Redis → Background Job → Database
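Spring's cache annotations do not provide write-behind out of the box, so here is a hand-rolled sketch of the idea only. It assumes a RedisTemplate<String, Product> configured with a JSON serializer, @EnableScheduling on the application, and an illustrative "products:dirty" queue key; a real implementation would also need retry and failure handling.
@Service
public class WriteBehindProductService {

    @Autowired
    private RedisTemplate<String, Product> redisTemplate;
    @Autowired
    private ProductRepository productRepository;

    public void updatePrice(Product product) {
        // 1. Write to Redis first - readers see the new value immediately.
        redisTemplate.opsForValue().set("products::" + product.getId(), product);
        // 2. Queue the change for asynchronous persistence.
        redisTemplate.opsForList().rightPush("products:dirty", product);
    }

    // 3. Background job drains the queue into the database.
    @Scheduled(fixedDelay = 5000)
    public void flushToDatabase() {
        Product dirty;
        while ((dirty = redisTemplate.opsForList().leftPop("products:dirty")) != null) {
            productRepository.save(dirty);
        }
    }
}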
Read-Through Concept Flow:
Application → Cache → (if miss) → Database → Cache → Application
Here the cache layer itself loads missing data from the database. This pattern simplifies application logic but requires additional configuration.
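Spring has no separate read-through annotation; @Cacheable from the cache-aside example already gives you this behavior in practice. Purely for illustration, a manual read-through helper might look like the following (assumes a RedisTemplate<String, Object> with a JSON serializer; the method and key names are hypothetical):
// Manual read-through helper: callers never query the database directly;
// the cache layer invokes the loader itself on a miss.
@SuppressWarnings("unchecked")
public <T> T readThrough(String key, Supplier<T> loader) {
    T cached = (T) redisTemplate.opsForValue().get(key);
    if (cached != null) {
        return cached; // cache hit
    }
    T loaded = loader.get(); // cache miss: load and populate
    redisTemplate.opsForValue().set(key, loaded, Duration.ofMinutes(10));
    return loaded;
}
Usage: readThrough("products::101", () -> productRepository.findById(101L).orElse(null));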
Now let’s implement Redis distributed caching step by step using Spring Boot in a clean and practical way.
Step 1: Add Dependencies (Maven)
Add these dependencies in your pom.xml:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
spring-boot-starter-data-redis → Connects your app to Redis
spring-boot-starter-cache → Enables Spring's caching abstraction
Step 2: Configure Redis (application.yml)
spring:
  redis:
    host: localhost
    port: 6379
  cache:
    type: redis
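Note: the spring.redis.* keys shown here are for Spring Boot 2.x; on Spring Boot 3.x the same settings live under spring.data.redis.*.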
This tells Spring Boot where Redis is running (localhost:6379) and to use Redis as the cache provider.
Step 3: Enable Caching
Enable caching in your main application class:
@SpringBootApplication
@EnableCaching
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
@EnableCaching activates Spring’s caching mechanism.
Step 4: Use @Cacheable in Service Layer
@Service
public class ProductService {

    @Autowired
    private ProductRepository productRepository;

    @Cacheable(value = "products", key = "#id")
    public Product getProductById(Long id) {
        simulateSlowService(); // stands in for an expensive lookup
        return productRepository.findById(id).orElse(null);
    }

    // Simulates a slow database call (3 seconds).
    private void simulateSlowService() {
        try {
            Thread.sleep(3000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the interrupt flag
            throw new IllegalStateException(e);
        }
    }
}
What Happens Here?
The first call for a given id executes the method, pays the 3-second simulated delay, and stores the result in the "products" cache in Redis. Every subsequent call with the same id is served straight from Redis, skipping both the delay and the database. This improves performance dramatically in read-heavy APIs.
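You can observe the effect with a quick timing check (a hypothetical runner; it assumes a product with id 101 exists in the database):
@Bean
CommandLineRunner cacheDemo(ProductService productService) {
    return args -> {
        long start = System.currentTimeMillis();
        productService.getProductById(101L);
        System.out.println("First call:  " + (System.currentTimeMillis() - start) + " ms"); // ~3000 ms

        start = System.currentTimeMillis();
        productService.getProductById(101L);
        System.out.println("Second call: " + (System.currentTimeMillis() - start) + " ms"); // a few ms
    };
}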
To prevent stale or outdated data, you should configure a TTL (Time-To-Live) for cached entries. TTL automatically removes data from Redis after a fixed duration.
Here’s how you can configure it:
@Bean
public RedisCacheManager cacheManager(RedisConnectionFactory factory) {
    RedisCacheConfiguration config = RedisCacheConfiguration
            .defaultCacheConfig()
            .entryTtl(Duration.ofMinutes(10)); // cache entries expire after 10 minutes
    return RedisCacheManager.builder(factory)
            .cacheDefaults(config)
            .build();
}
Now, cached entries expire automatically after 10 minutes.
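If different caches need different lifetimes, the same builder can override the default per cache. This variant of the bean above gives the "products" cache a shorter TTL:
@Bean
public RedisCacheManager cacheManager(RedisConnectionFactory factory) {
    RedisCacheConfiguration defaults = RedisCacheConfiguration
            .defaultCacheConfig()
            .entryTtl(Duration.ofMinutes(10)); // default for all caches
    return RedisCacheManager.builder(factory)
            .cacheDefaults(defaults)
            .withCacheConfiguration("products",
                    RedisCacheConfiguration.defaultCacheConfig()
                            .entryTtl(Duration.ofMinutes(5))) // shorter TTL for products
            .build();
}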
In real-world systems, Redis can be deployed in different ways based on scalability and availability needs:
- Single instance: simplest setup, fine for development and small workloads
- Replication (primary-replica): read scaling and redundancy
- Redis Sentinel: monitoring and automatic failover for high availability
- Redis Cluster: data sharding and horizontal scaling
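For example, pointing Spring Boot at a Sentinel-managed deployment is mostly configuration (the master name and node addresses below are illustrative):
spring:
  redis:
    sentinel:
      master: mymaster
      nodes: host1:26379,host2:26379,host3:26379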
Choosing the right deployment model ensures high availability during failures, predictable performance under load, and room to scale as traffic grows.
Redis distributed caching is widely used in microservices architectures to improve performance and reduce database load. Common use cases include:
- Frequently read, rarely changing data (product catalogs, configuration, lookup tables)
- Expensive database query or API responses
- Session data shared across multiple service instances
Avoid distributed caching when:
- Data changes constantly and every read must be strongly consistent
- The dataset is small and database reads are already cheap
- The application runs as a single instance, where a simple local cache is enough
Remember, Redis improves performance but does not replace your primary database.
| Feature | Redis (Distributed Cache) | Local In-Memory Cache |
|---|---|---|
| Shared Across Instances | Yes – All services share the same centralized cache | No – Each application instance has its own separate cache |
| Suitable for Microservices | Yes – Ideal for distributed systems and multiple service instances | Limited – Works mainly for single-instance applications |
| Scalability | High – Supports clustering and horizontal scaling | Low – Bound to a single application server |
| Fault Tolerance | Yes – Supports replication and failover | No – Cache is lost if the instance crashes |
| Consistency Across Nodes | Yes – Same cached data available to all services | No – Data may differ across instances |
Redis Distributed Caching is a powerful solution for building fast and scalable applications. It helps improve performance, reduce database load, support microservices at scale, and maintain high availability in production environments.
When properly integrated with Spring Boot and a distributed architecture, Redis becomes a core component that ensures faster response times, better system reliability, and smoother handling of high traffic.