Modern distributed systems require high-throughput, scalable, and fault-tolerant messaging to handle large volumes of events in real time. In Spring Boot microservices, Apache Kafka acts as a powerful event streaming platform that serves as the backbone for event-driven architectures.
Apache Kafka is a distributed event streaming platform designed for durability, horizontal scalability, and event replay. Unlike traditional message queues, Kafka treats messages as immutable events stored in a distributed log.
Kafka uses a publish–subscribe event streaming model: producers publish events to topics, and consumers subscribe to those topics and read the events at their own pace.
Kafka is widely used for event-driven microservices, real-time analytics, data pipelines, and stream processing systems.
Spring Boot integrates Kafka using Spring for Apache Kafka, providing seamless configuration and developer-friendly APIs.
Kafka is built around a few fundamental concepts that enable scalability and fault tolerance.
A topic is a logical category or stream where events are published.
Typical examples are order-events and payment-events. Each topic is split into partitions to enable parallelism.
Key characteristics:
- Each partition is an ordered, append-only log, and ordering is guaranteed within a partition.
- Partitions are replicated across brokers to ensure fault tolerance.
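Partition and replica counts are normally fixed when a topic is created. As a hedged sketch, Spring for Apache Kafka can declare a topic at application startup through a NewTopic bean; the counts below are purely illustrative:

import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.TopicBuilder;

// Sketch only: declares order-events with 3 partitions and a replication
// factor of 2, applied when a Kafka admin client is available at startup.
@Bean
public NewTopic orderEventsTopic() {
    return TopicBuilder.name("order-events")
            .partitions(3)
            .replicas(2)
            .build();
}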
Producers publish events to topics.
Consumers read events from topics.
Producer
↓
Topic
↓
Partition (ordered log)
↓
Consumer Group
↓
Consumer
Spring Boot makes it extremely simple to build Kafka producers and consumers using Spring for Apache Kafka. With minimal configuration, you can publish and consume events in a clean, scalable way.
Add the Spring Kafka dependency in your pom.xml:
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
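Spring Boot auto-configures the Kafka client from application.properties. A minimal sketch for local development, assuming a single broker on localhost:9092 and plain String keys and values (these serializer settings match Spring Boot's defaults and can be swapped for JSON or Avro payloads):

spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.consumer.group-id=order-group
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer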
A producer publishes messages (events) to a Kafka topic using KafkaTemplate.
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class OrderProducer {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    // Publish the event to the "order-topic" topic
    public void sendMessage(String message) {
        kafkaTemplate.send("order-topic", message);
    }
}
What Happens Here?
KafkaTemplate is the main class used to send messages, and "order-topic" is the Kafka topic the event is published to.
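Events can also be sent with a key, which Kafka hashes to choose the partition, so all events for the same key stay in order. A hedged sketch of adding such a method to the OrderProducer above, assuming Spring Kafka 3.x where send() returns a CompletableFuture (earlier versions return a ListenableFuture); sendOrderEvent and orderId are illustrative names:

// Sketch only: orderId is a hypothetical key used to route related events
// to the same partition, preserving their relative order.
public void sendOrderEvent(String orderId, String message) {
    kafkaTemplate.send("order-topic", orderId, message)
            .whenComplete((result, ex) -> {
                if (ex != null) {
                    // e.g. log and retry, or route to an error flow
                    System.err.println("Publish failed: " + ex.getMessage());
                } else {
                    System.out.println("Published to partition "
                            + result.getRecordMetadata().partition());
                }
            });
}

On the consuming side, a consumer listens to a topic using the @KafkaListener annotation.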
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

@Service
public class OrderConsumer {

    // Invoked for every record published to "order-topic"
    @KafkaListener(topics = "order-topic", groupId = "order-group")
    public void consume(String message) {
        System.out.println("Received: " + message);
    }
}
What Happens Here?
@KafkaListener automatically subscribes the method to the topic, and groupId enables consumer group functionality.

Kafka tracks how messages are consumed using offsets and consumer groups. These two concepts are the foundation of Kafka’s scalability, fault tolerance, and reliability.
An offset is a unique sequential ID assigned to each message within a Kafka partition.
A consumer group is a set of consumers that work together to consume data from a topic.
Important Rules:
- Within a consumer group, each partition is assigned to at most one consumer, so adding consumers beyond the partition count gains nothing.
- Different consumer groups read the same topic independently, each tracking its own offsets.
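As a quick illustration of these rules, a hedged sketch: the concurrency attribute of @KafkaListener starts multiple consumer threads within the same group, each owning a share of the topic's partitions.

// Sketch: with 3 partitions on order-topic, three listener threads each own
// one partition; any thread beyond the partition count would simply sit idle.
@KafkaListener(topics = "order-topic", groupId = "order-group", concurrency = "3")
public void consumeInParallel(String message) {
    System.out.println("Received: " + message);
}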
Kafka supports two offset commit strategies:
- Auto Commit (Default): offsets are committed automatically at a fixed interval, which is simple but can mark a message as consumed before it has actually been processed.
- Manual Commit (Recommended): the application commits the offset explicitly after a message has been processed successfully.
Disable auto-commit in configuration:
spring.kafka.consumer.enable-auto-commit=false
Spring Kafka supports manual acknowledgment for precise offset control.
@KafkaListener(topics = "order-topic", groupId = "order-group")
public void listen(ConsumerRecord<String, String> record,
                   Acknowledgment ack) {
    // process message
    System.out.println("Received: " + record.value());

    // commit offset manually
    ack.acknowledge();
}
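For the Acknowledgment argument to be available, the listener container must run in a manual acknowledgment mode. With Spring Boot's auto-configured container factory this can be set with a property (manual batches the commits, while manual_immediate commits each offset right away):

spring.kafka.listener.ack-mode=manual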
A Dead Letter Queue (DLQ) is used to handle failed messages safely without crashing the system.
When a message cannot be processed successfully after retries, it is redirected to a separate topic for further analysis instead of blocking the consumer.
DLQ in Kafka
Spring Kafka provides DeadLetterPublishingRecoverer.
@Bean
public DefaultErrorHandler errorHandler(
        KafkaTemplate<Object, Object> template) {

    // Republishes records that exhaust their retries to a dead letter topic
    DeadLetterPublishingRecoverer recoverer =
            new DeadLetterPublishingRecoverer(template);

    return new DefaultErrorHandler(recoverer);
}
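With Spring Boot's auto-configuration, an error handler bean like this is normally picked up by the default listener container factory. If you define your own factory, the handler can be wired in explicitly; a minimal sketch, assuming an auto-configured ConsumerFactory is available:

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory,
        DefaultErrorHandler errorHandler) {

    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // Records that still fail after retries are routed to the dead letter topic
    factory.setCommonErrorHandler(errorHandler);
    return factory;
}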
Failed Messages Location
By default, failed messages are sent to a topic named after the original with a .DLT suffix, for example:
order-topic.DLT
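Records that land there can be consumed like any other topic, for logging, alerting, or manual replay. A minimal sketch, assuming the default order-topic.DLT name and a hypothetical order-dlq-group:

// Sketch: consume dead-lettered records for inspection or manual replay
@KafkaListener(topics = "order-topic.DLT", groupId = "order-dlq-group")
public void consumeDeadLetter(String failedMessage) {
    System.err.println("Dead-lettered: " + failedMessage);
}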
Why Use a DLQ?
A DLQ keeps a single bad message from blocking or crashing the consumer, preserves the failed event for later analysis or reprocessing, and lets the rest of the stream keep flowing.
Spring Cloud Stream abstracts Kafka-specific implementation details and provides a declarative, event-driven programming model.
Instead of writing Kafka-specific code, you work with logical bindings and message channels.
Why Use Spring Cloud Stream?
It hides broker-specific boilerplate behind logical bindings, keeps business logic in plain Java functions, and lets you switch or mix message brokers through configuration rather than code changes.
Add Dependency
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-stream-kafka</artifactId>
</dependency>
Spring Cloud Stream supports a functional programming model, where message producers and consumers are defined as simple Java functions.
This approach removes boilerplate code and makes event handling clean and expressive.
Consumer Example
@Bean
public Consumer<String> processOrder() {
    return message -> {
        System.out.println("Processing: " + message);
    };
}
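For the function to receive messages, its input binding must point at a Kafka topic. Spring Cloud Stream derives the binding name from the function name (processOrder-in-0 here); a hedged configuration sketch, reusing the order-topic and order-group names from earlier:

spring.cloud.function.definition=processOrder
spring.cloud.stream.bindings.processOrder-in-0.destination=order-topic
spring.cloud.stream.bindings.processOrder-in-0.group=order-group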
To explicitly configure Kafka as the messaging broker, set the default binder:
spring.cloud.stream.defaultBinder=kafka
Spring Cloud Stream will now route all message bindings through Kafka.
Kafka is designed as a high-performance event streaming platform, making it ideal for large-scale distributed systems.
Key Advantages
The following comparison with RabbitMQ, a traditional message broker, highlights where Kafka's strengths lie.
| Feature | Kafka | RabbitMQ |
|---|---|---|
| Messaging Model | Distributed event streaming | Message queue |
| Throughput | Very high (millions of events/sec) | Moderate |
| Message Replay | Supported (offset-based) | Not supported |
| Message Ordering | Guaranteed per partition | Guaranteed per queue |
| Scalability | Horizontal via partitions | Limited by queues |
| Delivery Focus | Event durability & streaming | Reliable message delivery |
| Best Use Case | Event backbone, analytics, streaming | Task processing, workflows |
Apache Kafka is a powerful event streaming platform built for high throughput, scalability, and reliability. With Spring Boot and Spring for Apache Kafka, developers can easily build scalable producers and consumers, manage offsets, handle failures using Dead Letter Topics, and process events in real time.
Kafka is ideal for event-driven microservices, streaming pipelines, and systems that require event replay, making it the perfect choice for building modern, data-driven architectures.