Overview
In enterprise microservices environments, ensuring reliable, ordered, and fault-tolerant event delivery from services to Kafka is a challenge, especially when you also have to keep a relational database consistent. The Outbox Pattern solves this by making message publication part of the same local database transaction as the business write.
This guide is ideal for backend engineers, platform architects, and DevOps teams building event-driven systems on Kubernetes. It walks you through setting up a production-ready event-driven architecture using:
Java (Spring Boot): to build microservices and manage domain logic
MySQL: as the transactional relational database
Kafka with the Strimzi Operator: to manage Kafka clusters on Kubernetes
Debezium: for Change Data Capture (CDC) from MySQL to Kafka
Kubernetes: to orchestrate and deploy the entire system
You’ll learn how to:
Use the Outbox Pattern to avoid dual writes
Configure Kafka with Strimzi Operator
Set up Debezium for CDC
Build a Spring Boot microservice with an outbox table
Deploy everything on Kubernetes
Why the Outbox Pattern?
Imagine you update a customer’s profile in MySQL and send a message to Kafka. If Kafka fails and the database commit succeeds, you’ve lost the event. If the database fails and Kafka succeeds, you’ve published an event for data that was never committed. This inconsistency is a real issue in distributed systems.
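To make that failure mode concrete, here is a sketch of the dual-write anti-pattern; kafkaTemplate, userRepository, and toJson are illustrative names, not part of the tutorial code:

// Anti-pattern: dual write. The Kafka send is not covered by the database transaction.
@Transactional
public void registerUser(User user) {
    userRepository.save(user);                        // committed or rolled back by the database
    kafkaTemplate.send("user-events", toJson(user));  // independent of that transaction:
                                                      // if the broker is down the event is lost,
                                                      // if the commit later fails the event is false
}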
The Outbox Pattern avoids this by writing domain events to a dedicated outbox table in the same database transaction. Then, a connector like Debezium watches the table and pushes changes to Kafka.
Architecture
+-----------+    INSERT INTO     +-------------+    CDC    +----------+   CONSUMES   +--------------+
|  Service  | -----------------> |  Outbox DB  | --------> | Debezium | -----------> | Kafka Topic  |
|  (Java)   |                    |   (MySQL)   |           |          |              |              |
+-----------+                    +-------------+           +----------+              +--------------+
Step 1: Deploy Kafka on Kubernetes with Strimzi
Install Strimzi
kubectl create namespace kafka
kubectl apply -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka
Create Kafka Cluster
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: kafka-cluster
  namespace: kafka
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: persistent-claim
      size: 10Gi
      deleteClaim: false
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 10Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}
Save this as kafka-cluster.yaml and apply it:
kubectl apply -f kafka-cluster.yaml -n kafka
Step 2: Set Up MySQL with Outbox Table
Schema Design
CREATE TABLE outbox (
  id             BIGINT PRIMARY KEY AUTO_INCREMENT,
  aggregate_type VARCHAR(255),
  aggregate_id   VARCHAR(255),
  type           VARCHAR(255),
  payload        TEXT,
  created_at     TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
Every business event (e.g., UserRegistered) is inserted into this table in the same transaction as the business write.
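For completeness, here is a minimal sketch of the matching JPA entity that the service code in Step 4 will use. It assumes Spring Boot 3 (jakarta.persistence) and simply mirrors the column names above; class and field names are illustrative:

import jakarta.persistence.*;
import java.time.Instant;

@Entity
@Table(name = "outbox")
public class OutboxEvent {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Column(name = "aggregate_type")
    private String aggregateType;

    @Column(name = "aggregate_id")
    private String aggregateId;

    private String type;

    @Column(columnDefinition = "TEXT")
    private String payload;

    // Populated by the database default; never written by the application
    @Column(name = "created_at", insertable = false, updatable = false)
    private Instant createdAt;

    protected OutboxEvent() { } // required by JPA

    public OutboxEvent(String aggregateType, String aggregateId, String type, String payload) {
        this.aggregateType = aggregateType;
        this.aggregateId = aggregateId;
        this.type = type;
        this.payload = payload;
    }
}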
Step 3: Configure Debezium to Capture Outbox Events
Deploy Debezium Connector
Use Strimzi’s KafkaConnect custom resource. Note that a Strimzi connect build needs an output image it can push to; the registry below is a placeholder:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: debezium-connect
  namespace: kafka
spec:
  version: 3.6.0
  replicas: 1
  bootstrapServers: kafka-cluster-kafka-bootstrap:9092
  config:
    group.id: debezium-connect
    offset.storage.topic: debezium-offsets
    config.storage.topic: debezium-configs
    status.storage.topic: debezium-status
  build:
    output:
      type: docker
      image: registry.example.com/debezium-connect:latest   # placeholder: use a registry you can push to
    plugins:
      - name: debezium-mysql-connector
        artifacts:
          - type: maven
            group: io.debezium
            artifact: debezium-connector-mysql
            version: 2.5.0.Final
Apply it:
kubectl apply -f debezium-connect.yaml -n kafka
Register the Connector
POST the following JSON to the Kafka Connect REST API (POST /connectors, default port 8083):
{
  "name": "outbox-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "password",
    "database.server.id": "184054",
    "topic.prefix": "mysql-server",
    "database.include.list": "my_service_db",
    "table.include.list": "my_service_db.outbox",
    "schema.history.internal.kafka.bootstrap.servers": "kafka-cluster-kafka-bootstrap:9092",
    "schema.history.internal.kafka.topic": "schema-changes.outbox",
    "transforms": "unwrap",
    "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "false"
  }
}
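Registration is a plain HTTP call, so curl works fine; if you prefer to script it, here is a minimal sketch using Java’s built-in HttpClient. It assumes the JSON above is saved as outbox-connector.json and that the Connect REST API is reachable on localhost:8083, e.g. by port-forwarding the debezium-connect-connect-api service that Strimzi creates.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;

public class RegisterConnector {
    public static void main(String[] args) throws Exception {
        // Read the connector definition shown above from a local file
        String json = Files.readString(Path.of("outbox-connector.json"));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8083/connectors"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode() + " " + response.body());
    }
}

A 201 response means the connector was created; GET /connectors/outbox-connector/status shows whether its task is RUNNING.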
Step 4: Spring Boot Service with Outbox Support
Use Spring Data JPA to insert domain events into the outbox table in the same transaction as your business entity.
Java Example
@Service
public class UserService {
    private final UserRepository userRepository;
    private final OutboxRepository outboxRepository;
    private final ObjectMapper objectMapper = new ObjectMapper();

    public UserService(UserRepository userRepository, OutboxRepository outboxRepository) {
        this.userRepository = userRepository;
        this.outboxRepository = outboxRepository;
    }

    @Transactional
    public void registerUser(User user) {
        // Business write and outbox write commit (or roll back) in one local transaction
        userRepository.save(user);
        outboxRepository.save(new OutboxEvent(
                "User", user.getId().toString(), "UserRegistered", toJson(user)));
    }

    private String toJson(User user) {
        try {
            return objectMapper.writeValueAsString(user);
        } catch (JsonProcessingException e) {
            throw new IllegalStateException(e); // unchecked, so the transaction rolls back
        }
    }
}
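The repositories used above are plain Spring Data JPA interfaces; a minimal sketch, assuming the OutboxEvent entity from Step 2 and a User entity with a Long id:

// UserRepository.java
import org.springframework.data.jpa.repository.JpaRepository;
public interface UserRepository extends JpaRepository<User, Long> { }

// OutboxRepository.java
import org.springframework.data.jpa.repository.JpaRepository;
public interface OutboxRepository extends JpaRepository<OutboxEvent, Long> { }

Spring Data generates the implementations at runtime; the default save() is all the outbox pattern needs on the write side.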
Step 5: Deploy to Kubernetes
Package the Spring Boot app with Docker
Use Helm or Kustomize for deployment
Ensure MySQL PVCs and services are available
Expose Kafka bootstrap service via ClusterIP for internal access
Observability and Reliability
Add Prometheus/Grafana to monitor Kafka lag and connector health
Use Kafka topic partitioning for scalability
Handle dead-letter topics and retries for consumer failures (see the sketch after this list)
Secure Kafka with TLS and ACLs if running in production
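For the dead-letter and retry point above, here is a minimal sketch using spring-kafka’s DefaultErrorHandler with a DeadLetterPublishingRecoverer. It assumes Spring Boot’s auto-configured listener container picks up the error-handler bean, and that the topic name follows Debezium’s default <topic.prefix>.<database>.<table> naming from Step 3; the retry settings and group id are illustrative.

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.stereotype.Component;
import org.springframework.util.backoff.FixedBackOff;

@Configuration
public class ConsumerErrorConfig {

    // Retry a failing record 3 times, 1s apart, then publish it to <topic>.DLT
    @Bean
    public DefaultErrorHandler errorHandler(KafkaTemplate<Object, Object> template) {
        return new DefaultErrorHandler(
                new DeadLetterPublishingRecoverer(template),
                new FixedBackOff(1000L, 3));
    }
}

@Component
class OutboxEventConsumer {

    @KafkaListener(topics = "mysql-server.my_service_db.outbox", groupId = "user-projection")
    public void onEvent(ConsumerRecord<String, String> record) {
        // Throwing here triggers the retry / dead-letter flow configured above
        process(record.value());
    }

    private void process(String payload) {
        // business logic
    }
}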
Conclusion
By combining the Outbox Pattern with Strimzi, Debezium, and Kubernetes, you achieve:
Transactional consistency between DB and Kafka
Resilience via CDC and retries
Scalability using Kafka partitions
Maintainability through clear service boundaries
This is a powerful foundation for building reliable, event-driven enterprise applications.
What’s Next?
Add schema registry support for validating Kafka messages
Use k9s and kafdrop for runtime Kafka debugging
Explore Kafka Streams or Flink for stream processing
Automate Helm deployments with GitOps (ArgoCD)
Stay tuned for the next tutorial where we’ll implement Kafka-based Saga Orchestration using the Outbox Pattern!