09: RabbitMQ (Durable Messaging & Background Workers)¶
9.1 Introduction¶
RabbitMQ is a robust, general-purpose message broker built for:
• Durable queues
• Reliable background processing
• Asynchronous workflows
• Distributed job execution
• Retry mechanisms
• Decoupling microservices
• Ensuring no data loss even if services restart
Where Redis Pub/Sub provides real-time, ephemeral messages, RabbitMQ provides:
✔ Guaranteed delivery
✔ Persistent queues
✔ Acknowledgements (ACK/NACK)
✔ Retry & dead-letter queues
✔ Worker scalability
✔ Message routing via exchanges
RabbitMQ is a perfect fit for LocalCloudLab when:
• A task must **not be lost**, even if a pod crashes
• A long-running job must run **in the background**
• A queue of tasks needs to be **processed reliably**
• You want to decouple the web APIs from heavy operations
Common examples in LocalCloudLab:
• Sending transactional emails
• Processing analytics events
• Heavy search indexing tasks
• Report generation
• Database synchronization tasks
• Update notifications to downstream systems
9.2 Installing RabbitMQ (Bitnami Helm Chart)¶
We use Bitnami’s RabbitMQ chart, an actively maintained, production-ready option for Kubernetes.
9.2.1 Add Bitnami repo¶
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
9.2.2 Create namespace “messaging”¶
kubectl create namespace messaging
9.2.3 Install RabbitMQ (single-node)¶
helm install rabbitmq bitnami/rabbitmq -n messaging \
  --set auth.username=admin \
  --set auth.password=YourRabbitPassword \
  --set auth.erlangCookie=SomeLongSecretCookieValue \
  --set persistence.size=8Gi \
  --set persistence.storageClass="local-path"
Parameters:
• auth.username / auth.password
Login for RabbitMQ UI + client connections.
• auth.erlangCookie
Shared secret required for clustering (even on a single node). It must be identical on all nodes and stay unchanged across upgrades.
• persistence.size
Size of the persistent volume that stores queue data on disk.
• storageClass="local-path"
Standard persistent storage for k3s.
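The same settings can be kept in a values file instead of repeated --set flags; a sketch mirroring the install command above (the password is inlined for brevity; the chart also supports auth.existingPasswordSecret to keep credentials out of files):

```yaml
# values.yaml — equivalent of the --set flags above
auth:
  username: admin
  password: YourRabbitPassword
  erlangCookie: SomeLongSecretCookieValue
persistence:
  size: 8Gi
  storageClass: local-path
```

Then install with: helm install rabbitmq bitnami/rabbitmq -n messaging -f values.yaml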
9.2.4 Check installation¶
kubectl get pods -n messaging
kubectl get svc -n messaging
Expected services:
rabbitmq.messaging.svc.cluster.local → AMQP client connections (5672) and Management UI (15672)
rabbitmq-headless.messaging.svc → internal clustering / peer discovery
9.2.5 Expose RabbitMQ Management UI¶
Patch the service to LoadBalancer:
kubectl patch svc rabbitmq -n messaging -p '{"spec": {"type": "LoadBalancer"}}'
Check assigned IP:
kubectl get svc -n messaging | grep rabbitmq
Open in browser:
http://<external-ip>:15672
Login:
username: admin
password: YourRabbitPassword
Inside the UI you will see:
✔ Queues
✔ Exchanges
✔ Bindings
✔ Consumers
✔ Message rates
✔ Dead-letter queues
9.3 Connecting .NET APIs to RabbitMQ¶
Your .NET apps use the RabbitMQ.Client library.
9.3.1 Install the package¶
dotnet add package RabbitMQ.Client
9.3.2 Connection string format¶
RabbitMQ hostname inside Kubernetes:
rabbitmq.messaging.svc.cluster.local
Use AMQP URI:
"RabbitMQ": {
  "ConnectionString": "amqp://admin:YourRabbitPassword@rabbitmq.messaging.svc.cluster.local:5672/"
}
9.3.3 Creating a persistent connection (recommended)¶
RabbitMQ connections should be reused, not recreated per request.
Example:
services.AddSingleton<IConnectionFactory>(new ConnectionFactory
{
    Uri = new Uri(configuration["RabbitMQ:ConnectionString"]),
    DispatchConsumersAsync = true
});

services.AddSingleton(sp =>
{
    var factory = sp.GetRequiredService<IConnectionFactory>();
    return factory.CreateConnection();
});
Channels are then created from the shared connection as needed:
var channel = _connection.CreateModel();
9.3.4 Publishing a message¶
A basic queue + publish example:
var channel = _connection.CreateModel();
channel.QueueDeclare("email_queue", durable: true, exclusive: false, autoDelete: false);

var body = Encoding.UTF8.GetBytes(JsonSerializer.Serialize(emailRequest));

// Mark the message itself as persistent; a durable queue alone does not
// preserve messages across a broker restart unless messages are persistent too.
var props = channel.CreateBasicProperties();
props.Persistent = true;

channel.BasicPublish(
    exchange: "",
    routingKey: "email_queue",
    basicProperties: props,
    body: body);
Console.WriteLine("Message sent to RabbitMQ!");
With a durable queue and persistent messages, messages survive broker restarts until processed.
9.3.5 Consuming messages (background worker)¶
Create a Worker service with a hosted service:
public class EmailWorker : BackgroundService
{
    private readonly IConnection _connection;
    private IModel _channel;

    public EmailWorker(IConnection connection)
    {
        _connection = connection;
    }

    protected override Task ExecuteAsync(CancellationToken stoppingToken)
    {
        _channel = _connection.CreateModel();
        _channel.QueueDeclare("email_queue", durable: true, exclusive: false, autoDelete: false);

        // Process one message at a time per worker; raise prefetchCount for more throughput.
        _channel.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false);

        var consumer = new AsyncEventingBasicConsumer(_channel);
        consumer.Received += async (sender, ea) =>
        {
            try
            {
                var json = Encoding.UTF8.GetString(ea.Body.ToArray());
                var email = JsonSerializer.Deserialize<EmailMessage>(json);
                await SendEmail(email);
                _channel.BasicAck(ea.DeliveryTag, multiple: false);
            }
            catch (Exception)
            {
                // requeue: true redelivers immediately; section 9.5 replaces this
                // with delayed retries via dead-letter queues.
                _channel.BasicNack(ea.DeliveryTag, multiple: false, requeue: true);
            }
        };

        _channel.BasicConsume("email_queue", autoAck: false, consumer);
        return Task.CompletedTask;
    }
}
Register the worker:
services.AddHostedService<EmailWorker>();
Key concepts:
✔ durable queues
✔ manual acknowledgements
✔ retry-on-failure
✔ background execution
This prevents message loss and ensures at-least-once delivery.
9.4 Exchanges & Routing Keys¶
RabbitMQ uses exchanges to route messages to queues. A publisher never sends a message directly to a queue — the exchange handles the routing.
There are four main types:
1. direct exchange
2. fanout exchange
3. topic exchange
4. headers exchange (rarely used)
Understanding exchanges is essential for building scalable messaging architecture inside LocalCloudLab.
9.4.1 Direct Exchange¶
A direct exchange routes messages by exact routing key match.
Example:
Exchange: bookings.direct
Queue: booking_created
Routing Key: booking.created
Publisher:
channel.BasicPublish(
exchange: "bookings.direct",
routingKey: "booking.created",
body: body);
Consumer side declares the exchange, then declares and binds the queue:
channel.ExchangeDeclare("bookings.direct", ExchangeType.Direct, durable: true);
channel.QueueDeclare("booking_created", durable: true, exclusive: false, autoDelete: false);
channel.QueueBind("booking_created", "bookings.direct", "booking.created");
Use cases:
✔ Triggering specific workflows
✔ Event notifications with fixed routing
✔ Simple "event per queue" patterns
9.4.2 Fanout Exchange¶
A fanout exchange broadcasts messages to all bound queues, ignoring routing keys.
Example:
exchange: audit.fanout
Three services bind:
analytics_queue ← receives all messages
audit_logging_queue ← receives all messages
realtime_monitoring ← receives all messages
Publisher (the routing key is ignored, so pass an empty string):
channel.BasicPublish(exchange: "audit.fanout", routingKey: "", body: body);
Use cases:
✔ Broadcast events to many consumers
✔ Cluster-wide cache invalidation
✔ Realtime analytics streams
9.4.3 Topic Exchange¶
Topic exchanges use wildcard routing keys.
Routing patterns:
"search.*"
"*.created"
"order.#"
Special symbols:
* matches exactly one word
# matches zero or more words
Example:
Publisher sends:
routingKey = "search.hotel.created"
Consumer binds queue with:
"search.*.*" → gets "search.hotel.created"
"search.#" → gets everything under "search"
Use cases:
✔ Microservices event routing
✔ Flexible filtering
✔ Logging & analytics pipelines
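The wildcard semantics above can be illustrated with a small, self-contained matcher. This is only for intuition (the broker does the real routing); the class and method names are illustrative:

```csharp
using System;

public static class TopicMatcher
{
    // Returns true when a routing key matches an AMQP topic binding pattern.
    // "*" matches exactly one word, "#" matches zero or more words.
    public static bool Matches(string pattern, string routingKey)
    {
        return Match(pattern.Split('.'), 0, routingKey.Split('.'), 0);
    }

    private static bool Match(string[] p, int pi, string[] k, int ki)
    {
        if (pi == p.Length) return ki == k.Length;            // pattern fully consumed
        if (p[pi] == "#")
            // "#" may swallow zero words (advance pattern) or one more word (advance key)
            return Match(p, pi + 1, k, ki) || (ki < k.Length && Match(p, pi, k, ki + 1));
        if (ki == k.Length) return false;                      // key consumed, pattern not
        if (p[pi] == "*" || p[pi] == k[ki])
            return Match(p, pi + 1, k, ki + 1);
        return false;
    }
}
```

For example, Matches("search.*.*", "search.hotel.created") is true, while Matches("search.*", "search.hotel.created") is false because "*" covers only one word.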
9.4.4 Headers Exchange (rarely used)¶
Routes messages by checking AMQP headers.
x-match = all / any
Use cases are narrow:
✔ Advanced filtering
✔ Legacy integrations
Most modern systems use direct/topic exchanges instead.
9.5 Retry Strategy & Dead-Letter Queues (DLQ)¶
Failures happen:
• Database is temporarily down
• External API returns an error
• Message contains unexpected data
• Worker crashes mid-message
RabbitMQ must handle these failures safely and without message loss.
The correct approach is using:
✔ Retry queues
✔ Delayed queues
✔ Dead-letter exchanges
✔ Exponential backoff
9.5.1 Basic retry architecture¶
Create:
• main_queue
• retry_queue (delayed)
• dead_letter_queue
Flow:
- Worker fails → NACK message → goes to retry_queue
- retry_queue has TTL (delay)
- After TTL expires → message returns to main_queue
- After X attempts → message is moved permanently to dead_letter_queue
This prevents:
✗ Infinite retry loops
✗ Flooding logs
✗ Overloading DB or external APIs
9.5.2 Declaring queues with DLX¶
Main queue:
channel.QueueDeclare("email_queue",
durable: true,
exclusive: false,
autoDelete: false,
arguments: new Dictionary<string, object>
{
{ "x-dead-letter-exchange", "email.dlx" },
{ "x-dead-letter-routing-key", "email.retry" }
});
Retry queue:
channel.QueueDeclare("email_retry_queue",
durable: true,
exclusive: false,
autoDelete: false,
arguments: new Dictionary<string, object>
{
{ "x-message-ttl", 30000 }, // 30 seconds delay
{ "x-dead-letter-exchange", "" },
{ "x-dead-letter-routing-key", "email_queue" }
});
Dead-letter exchange, dead-letter queue, and bindings (the "email.dead" routing key is illustrative):
channel.ExchangeDeclare("email.dlx", ExchangeType.Direct, durable: true);
channel.QueueDeclare("email_dead_queue", durable: true, exclusive: false, autoDelete: false);
channel.QueueBind("email_retry_queue", "email.dlx", "email.retry");
channel.QueueBind("email_dead_queue", "email.dlx", "email.dead");
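Each time a message is dead-lettered, RabbitMQ appends an entry to its x-death header. A consumer can read that header to decide when to stop retrying; a sketch of the counting logic, assuming the usual header shape from the .NET client (the "stop after N attempts" policy itself is up to the caller):

```csharp
using System;
using System.Collections.Generic;

public static class RetryInfo
{
    // Sums the "count" fields of the x-death header entries that RabbitMQ
    // adds whenever a message passes through a dead-letter exchange.
    public static long DeathCount(IDictionary<string, object> headers)
    {
        if (headers == null || !headers.TryGetValue("x-death", out var raw))
            return 0;
        long total = 0;
        if (raw is IList<object> deaths)
            foreach (var entry in deaths)
                if (entry is IDictionary<string, object> d &&
                    d.TryGetValue("count", out var c))
                    total += Convert.ToInt64(c);
        return total;
    }
}
```

In the consumer, something like `if (RetryInfo.DeathCount(ea.BasicProperties.Headers) >= maxAttempts)` can route the message to email_dead_queue instead of NACKing it again (maxAttempts is your own setting).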
9.5.3 Exponential backoff strategy¶
Retry delays:
5 seconds
30 seconds
2 minutes
10 minutes
You accomplish this by creating multiple retry queues, each with a different TTL.
This prevents:
• hammering external services
• retry storms
• rapid hot-loop failures
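A small helper can map the attempt number onto the tier delays above. The values mirror the schedule; how you name the per-tier retry queues is up to you:

```csharp
using System;

public static class Backoff
{
    // Delay tiers in milliseconds: 5 s, 30 s, 2 min, 10 min.
    private static readonly int[] DelaysMs = { 5_000, 30_000, 120_000, 600_000 };

    // Returns the TTL (ms) for a 1-based retry attempt.
    // Attempts beyond the last tier keep the maximum delay.
    public static int DelayForAttempt(int attempt)
    {
        if (attempt < 1) throw new ArgumentOutOfRangeException(nameof(attempt));
        return DelaysMs[Math.Min(attempt, DelaysMs.Length) - 1];
    }
}
```

Each tier corresponds to one retry queue whose x-message-ttl equals the tier's delay.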
9.6 Monitoring RabbitMQ¶
RabbitMQ includes a powerful management UI, but for long-term monitoring we use:
• Prometheus exporter
• Grafana dashboards
• The management UI's built-in queue charts
9.6.1 Enable Prometheus metrics in the Helm install¶
The Bitnami chart ships RabbitMQ's rabbitmq_prometheus plugin. Enable the metrics endpoint at install time (or later with helm upgrade):
--set metrics.enabled=true
9.6.2 Exported metrics¶
With metrics enabled, no separate exporter deployment is needed; the chart exposes the plugin's endpoint for Prometheus to scrape.
Metrics include:
• queue depth
• message publish rate
• consumer count
• unacknowledged messages
• dead-letter messages
9.6.3 Import RabbitMQ dashboards in Grafana¶
Common dashboards:
✔ RabbitMQ Overview
✔ RabbitMQ Queues
✔ RabbitMQ Nodes Metrics
Monitor for:
• growing unacked messages (worker slow or broken)
• growing queue sizes (input > processing speed)
• connection spikes
• message rate anomalies
• reduced consumers count
9.7 Worker Scaling in Kubernetes¶
RabbitMQ works beautifully with Kubernetes scaling.
Rule of thumb:
More messages in the queue → scale consumers
Fewer messages → scale down
Scale workers manually:
kubectl scale deployment email-worker -n messaging --replicas=5
Or automatically via:
• KEDA (Kubernetes Event-Driven Autoscaling)
• RabbitMQ Scaler (queue-length based)
KEDA trigger example (not required now but future-proof):
triggers:
  - type: rabbitmq
    metadata:
      host: RabbitMQConnectionString
      queueName: email_queue
      queueLength: "50"
Meaning: KEDA targets roughly 50 messages per replica, adding workers as the backlog grows beyond that and removing them as the queue drains.
This ensures smooth message processing without overloading the system.
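For reference, a full KEDA ScaledObject wrapping such a trigger might look like the sketch below. The resource names and the TriggerAuthentication are placeholders, not part of this setup yet:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: email-worker-scaler      # placeholder name
  namespace: messaging
spec:
  scaleTargetRef:
    name: email-worker           # the Deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: rabbitmq
      metadata:
        queueName: email_queue
        mode: QueueLength
        value: "50"              # target messages per replica
      authenticationRef:
        name: rabbitmq-trigger-auth   # TriggerAuthentication holding the AMQP URI
```

Keeping the AMQP URI in a TriggerAuthentication (backed by a Secret) avoids embedding credentials in the ScaledObject itself.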
9.8 Summary of Section 9¶
In this section, you learned:
✔ How RabbitMQ differs from Redis and why you need both
✔ How to install RabbitMQ in a k3s cluster using Helm
✔ How to publish & consume messages from .NET
✔ How to implement durable background workers
✔ How to design retry / dead-letter strategies
✔ How to monitor RabbitMQ with Prometheus & Grafana
✔ How to scale workers horizontally in Kubernetes
RabbitMQ now acts as your durable asynchronous backbone, removing heavy workloads from your APIs and ensuring resilient processing.
Next section (Section 10) will introduce:
• SSL Certificates & Ingress TLS
• Cert-manager installation
• ClusterIssuer / Issuer configuration
• Automatic HTTPS with Let's Encrypt
• Using TLS with Envoy Gateway and your public domains
(End of Section 09 — Complete)