RabbitMQ on AWS Fargate with Terraform
The latest chapter in my adventures with AWS and Terraform: RabbitMQ + Fargate
This amazing rabbit is a well-known message broker, used to help you build asynchronous workflows. However, AWS doesn't have a service that gives you a RabbitMQ instance; instead, it has Amazon MQ, which is based on Apache ActiveMQ. So, for the sake of the deadline, I wasn't able to try Amazon MQ as a fully managed AWS queue service.
So the first thing I wanted to try was deploying RabbitMQ on ECS with Fargate, and googling around, I didn't find much about doing that with ECS.
As always, Terraform to the rescue. Keep in mind that my network infrastructure follows AWS best practices: a VPC, private and public subnets, and so on.
I am going to use the RabbitMQ management image (rabbitmq:3-management) from Docker Hub in my task definition. We are going to open the management port, 15672, to the world via an external Application Load Balancer, and expose the AMQP port, 5672, via an internal Network Load Balancer.
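The task definition itself isn't the interesting part of this post, but for context, a minimal sketch of it could look like the block below. The CPU/memory sizes and the var.execution_role_arn variable are my assumptions here, not necessarily what the full module uses.

resource "aws_ecs_task_definition" "rabbit" {
  family                   = "${var.app_name}-rabbitmq"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc" # required for Fargate
  cpu                      = 512      # assumed sizing
  memory                   = 1024     # assumed sizing
  execution_role_arn       = var.execution_role_arn # assumed variable

  container_definitions = jsonencode([
    {
      name      = "rabbitmq"
      image     = "rabbitmq:3-management" # the management image from Docker Hub
      essential = true
      portMappings = [
        { containerPort = 5672, protocol = "tcp" },  # AMQP
        { containerPort = 15672, protocol = "tcp" }  # management UI
      ]
    }
  ])
}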
Setting up the Network Load Balancer
When I first tried to set this up, I added port 5672 to my internal Application Load Balancer, and I wasn't able to connect to it. It wasn't obvious to me at first, but an Application Load Balancer only accepts HTTP connections, and the AMQP protocol isn't HTTP; it's plain TCP (even though HTTP itself runs on top of a TCP connection).
resource "aws_lb" "network" {
name = "${var.app_name}-network-lb"
internal = true
load_balancer_type = "network"
subnets = tolist(data.aws_subnet_ids.private.ids)
enable_deletion_protection = true
tags = {
Product = var.app_name
}
}
resource "aws_lb_target_group" "rabbit" {
name = "${var.app_name}-rabbitmq-lb-tg"
port = 5672
protocol = "TCP"
vpc_id = var.vpc_id
target_type = "ip"
lifecycle {
create_before_destroy = true
}
stickiness {
enabled = false
type = "lb_cookie"
}
depends_on = [aws_lb.network]
}
resource "aws_lb_listener" "rabbit" {
load_balancer_arn = aws_lb.network.arn
port = 5672
protocol = "TCP"
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.rabbit.arn
}
}
One thing to note is that a Network Load Balancer doesn't have a security group configuration, because it works on the fourth layer of the OSI network model, which is the transport layer.
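That also means you can't filter the AMQP traffic at the load balancer: the security group attached to the Fargate tasks themselves has to allow it. Roughly something like this, where the security group and CIDR variables are placeholders of mine:

resource "aws_security_group_rule" "amqp_ingress" {
  type              = "ingress"
  from_port         = 5672
  to_port           = 5672
  protocol          = "tcp"
  cidr_blocks       = [var.vpc_cidr_block]          # placeholder: the clients' range
  security_group_id = var.service_security_group_id # placeholder: the tasks' security group
}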
Setting up the Application Load Balancer
I already have an internet-facing load balancer, so at this point I just needed to attach one more rule to its existing 443 HTTPS listener, forwarding to the right target group when the 'rabbit' subdomain matches the host header.
resource "aws_lb_target_group" "management" {
name = "${var.app_name}-rabbitmq-mg-lb-tg"
port = 15672
protocol = "HTTP"
vpc_id = var.vpc_id
target_type = "ip"
lifecycle {
create_before_destroy = true
}
health_check {
matcher = "200"
path = "/"
port = 15672
timeout = 30
interval = 40
}
tags = {
Product = var.app_name
}
depends_on = [data.aws_lb.external]
}
resource "aws_lb_listener_rule" "management" {
listener_arn = data.aws_lb_listener.selected443.arn
action {
type = "forward"
target_group_arn = aws_lb_target_group.management.arn
}
condition {
host_header {
values = [local.url]
}
}
}
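With both target groups in place, the ECS service is what ties them to the running task. A sketch of how that wiring could look, reusing the assumed variables and task definition from my earlier sketch (var.cluster_id is another assumption):

resource "aws_ecs_service" "rabbit" {
  name            = "${var.app_name}-rabbitmq"
  cluster         = var.cluster_id # assumed variable
  task_definition = aws_ecs_task_definition.rabbit.arn
  desired_count   = 1
  launch_type     = "FARGATE"

  network_configuration {
    subnets         = tolist(data.aws_subnet_ids.private.ids)
    security_groups = [var.service_security_group_id]
  }

  # AMQP through the internal network load balancer
  load_balancer {
    target_group_arn = aws_lb_target_group.rabbit.arn
    container_name   = "rabbitmq"
    container_port   = 5672
  }

  # Management UI through the external application load balancer
  load_balancer {
    target_group_arn = aws_lb_target_group.management.arn
    container_name   = "rabbitmq"
    container_port   = 15672
  }
}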
DNS
Since a task running on Fargate can die, and when it starts again its IP address will change, I can't connect via an IP address, so here's a quick DNS setup for both load balancers:
resource "aws_route53_record" "this" {
zone_id = var.zone_id
name= local.url
type = "A"
alias {
name = data.aws_lb.external.dns_name
zone_id = data.aws_lb.external.zone_id
evaluate_target_health = true
}
}
resource "aws_route53_record" "internal" {
zone_id = var.zone_id
name= local.internal_url
type = "A"
alias {
name = aws_lb.network.dns_name
zone_id = aws_lb.network.zone_id
evaluate_target_health = true
}
}
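The local.url and local.internal_url values aren't defined in this post; in the full module they could be something as simple as the following (the var.domain variable is a guess of mine):

locals {
  url          = "rabbit.${var.domain}"          # resolves to the external ALB
  internal_url = "rabbit-internal.${var.domain}" # resolves to the internal NLB
}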
About dying...
As we know, an ECS task can die, and when it does, ECS restores the original image from the task definition. For a RabbitMQ with custom configuration, such as custom users (the Docker image only comes with the 'guest' user by default), that means those configurations get reset every time the task is replaced. So I did some extra setup to make sure that when the task dies and returns, I won't lose my configuration.
For this you need:
Export your RabbitMQ definitions from the management interface: Overview -> Export definitions
Create a custom rabbitmq.config that loads those definitions at boot (setting loopback_users to an empty list also lets the default user connect from outside localhost)
[
  {rabbit, [
    {loopback_users, []}
  ]},
  {rabbitmq_management, [
    {load_definitions, "/etc/rabbitmq/definitions.json"}
  ]}
].
Create a Dockerfile
FROM rabbitmq:3-management
ADD rabbitmq.config /etc/rabbitmq/
ADD definitions.json /etc/rabbitmq/
RUN chown rabbitmq:rabbitmq /etc/rabbitmq/rabbitmq.config /etc/rabbitmq/definitions.json
CMD ["rabbitmq-server"]
Build and publish the image to your Docker repository
With the steps above, you can build a custom RabbitMQ image that carries your configuration, at least at the user level, making sure your broker comes back up without needing to be set up from the ground up.
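If you want to manage the registry with Terraform too, an ECR repository is one option (an assumption on my part; any Docker registry works), and the task definition's image just points at it:

resource "aws_ecr_repository" "rabbit" {
  name = "${var.app_name}-rabbitmq"
}

# In the container definition, swap the stock image for the custom one:
# image = "${aws_ecr_repository.rabbit.repository_url}:latest"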
The full Terraform module is available in my GitLab Terraform group. Check it out and try it yourself: https://gitlab.com/terraform147/rabbitmq
That's all folks!