RabbitMQ is an open-source message broker that supports AMQP, STOMP and other messaging protocols. It’s widely used in enterprise applications and modern microservice architectures, where it acts as an asynchronous message channel between services.
This guide will describe how you can cluster RabbitMQ on multiple CentOS 7 servers to form a high-availability message broker. In this tutorial, one server will act as a master server and the other servers will act as mirror servers in case the master server becomes unavailable.
Prerequisites
- At least two freshly deployed and updated CentOS 7 instances in the same subnet with private networking enabled
- RabbitMQ installed with the management console enabled on each server (see How to Install RabbitMQ on CentOS 7)
- A non-admin user with sudo rights (See How to Use Sudo on Debian, CentOS, and FreeBSD)
Configure the firewall
The CentOS firewall, firewalld, does not permit any incoming traffic by default. To make RabbitMQ available to other systems inside and outside the network, and to allow us to access the management console, we must first open some ports.
The RabbitMQ management console web interface listens on port 15672 by default. We would like to make the management console publicly available so that we can access it from our computer. We will therefore instruct firewalld to permanently open port 15672 in the public zone (which is the default and active zone on a Vultr instance).
sudo firewall-cmd --zone=public --add-port=15672/tcp --permanent
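If you would like to confirm that the rule was recorded, you can list the ports in the permanent configuration of the public zone (the port only becomes active after the reload performed at the end of this section):
sudo firewall-cmd --permanent --zone=public --list-ports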
The RabbitMQ nodes need to be able to communicate with each other. We would like to open the necessary ports, but only over the internal network; we don’t want anyone on the internet to be able to administer or directly contact our servers. The following commands assume that our servers are on the 192.168.0.100/24 subnet.
The first service is the epmd peer discovery service, which listens on port 4369 by default.
sudo firewall-cmd --permanent --zone=public --add-rich-rule='
rule family="ipv4"
source address="192.168.0.100/24"
port protocol="tcp" port="4369" accept'
For inter-node and CLI communication, RabbitMQ needs to be able to communicate over port 25672.
sudo firewall-cmd --permanent --zone=public --add-rich-rule='
rule family="ipv4"
source address="192.168.0.100/24"
port protocol="tcp" port="25672" accept'
The CLI tools communicate over the port range 35672-35682.
sudo firewall-cmd --permanent --zone=public --add-rich-rule='
rule family="ipv4"
source address="192.168.0.100/24"
port protocol="tcp" port="35672-35682" accept'
If your applications need the AMQP protocol, you will also need to open ports 5671 and 5672. If you need to communicate over another protocol, you can find the networking requirements in the official RabbitMQ documentation.
sudo firewall-cmd --permanent --zone=public --add-rich-rule='
rule family="ipv4"
source address="192.168.0.100/24"
port protocol="tcp" port="5672" accept'
sudo firewall-cmd --permanent --zone=public --add-rich-rule='
rule family="ipv4"
source address="192.168.0.100/24"
port protocol="tcp" port="5671" accept'
Now that firewalld is configured, we need to instruct it to reload the configuration.
sudo firewall-cmd --reload
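To verify that everything was applied, you can list the active configuration of the public zone; the opened ports and the rich rules added above should appear in the output.
sudo firewall-cmd --zone=public --list-all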
Repeat the steps from this section on all servers.
Install rabbitmqadmin
The management plugin comes with a Python tool called rabbitmqadmin, which can easily be installed on the system once the management plugin is enabled.
sudo wget http://localhost:15672/cli/rabbitmqadmin
sudo mv rabbitmqadmin /usr/local/bin/
sudo chmod +x /usr/local/bin/rabbitmqadmin
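As a quick sanity check, you can list the queues on the local node. Here, user_name and password are placeholders for the management user you created when installing RabbitMQ.
rabbitmqadmin -u user_name -p password list queues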
Configure DNS
You must use the servers’ hostnames to identify them when clustering. By default, the servers have no DNS records assigned, so the connection will fail. To quickly overcome this, add the master and mirror hostnames to the /etc/hosts file using your favourite editor.
For example, your master’s hosts file might look like the following. Notice the last two records, which allow the servers to identify each other by their hostname. Be sure to change the IP addresses to your own.
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1 guest
::1 guest
127.0.0.1 YOUR_MASTER_SERVER_HOST_NAME
::1 YOUR_MASTER_SERVER_HOST_NAME
192.168.0.101 YOUR_MASTER_SERVER_HOST_NAME
192.168.0.102 YOUR_MIRROR_SERVER_HOST_NAME
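To make sure the entries work, you can check that each server can reach the others by hostname, for example:
ping -c 3 YOUR_MIRROR_SERVER_HOST_NAME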
Cluster the nodes
An important prerequisite for allowing nodes to join each other is that the Erlang cookie is identical on all nodes. By default, each node is assigned a unique Erlang cookie, so you must reconfigure it on all nodes.
The following command will set the Erlang cookie to “WE<3COOKIES”, but feel free to change this to your liking. Do this on all servers.
sudo sh -c "echo 'WE<3COOKIES' > /var/lib/rabbitmq/.erlang.cookie"
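If the cookie file ends up owned by root or with loose permissions, RabbitMQ will not be able to use it, since Erlang requires the cookie to be accessible by its owner only. It is a good idea to reset the ownership and permissions before restarting.
sudo chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
sudo chmod 400 /var/lib/rabbitmq/.erlang.cookie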
Restart RabbitMQ on all servers to make sure that the Erlang cookie is properly reloaded.
sudo systemctl restart rabbitmq-server.service
Execute the following commands on all servers except on the master server. This will let the nodes join the master server and form a cluster.
sudo rabbitmqctl stop_app
sudo rabbitmqctl join_cluster "rabbit@<YOUR_MASTER_SERVER_HOST_NAME>"
sudo rabbitmqctl start_app
Verify that the nodes have joined the cluster by running the following command.
sudo rabbitmqctl cluster_status
All of your nodes will appear in the nodes and running_nodes sections of the output. From now on, you no longer need to repeat steps on each server; the configuration will automatically be mirrored to the other nodes.
Create a high-availability policy
Now that we have a cluster of RabbitMQ nodes, we can use this to make high-availability queues and exchanges by setting up a new policy. This policy can be added through the RabbitMQ Management Console or using the command line interface.
sudo rabbitmqctl set_policy -p "/" --priority 1 --apply-to "all" ha ".*" '{ "ha-mode": "exactly", "ha-params": 2, "ha-sync-mode": "automatic"}'
The following list will explain what each part of the command means.
-p "/"
: Use this policy on the"/"
vhost (the default after installation)--priority 1
: The order in which to apply policies--apply-to "all"
: Can be"queues"
,"exchanges"
or"all"
ha
: The name we give to our policy".*"
: The regular expression which is used to decide to which queues or exchanges this policy is applied.".*"
will match anything'{ "ha-mode": "exactly", "ha-params": 2, "ha-sync-mode": "automatic"}'
: The JSON representation of the policy. This document describes that we want – exactly 2 nodes on which the data is automatically synchronized
In short, this policy ensures that we always have 2 copies of the data in a queue or exchange, as long as at least 2 nodes are up and running. If you have more nodes, you can increase the value of ha-params. A quorum of nodes (N/2 + 1) is advised. Having more copies of your data results in higher disk, I/O and network usage, which can degrade performance.
If you would like to mirror the data to all the nodes in the cluster, you could use the following JSON document.
'{ "ha-mode": "all", "ha-sync-mode": "automatic"}'
If you would like to mirror the data only to specific nodes (for example, node-1 and node-2), you could use the following.
'{ "ha-mode": "nodes", "ha-params" :["rabbit@node-1", "rabbit@node-2"], "ha-sync-mode": "automatic"}'
You can change the regular expression to assign different policies to different queues. Say we have the following three nodes:
- rabbit@master
- rabbit@client-ha
- rabbit@product-ha
We can then create two policies: queues whose names start with “client” will be mirrored to the rabbit@client-ha node, and queues whose names start with “product” will be mirrored to the rabbit@product-ha node.
sudo rabbitmqctl set_policy -p "/" --priority 1 --apply-to "queues" ha-client "client.*" '{ "ha-mode": "nodes", "ha-params": ["rabbit@master", "rabbit@client-ha"], "ha-sync-mode": "automatic"}'
sudo rabbitmqctl set_policy -p "/" --priority 1 --apply-to "queues" ha-product "product.*" '{ "ha-mode": "nodes", "ha-params": ["rabbit@master", "rabbit@product-ha"], "ha-sync-mode": "automatic"}'
A small remark here: exclusive queues are never mirrored or durable in RabbitMQ, even if a policy would match them. Exclusive queues are automatically destroyed once their client disconnects, so there is no use in replicating them to another server: if the server were to fail, the client would disconnect and the queue would be destroyed automatically, along with any mirrored instances.
Testing the setup
In order to test the clustered setup, we can create a new queue with the rabbitmqadmin command line tool, which uses the management API.
sudo rabbitmqadmin declare queue --vhost "/" name=my-ha-queue durable=true
This will create a durable queue named my-ha-queue on the default / vhost.
Run the following command and verify in the output that the queue has our ‘ha’ policy assigned and has PIDs on the master and on a mirror node.
sudo rabbitmqctl list_queues name policy state pid slave_pids
We can now publish a message to the queue from the master node and stop RabbitMQ on the master node.
sudo rabbitmqadmin -u user_name -p password publish routing_key=my-ha-queue payload="hello world"
sudo systemctl stop rabbitmq-server.service
Now get it back by connecting to the mirror node.
sudo rabbitmqadmin -H MIRROR_NODE_IP_OR_DNS -u user_name -p password get queue=my-ha-queue
Finally, we can restart our master node.
sudo systemctl start rabbitmq-server.service
Delete the guest user
As mentioned before, RabbitMQ automatically creates a guest user with a default guest password. It would be bad practice to leave this default user on a publicly exposed system.
sudo rabbitmqctl delete_user guest
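If you still need an administrative account afterwards and have not already created one, you can add a new user and grant it full permissions on the default vhost. The name admin and the password below are placeholders; choose your own.
sudo rabbitmqctl add_user admin YOUR_STRONG_PASSWORD
sudo rabbitmqctl set_user_tags admin administrator
sudo rabbitmqctl set_permissions -p "/" admin ".*" ".*" ".*"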