Configuring DRBD to Replicate Storage on Two CentOS 7 Servers

A translation of an article prepared ahead of the start of the course “Linux Administrator. Virtualization and Clustering”.

DRBD (Distributed Replicated Block Device) is a distributed, flexible, and versatile replicated storage solution for Linux. It mirrors the contents of block devices such as hard drives, partitions, and logical volumes between servers, keeping a copy of the data on two storage devices so that if one of them fails, the data remains available on the other.

You can think of it as something like a network RAID-1 configuration, with disks mirrored across different servers. However, it works quite differently from RAID (even networked RAID).

Initially, DRBD was used primarily in high-availability (HA) computer clusters; however, starting with version 9, it can also be used to deploy cloud storage solutions.

In this article, we’ll show how to install DRBD on CentOS and briefly demonstrate how to use it to replicate storage (a partition) across two servers. This is the perfect article to get started with DRBD on Linux.

Test environment

We will use a cluster of two nodes for this setup.

  • Node 1: tecmint.tecmint.lan
  • Node 2: server1.tecmint.lan

Step 1: Install DRBD Packages

DRBD is implemented as a Linux kernel module. It is a driver for a virtual block device, so it sits at the very bottom of the system’s I/O stack.

DRBD can be installed from ELRepo or EPEL. We start by importing the ELRepo package-signing key and enabling the repository on both nodes, as shown below.

# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

Then you need to install the DRBD kernel module and utilities on both nodes using:

# yum install -y kmod-drbd84 drbd84-utils

If SELinux is enabled, you need to configure policies to exempt DRBD processes from SELinux control.

# semanage permissive -a drbd_t

In addition, if your system has a firewall (firewalld), you need to open the DRBD port 7789 to allow data synchronization between the two nodes.

Run these commands on the first node, substituting the IP address of the second node for <node2-IP>:

# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="<node2-IP>" port port="7789" protocol="tcp" accept'
# firewall-cmd --reload

Then run these commands on the second node, substituting the IP address of the first node for <node1-IP>:

# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="<node1-IP>" port port="7789" protocol="tcp" accept'
# firewall-cmd --reload
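The quoting in rich rules is easy to get wrong, so it can help to build the rule in a shell variable first. The sketch below composes the rule for one node; the PEER_IP value is a placeholder for the other node’s address, not taken from this setup.

```shell
# Compose the firewalld rich rule for DRBD traffic on port 7789/tcp.
# PEER_IP is a placeholder -- substitute the real IP of the other node.
PEER_IP="192.168.56.101"

RULE="rule family=\"ipv4\" source address=\"${PEER_IP}\" port port=\"7789\" protocol=\"tcp\" accept"

# On a real node you would then apply it with:
#   firewall-cmd --permanent --add-rich-rule="$RULE"
#   firewall-cmd --reload
echo "$RULE"
```

Keeping the rule in a single-quoted outer string (or a variable, as here) avoids the broken nesting of double quotes inside double quotes.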

Step 2: Preparing the Low-Level Storage

Now that DRBD is installed on both cluster nodes, we need to prepare storage areas of roughly the same size on them. This can be a hard disk partition (or an entire physical disk), a software RAID device, an LVM logical volume, or any other type of block device on your system.

For this article, we will wipe a 2 GB block device with the dd command to use for testing.

# dd if=/dev/zero of=/dev/sdb1 bs=2048k count=1024

Suppose this is an unused partition (/dev/sdb1) on a second block device (/dev/sdb) attached to both nodes.
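DRBD expects the two backing devices to be of roughly equal size, so it is worth checking this on both nodes before going further. On real hardware you would query the partition with `blockdev --getsize64 /dev/sdb1`; the sketch below uses a 2 GiB sparse file as a stand-in so it can run anywhere.

```shell
# Check the size of a backing device before handing it to DRBD.
# A sparse temp file stands in for /dev/sdb1 in this sketch.
DEV=$(mktemp)
truncate -s 2G "$DEV"          # allocate a 2 GiB sparse file

SIZE=$(stat -c %s "$DEV")      # size in bytes; use blockdev --getsize64 on a real device
echo "backing device size: $SIZE bytes"
rm -f "$DEV"
```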

Step 3: Configuring DRBD

The main DRBD configuration file is /etc/drbd.conf, and additional configuration files can be found in the directory /etc/drbd.d.

To replicate storage, we add the required configuration to the file /etc/drbd.d/global_common.conf, which contains the global and common sections of the DRBD configuration, and define resources in .res files.

Make a backup copy of the original file on both nodes, then open a new file for editing (use a text editor of your liking).

# mv /etc/drbd.d/global_common.conf /etc/drbd.d/global_common.conf.orig
# vim /etc/drbd.d/global_common.conf 

Add the following lines to the file on both nodes:

global {
 usage-count yes;
}
common {
 net {
  protocol C;
 }
}
Save the file, and then close the editor.

Let’s briefly dwell on the protocol C line. DRBD supports three different replication modes (i.e. three degrees of replication synchronization), namely:

  • protocol A: asynchronous replication protocol, most often used in long-distance replication scenarios.
  • protocol B: semi-synchronous replication protocol, also called the memory-synchronous protocol.
  • protocol C: commonly used for nodes on short-distance networks; by far the most commonly used replication protocol in DRBD setups.

Important: the choice of replication protocol affects two deployment factors: data protection and latency. Throughput, by contrast, is largely independent of the chosen replication protocol.

Step 4: Adding a Resource

A resource is the collective term for all aspects of a particular replicated dataset. We will define our resource in the file /etc/drbd.d/test.res.

Add the following to the file on both nodes (remember to replace the variables with actual values ​​for your environment).

Pay attention to the host names: we need to specify the network host name, which can be obtained with the uname -n command.

resource test {
        on tecmint.tecmint.lan {
                device /dev/drbd0;
                disk /dev/sdb1;
                meta-disk internal;
                address 192.168.56.100:7789;
        }
        on server1.tecmint.lan {
                device /dev/drbd0;
                disk /dev/sdb1;
                meta-disk internal;
                address 192.168.56.101:7789;
        }
}


  • on hostname: the on section states which host the nested configuration statements apply to.
  • test: the name of the new resource.
  • device /dev/drbd0: the new DRBD-managed virtual block device.
  • disk /dev/sdb1: the block device partition that serves as the backing device for the DRBD device.
  • meta-disk: defines where DRBD stores its metadata. internal means DRBD stores its metadata on the same physical low-level device as the actual production data.
  • address: the IP address and port number of the corresponding node.

Also note that if a parameter has the same value on both hosts, you can specify it directly in the resource section.

For example, the above configuration could be restructured to:

resource test {
        device /dev/drbd0;
        disk /dev/sdb1;
        meta-disk internal;
        on tecmint.tecmint.lan {
                address 192.168.56.100:7789;
        }
        on server1.tecmint.lan {
                address 192.168.56.101:7789;
        }
}
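Since the file must be identical on both nodes, one way to avoid copy-paste drift is to generate it from variables. This is only a sketch: the host names come from the example above, while the IP addresses are placeholders for your environment.

```shell
# Generate the resource file from variables so both nodes get the
# same content. Host names must match `uname -n`; the IPs are placeholders.
NODE1=tecmint.tecmint.lan; ADDR1=192.168.56.100
NODE2=server1.tecmint.lan; ADDR2=192.168.56.101
RES=$(mktemp)                  # stands in for /etc/drbd.d/test.res

cat > "$RES" <<EOF
resource test {
        device /dev/drbd0;
        disk /dev/sdb1;
        meta-disk internal;
        on ${NODE1} {
                address ${ADDR1}:7789;
        }
        on ${NODE2} {
                address ${ADDR2}:7789;
        }
}
EOF

grep -c "address" "$RES"       # one address line per node stanza
```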

Step 5: Initializing and Starting the Resource

To interact with DRBD, we will use the following administration tools (which interact with the kernel module to configure and administer DRBD resources):

  • drbdadm: the high-level DRBD administration tool.
  • drbdsetup: a lower-level administration tool for attaching DRBD devices to their backing devices, setting up DRBD device pairs to mirror their backing devices, and inspecting the configuration of running DRBD devices.
  • drbdmeta: the metadata management tool.

After adding all the initial resource configuration, we must create the resource metadata on both nodes:

# drbdadm create-md test

Initializing the Metadata Store

Next, we bring the resource up, which attaches it to its backing device, sets the replication parameters, and connects the resource to its peer:

# drbdadm up test

Now if you run the lsblk command, you will notice that the DRBD device/volume drbd0 is associated with the backing device /dev/sdb1:

# lsblk

Block device list

To disable a resource, run:

# drbdadm down test

To check the status of the resource, run the following commands (note that at this stage the disk states are expected to be Inconsistent/Inconsistent):

# drbdadm status test
# drbdsetup status test --verbose --statistics    # for a more detailed status

Checking the status of the resource
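If you want to script around the status output, a small helper can pull out the local disk state. The sample text below is illustrative of the drbdadm status format, not captured from this setup.

```shell
# Extract the local disk state (e.g. Inconsistent, UpToDate) from
# `drbdadm status`-style output. The SAMPLE text is illustrative only.
disk_state() {
    printf '%s\n' "$1" | sed -n 's/.*disk:\([A-Za-z]*\).*/\1/p' | head -n1
}

SAMPLE='test role:Secondary
  disk:Inconsistent
  server1.tecmint.lan role:Secondary
    peer-disk:Inconsistent'

STATE=$(disk_state "$SAMPLE")
echo "local disk state: $STATE"
```

On a real node you would feed it `"$(drbdadm status test)"` instead of the canned sample.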

Step 6: Setting the Primary Resource/Source for Initial Device Synchronization

At this point, DRBD is ready to go. Now we need to specify which node should be used as a source for initial device synchronization.

Run the following command on only one node to start the initial full synchronization:

# drbdadm primary --force test
# drbdadm status test

Setting the primary node as the starting device

After synchronization is complete, the state of both drives should be UpToDate.

Step 7: Testing the DRBD Setup

Finally, we need to check whether the DRBD device will work as needed for storing replicated data. Remember that we used an empty disk volume, so we need to create a file system on the device and mount it to see if we can use it to store replicated data.

We need to create a file system on the device using the following command on the node from which we started the initial full synchronization (the one where the resource has the primary role):

# mkfs -t ext4 /dev/drbd0

Creating a file system on the DRBD volume

Then mount it as shown (you can give the mount point a suitable name):

# mkdir -p /mnt/DRDB_PRI/
# mount /dev/drbd0 /mnt/DRDB_PRI/

Now copy or create some files in the mount point above, and then list them with the ls -l command:

# cd /mnt/DRDB_PRI/
# ls -l 

Listing the contents of the primary DRBD volume

Next, unmount the device (make sure nothing is using the mount, and change out of the mounted directory before unmounting to avoid errors), then change the role of the node from primary to secondary:

# cd
# umount /mnt/DRDB_PRI/
# drbdadm secondary test

Make the other node (where the resource has the secondary role) primary, then mount the device on it and list the contents of the mount point. If the setup works fine, all the files stored on the volume should be there:

# drbdadm primary test
# mkdir -p /mnt/DRDB_SEC/
# mount /dev/drbd0 /mnt/DRDB_SEC/
# cd /mnt/DRDB_SEC/
# ls  -l 

Verifying the DRBD setup on the secondary node
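Listing file names proves less than comparing file contents. The sketch below compares checksums the way you might across the two mount points; here two temporary directories stand in for /mnt/DRDB_PRI and /mnt/DRDB_SEC so it can run without a cluster.

```shell
# Compare content checksums of a file as seen on the primary and
# secondary. Temp dirs stand in for the real mount points.
PRI=$(mktemp -d)   # stand-in for /mnt/DRDB_PRI
SEC=$(mktemp -d)   # stand-in for /mnt/DRDB_SEC

echo "hello drbd" > "$PRI/file1"
cp "$PRI/file1" "$SEC/file1"   # on a real cluster, DRBD replicates this

SUM_PRI=$(sha256sum "$PRI/file1" | awk '{print $1}')
SUM_SEC=$(sha256sum "$SEC/file1" | awk '{print $1}')

[ "$SUM_PRI" = "$SUM_SEC" ] && echo "checksums match"
rm -rf "$PRI" "$SEC"
```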

For more information, refer to the administration tool man pages:

# man drbdadm
# man drbdsetup
# man drbdmeta

Reference: DRBD User Guide.


DRBD is extremely flexible and versatile, making it a storage replication solution suitable for adding HA to virtually any application. In this article, we showed how to install DRBD on CentOS 7, and briefly demonstrated how to use it to replicate storage. Feel free to share your thoughts with us using the feedback form below.

