POC of DRBD replication
My notes from a quick DRBD test…
The Distributed Replicated Block Device (DRBD) provides networked block-level data mirroring; in redundant array of independent disks (RAID) terms, it is essentially RAID-1 across two hosts.
Show status, add a file, and check the target…
After initial sync:
[root@drdb01 ~]# drbdadm status test
test role:Primary
  disk:UpToDate
  peer role:Secondary
    replication:Established peer-disk:UpToDate

[root@drdb02 ~]# drbdadm status test
test role:Secondary
  disk:UpToDate
  peer role:Primary
    replication:Established peer-disk:UpToDate
Create filesystem and some data:
[root@drdb01 ~]# mkfs -t ext4 /dev/drbd0
[root@drdb01 ~]# mkdir -p /mnt/DRDB_PRI
[root@drdb01 ~]# mount /dev/drbd0 /mnt/DRDB_PRI
[root@drdb01 ~]# cd /mnt/DRDB_PRI
[root@drdb01 DRDB_PRI]# ls -l
total 16
drwx------. 2 root root 16384 Aug 6 10:12 lost+found
[root@drdb01 DRDB_PRI]# yum install wget
[root@drdb01 DRDB_PRI]# wget https://osdn.net/projects/systemrescuecd/storage/releases/6.0.3/systemrescuecd-6.0.3.iso
2019-08-06 10:18:13 (4.61 MB/s) - ‘systemrescuecd-6.0.3.iso’ saved [881852416/881852416]
[root@drdb01 DRDB_PRI]# ls -lh
total 842M
drwx------. 2 root root 16K Aug 6 10:12 lost+found
-rw-r--r--. 1 root root 841M Apr 14 08:52 systemrescuecd-6.0.3.iso
Switch roles and check the secondary:
[root@drdb01 ~]# umount /mnt/DRDB_PRI
[root@drdb01 ~]# drbdadm secondary test
[root@drdb02 ~]# drbdadm primary test
[root@drdb02 ~]# mkdir -p /mnt/DRDB_SEC
[root@drdb02 ~]# mount /dev/drbd0 /mnt/DRDB_SEC
[root@drdb02 ~]# cd /mnt/DRDB_SEC
[root@drdb02 DRDB_SEC]# ls -lh
total 842M
drwx------. 2 root root 16K Aug 6 10:12 lost+found
-rw-r--r--. 1 root root 841M Apr 14 08:52 systemrescuecd-6.0.3.iso
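To go beyond an ls, comparing a checksum here against one taken on drdb01 while it was still primary would confirm the replica is bit-for-bit intact (not captured in the original session; assumes sha256sum is installed):

sha256sum /mnt/DRDB_SEC/systemrescuecd-6.0.3.iso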
Switch roles back:
[root@drdb02 DRDB_SEC]# cd
[root@drdb02 ~]# umount /mnt/DRDB_SEC
[root@drdb02 ~]# drbdadm secondary test
[root@drdb01 ~]# drbdadm primary test
[root@drdb01 ~]# mount /dev/drbd0 /mnt/DRDB_PRI
Detailed steps below showing how we got to the test above…
Node 1 setup and start initial sync:
rrosso ~ ssh root@192.168.1.95
[root@drdb01 ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@drdb01 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
[root@drdb01 ~]# yum install -y kmod-drbd84 drbd84-utils
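A quick sanity check that the kernel module actually loads (not part of the original session, but cheap to do before any configuration):

modprobe drbd
lsmod | grep drbd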
[root@drdb01 ~]# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.96" port port="7789" protocol="tcp" accept'
success
[root@drdb01 ~]# firewall-cmd --reload
success
[root@drdb01 ~]# yum install policycoreutils-python
[root@drdb01 ~]# semanage permissive -a drbd_t
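To confirm the drbd_t domain was actually added, the permissive list can be queried with the same tooling:

semanage permissive -l | grep drbd_t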
[root@drdb01 ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/cl-root  6.2G  1.2G  5.1G  20% /
devtmpfs             990M     0  990M   0% /dev
tmpfs               1001M     0 1001M   0% /dev/shm
tmpfs               1001M  8.4M  992M   1% /run
tmpfs               1001M     0 1001M   0% /sys/fs/cgroup
/dev/sda1           1014M  151M  864M  15% /boot
tmpfs                201M     0  201M   0% /run/user/0
[root@drdb01 ~]# init 0
rrosso ~ 255 ssh root@192.168.1.95
[root@drdb01 ~]# fdisk -l | grep sd
Disk /dev/sda: 8589 MB, 8589934592 bytes, 16777216 sectors
/dev/sda1 * 2048 2099199 1048576 83 Linux
/dev/sda2 2099200 16777215 7339008 8e Linux LVM
Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
[root@drdb01 ~]# vi /etc/drbd.d/global_common.conf
[root@drdb01 ~]# vi /etc/drbd.d/test.res
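The contents of the two config files didn't make it into these notes. Based on the device, disk, hostnames, and IP/port that appear elsewhere in this session, a minimal test.res would look roughly like the sketch below (internal metadata and protocol C are assumptions, both common defaults):

resource test {
    protocol C;
    on drdb01.localdomain {
        device /dev/drbd0;
        disk /dev/sdb1;
        meta-disk internal;
        address 192.168.1.95:7789;
    }
    on drdb02.localdomain {
        device /dev/drbd0;
        disk /dev/sdb1;
        meta-disk internal;
        address 192.168.1.96:7789;
    }
}

The "you are the Nth user" messages later in the session suggest global_common.conf kept usage-count set to yes.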
[root@drdb01 ~]# uname -n
drdb01.localdomain
** partition disk |
[root@drdb01 ~]# fdisk /dev/sdb
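The interactive fdisk dialog wasn't captured. Judging by the partition table printed later on node 2 (/dev/sdb1 spanning the whole disk), the keystrokes were the usual n, p, 1, two defaults, w; scripted, that would be something like this sketch (fdisk prompts vary by version, so treat it as illustrative):

fdisk /dev/sdb <<'EOF'
n
p
1


w
EOF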
[root@drdb01 ~]# vi /etc/drbd.d/test.res
[root@drdb01 ~]# drbdadm create-md test
initializing activity log
initializing bitmap (640 KB) to all zero
Writing meta data...
New drbd meta data block successfully created.
[root@drdb01 ~]# drbdadm up test
The server's response is:
you are the 18305th user to install this version
[root@drdb01 ~]# vi /etc/drbd.d/test.res
[root@drdb01 ~]# drbdadm down test
[root@drdb01 ~]# drbdadm up test
[root@drdb01 ~]# drbdadm status test
test role:Secondary
  disk:Inconsistent
  peer role:Secondary
    replication:Established peer-disk:Inconsistent
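** at this point both sides are Inconsistent, so there is no UpToDate copy to sync from; one node has to be forced to Primary to become the sync source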
[root@drdb01 ~]# drbdadm primary --force test
[root@drdb01 ~]# drbdadm status test
test role:Primary
  disk:UpToDate
  peer role:Secondary
    replication:SyncSource peer-disk:Inconsistent done:0.01
[root@drdb01 ~]# drbdadm status test
test role:Primary
  disk:UpToDate
  peer role:Secondary
    replication:SyncSource peer-disk:Inconsistent done:3.80
[root@drdb01 ~]# drbdadm status test
test role:Primary
  disk:UpToDate
  peer role:Secondary
    replication:SyncSource peer-disk:Inconsistent done:85.14
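Rather than rerunning the status command by hand, the sync percentage can be followed continuously (watch isn't part of the original session but is normally available on CentOS 7):

watch -n2 drbdadm status test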
Node 2 setup and start initial sync:
rrosso ~ ssh root@192.168.1.96
[root@drdb01 ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@drdb01 ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
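** kmod-drbd84 and drbd84-utils were presumably installed here as well, as on node 1; that step didn't make it into these notes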
[root@drdb01 ~]# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.95" port port="7789" protocol="tcp" accept'
success
[root@drdb01 ~]# firewall-cmd --reload
success
[root@drdb01 ~]# yum install policycoreutils-python
[root@drdb01 ~]# semanage permissive -a drbd_t
[root@drdb01 ~]# init 0
rrosso ~ 255 ssh root@192.168.1.96
root@192.168.1.96's password:
Last login: Tue Aug 6 09:46:34 2019
[root@drdb01 ~]# fdisk -l | grep sd
Disk /dev/sda: 8589 MB, 8589934592 bytes, 16777216 sectors
/dev/sda1 * 2048 2099199 1048576 83 Linux
/dev/sda2 2099200 16777215 7339008 8e Linux LVM
Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
[root@drdb01 ~]# vi /etc/drbd.d/global_common.conf
[root@drdb01 ~]# vi /etc/drbd.d/test.res
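** both files need the same contents as on node 1; DRBD expects the resource definition to match on both peers (see the test.res sketch above)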
** partition disk |
[root@drdb01 ~]# fdisk /dev/sdb
[root@drdb01 ~]# fdisk -l | grep sd
Disk /dev/sda: 8589 MB, 8589934592 bytes, 16777216 sectors
/dev/sda1 * 2048 2099199 1048576 83 Linux
/dev/sda2 2099200 16777215 7339008 8e Linux LVM
Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
/dev/sdb1 2048 41943039 20970496 83 Linux
[root@drdb01 ~]# vi /etc/drbd.d/test.res
[root@drdb01 ~]# drbdadm create-md test
initializing activity log
initializing bitmap (640 KB) to all zero
Writing meta data...
New drbd meta data block successfully created.
** note: the hostname here was still drdb01 because I cloned the 2nd VM from the 1st
[root@drdb01 ~]# drbdadm up test
you are the 18306th user to install this version
drbd.d/test.res:6: in resource test, on drdb01.localdomain:
IP 192.168.1.95 not found on this host.
[root@drdb01 ~]# vi /etc/drbd.d/test.res
[root@drdb01 ~]# uname -a
Linux drdb01.localdomain 3.10.0-957.27.2.el7.x86_64 #1 SMP Mon Jul 29 17:46:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
[root@drdb01 ~]# vi /etc/hostname
[root@drdb01 ~]# vi /etc/hosts
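The hostname and hosts edits aren't shown. Given the names and addresses used throughout, /etc/hostname was presumably set to drdb02.localdomain and /etc/hosts ended up with entries along these lines:

192.168.1.95    drdb01.localdomain drdb01
192.168.1.96    drdb02.localdomain drdb02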
[root@drdb01 ~]# reboot
rrosso ~ 255 ssh root@192.168.1.96
[root@drdb02 ~]# uname -a
Linux drdb02.localdomain 3.10.0-957.27.2.el7.x86_64 #1 SMP Mon Jul 29 17:46:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
[root@drdb02 ~]# drbdadm up test
[root@drdb02 ~]# drbdadm status test
test role:Secondary
  disk:Inconsistent
  peer role:Secondary
    replication:Established peer-disk:Inconsistent
[root@drdb02 ~]# drbdadm status test
test role:Secondary
  disk:Inconsistent
  peer role:Primary
    replication:SyncTarget peer-disk:UpToDate done:0.21
[root@drdb02 ~]# drbdadm status test
test role:Secondary
  disk:Inconsistent
  peer role:Primary
    replication:SyncTarget peer-disk:UpToDate done:82.66
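** once the sync reaches 100% both nodes report UpToDate, which is the state shown at the top of these notes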
LINKS:
https://www.linbit.com/en/disaster-recovery/
https://www.tecmint.com/setup-drbd-storage-replication-on-centos-7/