Overview:
Configuring NFS (Network File System) and DRBD (Distributed Replicated Block Device) as cluster resources for highly available storage and continuous data replication involves setting up a cluster environment, configuring DRBD for data replication, and configuring NFS for shared access to the replicated data.
Distributed Replicated Block Device (DRBD):
DRBD is a software-based, open-source storage solution for Linux. It is designed to provide high availability (HA) and data redundancy by mirroring block devices between multiple nodes in a cluster. DRBD operates at the block level, which means it replicates data at the disk-block level rather than the file level. In a DRBD setup, one node is designated as the primary and the others as secondary. The primary node handles both read and write operations, while the secondary nodes are in standby mode, ready to take over if the primary fails.
Network File System (NFS):
NFS is a distributed file system protocol that allows a user on a client computer to access files over a network as if they were stored locally on the client's machine. NFS enables transparent file sharing and access among multiple computers in a network, providing a way for users and applications to access and manipulate files on remote systems.
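For example, once a server has exported a directory over NFS, a client can mount that export and work with it as if it were a local directory. The server name and paths below are placeholders for illustration only; the export actually used in this guide is configured later:
[root@client ~]# mount -t nfs nfs-server.example.com:/export/data /mnt   # placeholder server name and export path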
Requirements and Prerequisites:
Before implementing DRBD and NFS as cluster resources, verify that the following prerequisites are met:
Cluster infrastructure: at least two nodes (servers) in your cluster, connected by a reliable, dedicated network link.
Software installation: the DRBD and NFS server packages installed on all nodes where data replication will occur (see the example below).
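As an illustration, on a SUSE-based cluster (which the crm, csync2, and Hawk2 tooling used later in this guide suggests), the required packages could be installed roughly as follows; adjust the package manager and package names to your distribution:
[root@ha-node1 ~]# zypper install drbd drbd-utils nfs-kernel-server   # package names are distribution-dependent
[root@ha-node2 ~]# zypper install drbd drbd-utils nfs-kernel-server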
Configure LVM Device:
On the first cluster node (ha-node1), log in as the root user and enter the following commands to create the LVM device.
Create the physical volume. Replace /dev/vdb with your actual device name:
[root@ha-node1 ~]# pvcreate /dev/vdb
Create the volume group:
[root@ha-node1 ~]# vgcreate nfs /dev/vdb
Create the logical volume:
[root@ha-node1 ~]# lvcreate -n drbdvol -L 40G nfs
Activate the volume group:
[root@ha-node1 ~]# vgchange -ay nfs
The logical volume and volume group should now be created and active. Enter the following commands to verify:
[root@ha-node1 ~]# lvs
[root@ha-node1 ~]# vgs
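The output should resemble the following; the sizes and attributes shown here are illustrative and will differ depending on your disk:
[root@ha-node1 ~]# lvs
  LV      VG  Attr       LSize
  drbdvol nfs -wi-a----- 40.00g
[root@ha-node1 ~]# vgs
  VG  #PV #LV #SN Attr   VSize   VFree
  nfs   1   1   0 wz--n- <50.00g <10.00g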
Repeat the previous steps on the second cluster node (ha-node2) to create the LVM device there.
Create and Configure DRBD
On the first cluster node (ha-node1), enter the following command to create and open the resource configuration file /etc/drbd.d/nfs.res for editing:
[root@ha-node1 ~]# vim /etc/drbd.d/nfs.res
Enter the following content in the file:
resource nfs {
   device /dev/drbd0;
   disk /dev/nfs/drbdvol;
   meta-disk internal;
   net {
      protocol C;
   }
   connection-mesh {
      hosts ha-node1 ha-node2;
   }
   on ha-node1 {
      address 10.0.100.5:7790;
      node-id 0;
   }
   on ha-node2 {
      address 10.0.100.6:7790;
      node-id 1;
   }
}
Resource Name: ‘nfs’ is the name assigned to the DRBD resource.
Device: /dev/drbd0 is the block device that will be managed by DRBD.
Disk: /dev/nfs/drbdvol is the physical storage device or logical volume associated with the DRBD resource.
Meta-disk: internal specifies that the metadata for DRBD should be stored within the resource itself.
Network Configuration:
Protocol: C selects DRBD's synchronous replication protocol, in which a write is reported as complete only after it has reached the disks on both nodes. It is the protocol most commonly used for HA setups.
Connection Mesh: hosts ha-node1 ha-node2; defines the nodes involved in the DRBD replication. In this case, they are ha-node1 and ha-node2.
Node Configuration:
Node 1 (ha-node1):
Address: 10.0.100.5:7790 specifies the IP address and port for communication with ha-node1.
Node ID: node-id 0 assigns the node ID 0 to ha-node1.
Node 2 (ha-node2):
Address: 10.0.100.6:7790 specifies the IP address and port for communication with ha-node2.
Node ID: node-id 1 assigns the node ID 1 to ha-node2.
Save the file and quit the editor.
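Optionally, verify that the resource file parses cleanly before distributing it; drbdadm dump prints the parsed configuration and reports any syntax errors:
[root@ha-node1 ~]# drbdadm dump nfs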
Open the file /etc/csync2/csync2.cfg in a text editor:
[root@ha-node1 ~]# vim /etc/csync2/csync2.cfg
If the file does not already contain the following lines, add them so that the DRBD configuration files are synchronized to the other cluster node:
include /etc/drbd.conf;
include /etc/drbd.d;
Enter the following command to sync the files to the other cluster node:
[root@ha-node1 ~]# csync2 -xv
The files should sync without error. If there is an error, run the command once again.
Activate the DRBD Resource:
On the first cluster node (ha-node1), enter the following commands to create the DRBD metadata and activate the DRBD volume:
[root@ha-node1 ~]# drbdadm create-md nfs
[root@ha-node1 ~]# drbdadm up nfs
The DRBD volume should now be created and activated.
Repeat the above drbdadm commands on the second cluster node (ha-node2).
On the first cluster node (ha-node1), enter the following command to start the initial synchronization:
[root@ha-node1 ~]# drbdadm new-current-uuid --clear-bitmap nfs/0
Enter the following command to watch the status of the volume:
[root@ha-node1 ~]# watch drbdadm status nfs
You may initially see that the peer-disk is Inconsistent, with the synchronization percentage displayed. When the peer-disk shows UpToDate, press Ctrl+C to stop the watch command and continue to the next step.
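For reference, once synchronization has finished, the status output should look roughly like this (the exact fields vary with the DRBD version):
nfs role:Secondary
  disk:UpToDate
  ha-node2 role:Secondary
    peer-disk:UpToDate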
Create file system on DRBD device:
After activating the DRBD, you will see a device with the name /dev/drbd0.
Enter the following command to create a filesystem on the DRBD volume:
[root@ha-node1 ~]# mkfs.ext4 /dev/drbd0
The filesystem has now been created.
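If you want to confirm the result, blkid should now report an ext4 filesystem on the DRBD device (the UUID is elided here):
[root@ha-node1 ~]# blkid /dev/drbd0
/dev/drbd0: UUID="..." TYPE="ext4"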
Adjust the Default Configuration of Pacemaker:
When a resource goes down and then comes back online in a cluster, it might automatically return to its original node. To control this behavior and either prevent the resource from returning to its initial node or specify a different node for it to go back to, you can adjust its "stickiness" value. Stickiness determines how "sticky" or attached a resource is to its current node.
Enter the following commands to adjust the stickiness value:
[root@ha-node1 ~]# crm configure
crm(live/ha-node1)configure# rsc_defaults resource-stickiness="200"
crm(live/ha-node1)configure# commit
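To confirm the new default, display the configuration from the same crm shell; the stickiness value appears in the rsc_defaults section (output abbreviated; the exact element id and layout depend on the crmsh version):
crm(live/ha-node1)configure# show
...
rsc_defaults rsc-options: \
        resource-stickiness=200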
Creating Cluster Resources:
To configure the DRBD primitive and clone resources, run the following commands from the crm shell:
crm(live/ha-node1)configure# primitive p-drbd_nfs ocf:linbit:drbd \
   params drbd_resource=nfs \
   op monitor interval=15 role=Master \
   op monitor interval=30 role=Slave
crm(live/ha-node1)configure# ms ms-drbd_nfs p-drbd_nfs \
   meta master-max=1 \
   master-node-max=1 \
   clone-max=2 \
   clone-node-max=1 \
   notify=true
crm(live/ha-node1)configure# commit
crm(live/ha-node1)configure# quit
This will create a Pacemaker promotable clone resource that matches the DRBD resource called "nfs." This step will prompt Pacemaker to activate your DRBD resource on both nodes and give it the primary (master) role on one of them.
Check the cluster status using the following command:
[root@ha-node1 ~]# crm status
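The output should show the DRBD clone promoted on one node and running as a secondary on the other, roughly as follows (the exact layout depends on the Pacemaker version):
  * Clone Set: ms-drbd_nfs [p-drbd_nfs] (promotable):
    * Masters: [ ha-node1 ]
    * Slaves: [ ha-node2 ]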
NFS Cluster Resources:
On the first cluster node (ha-node1), in a terminal session as the root user, enter the following command to create a mount point for the DRBD volume:
[root@ha-node1 ~]# mkdir -p /srv/nfs/drbd0
Repeat this command on the second cluster node (ha-node2).
On the first cluster node (ha-node1), in a terminal session as the root user, enter the following command to create the NFS resources:
[root@ha-node1 ~]# crm configure
crm(live/ha-node1)configure# primitive p-nfsserver systemd:nfs-server \
   op monitor interval=60
crm(live/ha-node1)configure# clone c-nfsserver p-nfsserver \
   meta interleave=true
Filesystem Resource
crm(live/ha-node1)configure# primitive p-nfs_drbdvol Filesystem \
   params device=/dev/drbd0 \
   directory=/srv/nfs/drbd0 \
   fstype=ext4 \
   op monitor interval=20
NFS Export Resource
crm(live/ha-node1)configure# primitive p-exportfs_drbdvol exportfs \
   params directory=/srv/nfs/drbd0 \
   options="rw,mountpoint" \
   clientspec="10.0.100.0/24" \
   wait_for_leasetime_on_stop=true \
   fsid=100 \
   op monitor interval=30
crm(live/ha-node1)configure# primitive p-vip_nfs IPaddr2 \
   params ip=10.0.100.4 cidr_netmask=24 nic=eth0 \
   op monitor timeout=20 interval=10
Combine these resources into a Pacemaker resource group:
crm(live/ha-node1)configure# group g-nfs p-nfs_drbdvol p-exportfs_drbdvol p-vip_nfs
Add the following constraints to make sure that the group is started on the same node on which the DRBD promotable clone resource is in the master role:
crm(live/ha-node1)configure# order o-drbd_before_nfs Mandatory: ms-drbd_nfs:promote g-nfs:start
crm(live/ha-node1)configure# colocation col-nfs_on_drbd inf: g-nfs ms-drbd_nfs:Master
Commit this configuration and quit crmsh:
crm(live/ha-node1)configure# commit
crm(live/ha-node1)configure# quit
DRBD and NFS as cluster resources have been configured successfully.
Check the cluster status using the following command:
[root@ha-node1 ~]# crm status
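Besides the DRBD clone, the NFS server clone and the g-nfs group should now be reported as started, with the group running on the DRBD master node, for example (output abbreviated and version-dependent):
  * Clone Set: c-nfsserver [p-nfsserver]:
    * Started: [ ha-node1 ha-node2 ]
  * Resource Group: g-nfs:
    * p-nfs_drbdvol        (ocf::heartbeat:Filesystem):   Started ha-node1
    * p-exportfs_drbdvol   (ocf::heartbeat:exportfs):     Started ha-node1
    * p-vip_nfs            (ocf::heartbeat:IPaddr2):      Started ha-node1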
Check Cluster Status from Hawk2:
Start a browser. Use the following URL (replace the placeholder with the IP address or hostname of a cluster node) and log in to the Hawk2 web console:
https://<server IP or hostname>:7630
From the left navigation pane, select Status.
All the resources should now be running in the cluster according to their respective configurations, which can be confirmed on the Hawk2 Status screen.
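As a final check, the export can be verified from any machine in the 10.0.100.0/24 network by querying the virtual IP and mounting the share; the client host and mount point below are placeholders:
[root@client ~]# showmount -e 10.0.100.4
Export list for 10.0.100.4:
/srv/nfs/drbd0 10.0.100.0/24
[root@client ~]# mount -t nfs 10.0.100.4:/srv/nfs/drbd0 /mnt   # placeholder client and mount point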