Press "Enter" to skip to content

Setting up a high availability cluster (part 2)

In the previous post we configured an active/standby cluster with Corosync and Pacemaker.

However, we saw that a functional cluster needs file replication between the two nodes of the cluster.

In this post we will finish configuring the cluster by adding replication of the /var/www directory between the two Apache servers. For this we will use DRBD, a distributed replicated block storage system for Linux.

The topology used in this post is the same as in the previous post:

 

1. Install DRBD on each node

First of all, we need to install the DRBD package on each node:
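
On CentOS 7 the DRBD packages are not in the base repositories, so a typical approach (an assumption here; adjust to your distribution and DRBD version) is to install them from ELRepo:

    # Enable the ELRepo repository
    rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
    yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm

    # Install the DRBD userland tools and kernel module (package names vary by DRBD version)
    yum install -y drbd90-utils kmod-drbd90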

It is recommended to disable SELinux completely, or at least set it to permissive mode for the DRBD services:
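
A minimal sketch of both options, assuming CentOS 7:

    # Option 1: put the whole system in permissive mode
    setenforce 0
    sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

    # Option 2: exempt only the DRBD domain (requires policycoreutils-python)
    semanage permissive -a drbd_t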

We must also allow DRBD replication traffic through the firewall, on node 1 and on node 2:
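
A sketch with firewalld, assuming node1 is 192.168.125.11 and node2 is 192.168.125.12 (placeholder addresses) and that DRBD replicates over TCP port 7789:

    # Node 1: accept DRBD traffic coming from node 2
    firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.125.12" port port="7789" protocol="tcp" accept'
    firewall-cmd --reload

    # Node 2: accept DRBD traffic coming from node 1
    firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.125.11" port port="7789" protocol="tcp" accept'
    firewall-cmd --reload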

 

2. Allocate a disk volume for DRBD

DRBD will need its own block device on each node. This can be a physical disk partition or logical volume, of whatever size you need for your data.

Since in this tutorial I’m using virtual machines, I added a new 20 GB disk ‘sdb’ to each node:

After that, I used the ‘fdisk’ utility to create a partition (‘sdb1’) on ‘sdb’ on each node:
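
As a sketch, the interactive fdisk session on each node looks like this:

    # Create one primary partition spanning the whole disk
    fdisk /dev/sdb        # answer: n, p, 1, accept the default sectors, then w

    # Verify that sdb1 now exists
    lsblk /dev/sdb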

You could use a logical volume instead of a physical partition.

 

3. Configure DRBD

We create a new DRBD resource configuration on each node, where we define the partition to be used and the nodes involved in the replication.

In ‘wwwdata.res’ (usually placed under /etc/drbd.d/) we insert the following configuration:
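
A resource definition along these lines should work; the device ‘/dev/drbd1’ and the backing partition ‘/dev/sdb1’ match the ones used in this post, while the node IP addresses and the port 7789 are placeholders:

    resource wwwdata {
      device    /dev/drbd1;
      disk      /dev/sdb1;
      meta-disk internal;
      net {
        protocol C;
      }
      on node1 {
        address 192.168.125.11:7789;
      }
      on node2 {
        address 192.168.125.12:7789;
      }
    }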

With the configuration in place, we can now get DRBD running.

These commands create the local metadata for the DRBD resource and bring it up:
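
Assuming the resource is named ‘wwwdata’ as above, on node1:

    drbdadm create-md wwwdata
    systemctl start drbd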

IMPORTANT: the DRBD service will not finish starting until we run the same commands (‘drbdadm create-md’ and ‘systemctl start drbd’) on the other node.

Now we can check the status with the command ‘drbdadm status’.

You can see the state has changed to Connected, meaning the two DRBD nodes are communicating properly, and both nodes are in Secondary role with Inconsistent data.

To make the data consistent, we need to tell DRBD which node should be considered to have the correct data. In this case, since we are creating a new resource, both have garbage, so we’ll just pick ‘node1’ and run this command on it:
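
Assuming the resource name ‘wwwdata’, that command is:

    drbdadm primary --force wwwdata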

We can see that this node now has the Primary role and the partner node has the Secondary role. This node’s data is now considered UpToDate, while the partner node’s data is still Inconsistent because it is still synchronizing (78.76 % done at this point). We must wait for node2 to finish synchronizing.

 

4. Add the replication path to DRBD

Now we’ve come to the final part, which is testing the DRBD service to ensure it meets the objective. First, let’s prepare and mount the DRBD partition.

IMPORTANT: Perform the steps below once, on the primary node (node1) ONLY!

On the node with the primary role (node1 in this example), create a filesystem on the DRBD device:
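
For example, with xfs (the CentOS 7 default filesystem, used here as an assumption; pick whichever filesystem you prefer):

    mkfs.xfs /dev/drbd1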

Mount the newly created filesystem, populate it with our web document and then unmount it (the cluster will handle mounting and unmounting it later):
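
For example (the page content is only a placeholder):

    mount /dev/drbd1 /mnt
    echo "<html><body>My DRBD-backed web page</body></html>" > /mnt/index.html
    umount /mnt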

Now, we will create a cluster resource for the DRBD device, and an additional clone resource to allow the resource to run on both nodes at the same time.

NOTE: Using the pcs -f option, make changes to the configuration saved in the drbd_cfg file. These changes will not be seen by the cluster until the drbd_cfg file is pushed into the live cluster’s CIB later.
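
A sketch of those commands with the pcs syntax shipped with CentOS 7; the resource names ‘WebData’ and ‘WebDataClone’ are examples, and the master resource is what newer Pacemaker versions call a promotable clone:

    # Save a copy of the current CIB to work on offline
    pcs cluster cib drbd_cfg

    # The DRBD resource itself
    pcs -f drbd_cfg resource create WebData ocf:linbit:drbd \
        drbd_resource=wwwdata op monitor interval=60s

    # The clone that runs it on both nodes, with a single master (the DRBD Primary)
    pcs -f drbd_cfg resource master WebDataClone WebData \
        master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true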

After you are satisfied with all the changes, you can commit them all at once by pushing the drbd_cfg file into the live CIB:
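
With pcs this is:

    pcs cluster cib-push drbd_cfg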

The resource agent should load the DRBD module when needed if it’s not already loaded. If that does not happen, configure your operating system to load the module at boot time. For CentOS 7.1, you would run this on both nodes:
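
One way to do that is to let systemd-modules-load handle it:

    echo drbd > /etc/modules-load.d/drbd.conf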

Now that we have a working DRBD device, we need to mount its filesystem. In addition to defining the filesystem, we also need to tell the cluster where it can be located (only on the DRBD Primary) and when it is allowed to start (after the Primary has been promoted).
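
A sketch with pcs, again building the changes in an offline file first; ‘WebFS’ is the resource name used later in this post, and the xfs filesystem type matches the earlier assumption:

    pcs cluster cib fs_cfg

    # The filesystem on top of the DRBD device, mounted on /var/www
    pcs -f fs_cfg resource create WebFS Filesystem \
        device="/dev/drbd1" directory="/var/www" fstype="xfs"

    # Only run WebFS where the DRBD clone is Master, and only after it has been promoted
    pcs -f fs_cfg constraint colocation add WebFS with WebDataClone INFINITY with-rsc-role=Master
    pcs -f fs_cfg constraint order promote WebDataClone then start WebFS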

We also need to tell the cluster that Apache needs to run on the same machine as the filesystem and that it must be active before Apache can start.
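
Assuming the Apache resource created in the previous post is called ‘WebServer’ (adjust the name to whatever you used), the constraints and the final push look like this:

    pcs -f fs_cfg constraint colocation add WebServer with WebFS INFINITY
    pcs -f fs_cfg constraint order WebFS then WebServer
    pcs cluster cib-push fs_cfg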

Now if we check the status of the DRBD resource:

Since node1 is running the web server right now, the WebFS is mounted on node1.

Now if we perform a request against http://192.168.125.10, we will see the index.html we created on ‘/dev/drbd1’.
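
For example:

    curl http://192.168.125.10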

Now we edit the index.html mounted on node1 under /var/www/ and change the HTML body:
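
For example (again with placeholder content):

    echo "<html><body>My DRBD-backed web page - updated</body></html>" > /var/www/index.html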

And we force a failover to node2 by putting node1 in standby mode:
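
With the pcs syntax used on CentOS 7 (newer versions use ‘pcs node standby’):

    pcs cluster standby node1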

If we run ‘drbdadm status’ on node2, we no longer see node1 and node2 is now Primary:

And finally, if we check the content of /var/www/index.html on node2, we can see the file with the new HTML body that we changed on node1!

Finally, we reactivate node1:
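
Again with the older pcs syntax (newer versions use ‘pcs node unstandby’):

    pcs cluster unstandby node1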

With the above steps, together with the steps followed in the previous post, you should be able to set up your own active/standby HA cluster.

