
9.5. Clone the IP address

There’s no point in making the services active in both locations if we can’t reach them both, so let’s clone the IP address.
The IPaddr2 resource agent has built-in intelligence for when it is configured as a clone. It will utilize a multicast MAC address to have the local switch send the relevant packets to all nodes in the cluster, together with an iptables CLUSTERIP rule on each node so that any given packet is grabbed by exactly one node. This gives us a simple but effective form of load balancing between our two nodes; we will verify the mechanism below, once the clone is running.
Let’s start a new config, and clone our IP:
[root@pcmk-1 ~]# pcs cluster cib loadbalance_cfg
[root@pcmk-1 ~]# pcs -f loadbalance_cfg resource clone ClusterIP \
     clone-max=2 clone-node-max=2 globally-unique=true
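Before pushing anything to the live CIB, you can review the staged clone and confirm the meta attributes we just set (clone-max, clone-node-max and globally-unique). Note that resource show <id> is the syntax on the pcs 0.9 series used in this chapter; newer pcs releases renamed the subcommand to resource config:
[root@pcmk-1 ~]# # 'resource show' is pcs 0.9 syntax; on pcs 0.10+ use 'resource config'
[root@pcmk-1 ~]# pcs -f loadbalance_cfg resource show ClusterIP-clone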
Notice that when the ClusterIP becomes a clone, the constraints referencing ClusterIP now reference the clone. This is done automatically by pcs.
[root@pcmk-1 ~]# pcs -f loadbalance_cfg constraint
Location Constraints:
Ordering Constraints:
  start ClusterIP-clone then start WebSite (kind:Mandatory)
  promote WebDataClone then start WebFS (kind:Mandatory)
  start WebFS then start WebSite (kind:Mandatory)
  start dlm-clone then start WebFS (kind:Mandatory)
Colocation Constraints:
  WebSite with ClusterIP-clone (score:INFINITY)
  WebFS with WebDataClone (score:INFINITY) (with-rsc-role:Master)
  WebSite with WebFS (score:INFINITY)
  WebFS with dlm-clone (score:INFINITY)
Ticket Constraints:
Now we must tell the resource how to decide which requests are processed by which node. To do this, we specify the clusterip_hash parameter. A value of sourceip means that the source IP address of each incoming packet is hashed, and each node handles a fixed share of the possible hash values.
[root@pcmk-1 ~]# pcs -f loadbalance_cfg resource update ClusterIP clusterip_hash=sourceip
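The sourceip hash is not the only option; recent versions of the agent also accept sourceip-sourceport and sourceip-sourceport-destport, which spread requests more finely at the cost of per-client affinity. To see the authoritative list of parameters and allowed values straight from the resource agent’s metadata:
[root@pcmk-1 ~]# # prints IPaddr2's parameter descriptions, including clusterip_hash
[root@pcmk-1 ~]# pcs resource describe ocf:heartbeat:IPaddr2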
Load our configuration into the cluster, and see how it responds.
[root@pcmk-1 ~]# pcs cluster cib-push loadbalance_cfg --config
CIB updated
[root@pcmk-1 ~]# pcs status
Cluster name: mycluster
Stack: corosync
Current DC: pcmk-1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Tue Sep 11 10:36:38 2018
Last change: Tue Sep 11 10:36:33 2018 by root via cibadmin on pcmk-1

2 nodes configured
9 resources configured (1 DISABLED)

Online: [ pcmk-1 pcmk-2 ]

Full list of resources:

 ipmi-fencing   (stonith:fence_ipmilan):        Started pcmk-1
 WebSite        (ocf::heartbeat:apache):        Stopped
 Master/Slave Set: WebDataClone [WebData]
     Masters: [ pcmk-1 ]
     Slaves: [ pcmk-2 ]
 WebFS  (ocf::heartbeat:Filesystem):    Stopped (disabled)
 Clone Set: dlm-clone [dlm]
     Started: [ pcmk-1 pcmk-2 ]
 Clone Set: ClusterIP-clone [ClusterIP] (unique)
     ClusterIP:0        (ocf::heartbeat:IPaddr2):       Started pcmk-2
     ClusterIP:1        (ocf::heartbeat:IPaddr2):       Started pcmk-1

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
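Under the hood, the IPaddr2 agent has installed an iptables CLUSTERIP rule on each node; that rule is what ensures each packet is claimed by exactly one node. As a sketch of how you might verify this (it assumes the legacy ipt_CLUSTERIP kernel module, which the agent loads automatically; run it on each node):
[root@pcmk-1 ~]# # the INPUT chain should show a CLUSTERIP target for the cluster IP
[root@pcmk-1 ~]# iptables -L INPUT -n | grep -i clusterip
[root@pcmk-1 ~]# # the module also exposes one entry per clustered address here
[root@pcmk-1 ~]# ls /proc/net/ipt_CLUSTERIP/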
If desired, you can demonstrate that all request buckets are working by using a tool such as arping from several source hosts to see which node responds to each.
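For example (hypothetical client host, interface name and cluster IP shown; substitute the address you assigned to ClusterIP earlier):
[root@client-1 ~]# # run from several different clients and compare which node answers
[root@client-1 ~]# arping -c 3 -I eth0 192.168.122.120
Because the hash is computed over the source IP, any one client should be served consistently by the same node, while different clients may land on different nodes.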