<div dir="ltr">Team,<br><div><div class="gmail_quote"><br><div dir="ltr">I am working on configuring cluster environment for NFS share using pacemaker. Below are the resources I have configured.<br>
<br>
<div style="margin:5px 20px 20px">      <div class="m_-2300545431002381598gmail-smallfont" style="margin-bottom:2px">Quote:</div>       <table width="100%" cellspacing="0" cellpadding="3" border="0">
        <tbody><tr>
                <td class="m_-2300545431002381598gmail-bbcodeblock" style="border-width:1px;border-style:inset;border-color:currentcolor">                                                     Group: nfsgroup<br>Resource: my_lvm (class=ocf provider=heartbeat type=LVM)<br>Attributes: volgrpname=my_vg exclusive=true<br>Operations: start interval=0s timeout=30 (my_lvm-start-interval-0s)<br>stop interval=0s timeout=30 (my_lvm-stop-interval-0s)<br>monitor interval=10 timeout=30 (my_lvm-monitor-interval-10)<br>Resource: nfsshare (class=ocf provider=heartbeat type=Filesystem)<br>Attributes: device=/dev/my_vg/my_lv directory=/nfsshare fstype=ext4<br>Operations: start interval=0s timeout=60 (nfsshare-start-interval-0s)<br>stop interval=0s timeout=60 (nfsshare-stop-interval-0s)<br>monitor interval=20 timeout=40 (nfsshare-monitor-interval-20)<br>Resource: nfs-daemon (class=ocf provider=heartbeat type=nfsserver)<br>Attributes: nfs_shared_infodir=/nfsshare/<wbr>nfsinfo nfs_no_notify=true<br>Operations: start interval=0s timeout=40 (nfs-daemon-start-interval-0s)<br>stop interval=0s timeout=20s (nfs-daemon-stop-interval-0s)<br>monitor interval=10 timeout=20s (nfs-daemon-monitor-interval-<wbr>10)<br>Resource: nfs-root (class=ocf provider=heartbeat type=exportfs)<br>Attributes: clientspec=<a href="http://10.199.1.0/255.255.255.0" target="_blank">10.199.1.0/255.255.<wbr>255.0</a> options=rw,sync,no_root_squash directory=/nfsshare/exports fsid=0<br>Operations: start interval=0s timeout=40 (nfs-root-start-interval-0s)<br>stop interval=0s timeout=120 (nfs-root-stop-interval-0s)<br>monitor interval=10 timeout=20 (nfs-root-monitor-interval-10)<br>Resource: nfs-export1 (class=ocf provider=heartbeat type=exportfs)<br>Attributes: clientspec=<a href="http://10.199.1.0/255.255.255.0" target="_blank">10.199.1.0/255.255.<wbr>255.0</a> options=rw,sync,no_root_squash directory=/nfsshare/exports/<wbr>export1 fsid=1<br>Operations: start interval=0s timeout=40 (nfs-export1-start-interval-<wbr>0s)<br>stop interval=0s timeout=120 (nfs-export1-stop-interval-0s)<br>monitor interval=10 timeout=20 (nfs-export1-monitor-interval-<wbr>10)<br>Resource: nfs-export2 (class=ocf provider=heartbeat type=exportfs)<br>Attributes: clientspec=<a href="http://10.199.1.0/255.255.255.0" target="_blank">10.199.1.0/255.255.<wbr>255.0</a> options=rw,sync,no_root_squash directory=/nfsshare/exports/<wbr>export2 fsid=2<br>Operations: start interval=0s timeout=40 (nfs-export2-start-interval-<wbr>0s)<br>stop interval=0s timeout=120 (nfs-export2-stop-interval-0s)<br>monitor interval=10 timeout=20 (nfs-export2-monitor-interval-<wbr>10)<br>Resource: nfs_ip (class=ocf provider=heartbeat type=IPaddr2)<br>Attributes: ip=10.199.1.86 cidr_netmask=24<br>Operations: start interval=0s timeout=20s (nfs_ip-start-interval-0s)<br>stop interval=0s timeout=20s (nfs_ip-stop-interval-0s)<br>monitor interval=10s timeout=20s (nfs_ip-monitor-interval-10s)<br>Resource: nfs-notify (class=ocf provider=heartbeat type=nfsnotify)<br>Attributes: source_host=10.199.1.86<br>Operations: start interval=0s timeout=90 (nfs-notify-start-interval-0s)<br>stop interval=0s timeout=90 (nfs-notify-stop-interval-0s)<br>monitor interval=30 timeout=90 (nfs-notify-monitor-interval-<wbr>30)                                         </td>
        </tr>
        </tbody></table>
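For reference, the group was created with pcs commands roughly along these lines (attribute values taken from the dump above; operation timeouts as shown there):

  pcs resource create my_lvm ocf:heartbeat:LVM volgrpname=my_vg exclusive=true --group nfsgroup
  pcs resource create nfsshare ocf:heartbeat:Filesystem device=/dev/my_vg/my_lv directory=/nfsshare fstype=ext4 --group nfsgroup
  pcs resource create nfs-daemon ocf:heartbeat:nfsserver nfs_shared_infodir=/nfsshare/nfsinfo nfs_no_notify=true --group nfsgroup
  pcs resource create nfs-root ocf:heartbeat:exportfs clientspec=10.199.1.0/255.255.255.0 options=rw,sync,no_root_squash directory=/nfsshare/exports fsid=0 --group nfsgroup
  pcs resource create nfs-export1 ocf:heartbeat:exportfs clientspec=10.199.1.0/255.255.255.0 options=rw,sync,no_root_squash directory=/nfsshare/exports/export1 fsid=1 --group nfsgroup
  pcs resource create nfs-export2 ocf:heartbeat:exportfs clientspec=10.199.1.0/255.255.255.0 options=rw,sync,no_root_squash directory=/nfsshare/exports/export2 fsid=2 --group nfsgroup
  pcs resource create nfs_ip ocf:heartbeat:IPaddr2 ip=10.199.1.86 cidr_netmask=24 --group nfsgroup
  pcs resource create nfs-notify ocf:heartbeat:nfsnotify source_host=10.199.1.86 --group nfsgroup
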
PCS status:

<div style="margin:5px 20px 20px">      <div class="m_-2300545431002381598gmail-smallfont" style="margin-bottom:2px">Quote:</div>       <table width="100%" cellspacing="0" cellpadding="3" border="0">
        <tbody><tr>
                <td class="m_-2300545431002381598gmail-bbcodeblock" style="border-width:1px;border-style:inset;border-color:currentcolor">                                                    Cluster name: my_cluster<br>Stack: corosync<br>Current DC: <a href="http://node3.cluster.com" target="_blank">node3.cluster.com</a> (version 1.1.15-11.el7_3.5-e174ec8) - partition with quorum<br>Last updated: Wed Jul 5 13:12:48 2017 Last change: Wed Jul 5 13:11:52 2017 by root via crm_attribute on <a href="http://node3.cluster.com" target="_blank">node3.cluster.com</a><br>
<br>2 nodes and 10 resources configured<br>
<br>Online: [ <a href="http://node3.cluster.com" target="_blank">node3.cluster.com</a> <a href="http://node4.cluster.com" target="_blank">node4.cluster.com</a> ]<br>
<br>Full list of resources:<br>
<br>fence-3 (stonith:fence_vmware_soap): Started <a href="http://node4.cluster.com" target="_blank">node4.cluster.com</a><br>fence-4 (stonith:fence_vmware_soap): Started <a href="http://node3.cluster.com" target="_blank">node3.cluster.com</a><br>Resource Group: nfsgroup<br>my_lvm (ocf::heartbeat:LVM): Started <a href="http://node3.cluster.com" target="_blank">node3.cluster.com</a><br>nfsshare (ocf::heartbeat:Filesystem): Started <a href="http://node3.cluster.com" target="_blank">node3.cluster.com</a><br>nfs-daemon (ocf::heartbeat:nfsserver): Started <a href="http://node3.cluster.com" target="_blank">node3.cluster.com</a><br>nfs-root (ocf::heartbeat:exportfs): Started <a href="http://node3.cluster.com" target="_blank">node3.cluster.com</a><br>nfs-export1 (ocf::heartbeat:exportfs): Started <a href="http://node3.cluster.com" target="_blank">node3.cluster.com</a><br>nfs-export2 (ocf::heartbeat:exportfs): Started <a href="http://node3.cluster.com" target="_blank">node3.cluster.com</a><br>nfs_ip (ocf::heartbeat:IPaddr2): Started <a href="http://node3.cluster.com" target="_blank">node3.cluster.com</a><br>nfs-notify (ocf::heartbeat:nfsnotify): Started <a href="http://node3.cluster.com" target="_blank">node3.cluster.com</a><br>                                  </td>
        </tr>
        </tbody></table>
I followed the Red Hat documentation to configure this:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/High_Availability_Add-On_Administration/ch-nfsserver-HAAA.html#s1-nfsclustcreate-HAAA

Once configured, I could mount the export from an NFS client with no issues. However, when I put the active node into standby, the resources do not start on the secondary node.

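To trigger the failover I am putting the active node into standby, e.g.:

  pcs cluster standby node3.cluster.com
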
After putting the active node into standby:

<div style="margin:5px 20px 20px">      <div class="m_-2300545431002381598gmail-smallfont" style="margin-bottom:2px">Quote:</div>       <table width="100%" cellspacing="0" cellpadding="3" border="0">
        <tbody><tr>
                <td class="m_-2300545431002381598gmail-bbcodeblock" style="border-width:1px;border-style:inset;border-color:currentcolor">                                                    [root@node3 ~]# pcs status<br>Cluster name: my_cluster<br>Stack: corosync<br>Current DC: <a href="http://node3.cluster.com" target="_blank">node3.cluster.com</a> (version 1.1.15-11.el7_3.5-e174ec8) - partition with quorum<br>Last updated: Wed Jul 5 13:16:05 2017 Last change: Wed Jul 5 13:15:38 2017 by root via crm_attribute on <a href="http://node3.cluster.com" target="_blank">node3.cluster.com</a><br>
<br>2 nodes and 10 resources configured<br>
<br>Node <a href="http://node3.cluster.com" target="_blank">node3.cluster.com</a>: standby<br>Online: [ <a href="http://node4.cluster.com" target="_blank">node4.cluster.com</a> ]<br>
<br>Full list of resources:<br>
<br>fence-3 (stonith:fence_vmware_soap): Started <a href="http://node4.cluster.com" target="_blank">node4.cluster.com</a><br>fence-4 (stonith:fence_vmware_soap): Started <a href="http://node4.cluster.com" target="_blank">node4.cluster.com</a><br>Resource Group: nfsgroup<br>my_lvm (ocf::heartbeat:LVM): Stopped<br>nfsshare (ocf::heartbeat:Filesystem): Stopped<br>nfs-daemon (ocf::heartbeat:nfsserver): Stopped<br>nfs-root (ocf::heartbeat:exportfs): Stopped<br>nfs-export1 (ocf::heartbeat:exportfs): Stopped<br>nfs-export2 (ocf::heartbeat:exportfs): Stopped<br>nfs_ip (ocf::heartbeat:IPaddr2): Stopped<br>nfs-notify (ocf::heartbeat:nfsnotify): Stopped<br>
<br>Failed Actions:<br>* fence-3_monitor_60000 on <a href="http://node4.cluster.com" target="_blank">node4.cluster.com</a> 'unknown error' (1): call=50, status=Timed Out, exitreason='none',<br>last-rc-change='Wed Jul 5 13:11:54 2017', queued=0ms, exec=20012ms<br>* fence-4_monitor_60000 on <a href="http://node4.cluster.com" target="_blank">node4.cluster.com</a> 'unknown error' (1): call=47, status=Timed Out, exitreason='none',<br>last-rc-change='Wed Jul 5 13:05:32 2017', queued=0ms, exec=20028ms<br>* my_lvm_start_0 on <a href="http://node4.cluster.com" target="_blank">node4.cluster.com</a> 'unknown error' (1): call=49, status=complete, exitreason='Volume group [my_vg] does not exist or contains error! Volume group "my_vg" not found',<br>last-rc-change='Wed Jul 5 13:05:39 2017', queued=0ms, exec=1447ms<br>
<br>
<br>Daemon Status:<br>corosync: active/enabled<br>pacemaker: active/enabled<br>pcsd: active/enabled                                     </td>
        </tr>
        </tbody></table>
</div><br>
I am seeing this error:

<div style="margin:5px 20px 20px">      <div class="m_-2300545431002381598gmail-smallfont" style="margin-bottom:2px">Quote:</div>       <table width="100%" cellspacing="0" cellpadding="3" border="0">
        <tbody><tr>
                <td class="m_-2300545431002381598gmail-bbcodeblock" style="border-width:1px;border-style:inset;border-color:currentcolor">                                                     ERROR: Volume group [my_vg] does not exist or contains error! Volume group "my_vg" not found#012 Cannot process volume group my_vg                                   </td>
        </tr>
        </tbody></table>
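To confirm whether the volume group is visible on the secondary node at that point, commands along these lines can be run on node4 (my_vg being the volume group from the configuration above):

  pvscan               # list the physical volumes this node can see
  vgs my_vg            # check whether the volume group is known on this node
  vgchange -ay my_vg   # activate it by hand, roughly what the LVM agent does on start
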
This resolves when I create the LVM manually on the secondary node, but I expect the cluster resources to do that job. Am I missing something in this configuration?

</div><br><br clear="all"><br>-- <br><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div>Regards,<br></div>Pradeep Anandh<br></div></div>