<div dir="ltr"><div><div>Hi,<br><br>I&#39;m still testing (before production running) the solution with pacemaker+corosync+drbd+dlm+gfs2 on Centos7 with double-primary config.<br><br>I have two nodes: wirt1v and wirt2v - each node contains LVM partition  with DRBD (/dev/drbd2) and filesystem mounted as /virtfs2. Filesystems /virtfs2 contain the images of virtual machines.<br><br>My problem is so - I can&#39;t start the cluster and the resources on one node only (cold start) when the second node is completely powered off.<br><br>Is it in such configuration at all posssible - is it posible to start one node only?<br><br>Could you help me, please?<br><br>The  configs and log (during cold start)  are attached. <br><br>Thanks in advance,<br>Gienek Nowacki<br><br>==============================================================<br><br>#---------------------------------<br>### result:  cat /etc/redhat-release  ###<br><br>CentOS Linux release 7.2.1511 (Core)<br><br>#---------------------------------<br>### result:  uname -a  ###<br><br>Linux <a href="http://wirt1v.example.com">wirt1v.example.com</a> 3.10.0-327.28.3.el7.x86_64 #1 SMP Thu Aug 18 19:05:49 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux<br><br>#---------------------------------<br>### result:  cat /etc/hosts  ###<br><br>127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4<br>172.31.0.23     <a href="http://wirt1.example.com">wirt1.example.com</a> wirt1<br>172.31.0.24     <a href="http://wirt2.example.com">wirt2.example.com</a> wirt2<br>1.1.1.1         <a href="http://wirt1v.example.com">wirt1v.example.com</a> wirt1v<br>1.1.1.2         <a href="http://wirt2v.example.com">wirt2v.example.com</a> wirt2v<br><br>#---------------------------------<br>### result:  cat /etc/drbd.conf  ###<br><br>include &quot;drbd.d/global_common.conf&quot;;<br>include &quot;drbd.d/*.res&quot;;<br><br>#---------------------------------<br>### result:  cat /etc/drbd.d/global_common.conf  ###<br><br>common {<br>        protocol C;<br>        syncer {<br>                verify-alg sha1;<br>        }<br>        startup {<br>                become-primary-on both;<br>                wfc-timeout 30;<br>                outdated-wfc-timeout 20;<br>                degr-wfc-timeout 30;<br>        }<br>        disk {<br>                fencing resource-and-stonith;<br>        }<br>        handlers {<br>                fence-peer &quot;/usr/lib/drbd/crm-fence-peer.sh&quot;;<br>                after-resync-target &quot;/usr/lib/drbd/crm-unfence-peer.sh&quot;;<br>                split-brain             &quot;/usr/lib/drbd/notify-split-brain.sh <a href="mailto:linuxadmin@example.com">linuxadmin@example.com</a>&quot;;<br>                pri-lost-after-sb       &quot;/usr/lib/drbd/notify-split-brain.sh <a href="mailto:linuxadmin@example.com">linuxadmin@example.com</a>&quot;;<br>                out-of-sync             &quot;/usr/lib/drbd/notify-out-of-sync.sh <a href="mailto:linuxadmin@example.com">linuxadmin@example.com</a>&quot;;<br>                local-io-error          &quot;/usr/lib/drbd/notify-io-error.sh    <a href="mailto:linuxadmin@example.com">linuxadmin@example.com</a>&quot;;<br>        }<br>        net {<br>                allow-two-primaries;<br>                after-sb-0pri discard-zero-changes;<br>                after-sb-1pri discard-secondary;<br>                after-sb-2pri disconnect;<br>        }<br>}<br><br>#---------------------------------<br>### result:  cat /etc/drbd.d/drbd2.res  ###<br><br>resource drbd2 {<br>        meta-disk internal;<br>        device /dev/drbd2;<br>        on <a href="http://wirt1v.example.com">wirt1v.example.com</a> {<br>                disk /dev/vg02/drbd2;<br>                address <a href="http://1.1.1.1:7782">1.1.1.1:7782</a>;<br>        }<br>        on <a href="http://wirt2v.example.com">wirt2v.example.com</a> {<br>                disk /dev/vg02/drbd2;<br>                address <a href="http://1.1.1.2:7782">1.1.1.2:7782</a>;<br>        }<br>}<br><br>#---------------------------------<br>### result:  cat /etc/corosync/corosync.conf  ###<br><br>totem {<br>    version: 2<br>    secauth: off<br>    cluster_name: klasterek<br>    transport: udpu<br>}<br>nodelist {<br>    node {<br>        ring0_addr: wirt1v<br>        nodeid: 1<br>    }<br>    node {<br>        ring0_addr: wirt2v<br>        nodeid: 2<br>    }<br>}<br>quorum {<br>    provider: corosync_votequorum<br>    two_node: 1<br>}<br>logging {<br>    to_logfile: yes<br>    logfile: /var/log/cluster/corosync.log<br>    to_syslog: yes<br>}<br><br>#---------------------------------<br>### result:  mount | grep virtfs2  ###<br><br>/dev/drbd2 on /virtfs2 type gfs2 (rw,relatime,seclabel)<br><br>#---------------------------------<br>### result:  pcs status  ###<br><br>Cluster name: klasterek<br>Last updated: Tue Sep 13 20:01:40 2016          Last change: Tue Sep 13 18:31:33 2016 by root via crm_resource on wirt1v<br>Stack: corosync<br>Current DC: wirt1v (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum<br>2 nodes and 8 resources configured<br>Online: [ wirt1v wirt2v ]<br>Full list of resources:<br> Master/Slave Set: Drbd2-clone [Drbd2]<br>     Masters: [ wirt1v wirt2v ]<br> Clone Set: Virtfs2-clone [Virtfs2]<br>     Started: [ wirt1v wirt2v ]<br> Clone Set: dlm-clone [dlm]<br>     Started: [ wirt1v wirt2v ]<br> fencing-idrac1 (stonith:fence_idrac):  Started wirt1v<br> fencing-idrac2 (stonith:fence_idrac):  Started wirt2v<br>PCSD Status:<br>  wirt1v: Online<br>  wirt2v: Online<br>Daemon Status:<br>  corosync: active/disabled<br>  pacemaker: active/disabled<br>  pcsd: active/enabled<br><br>#---------------------------------<br>### result:  pcs property  ###<br><br>Cluster Properties:<br> cluster-infrastructure: corosync<br> cluster-name: klasterek<br> dc-version: 1.1.13-10.el7_2.4-44eb2dd<br> have-watchdog: false<br> no-quorum-policy: ignore<br> stonith-enabled: true<br> symmetric-cluster: true<br><br>#---------------------------------<br>### result:  pcs cluster cib  ###<br><br>&lt;cib crm_feature_set=&quot;3.0.10&quot; validate-with=&quot;pacemaker-2.3&quot; epoch=&quot;69&quot; num_updates=&quot;38&quot; admin_epoch=&quot;0&quot; cib-last-written=&quot;Tue Sep 13 18:31:33 2016&quot; update-origin=&quot;wirt1v&quot; update-client=&quot;crm_resource&quot; update-user=&quot;root&quot; have-quorum=&quot;1&quot; dc-uuid=&quot;1&quot;&gt;<br>  &lt;configuration&gt;<br>    &lt;crm_config&gt;<br>      &lt;cluster_property_set id=&quot;cib-bootstrap-options&quot;&gt;<br>        &lt;nvpair id=&quot;cib-bootstrap-options-have-watchdog&quot; name=&quot;have-watchdog&quot; value=&quot;false&quot;/&gt;<br>        &lt;nvpair id=&quot;cib-bootstrap-options-dc-version&quot; name=&quot;dc-version&quot; value=&quot;1.1.13-10.el7_2.4-44eb2dd&quot;/&gt;<br>        &lt;nvpair id=&quot;cib-bootstrap-options-cluster-infrastructure&quot; name=&quot;cluster-infrastructure&quot; value=&quot;corosync&quot;/&gt;<br>        &lt;nvpair id=&quot;cib-bootstrap-options-cluster-name&quot; name=&quot;cluster-name&quot; value=&quot;klasterek&quot;/&gt;<br>        &lt;nvpair id=&quot;cib-bootstrap-options-no-quorum-policy&quot; name=&quot;no-quorum-policy&quot; value=&quot;ignore&quot;/&gt;<br>        &lt;nvpair id=&quot;cib-bootstrap-options-symmetric-cluster&quot; name=&quot;symmetric-cluster&quot; value=&quot;true&quot;/&gt;<br>        &lt;nvpair id=&quot;cib-bootstrap-options-stonith-enabled&quot; name=&quot;stonith-enabled&quot; value=&quot;true&quot;/&gt;<br>      &lt;/cluster_property_set&gt;<br>    &lt;/crm_config&gt;<br>    &lt;nodes&gt;<br>      &lt;node id=&quot;1&quot; uname=&quot;wirt1v&quot;/&gt;<br>      &lt;node id=&quot;2&quot; uname=&quot;wirt2v&quot;/&gt;<br>    &lt;/nodes&gt;<br>    &lt;resources&gt;<br>      &lt;master id=&quot;Drbd2-clone&quot;&gt;<br>        &lt;primitive class=&quot;ocf&quot; id=&quot;Drbd2&quot; provider=&quot;linbit&quot; type=&quot;drbd&quot;&gt;<br>          &lt;instance_attributes id=&quot;Drbd2-instance_attributes&quot;&gt;<br>            &lt;nvpair id=&quot;Drbd2-instance_attributes-drbd_resource&quot; name=&quot;drbd_resource&quot; value=&quot;drbd2&quot;/&gt;<br>          &lt;/instance_attributes&gt;<br>          &lt;operations&gt;<br>            &lt;op id=&quot;Drbd2-start-interval-0s&quot; interval=&quot;0s&quot; name=&quot;start&quot; timeout=&quot;240&quot;/&gt;<br>            &lt;op id=&quot;Drbd2-promote-interval-0s&quot; interval=&quot;0s&quot; name=&quot;promote&quot; timeout=&quot;90&quot;/&gt;<br>            &lt;op id=&quot;Drbd2-demote-interval-0s&quot; interval=&quot;0s&quot; name=&quot;demote&quot; timeout=&quot;90&quot;/&gt;<br>            &lt;op id=&quot;Drbd2-stop-interval-0s&quot; interval=&quot;0s&quot; name=&quot;stop&quot; timeout=&quot;100&quot;/&gt;<br>            &lt;op id=&quot;Drbd2-monitor-interval-60s&quot; interval=&quot;60s&quot; name=&quot;monitor&quot;/&gt;<br>          &lt;/operations&gt;<br>        &lt;/primitive&gt;<br>        &lt;meta_attributes id=&quot;Drbd2-clone-meta_attributes&quot;&gt;<br>          &lt;nvpair id=&quot;Drbd2-clone-meta_attributes-master-max&quot; name=&quot;master-max&quot; value=&quot;2&quot;/&gt;<br>          &lt;nvpair id=&quot;Drbd2-clone-meta_attributes-master-node-max&quot; name=&quot;master-node-max&quot; value=&quot;1&quot;/&gt;<br>          &lt;nvpair id=&quot;Drbd2-clone-meta_attributes-clone-max&quot; name=&quot;clone-max&quot; value=&quot;2&quot;/&gt;<br>          &lt;nvpair id=&quot;Drbd2-clone-meta_attributes-clone-node-max&quot; name=&quot;clone-node-max&quot; value=&quot;1&quot;/&gt;<br>          &lt;nvpair id=&quot;Drbd2-clone-meta_attributes-notify&quot; name=&quot;notify&quot; value=&quot;true&quot;/&gt;<br>          &lt;nvpair id=&quot;Drbd2-clone-meta_attributes-globally-unique&quot; name=&quot;globally-unique&quot; value=&quot;false&quot;/&gt;<br>          &lt;nvpair id=&quot;Drbd2-clone-meta_attributes-interleave&quot; name=&quot;interleave&quot; value=&quot;true&quot;/&gt;<br>          &lt;nvpair id=&quot;Drbd2-clone-meta_attributes-ordered&quot; name=&quot;ordered&quot; value=&quot;true&quot;/&gt;<br>        &lt;/meta_attributes&gt;<br>      &lt;/master&gt;<br><br>      &lt;clone id=&quot;Virtfs2-clone&quot;&gt;<br>        &lt;primitive class=&quot;ocf&quot; id=&quot;Virtfs2&quot; provider=&quot;heartbeat&quot; type=&quot;Filesystem&quot;&gt;<br>          &lt;instance_attributes id=&quot;Virtfs2-instance_attributes&quot;&gt;<br>            &lt;nvpair id=&quot;Virtfs2-instance_attributes-device&quot; name=&quot;device&quot; value=&quot;/dev/drbd2&quot;/&gt;<br>            &lt;nvpair id=&quot;Virtfs2-instance_attributes-directory&quot; name=&quot;directory&quot; value=&quot;/virtfs2&quot;/&gt;<br>            &lt;nvpair id=&quot;Virtfs2-instance_attributes-fstype&quot; name=&quot;fstype&quot; value=&quot;gfs2&quot;/&gt;<br>          &lt;/instance_attributes&gt;<br>          &lt;operations&gt;<br>            &lt;op id=&quot;Virtfs2-start-interval-0s&quot; interval=&quot;0s&quot; name=&quot;start&quot; timeout=&quot;60&quot;/&gt;<br>            &lt;op id=&quot;Virtfs2-stop-interval-0s&quot; interval=&quot;0s&quot; name=&quot;stop&quot; timeout=&quot;60&quot;/&gt;<br>            &lt;op id=&quot;Virtfs2-monitor-interval-20&quot; interval=&quot;20&quot; name=&quot;monitor&quot; timeout=&quot;40&quot;/&gt;<br>          &lt;/operations&gt;<br>        &lt;/primitive&gt;<br>        &lt;meta_attributes id=&quot;Virtfs2-clone-meta_attributes&quot;&gt;<br>          &lt;nvpair id=&quot;Virtfs2-interleave&quot; name=&quot;interleave&quot; value=&quot;true&quot;/&gt;<br>        &lt;/meta_attributes&gt;<br>      &lt;/clone&gt;<br>      &lt;clone id=&quot;dlm-clone&quot;&gt;<br>        &lt;primitive class=&quot;ocf&quot; id=&quot;dlm&quot; provider=&quot;pacemaker&quot; type=&quot;controld&quot;&gt;<br>          &lt;instance_attributes id=&quot;dlm-instance_attributes&quot;/&gt;<br>          &lt;operations&gt;<br>            &lt;op id=&quot;dlm-start-interval-0s&quot; interval=&quot;0s&quot; name=&quot;start&quot; timeout=&quot;90&quot;/&gt;<br>            &lt;op id=&quot;dlm-stop-interval-0s&quot; interval=&quot;0s&quot; name=&quot;stop&quot; timeout=&quot;100&quot;/&gt;<br>            &lt;op id=&quot;dlm-monitor-interval-60s&quot; interval=&quot;60s&quot; name=&quot;monitor&quot;/&gt;<br>          &lt;/operations&gt;<br>        &lt;/primitive&gt;<br>        &lt;meta_attributes id=&quot;dlm-clone-meta_attributes&quot;&gt;<br>          &lt;nvpair id=&quot;dlm-clone-max&quot; name=&quot;clone-max&quot; value=&quot;2&quot;/&gt;<br>          &lt;nvpair id=&quot;dlm-clone-node-max&quot; name=&quot;clone-node-max&quot; value=&quot;1&quot;/&gt;<br>          &lt;nvpair id=&quot;dlm-interleave&quot; name=&quot;interleave&quot; value=&quot;true&quot;/&gt;<br>          &lt;nvpair id=&quot;dlm-ordered&quot; name=&quot;ordered&quot; value=&quot;true&quot;/&gt;<br>        &lt;/meta_attributes&gt;<br>      &lt;/clone&gt;<br>      &lt;primitive class=&quot;stonith&quot; id=&quot;fencing-idrac1&quot; type=&quot;fence_idrac&quot;&gt;<br>        &lt;instance_attributes id=&quot;fencing-idrac1-instance_attributes&quot;&gt;<br>          &lt;nvpair id=&quot;fencing-idrac1-instance_attributes-pcmk_host_list&quot; name=&quot;pcmk_host_list&quot; value=&quot;wirt1v&quot;/&gt;<br>          &lt;nvpair id=&quot;fencing-idrac1-instance_attributes-ipaddr&quot; name=&quot;ipaddr&quot; value=&quot;172.31.0.223&quot;/&gt;<br>          &lt;nvpair id=&quot;fencing-idrac1-instance_attributes-lanplus&quot; name=&quot;lanplus&quot; value=&quot;on&quot;/&gt;<br>          &lt;nvpair id=&quot;fencing-idrac1-instance_attributes-login&quot; name=&quot;login&quot; value=&quot;root&quot;/&gt;<br>          &lt;nvpair id=&quot;fencing-idrac1-instance_attributes-passwd&quot; name=&quot;passwd&quot; value=&quot;my1secret2password3&quot;/&gt;<br>          &lt;nvpair id=&quot;fencing-idrac1-instance_attributes-action&quot; name=&quot;action&quot; value=&quot;reboot&quot;/&gt;<br>        &lt;/instance_attributes&gt;<br>        &lt;operations&gt;<br>          &lt;op id=&quot;fencing-idrac1-monitor-interval-60&quot; interval=&quot;60&quot; name=&quot;monitor&quot;/&gt;<br>        &lt;/operations&gt;<br>      &lt;/primitive&gt;<br>      &lt;primitive class=&quot;stonith&quot; id=&quot;fencing-idrac2&quot; type=&quot;fence_idrac&quot;&gt;<br>        &lt;instance_attributes id=&quot;fencing-idrac2-instance_attributes&quot;&gt;<br>          &lt;nvpair id=&quot;fencing-idrac2-instance_attributes-pcmk_host_list&quot; name=&quot;pcmk_host_list&quot; value=&quot;wirt2v&quot;/&gt;<br>          &lt;nvpair id=&quot;fencing-idrac2-instance_attributes-ipaddr&quot; name=&quot;ipaddr&quot; value=&quot;172.31.0.224&quot;/&gt;<br>          &lt;nvpair id=&quot;fencing-idrac2-instance_attributes-lanplus&quot; name=&quot;lanplus&quot; value=&quot;on&quot;/&gt;<br>          &lt;nvpair id=&quot;fencing-idrac2-instance_attributes-login&quot; name=&quot;login&quot; value=&quot;root&quot;/&gt;<br>          &lt;nvpair id=&quot;fencing-idrac2-instance_attributes-passwd&quot; name=&quot;passwd&quot; value=&quot;my1secret2password3&quot;/&gt;<br>          &lt;nvpair id=&quot;fencing-idrac2-instance_attributes-action&quot; name=&quot;action&quot; value=&quot;reboot&quot;/&gt;<br>        &lt;/instance_attributes&gt;<br>        &lt;operations&gt;<br>          &lt;op id=&quot;fencing-idrac2-monitor-interval-60&quot; interval=&quot;60&quot; name=&quot;monitor&quot;/&gt;<br>        &lt;/operations&gt;<br>      &lt;/primitive&gt;<br>    &lt;/resources&gt;<br>    &lt;constraints&gt;<br>      &lt;rsc_colocation id=&quot;colocation-Virtfs2-clone-Drbd2-clone-INFINITY&quot; rsc=&quot;Virtfs2-clone&quot; score=&quot;INFINITY&quot; with-rsc=&quot;Drbd2-clone&quot; with-rsc-role=&quot;Master&quot;/&gt;<br>      &lt;rsc_order first=&quot;Drbd2-clone&quot; first-action=&quot;promote&quot; id=&quot;order-Drbd2-clone-Virtfs2-clone-mandatory&quot; then=&quot;Virtfs2-clone&quot; then-action=&quot;start&quot;/&gt;<br>      &lt;rsc_order first=&quot;dlm-clone&quot; first-action=&quot;start&quot; id=&quot;order-dlm-clone-Virtfs2-clone-mandatory&quot; then=&quot;Virtfs2-clone&quot; then-action=&quot;start&quot;/&gt;<br>      &lt;rsc_colocation id=&quot;colocation-Virtfs2-clone-dlm-clone-INFINITY&quot; rsc=&quot;Virtfs2-clone&quot; score=&quot;INFINITY&quot; with-rsc=&quot;dlm-clone&quot;/&gt;<br>    &lt;/constraints&gt;<br>    &lt;rsc_defaults&gt;<br>      &lt;meta_attributes id=&quot;rsc_defaults-options&quot;&gt;<br>        &lt;nvpair id=&quot;rsc_defaults-options-resource-stickiness&quot; name=&quot;resource-stickiness&quot; value=&quot;100&quot;/&gt;<br>      &lt;/meta_attributes&gt;<br>    &lt;/rsc_defaults&gt;<br>  &lt;/configuration&gt;<br>  &lt;status&gt;<br>    &lt;node_state id=&quot;1&quot; uname=&quot;wirt1v&quot; in_ccm=&quot;true&quot; crmd=&quot;online&quot; crm-debug-origin=&quot;do_update_resource&quot; join=&quot;member&quot; expected=&quot;member&quot;&gt;<br>      &lt;lrm id=&quot;1&quot;&gt;<br>        &lt;lrm_resources&gt;<br>          &lt;lrm_resource id=&quot;fencing-idrac1&quot; type=&quot;fence_idrac&quot; class=&quot;stonith&quot;&gt;<br>            &lt;lrm_rsc_op id=&quot;fencing-idrac1_last_0&quot; operation_key=&quot;fencing-idrac1_start_0&quot; operation=&quot;start&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;55:0:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; transition-magic=&quot;0:0;55:0:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; on_node=&quot;wirt1v&quot; call-id=&quot;27&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;0&quot; last-run=&quot;1473786030&quot; last-rc-change=&quot;1473786030&quot; exec-time=&quot;54&quot; queue-time=&quot;0&quot; op-digest=&quot;c5f495355c70285327a4ecd128166155&quot; op-secure-params=&quot; passwd &quot; op-secure-digest=&quot;58f15e2aeb9ef41c7d7016ac60c95b3d&quot;/&gt;<br>            &lt;lrm_rsc_op id=&quot;fencing-idrac1_monitor_60000&quot; operation_key=&quot;fencing-idrac1_monitor_60000&quot; operation=&quot;monitor&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;51:1:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; transition-magic=&quot;0:0;51:1:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; on_node=&quot;wirt1v&quot; call-id=&quot;29&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;60000&quot; last-rc-change=&quot;1473786031&quot; exec-time=&quot;54&quot; queue-time=&quot;0&quot; op-digest=&quot;2c3a04590a892a02a6109a0e8bd4b89a&quot; op-secure-params=&quot; passwd &quot; op-secure-digest=&quot;58f15e2aeb9ef41c7d7016ac60c95b3d&quot;/&gt;<br>          &lt;/lrm_resource&gt;<br>          &lt;lrm_resource id=&quot;fencing-idrac2&quot; type=&quot;fence_idrac&quot; class=&quot;stonith&quot;&gt;<br>            &lt;lrm_rsc_op id=&quot;fencing-idrac2_last_0&quot; operation_key=&quot;fencing-idrac2_monitor_0&quot; operation=&quot;monitor&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;8:0:7:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; transition-magic=&quot;0:7;8:0:7:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; on_node=&quot;wirt1v&quot; call-id=&quot;24&quot; rc-code=&quot;7&quot; op-status=&quot;0&quot; interval=&quot;0&quot; last-run=&quot;1473786029&quot; last-rc-change=&quot;1473786029&quot; exec-time=&quot;0&quot; queue-time=&quot;0&quot; op-digest=&quot;62957a33f7a67eda09c15e3f933f2d0b&quot; op-secure-params=&quot; passwd &quot; op-secure-digest=&quot;65925748cee98be7e9d827ae5f2eb74f&quot;/&gt;<br>          &lt;/lrm_resource&gt;<br>          &lt;lrm_resource id=&quot;Drbd2&quot; type=&quot;drbd&quot; class=&quot;ocf&quot; provider=&quot;linbit&quot;&gt;<br>            &lt;lrm_rsc_op id=&quot;Drbd2_last_0&quot; operation_key=&quot;Drbd2_promote_0&quot; operation=&quot;promote&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;10:2:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; transition-magic=&quot;0:0;10:2:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; on_node=&quot;wirt1v&quot; call-id=&quot;33&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;0&quot; last-run=&quot;1473786032&quot; last-rc-change=&quot;1473786032&quot; exec-time=&quot;64&quot; queue-time=&quot;1&quot; op-digest=&quot;d0c8a735862843030d8426a5218ceb92&quot;/&gt;<br>          &lt;/lrm_resource&gt;<br>          &lt;lrm_resource id=&quot;Virtfs2&quot; type=&quot;Filesystem&quot; class=&quot;ocf&quot; provider=&quot;heartbeat&quot;&gt;<br>            &lt;lrm_rsc_op id=&quot;Virtfs2_last_0&quot; operation_key=&quot;Virtfs2_start_0&quot; operation=&quot;start&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;41:3:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; transition-magic=&quot;0:0;41:3:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; on_node=&quot;wirt1v&quot; call-id=&quot;35&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;0&quot; last-run=&quot;1473786032&quot; last-rc-change=&quot;1473786032&quot; exec-time=&quot;1372&quot; queue-time=&quot;0&quot; op-digest=&quot;8dbd904c2115508ebcf3dffe8e7c6d82&quot;/&gt;<br>            &lt;lrm_rsc_op id=&quot;Virtfs2_monitor_20000&quot; operation_key=&quot;Virtfs2_monitor_20000&quot; operation=&quot;monitor&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;42:3:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; transition-magic=&quot;0:0;42:3:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; on_node=&quot;wirt1v&quot; call-id=&quot;36&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;20000&quot; last-rc-change=&quot;1473786034&quot; exec-time=&quot;64&quot; queue-time=&quot;0&quot; op-digest=&quot;051271837d1a8eccc0af38fbd8c406e4&quot;/&gt;<br>          &lt;/lrm_resource&gt;<br>          &lt;lrm_resource id=&quot;dlm&quot; type=&quot;controld&quot; class=&quot;ocf&quot; provider=&quot;pacemaker&quot;&gt;<br>            &lt;lrm_rsc_op id=&quot;dlm_last_0&quot; operation_key=&quot;dlm_start_0&quot; operation=&quot;start&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;47:0:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; transition-magic=&quot;0:0;47:0:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; on_node=&quot;wirt1v&quot; call-id=&quot;26&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;0&quot; last-run=&quot;1473786030&quot; last-rc-change=&quot;1473786030&quot; exec-time=&quot;1098&quot; queue-time=&quot;0&quot; op-digest=&quot;f2317cad3d54cec5d7d7aa7d0bf35cf8&quot;/&gt;<br>            &lt;lrm_rsc_op id=&quot;dlm_monitor_60000&quot; operation_key=&quot;dlm_monitor_60000&quot; operation=&quot;monitor&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;42:1:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; transition-magic=&quot;0:0;42:1:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; on_node=&quot;wirt1v&quot; call-id=&quot;28&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;60000&quot; last-rc-change=&quot;1473786031&quot; exec-time=&quot;34&quot; queue-time=&quot;0&quot; op-digest=&quot;4811cef7f7f94e3a35a70be7916cb2fd&quot;/&gt;<br>          &lt;/lrm_resource&gt;<br>        &lt;/lrm_resources&gt;<br>      &lt;/lrm&gt;<br>      &lt;transient_attributes id=&quot;1&quot;&gt;<br>        &lt;instance_attributes id=&quot;status-1&quot;&gt;<br>          &lt;nvpair id=&quot;status-1-shutdown&quot; name=&quot;shutdown&quot; value=&quot;0&quot;/&gt;<br>          &lt;nvpair id=&quot;status-1-probe_complete&quot; name=&quot;probe_complete&quot; value=&quot;true&quot;/&gt;<br>          &lt;nvpair id=&quot;status-1-master-Drbd2&quot; name=&quot;master-Drbd2&quot; value=&quot;10000&quot;/&gt;<br>        &lt;/instance_attributes&gt;<br>      &lt;/transient_attributes&gt;<br>    &lt;/node_state&gt;<br>    &lt;node_state id=&quot;2&quot; uname=&quot;wirt2v&quot; in_ccm=&quot;true&quot; crmd=&quot;online&quot; crm-debug-origin=&quot;do_update_resource&quot; join=&quot;member&quot; expected=&quot;member&quot;&gt;<br>      &lt;lrm id=&quot;2&quot;&gt;<br>        &lt;lrm_resources&gt;<br>          &lt;lrm_resource id=&quot;fencing-idrac1&quot; type=&quot;fence_idrac&quot; class=&quot;stonith&quot;&gt;<br>            &lt;lrm_rsc_op id=&quot;fencing-idrac1_last_0&quot; operation_key=&quot;fencing-idrac1_monitor_0&quot; operation=&quot;monitor&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;13:0:7:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; transition-magic=&quot;0:7;13:0:7:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; on_node=&quot;wirt2v&quot; call-id=&quot;20&quot; rc-code=&quot;7&quot; op-status=&quot;0&quot; interval=&quot;0&quot; last-run=&quot;1473786029&quot; last-rc-change=&quot;1473786029&quot; exec-time=&quot;3&quot; queue-time=&quot;0&quot; op-digest=&quot;c5f495355c70285327a4ecd128166155&quot; op-secure-params=&quot; passwd &quot; op-secure-digest=&quot;58f15e2aeb9ef41c7d7016ac60c95b3d&quot;/&gt;<br>          &lt;/lrm_resource&gt;<br>          &lt;lrm_resource id=&quot;fencing-idrac2&quot; type=&quot;fence_idrac&quot; class=&quot;stonith&quot;&gt;<br>            &lt;lrm_rsc_op id=&quot;fencing-idrac2_last_0&quot; operation_key=&quot;fencing-idrac2_start_0&quot; operation=&quot;start&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;57:0:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; transition-magic=&quot;0:0;57:0:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; on_node=&quot;wirt2v&quot; call-id=&quot;25&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;0&quot; last-run=&quot;1473786030&quot; last-rc-change=&quot;1473786030&quot; exec-time=&quot;62&quot; queue-time=&quot;0&quot; op-digest=&quot;62957a33f7a67eda09c15e3f933f2d0b&quot; op-secure-params=&quot; passwd &quot; op-secure-digest=&quot;65925748cee98be7e9d827ae5f2eb74f&quot;/&gt;<br>            &lt;lrm_rsc_op id=&quot;fencing-idrac2_monitor_60000&quot; operation_key=&quot;fencing-idrac2_monitor_60000&quot; operation=&quot;monitor&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;54:1:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; transition-magic=&quot;0:0;54:1:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; on_node=&quot;wirt2v&quot; call-id=&quot;26&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;60000&quot; last-rc-change=&quot;1473786031&quot; exec-time=&quot;74&quot; queue-time=&quot;0&quot; op-digest=&quot;02c5ce42002631d918b41adc571d64b8&quot; op-secure-params=&quot; passwd &quot; op-secure-digest=&quot;65925748cee98be7e9d827ae5f2eb74f&quot;/&gt;<br>          &lt;/lrm_resource&gt;<br>          &lt;lrm_resource id=&quot;dlm&quot; type=&quot;controld&quot; class=&quot;ocf&quot; provider=&quot;pacemaker&quot;&gt;<br>            &lt;lrm_rsc_op id=&quot;dlm_last_0&quot; operation_key=&quot;dlm_start_0&quot; operation=&quot;start&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;43:1:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; transition-magic=&quot;0:0;43:1:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; on_node=&quot;wirt2v&quot; call-id=&quot;27&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;0&quot; last-run=&quot;1473786031&quot; last-rc-change=&quot;1473786031&quot; exec-time=&quot;1102&quot; queue-time=&quot;0&quot; op-digest=&quot;f2317cad3d54cec5d7d7aa7d0bf35cf8&quot;/&gt;<br>            &lt;lrm_rsc_op id=&quot;dlm_monitor_60000&quot; operation_key=&quot;dlm_monitor_60000&quot; operation=&quot;monitor&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;50:2:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; transition-magic=&quot;0:0;50:2:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; on_node=&quot;wirt2v&quot; call-id=&quot;30&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;60000&quot; last-rc-change=&quot;1473786032&quot; exec-time=&quot;32&quot; queue-time=&quot;0&quot; op-digest=&quot;4811cef7f7f94e3a35a70be7916cb2fd&quot;/&gt;<br>          &lt;/lrm_resource&gt;<br>          &lt;lrm_resource id=&quot;Drbd2&quot; type=&quot;drbd&quot; class=&quot;ocf&quot; provider=&quot;linbit&quot;&gt;<br>            &lt;lrm_rsc_op id=&quot;Drbd2_last_0&quot; operation_key=&quot;Drbd2_promote_0&quot; operation=&quot;promote&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;13:2:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; transition-magic=&quot;0:0;13:2:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; on_node=&quot;wirt2v&quot; call-id=&quot;32&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;0&quot; last-run=&quot;1473786032&quot; last-rc-change=&quot;1473786032&quot; exec-time=&quot;55&quot; queue-time=&quot;0&quot; op-digest=&quot;d0c8a735862843030d8426a5218ceb92&quot;/&gt;<br>          &lt;/lrm_resource&gt;<br>          &lt;lrm_resource id=&quot;Virtfs2&quot; type=&quot;Filesystem&quot; class=&quot;ocf&quot; provider=&quot;heartbeat&quot;&gt;<br>            &lt;lrm_rsc_op id=&quot;Virtfs2_last_0&quot; operation_key=&quot;Virtfs2_start_0&quot; operation=&quot;start&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;43:3:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; transition-magic=&quot;0:0;43:3:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; on_node=&quot;wirt2v&quot; call-id=&quot;34&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;0&quot; last-run=&quot;1473786032&quot; last-rc-change=&quot;1473786032&quot; exec-time=&quot;939&quot; queue-time=&quot;0&quot; op-digest=&quot;8dbd904c2115508ebcf3dffe8e7c6d82&quot;/&gt;<br>            &lt;lrm_rsc_op id=&quot;Virtfs2_monitor_20000&quot; operation_key=&quot;Virtfs2_monitor_20000&quot; operation=&quot;monitor&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;44:3:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; transition-magic=&quot;0:0;44:3:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df&quot; on_node=&quot;wirt2v&quot; call-id=&quot;35&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;20000&quot; last-rc-change=&quot;1473786033&quot; exec-time=&quot;39&quot; queue-time=&quot;0&quot; op-digest=&quot;051271837d1a8eccc0af38fbd8c406e4&quot;/&gt;<br>          &lt;/lrm_resource&gt;<br>        &lt;/lrm_resources&gt;<br>      &lt;/lrm&gt;<br>      &lt;transient_attributes id=&quot;2&quot;&gt;<br>        &lt;instance_attributes id=&quot;status-2&quot;&gt;<br>          &lt;nvpair id=&quot;status-2-shutdown&quot; name=&quot;shutdown&quot; value=&quot;0&quot;/&gt;<br>          &lt;nvpair id=&quot;status-2-probe_complete&quot; name=&quot;probe_complete&quot; value=&quot;true&quot;/&gt;<br>          &lt;nvpair id=&quot;status-2-master-Drbd2&quot; name=&quot;master-Drbd2&quot; value=&quot;10000&quot;/&gt;<br>        &lt;/instance_attributes&gt;<br>      &lt;/transient_attributes&gt;<br>    &lt;/node_state&gt;<br>  &lt;/status&gt;<br>&lt;/cib&gt;<br><br>#-------- The End --------------------<br><br>### result:  pcs config  ###<br><br>Cluster Name: klasterek<br>Corosync Nodes:<br> wirt1v wirt2v<br>Pacemaker Nodes:<br> wirt1v wirt2v<br>Resources:<br> Master: Drbd2-clone<br>  Meta Attrs: master-max=2 master-node-max=1 clone-max=2 clone-node-max=1 notify=true globally-unique=false interleave=true ordered=true<br>  Resource: Drbd2 (class=ocf provider=linbit type=drbd)<br>   Attributes: drbd_resource=drbd2<br>   Operations: start interval=0s timeout=240 (Drbd2-start-interval-0s)<br>               promote interval=0s timeout=90 (Drbd2-promote-interval-0s)<br>               demote interval=0s timeout=90 (Drbd2-demote-interval-0s)<br>               stop interval=0s timeout=100 (Drbd2-stop-interval-0s)<br>               monitor interval=60s (Drbd2-monitor-interval-60s)<br> Clone: Virtfs2-clone<br>  Meta Attrs: interleave=true<br>  Resource: Virtfs2 (class=ocf provider=heartbeat type=Filesystem)<br>   Attributes: device=/dev/drbd2 directory=/virtfs2 fstype=gfs2<br>   Operations: start interval=0s timeout=60 (Virtfs2-start-interval-0s)<br>               stop interval=0s timeout=60 (Virtfs2-stop-interval-0s)<br>               monitor interval=20 timeout=40 (Virtfs2-monitor-interval-20)<br> Clone: dlm-clone<br>  Meta Attrs: clone-max=2 clone-node-max=1 interleave=true ordered=true<br>  Resource: dlm (class=ocf provider=pacemaker type=controld)<br>   Operations: start interval=0s timeout=90 (dlm-start-interval-0s)<br>               stop interval=0s timeout=100 (dlm-stop-interval-0s)<br>               monitor interval=60s (dlm-monitor-interval-60s)<br>Stonith Devices:<br> Resource: fencing-idrac1 (class=stonith type=fence_idrac)<br>  Attributes: pcmk_host_list=wirt1v ipaddr=172.31.0.223 lanplus=on login=root passwd=my1secret2password3 action=reboot<br>  Operations: monitor interval=60 (fencing-idrac1-monitor-interval-60)<br> Resource: fencing-idrac2 (class=stonith type=fence_idrac)<br>  Attributes: pcmk_host_list=wirt2v ipaddr=172.31.0.224 lanplus=on login=root passwd=my1secret2password3 action=reboot<br>  Operations: monitor interval=60 (fencing-idrac2-monitor-interval-60)<br>Fencing Levels:<br>Location Constraints:<br>Ordering Constraints:<br>  promote Drbd2-clone then start Virtfs2-clone (kind:Mandatory) (id:order-Drbd2-clone-Virtfs2-clone-mandatory)<br>  start dlm-clone then start Virtfs2-clone (kind:Mandatory) (id:order-dlm-clone-Virtfs2-clone-mandatory)<br>Colocation Constraints:<br>  Virtfs2-clone with Drbd2-clone (score:INFINITY) (with-rsc-role:Master) (id:colocation-Virtfs2-clone-Drbd2-clone-INFINITY)<br>  Virtfs2-clone with dlm-clone (score:INFINITY) (id:colocation-Virtfs2-clone-dlm-clone-INFINITY)<br>Resources Defaults:<br> resource-stickiness: 100<br>Operations Defaults:<br> No defaults set<br>Cluster Properties:<br> cluster-infrastructure: corosync<br> cluster-name: klasterek<br> dc-version: 1.1.13-10.el7_2.4-44eb2dd<br> have-watchdog: false<br> no-quorum-policy: ignore<br> stonith-enabled: true<br> symmetric-cluster: true<br><br><br>#---------------------------------<br></div># /var/log/messages<br><br>Sep 13 22:00:19 wirt1v systemd: Starting Corosync Cluster Engine...<br>Sep 13 22:00:19 wirt1v corosync[5720]: [MAIN  ] Corosync Cluster Engine (&#39;2.3.4&#39;): started and ready to provide service.<br>Sep 13 22:00:19 wirt1v corosync[5720]: [MAIN  ] Corosync built-in features: dbus systemd xmlconf snmp pie relro bindnow<br>Sep 13 22:00:19 wirt1v corosync[5721]: [TOTEM ] Initializing transport (UDP/IP Unicast).<br>Sep 13 22:00:19 wirt1v corosync[5721]: [TOTEM ] Initializing transmit/receive security (NSS) crypto: none hash: none<br>Sep 13 22:00:19 wirt1v corosync[5721]: [TOTEM ] The network interface [1.1.1.1] is now up.<br>Sep 13 22:00:19 wirt1v corosync[5721]: [SERV  ] Service engine loaded: corosync configuration map access [0]<br>Sep 13 22:00:19 wirt1v corosync[5721]: [QB    ] server name: cmap<br>Sep 13 22:00:19 wirt1v corosync[5721]: [SERV  ] Service engine loaded: corosync configuration service [1]<br>Sep 13 22:00:19 wirt1v corosync[5721]: [QB    ] server name: cfg<br>Sep 13 22:00:19 wirt1v corosync[5721]: [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01 [2]<br>Sep 13 22:00:19 wirt1v corosync[5721]: [QB    ] server name: cpg<br>Sep 13 22:00:19 wirt1v corosync[5721]: [SERV  ] Service engine loaded: corosync profile loading service [4]<br>Sep 13 22:00:19 wirt1v corosync[5721]: [QUORUM] Using quorum provider corosync_votequorum<br>Sep 13 22:00:19 wirt1v corosync[5721]: [VOTEQ ] Waiting for all cluster members. Current votes: 1 expected_votes: 2<br>Sep 13 22:00:19 wirt1v corosync[5721]: [SERV  ] Service engine loaded: corosync vote quorum service v1.0 [5]<br>Sep 13 22:00:19 wirt1v corosync[5721]: [QB    ] server name: votequorum<br>Sep 13 22:00:19 wirt1v corosync[5721]: [SERV  ] Service engine loaded: corosync cluster quorum service v0.1 [3]<br>Sep 13 22:00:19 wirt1v corosync[5721]: [QB    ] server name: quorum<br>Sep 13 22:00:19 wirt1v corosync[5721]: [TOTEM ] adding new UDPU member {1.1.1.1}<br>Sep 13 22:00:19 wirt1v corosync[5721]: [TOTEM ] adding new UDPU member {1.1.1.2}<br>Sep 13 22:00:19 wirt1v corosync[5721]: [TOTEM ] A new membership (<a href="http://1.1.1.1:708">1.1.1.1:708</a>) was formed. Members joined: 1<br>Sep 13 22:00:19 wirt1v corosync[5721]: [VOTEQ ] Waiting for all cluster members. Current votes: 1 expected_votes: 2<br>Sep 13 22:00:19 wirt1v corosync[5721]: [VOTEQ ] Waiting for all cluster members. Current votes: 1 expected_votes: 2<br>Sep 13 22:00:19 wirt1v corosync[5721]: [VOTEQ ] Waiting for all cluster members. Current votes: 1 expected_votes: 2<br>Sep 13 22:00:19 wirt1v corosync[5721]: [QUORUM] Members[1]: 1<br>Sep 13 22:00:19 wirt1v corosync[5721]: [MAIN  ] Completed service synchronization, ready to provide service.<br>Sep 13 22:00:20 wirt1v corosync: Starting Corosync Cluster Engine (corosync): [  OK  ]<br>Sep 13 22:00:20 wirt1v systemd: Started Corosync Cluster Engine.<br>Sep 13 22:00:20 wirt1v systemd: Started Pacemaker High Availability Cluster Manager.<br>Sep 13 22:00:20 wirt1v systemd: Starting Pacemaker High Availability Cluster Manager...<br>Sep 13 22:00:20 wirt1v pacemakerd[5740]:  notice: Additional logging available in /var/log/pacemaker.log<br>Sep 13 22:00:20 wirt1v pacemakerd[5740]:  notice: Switching to /var/log/cluster/corosync.log<br>Sep 13 22:00:20 wirt1v pacemakerd[5740]:  notice: Additional logging available in /var/log/cluster/corosync.log<br>Sep 13 22:00:20 wirt1v pacemakerd[5740]:  notice: Configured corosync to accept connections from group 189: OK (1)<br>Sep 13 22:00:20 wirt1v pacemakerd[5740]:  notice: Starting Pacemaker 1.1.13-10.el7_2.4 (Build: 44eb2dd):  generated-manpages agent-manpages ncurses libqb-logging libqb-ipc upstart systemd nagios  corosync-native atomic-attrd acls<br>Sep 13 22:00:20 wirt1v pacemakerd[5740]:  notice: Tracking existing lrmd process (pid=3413)<br>Sep 13 22:00:20 wirt1v pacemakerd[5740]:  notice: Tracking existing pengine process (pid=3415)<br>Sep 13 22:00:20 wirt1v pacemakerd[5740]:  notice: Quorum lost<br>Sep 13 22:00:20 wirt1v pacemakerd[5740]:  notice: pcmk_quorum_notification: Node wirt1v[1] - state is now member (was (null))<br>Sep 13 22:00:20 wirt1v stonith-ng[5742]:  notice: Additional logging available in /var/log/cluster/corosync.log<br>Sep 13 22:00:20 wirt1v cib[5741]:  notice: Additional logging available in /var/log/cluster/corosync.log<br>Sep 13 22:00:20 wirt1v stonith-ng[5742]:  notice: Connecting to cluster infrastructure: corosync<br>Sep 13 22:00:20 wirt1v attrd[5743]:  notice: Additional logging available in /var/log/cluster/corosync.log<br>Sep 13 22:00:20 wirt1v attrd[5743]:  notice: Connecting to cluster infrastructure: corosync<br>Sep 13 22:00:20 wirt1v crmd[5744]:  notice: Additional logging available in /var/log/cluster/corosync.log<br>Sep 13 22:00:20 wirt1v crmd[5744]:  notice: CRM Git Version: 1.1.13-10.el7_2.4 (44eb2dd)<br>Sep 13 22:00:20 wirt1v cib[5741]:  notice: Connecting to cluster infrastructure: corosync<br>Sep 13 22:00:20 wirt1v attrd[5743]:  notice: crm_update_peer_proc: Node wirt1v[1] - state is now member (was (null))<br>Sep 13 22:00:20 wirt1v stonith-ng[5742]:  notice: crm_update_peer_proc: Node wirt1v[1] - state is now member (was (null))<br>Sep 13 22:00:20 wirt1v cib[5741]:  notice: crm_update_peer_proc: Node wirt1v[1] - state is now member (was (null))<br>Sep 13 22:00:21 wirt1v crmd[5744]:  notice: Connecting to cluster infrastructure: corosync<br>Sep 13 22:00:21 wirt1v crmd[5744]:  notice: Quorum lost<br>Sep 13 22:00:21 wirt1v stonith-ng[5742]:  notice: Watching for stonith topology changes<br>Sep 13 22:00:21 wirt1v stonith-ng[5742]:  notice: On loss of CCM Quorum: Ignore<br>Sep 13 22:00:21 wirt1v crmd[5744]:  notice: pcmk_quorum_notification: Node wirt1v[1] - state is now member (was (null))<br>Sep 13 22:00:21 wirt1v crmd[5744]:  notice: Notifications disabled<br>Sep 13 22:00:21 wirt1v crmd[5744]:  notice: The local CRM is operational<br>Sep 13 22:00:21 wirt1v crmd[5744]:  notice: State transition S_STARTING -&gt; S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]<br>Sep 13 22:00:22 wirt1v stonith-ng[5742]:  notice: Added &#39;fencing-idrac1&#39; to the device list (1 active devices)<br>Sep 13 22:00:22 wirt1v stonith-ng[5742]:  notice: Added &#39;fencing-idrac2&#39; to the device list (2 active devices)<br>Sep 13 22:00:42 wirt1v crmd[5744]: warning: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING<br>Sep 13 22:00:42 wirt1v crmd[5744]:  notice: State transition S_ELECTION -&gt; S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=election_timeout_popped ]<br>Sep 13 22:00:42 wirt1v crmd[5744]: warning: FSA: Input I_ELECTION_DC from do_election_check() received in state S_INTEGRATION<br>Sep 13 22:00:42 wirt1v crmd[5744]:  notice: Notifications disabled<br>Sep 13 22:00:42 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore<br>Sep 13 22:00:42 wirt1v pengine[3415]: warning: Scheduling Node wirt2v for STONITH<br>Sep 13 22:00:42 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)<br>Sep 13 22:00:42 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)<br>Sep 13 22:00:42 wirt1v pengine[3415]:  notice: Start   fencing-idrac1#011(wirt1v)<br>Sep 13 22:00:42 wirt1v pengine[3415]:  notice: Start   fencing-idrac2#011(wirt1v)<br>Sep 13 22:00:42 wirt1v pengine[3415]: warning: Calculated Transition 84: /var/lib/pacemaker/pengine/pe-warn-294.bz2<br>Sep 13 22:00:42 wirt1v crmd[5744]:  notice: Initiating action 4: monitor Drbd2:0_monitor_0 on wirt1v (local)<br>Sep 13 22:00:42 wirt1v crmd[5744]:  notice: Initiating action 5: monitor Virtfs2:0_monitor_0 on wirt1v (local)<br>Sep 13 22:00:42 wirt1v crmd[5744]:  notice: Initiating action 6: monitor dlm:0_monitor_0 on wirt1v (local)<br>Sep 13 22:00:42 wirt1v crmd[5744]:  notice: Initiating action 7: monitor fencing-idrac1_monitor_0 on wirt1v (local)<br>Sep 13 22:00:42 wirt1v crmd[5744]:  notice: Initiating action 8: monitor fencing-idrac2_monitor_0 on wirt1v (local)<br>Sep 13 22:00:42 wirt1v crmd[5744]:  notice: Executing reboot fencing operation (50) on wirt2v (timeout=60000)<br>Sep 13 22:00:42 wirt1v stonith-ng[5742]:  notice: Client crmd.5744.8928b80c wants to fence (reboot) &#39;wirt2v&#39; with device &#39;(any)&#39;<br>Sep 13 22:00:42 wirt1v stonith-ng[5742]:  notice: Initiating remote operation reboot for wirt2v: e87b942f-997d-42ad-91ad-dfa501f4ede0 (0)<br>Sep 13 22:00:42 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list<br>Sep 13 22:00:42 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list<br>Sep 13 22:00:42 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list<br>Sep 13 22:00:42 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list<br>Sep 13 22:00:42 wirt1v Filesystem(Virtfs2)[5753]: WARNING: Couldn&#39;t find device [/dev/drbd2]. Expected /dev/??? to exist<br>Sep 13 22:00:42 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available<br>Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Operation fencing-idrac1_monitor_0: not running (node=wirt1v, call=33, rc=7, cib-update=31, confirmed=true)<br>Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Operation fencing-idrac2_monitor_0: not running (node=wirt1v, call=35, rc=7, cib-update=32, confirmed=true)<br>Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Operation dlm_monitor_0: not running (node=wirt1v, call=31, rc=7, cib-update=33, confirmed=true)<br>Sep 13 22:00:43 wirt1v crmd[5744]:   error: pcmkRegisterNode: Triggered assert at xml.c:594 : node-&gt;type == XML_ELEMENT_NODE<br>Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Operation Drbd2_monitor_0: not running (node=wirt1v, call=27, rc=7, cib-update=34, confirmed=true)<br>Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Operation Virtfs2_monitor_0: not running (node=wirt1v, call=29, rc=7, cib-update=35, confirmed=true)<br>Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Initiating action 3: probe_complete probe_complete-wirt1v on wirt1v (local) - no waiting<br>Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Transition aborted by status-1-probe_complete, probe_complete=true: Transient attribute change (create cib=0.69.11, source=abort_unless_down:319, path=/cib/status/node_state[@id=&#39;1&#39;]/transient_attributes[@id=&#39;1&#39;]/instance_attributes[@id=&#39;status-1&#39;], 0)<br>Sep 13 22:00:43 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available<br>Sep 13 22:00:43 wirt1v stonith-ng[5742]:   error: Operation &#39;reboot&#39; [5849] (call 2 from crmd.5744) for host &#39;wirt2v&#39; with device &#39;fencing-idrac2&#39; returned: -201 (Generic Pacemaker error)<br>Sep 13 22:00:43 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5849 [ Failed: Unable to obtain correct plug status or plug is not available ]<br>Sep 13 22:00:43 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5849 [  ]<br>Sep 13 22:00:43 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5849 [  ]<br>Sep 13 22:00:43 wirt1v stonith-ng[5742]:  notice: Couldn&#39;t find anyone to fence (reboot) wirt2v with any device<br>Sep 13 22:00:43 wirt1v stonith-ng[5742]:   error: Operation reboot of wirt2v by &lt;no-one&gt; for crmd.5744@wirt1v.e87b942f: No route to host<br>Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Stonith operation 2/50:84:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)<br>Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Stonith operation 2 for wirt2v failed (No route to host): aborting transition.<br>Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Peer wirt2v was not terminated (reboot) by &lt;anyone&gt; for wirt1v: No route to host (ref=e87b942f-997d-42ad-91ad-dfa501f4ede0) by client crmd.5744<br>Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Transition 84 (Complete=12, Pending=0, Fired=0, Skipped=0, Incomplete=15, Source=/var/lib/pacemaker/pengine/pe-warn-294.bz2): Complete<br>Sep 13 22:00:43 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore<br>Sep 13 22:00:43 wirt1v pengine[3415]: warning: Scheduling Node wirt2v for STONITH<br>Sep 13 22:00:43 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)<br>Sep 13 22:00:43 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)<br>Sep 13 22:00:43 wirt1v pengine[3415]:  notice: Start   fencing-idrac1#011(wirt1v)<br>Sep 13 22:00:43 wirt1v pengine[3415]:  notice: Start   fencing-idrac2#011(wirt1v)<br>Sep 13 22:00:43 wirt1v pengine[3415]: warning: Calculated Transition 85: /var/lib/pacemaker/pengine/pe-warn-295.bz2<br>Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Executing reboot fencing operation (45) on wirt2v (timeout=60000)<br>Sep 13 22:00:43 wirt1v stonith-ng[5742]:  notice: Client crmd.5744.8928b80c wants to fence (reboot) &#39;wirt2v&#39; with device &#39;(any)&#39;<br>Sep 13 22:00:43 wirt1v stonith-ng[5742]:  notice: Initiating remote operation reboot for wirt2v: 880b2614-09d2-47df-b740-e1d24732e6c5 (0)<br>Sep 13 22:00:43 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list<br>Sep 13 22:00:43 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list<br>Sep 13 22:00:43 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list<br>Sep 13 22:00:43 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list<br>Sep 13 22:00:43 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available<br>Sep 13 22:00:44 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available<br>Sep 13 22:00:44 wirt1v stonith-ng[5742]:   error: Operation &#39;reboot&#39; [5879] (call 3 from crmd.5744) for host &#39;wirt2v&#39; with device &#39;fencing-idrac2&#39; returned: -201 (Generic Pacemaker error)<br>Sep 13 22:00:44 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5879 [ Failed: Unable to obtain correct plug status or plug is not available ]<br>Sep 13 22:00:44 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5879 [  ]<br>Sep 13 22:00:44 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5879 [  ]<br>Sep 13 22:00:44 wirt1v stonith-ng[5742]:  notice: Couldn&#39;t find anyone to fence (reboot) wirt2v with any device<br>Sep 13 22:00:44 wirt1v stonith-ng[5742]:   error: Operation reboot of wirt2v by &lt;no-one&gt; for crmd.5744@wirt1v.880b2614: No route to host<br>Sep 13 22:00:44 wirt1v crmd[5744]:  notice: Stonith operation 3/45:85:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)<br>Sep 13 22:00:44 wirt1v crmd[5744]:  notice: Stonith operation 3 for wirt2v failed (No route to host): aborting transition.<br>Sep 13 22:00:44 wirt1v crmd[5744]:  notice: Transition aborted: Stonith failed (source=tengine_stonith_callback:733, 0)<br>Sep 13 22:00:44 wirt1v crmd[5744]:  notice: Peer wirt2v was not terminated (reboot) by &lt;anyone&gt; for wirt1v: No route to host (ref=880b2614-09d2-47df-b740-e1d24732e6c5) by client crmd.5744<br>Sep 13 22:00:44 wirt1v crmd[5744]:  notice: Transition 85 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=15, Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete<br>Sep 13 22:00:44 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore<br>Sep 13 22:00:44 wirt1v pengine[3415]: warning: Scheduling Node wirt2v for STONITH<br>Sep 13 22:00:44 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)<br>Sep 13 22:00:44 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)<br>Sep 13 22:00:44 wirt1v pengine[3415]:  notice: Start   fencing-idrac1#011(wirt1v)<br>Sep 13 22:00:44 wirt1v pengine[3415]:  notice: Start   fencing-idrac2#011(wirt1v)<br>Sep 13 22:00:44 wirt1v pengine[3415]: warning: Calculated Transition 86: /var/lib/pacemaker/pengine/pe-warn-295.bz2<br>Sep 13 22:00:44 wirt1v crmd[5744]:  notice: Executing reboot fencing operation (45) on wirt2v (timeout=60000)<br>Sep 13 22:00:44 wirt1v stonith-ng[5742]:  notice: Client crmd.5744.8928b80c wants to fence (reboot) &#39;wirt2v&#39; with device &#39;(any)&#39;<br>Sep 13 22:00:44 wirt1v stonith-ng[5742]:  notice: Initiating remote operation reboot for wirt2v: 4c7af8ee-ffa6-4381-8d98-073d5abba631 (0)<br>Sep 13 22:00:44 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list<br>Sep 13 22:00:44 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list<br>Sep 13 22:00:44 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list<br>Sep 13 22:00:44 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list<br>Sep 13 22:00:44 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available<br>Sep 13 22:00:45 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available<br>Sep 13 22:00:45 wirt1v stonith-ng[5742]:   error: Operation &#39;reboot&#39; [5893] (call 4 from crmd.5744) for host &#39;wirt2v&#39; with device &#39;fencing-idrac2&#39; returned: -201 (Generic Pacemaker error)<br>Sep 13 22:00:45 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5893 [ Failed: Unable to obtain correct plug status or plug is not available ]<br>Sep 13 22:00:45 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5893 [  ]<br>Sep 13 22:00:45 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5893 [  ]<br>Sep 13 22:00:45 wirt1v stonith-ng[5742]:  notice: Couldn&#39;t find anyone to fence (reboot) wirt2v with any device<br>Sep 13 22:00:45 wirt1v stonith-ng[5742]:   error: Operation reboot of wirt2v by &lt;no-one&gt; for crmd.5744@wirt1v.4c7af8ee: No route to host<br>Sep 13 22:00:45 wirt1v crmd[5744]:  notice: Stonith operation 4/45:86:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)<br>Sep 13 22:00:45 wirt1v crmd[5744]:  notice: Stonith operation 4 for wirt2v failed (No route to host): aborting transition.<br>Sep 13 22:00:45 wirt1v crmd[5744]:  notice: Transition aborted: Stonith failed (source=tengine_stonith_callback:733, 0)<br>Sep 13 22:00:45 wirt1v crmd[5744]:  notice: Peer wirt2v was not terminated (reboot) by &lt;anyone&gt; for wirt1v: No route to host (ref=4c7af8ee-ffa6-4381-8d98-073d5abba631) by client crmd.5744<br>Sep 13 22:00:45 wirt1v crmd[5744]:  notice: Transition 86 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=15, Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete<br>Sep 13 22:00:45 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore<br>Sep 13 22:00:45 wirt1v pengine[3415]: warning: Scheduling Node wirt2v for STONITH<br>Sep 13 22:00:45 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)<br>Sep 13 22:00:45 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)<br>Sep 13 22:00:45 wirt1v pengine[3415]:  notice: Start   fencing-idrac1#011(wirt1v)<br>Sep 13 22:00:45 wirt1v pengine[3415]:  notice: Start   fencing-idrac2#011(wirt1v)<br>Sep 13 22:00:45 wirt1v pengine[3415]: warning: Calculated Transition 87: /var/lib/pacemaker/pengine/pe-warn-295.bz2<br>Sep 13 22:00:45 wirt1v crmd[5744]:  notice: Executing reboot fencing operation (45) on wirt2v (timeout=60000)<br>Sep 13 22:00:45 wirt1v stonith-ng[5742]:  notice: Client crmd.5744.8928b80c wants to fence (reboot) &#39;wirt2v&#39; with device &#39;(any)&#39;<br>Sep 13 22:00:45 wirt1v stonith-ng[5742]:  notice: Initiating remote operation reboot for wirt2v: 268e4c7b-0340-4cf5-9c88-4f3c203f1499 (0)<br>Sep 13 22:00:45 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list<br>Sep 13 22:00:45 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list<br>Sep 13 22:00:45 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list<br>Sep 13 22:00:45 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list<br>Sep 13 22:00:46 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available<br>Sep 13 22:00:47 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available<br>Sep 13 22:00:47 wirt1v stonith-ng[5742]:   error: Operation &#39;reboot&#39; [5907] (call 5 from crmd.5744) for host &#39;wirt2v&#39; with device &#39;fencing-idrac2&#39; returned: -201 (Generic Pacemaker error)<br>Sep 13 22:00:47 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5907 [ Failed: Unable to obtain correct plug status or plug is not available ]<br>Sep 13 22:00:47 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5907 [  ]<br>Sep 13 22:00:47 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5907 [  ]<br>Sep 13 22:00:47 wirt1v stonith-ng[5742]:  notice: Couldn&#39;t find anyone to fence (reboot) wirt2v with any device<br>Sep 13 22:00:47 wirt1v stonith-ng[5742]:   error: Operation reboot of wirt2v by &lt;no-one&gt; for crmd.5744@wirt1v.268e4c7b: No route to host<br>Sep 13 22:00:47 wirt1v crmd[5744]:  notice: Stonith operation 5/45:87:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)<br>Sep 13 22:00:47 wirt1v crmd[5744]:  notice: Stonith operation 5 for wirt2v failed (No route to host): aborting transition.<br>Sep 13 22:00:47 wirt1v crmd[5744]:  notice: Transition aborted: Stonith failed (source=tengine_stonith_callback:733, 0)<br>Sep 13 22:00:47 wirt1v crmd[5744]:  notice: Peer wirt2v was not terminated (reboot) by &lt;anyone&gt; for wirt1v: No route to host (ref=268e4c7b-0340-4cf5-9c88-4f3c203f1499) by client crmd.5744<br>Sep 13 22:00:47 wirt1v crmd[5744]:  notice: Transition 87 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=15, Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete<br>Sep 13 22:00:47 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore<br>Sep 13 22:00:47 wirt1v pengine[3415]: warning: Scheduling Node wirt2v for STONITH<br>Sep 13 22:00:47 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)<br>Sep 13 22:00:47 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)<br>Sep 13 22:00:47 wirt1v pengine[3415]:  notice: Start   fencing-idrac1#011(wirt1v)<br>Sep 13 22:00:47 wirt1v pengine[3415]:  notice: Start   fencing-idrac2#011(wirt1v)<br>Sep 13 22:00:47 wirt1v pengine[3415]: warning: Calculated Transition 88: /var/lib/pacemaker/pengine/pe-warn-295.bz2<br>Sep 13 22:00:47 wirt1v crmd[5744]:  notice: Executing reboot fencing operation (45) on wirt2v (timeout=60000)<br>Sep 13 22:00:47 wirt1v stonith-ng[5742]:  notice: Client crmd.5744.8928b80c wants to fence (reboot) &#39;wirt2v&#39; with device &#39;(any)&#39;<br>Sep 13 22:00:47 wirt1v stonith-ng[5742]:  notice: Initiating remote operation reboot for wirt2v: 8c5bf217-030f-400a-b1f8-7aa19918954f (0)<br>Sep 13 22:00:47 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list<br>Sep 13 22:00:47 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list<br>Sep 13 22:00:47 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list<br>Sep 13 22:00:47 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list<br>Sep 13 22:00:47 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available<br>Sep 13 22:00:48 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available<br>Sep 13 22:00:48 wirt1v stonith-ng[5742]:   error: Operation &#39;reboot&#39; [5921] (call 6 from crmd.5744) for host &#39;wirt2v&#39; with device &#39;fencing-idrac2&#39; returned: -201 (Generic Pacemaker error)<br>Sep 13 22:00:48 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5921 [ Failed: Unable to obtain correct plug status or plug is not available ]<br>Sep 13 22:00:48 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5921 [  ]<br>Sep 13 22:00:48 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5921 [ Failed: Unable to obtain correct plug status or plug is not available ]<br>Sep 13 22:00:48 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5921 [  ]<br>Sep 13 22:00:48 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5921 [  ]<br>Sep 13 22:00:48 wirt1v stonith-ng[5742]:  notice: Couldn&#39;t find anyone to fence (reboot) wirt2v with any device<br>Sep 13 22:00:48 wirt1v stonith-ng[5742]:   error: Operation reboot of wirt2v by &lt;no-one&gt; for crmd.5744@wirt1v.8c5bf217: No route to host<br>Sep 13 22:00:48 wirt1v crmd[5744]:  notice: Stonith operation 6/45:88:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)<br>Sep 13 22:00:48 wirt1v crmd[5744]:  notice: Stonith operation 6 for wirt2v failed (No route to host): aborting transition.<br>Sep 13 22:00:48 wirt1v crmd[5744]:  notice: Transition aborted: Stonith failed (source=tengine_stonith_callback:733, 0)<br>Sep 13 22:00:48 wirt1v crmd[5744]:  notice: Peer wirt2v was not terminated (reboot) by &lt;anyone&gt; for wirt1v: No route to host (ref=8c5bf217-030f-400a-b1f8-7aa19918954f) by client crmd.5744<br>Sep 13 22:00:48 wirt1v crmd[5744]:  notice: Transition 88 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=15, Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete<br>Sep 13 22:00:48 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore<br>Sep 13 22:00:48 wirt1v pengine[3415]: warning: Scheduling Node wirt2v for STONITH<br>Sep 13 22:00:48 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)<br>Sep 13 22:00:48 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)<br>Sep 13 22:00:48 wirt1v pengine[3415]:  notice: Start   fencing-idrac1#011(wirt1v)<br>Sep 13 22:00:48 wirt1v pengine[3415]:  notice: Start   fencing-idrac2#011(wirt1v)<br>Sep 13 22:00:48 wirt1v pengine[3415]: warning: Calculated Transition 89: /var/lib/pacemaker/pengine/pe-warn-295.bz2<br>Sep 13 22:00:48 wirt1v crmd[5744]:  notice: Executing reboot fencing operation (45) on wirt2v (timeout=60000)<br>Sep 13 22:00:48 wirt1v stonith-ng[5742]:  notice: Client crmd.5744.8928b80c wants to fence (reboot) &#39;wirt2v&#39; with device &#39;(any)&#39;<br>Sep 13 22:00:48 wirt1v stonith-ng[5742]:  notice: Initiating remote operation reboot for wirt2v: 25e51799-e072-4622-bbb3-1430bdb20536 (0)<br>Sep 13 22:00:48 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list<br>Sep 13 22:00:48 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list<br>Sep 13 22:00:48 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list<br>Sep 13 22:00:48 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list<br>Sep 13 22:00:48 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available<br>Sep 13 22:00:49 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available<br>Sep 13 22:00:49 wirt1v stonith-ng[5742]:   error: Operation &#39;reboot&#39; [5935] (call 7 from crmd.5744) for host &#39;wirt2v&#39; with device &#39;fencing-idrac2&#39; returned: -201 (Generic Pacemaker error)<br>Sep 13 22:00:49 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5935 [ Failed: Unable to obtain correct plug status or plug is not available ]<br>Sep 13 22:00:49 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5935 [  ]<br>Sep 13 22:00:49 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5935 [  ]<br>Sep 13 22:00:49 wirt1v stonith-ng[5742]:  notice: Couldn&#39;t find anyone to fence (reboot) wirt2v with any device<br>Sep 13 22:00:49 wirt1v stonith-ng[5742]:   error: Operation reboot of wirt2v by &lt;no-one&gt; for crmd.5744@wirt1v.25e51799: No route to host<br>Sep 13 22:00:49 wirt1v crmd[5744]:  notice: Stonith operation 7/45:89:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)<br>Sep 13 22:00:49 wirt1v crmd[5744]:  notice: Stonith operation 7 for wirt2v failed (No route to host): aborting transition.<br>Sep 13 22:00:49 wirt1v crmd[5744]:  notice: Transition aborted: Stonith failed (source=tengine_stonith_callback:733, 0)<br>Sep 13 22:00:49 wirt1v crmd[5744]:  notice: Peer wirt2v was not terminated (reboot) by &lt;anyone&gt; for wirt1v: No route to host (ref=25e51799-e072-4622-bbb3-1430bdb20536) by client crmd.5744<br>Sep 13 22:00:49 wirt1v crmd[5744]:  notice: Transition 89 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=15, Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete<br>Sep 13 22:00:49 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore<br>Sep 13 22:00:49 wirt1v pengine[3415]: warning: Scheduling Node wirt2v for STONITH<br>Sep 13 22:00:49 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)<br>Sep 13 22:00:49 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)<br>Sep 13 22:00:49 wirt1v pengine[3415]:  notice: Start   fencing-idrac1#011(wirt1v)<br>Sep 13 22:00:49 wirt1v pengine[3415]:  notice: Start   fencing-idrac2#011(wirt1v)<br>Sep 13 22:00:49 wirt1v pengine[3415]: warning: Calculated Transition 90: /var/lib/pacemaker/pengine/pe-warn-295.bz2<br>Sep 13 22:00:49 wirt1v crmd[5744]:  notice: Executing reboot fencing operation (45) on wirt2v (timeout=60000)<br>Sep 13 22:00:49 wirt1v stonith-ng[5742]:  notice: Client crmd.5744.8928b80c wants to fence (reboot) &#39;wirt2v&#39; with device &#39;(any)&#39;<br>Sep 13 22:00:49 wirt1v stonith-ng[5742]:  notice: Initiating remote operation reboot for wirt2v: 7f520e61-b613-49e4-9213-1958d8a68c6a (0)<br>Sep 13 22:00:49 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list<br>Sep 13 22:00:49 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list<br>Sep 13 22:00:49 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list<br>Sep 13 22:00:49 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list<br>Sep 13 22:00:49 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available<br>Sep 13 22:00:50 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available<br>Sep 13 22:00:50 wirt1v stonith-ng[5742]:   error: Operation &#39;reboot&#39; [5949] (call 8 from crmd.5744) for host &#39;wirt2v&#39; with device &#39;fencing-idrac2&#39; returned: -201 (Generic Pacemaker error)<br>Sep 13 22:00:50 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5949 [ Failed: Unable to obtain correct plug status or plug is not available ]<br>Sep 13 22:00:50 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5949 [  ]<br>Sep 13 22:00:50 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5949 [  ]<br>Sep 13 22:00:50 wirt1v stonith-ng[5742]:  notice: Couldn&#39;t find anyone to fence (reboot) wirt2v with any device<br>Sep 13 22:00:50 wirt1v stonith-ng[5742]:   error: Operation reboot of wirt2v by &lt;no-one&gt; for crmd.5744@wirt1v.7f520e61: No route to host<br>Sep 13 22:00:50 wirt1v crmd[5744]:  notice: Stonith operation 8/45:90:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)<br>Sep 13 22:00:50 wirt1v crmd[5744]:  notice: Stonith operation 8 for wirt2v failed (No route to host): aborting transition.<br>Sep 13 22:00:50 wirt1v crmd[5744]:  notice: Transition aborted: Stonith failed (source=tengine_stonith_callback:733, 0)<br>Sep 13 22:00:50 wirt1v crmd[5744]:  notice: Peer wirt2v was not terminated (reboot) by &lt;anyone&gt; for wirt1v: No route to host (ref=7f520e61-b613-49e4-9213-1958d8a68c6a) by client crmd.5744<br>Sep 13 22:00:50 wirt1v crmd[5744]:  notice: Transition 90 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=15, Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete<br>Sep 13 22:00:50 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore<br>Sep 13 22:00:50 wirt1v pengine[3415]: warning: Scheduling Node wirt2v for STONITH<br>Sep 13 22:00:50 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)<br>Sep 13 22:00:50 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)<br>Sep 13 22:00:50 wirt1v pengine[3415]:  notice: Start   fencing-idrac1#011(wirt1v)<br>Sep 13 22:00:50 wirt1v pengine[3415]:  notice: Start   fencing-idrac2#011(wirt1v)<br>Sep 13 22:00:50 wirt1v pengine[3415]: warning: Calculated Transition 91: /var/lib/pacemaker/pengine/pe-warn-295.bz2<br>Sep 13 22:00:50 wirt1v crmd[5744]:  notice: Executing reboot fencing operation (45) on wirt2v (timeout=60000)<br>Sep 13 22:00:50 wirt1v stonith-ng[5742]:  notice: Client crmd.5744.8928b80c wants to fence (reboot) &#39;wirt2v&#39; with device &#39;(any)&#39;<br>Sep 13 22:00:50 wirt1v stonith-ng[5742]:  notice: Initiating remote operation reboot for wirt2v: 25b67d0b-5b8f-4cd8-82c2-4421474c111c (0)<br>Sep 13 22:00:50 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list<br>Sep 13 22:00:50 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list<br>Sep 13 22:00:50 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list<br>Sep 13 22:00:50 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list<br>Sep 13 22:00:50 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available<br>Sep 13 22:00:51 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available<br>Sep 13 22:00:51 wirt1v stonith-ng[5742]:   error: Operation &#39;reboot&#39; [5963] (call 9 from crmd.5744) for host &#39;wirt2v&#39; with device &#39;fencing-idrac2&#39; returned: -201 (Generic Pacemaker error)<br>Sep 13 22:00:51 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5963 [ Failed: Unable to obtain correct plug status or plug is not available ]<br>Sep 13 22:00:51 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5963 [  ]<br>Sep 13 22:00:51 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5963 [  ]<br>Sep 13 22:00:51 wirt1v stonith-ng[5742]:  notice: Couldn&#39;t find anyone to fence (reboot) wirt2v with any device<br>Sep 13 22:00:51 wirt1v stonith-ng[5742]:   error: Operation reboot of wirt2v by &lt;no-one&gt; for crmd.5744@wirt1v.25b67d0b: No route to host<br>Sep 13 22:00:51 wirt1v crmd[5744]:  notice: Stonith operation 9/45:91:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)<br>Sep 13 22:00:51 wirt1v crmd[5744]:  notice: Stonith operation 9 for wirt2v failed (No route to host): aborting transition.<br>Sep 13 22:00:51 wirt1v crmd[5744]:  notice: Transition aborted: Stonith failed (source=tengine_stonith_callback:733, 0)<br>Sep 13 22:00:51 wirt1v crmd[5744]:  notice: Peer wirt2v was not terminated (reboot) by &lt;anyone&gt; for wirt1v: No route to host (ref=25b67d0b-5b8f-4cd8-82c2-4421474c111c) by client crmd.5744<br>Sep 13 22:00:51 wirt1v crmd[5744]:  notice: Transition 91 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=15, Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete<br>Sep 13 22:00:51 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore<br>Sep 13 22:00:51 wirt1v pengine[3415]: warning: Scheduling Node wirt2v for STONITH<br>Sep 13 22:00:51 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)<br>Sep 13 22:00:51 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)<br>Sep 13 22:00:51 wirt1v pengine[3415]:  notice: Start   fencing-idrac1#011(wirt1v)<br>Sep 13 22:00:51 wirt1v pengine[3415]:  notice: Start   fencing-idrac2#011(wirt1v)<br>Sep 13 22:00:51 wirt1v pengine[3415]:  notice: Start   fencing-idrac1#011(wirt1v)<br>Sep 13 22:00:51 wirt1v pengine[3415]:  notice: Start   fencing-idrac2#011(wirt1v)<br>Sep 13 22:00:51 wirt1v pengine[3415]: warning: Calculated Transition 92: /var/lib/pacemaker/pengine/pe-warn-295.bz2<br>Sep 13 22:00:51 wirt1v crmd[5744]:  notice: Executing reboot fencing operation (45) on wirt2v (timeout=60000)<br>Sep 13 22:00:51 wirt1v stonith-ng[5742]:  notice: Client crmd.5744.8928b80c wants to fence (reboot) &#39;wirt2v&#39; with device &#39;(any)&#39;<br>Sep 13 22:00:51 wirt1v stonith-ng[5742]:  notice: Initiating remote operation reboot for wirt2v: 292a57e9-fd1b-4630-8c10-0d48a268fd68 (0)<br>Sep 13 22:00:51 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list<br>Sep 13 22:00:51 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list<br>Sep 13 22:00:51 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list<br>Sep 13 22:00:51 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list<br>Sep 13 22:00:51 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available<br>Sep 13 22:00:52 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available<br>Sep 13 22:00:52 wirt1v stonith-ng[5742]:   error: Operation &#39;reboot&#39; [5977] (call 10 from crmd.5744) for host &#39;wirt2v&#39; with device &#39;fencing-idrac2&#39; returned: -201 (Generic Pacemaker error)<br>Sep 13 22:00:52 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5977 [ Failed: Unable to obtain correct plug status or plug is not available ]<br>Sep 13 22:00:52 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5977 [  ]<br>Sep 13 22:00:52 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5977 [  ]<br>Sep 13 22:00:52 wirt1v stonith-ng[5742]:  notice: Couldn&#39;t find anyone to fence (reboot) wirt2v with any device<br>Sep 13 22:00:52 wirt1v stonith-ng[5742]:   error: Operation reboot of wirt2v by &lt;no-one&gt; for crmd.5744@wirt1v.292a57e9: No route to host<br>Sep 13 22:00:52 wirt1v crmd[5744]:  notice: Stonith operation 10/45:92:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)<br>Sep 13 22:00:52 wirt1v crmd[5744]:  notice: Stonith operation 10 for wirt2v failed (No route to host): aborting transition.<br>Sep 13 22:00:52 wirt1v crmd[5744]:  notice: Transition aborted: Stonith failed (source=tengine_stonith_callback:733, 0)<br>Sep 13 22:00:52 wirt1v crmd[5744]:  notice: Peer wirt2v was not terminated (reboot) by &lt;anyone&gt; for wirt1v: No route to host (ref=292a57e9-fd1b-4630-8c10-0d48a268fd68) by client crmd.5744<br>Sep 13 22:00:52 wirt1v crmd[5744]:  notice: Transition 92 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=15, Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete<br>Sep 13 22:00:52 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore<br>Sep 13 22:00:52 wirt1v pengine[3415]: warning: Scheduling Node wirt2v for STONITH<br>Sep 13 22:00:52 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)<br>Sep 13 22:00:52 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)<br>Sep 13 22:00:52 wirt1v pengine[3415]:  notice: Start   fencing-idrac1#011(wirt1v)<br>Sep 13 22:00:52 wirt1v pengine[3415]:  notice: Start   fencing-idrac2#011(wirt1v)<br>Sep 13 22:00:52 wirt1v pengine[3415]: warning: Calculated Transition 93: /var/lib/pacemaker/pengine/pe-warn-295.bz2<br>Sep 13 22:00:52 wirt1v crmd[5744]:  notice: Executing reboot fencing operation (45) on wirt2v (timeout=60000)<br>Sep 13 22:00:52 wirt1v stonith-ng[5742]:  notice: Client crmd.5744.8928b80c wants to fence (reboot) &#39;wirt2v&#39; with device &#39;(any)&#39;<br>Sep 13 22:00:52 wirt1v stonith-ng[5742]:  notice: Initiating remote operation reboot for wirt2v: f324baad-ef9b-44e6-9e09-02176fa447ef (0)<br>Sep 13 22:00:52 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list<br>Sep 13 22:00:52 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list<br>Sep 13 22:00:52 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list<br>Sep 13 22:00:52 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list<br>Sep 13 22:00:53 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available<br>Sep 13 22:00:54 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available<br>Sep 13 22:00:54 wirt1v stonith-ng[5742]:   error: Operation &#39;reboot&#39; [5991] (call 11 from crmd.5744) for host &#39;wirt2v&#39; with device &#39;fencing-idrac2&#39; returned: -201 (Generic Pacemaker error)<br>Sep 13 22:00:54 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5991 [ Failed: Unable to obtain correct plug status or plug is not available ]<br>Sep 13 22:00:54 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5991 [  ]<br>Sep 13 22:00:54 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5991 [  ]<br>Sep 13 22:00:54 wirt1v stonith-ng[5742]:  notice: Couldn&#39;t find anyone to fence (reboot) wirt2v with any device<br>Sep 13 22:00:54 wirt1v stonith-ng[5742]:   error: Operation reboot of wirt2v by &lt;no-one&gt; for crmd.5744@wirt1v.f324baad: No route to host<br>Sep 13 22:00:54 wirt1v crmd[5744]:  notice: Stonith operation 11/45:93:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)<br>Sep 13 22:00:54 wirt1v crmd[5744]:  notice: Stonith operation 11 for wirt2v failed (No route to host): aborting transition.<br>Sep 13 22:00:54 wirt1v crmd[5744]:  notice: Transition aborted: Stonith failed (source=tengine_stonith_callback:733, 0)<br>Sep 13 22:00:54 wirt1v crmd[5744]:  notice: Peer wirt2v was not terminated (reboot) by &lt;anyone&gt; for wirt1v: No route to host (ref=f324baad-ef9b-44e6-9e09-02176fa447ef) by client crmd.5744<br>Sep 13 22:00:54 wirt1v crmd[5744]:  notice: Transition 93 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=15, Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete<br>Sep 13 22:00:54 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore<br>Sep 13 22:00:54 wirt1v pengine[3415]: warning: Scheduling Node wirt2v for STONITH<br>Sep 13 22:00:54 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)<br>Sep 13 22:00:54 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)<br>Sep 13 22:00:54 wirt1v pengine[3415]:  notice: Start   fencing-idrac1#011(wirt1v)<br>Sep 13 22:00:54 wirt1v pengine[3415]:  notice: Start   fencing-idrac2#011(wirt1v)<br>Sep 13 22:00:54 wirt1v pengine[3415]: warning: Calculated Transition 94: /var/lib/pacemaker/pengine/pe-warn-295.bz2<br>Sep 13 22:00:54 wirt1v crmd[5744]:  notice: Executing reboot fencing operation (45) on wirt2v (timeout=60000)<br>Sep 13 22:00:54 wirt1v stonith-ng[5742]:  notice: Client crmd.5744.8928b80c wants to fence (reboot) &#39;wirt2v&#39; with device &#39;(any)&#39;<br>Sep 13 22:00:54 wirt1v stonith-ng[5742]:  notice: Initiating remote operation reboot for wirt2v: 61af386a-ce3f-438f-b83b-90dee4bdb1c6 (0)<br>Sep 13 22:00:54 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list<br>Sep 13 22:00:54 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list<br>Sep 13 22:00:54 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list<br>Sep 13 22:00:54 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list<br>Sep 13 22:00:54 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available<br>Sep 13 22:00:55 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available<br>Sep 13 22:00:55 wirt1v stonith-ng[5742]:   error: Operation &#39;reboot&#39; [6005] (call 12 from crmd.5744) for host &#39;wirt2v&#39; with device &#39;fencing-idrac2&#39; returned: -201 (Generic Pacemaker error)<br>Sep 13 22:00:55 wirt1v stonith-ng[5742]: warning: fencing-idrac2:6005 [ Failed: Unable to obtain correct plug status or plug is not available ]<br>Sep 13 22:00:55 wirt1v stonith-ng[5742]: warning: fencing-idrac2:6005 [  ]<br>Sep 13 22:00:55 wirt1v stonith-ng[5742]: warning: fencing-idrac2:6005 [  ]<br>Sep 13 22:00:55 wirt1v stonith-ng[5742]:  notice: Couldn&#39;t find anyone to fence (reboot) wirt2v with any device<br>Sep 13 22:00:55 wirt1v stonith-ng[5742]:   error: Operation reboot of wirt2v by &lt;no-one&gt; for crmd.5744@wirt1v.61af386a: No route to host<br>Sep 13 22:00:55 wirt1v crmd[5744]:  notice: Stonith operation 12/45:94:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)<br>Sep 13 22:00:55 wirt1v crmd[5744]:  notice: Stonith operation 12 for wirt2v failed (No route to host): aborting transition.<br>Sep 13 22:00:55 wirt1v crmd[5744]:  notice: Transition aborted: Stonith failed (source=tengine_stonith_callback:733, 0)<br>Sep 13 22:00:55 wirt1v crmd[5744]:  notice: Peer wirt2v was not terminated (reboot) by &lt;anyone&gt; for wirt1v: No route to host (ref=61af386a-ce3f-438f-b83b-90dee4bdb1c6) by client crmd.5744<br>Sep 13 22:00:55 wirt1v crmd[5744]:  notice: Transition 94 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=15, Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete<br>Sep 13 22:00:55 wirt1v crmd[5744]:  notice: Too many failures to fence wirt2v (11), giving up<br>Sep 13 22:00:55 wirt1v crmd[5744]:  notice: State transition S_TRANSITION_ENGINE -&gt; S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]<br></div><br># -------------------- end of /var/log/messages<br><br><div><br><div><br></div></div></div>