<div dir="ltr">Hi,<br><br>I have a problem with my cluster.<br>There are two nodes, wirt1v and wirt2v, running pacemaker, corosync, dlm,<br>drbd (/dev/drbd2) and a filesystem mounted as /virtfs2 with gfs2.<br>Each node contains an LVM partition on which DRBD runs.<br><br>The situation is as follows:<br><br>pcs cluster standby wirt2v<br> ...all is OK, and it is still possible to use /virtfs2 on the wirt1v node<br><br>pcs cluster unstandby wirt2v<br> This causes Drbd2/Virtfs2 to be restarted/remounted on the wirt1v node.<br><br> My question is: why do the resources restart on the wirt1v node?<br><br> I&#39;m going to use the /virtfs2 filesystem to host virtual machines,<br> so in that situation they would restart as well.<br><br> Could you advise me what to do about this problem?<br><br>The logs and configs are below.<br><br>Thanks in advance,<br>Gienek Nowacki<br>======================================<br>#---------------------------------<br>### result:  cat /etc/redhat-release  ###<br><br>CentOS Linux release 7.2.1511 (Core)<br><br>#---------------------------------<br>### result:  uname -a  ###<br><br>Linux wirt1v 3.10.0-327.28.3.el7.x86_64 #1 SMP Thu Aug 18 19:05:49 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux<br><br>#---------------------------------<br>### result:  cat /etc/hosts  ###<br><br>127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4<br>172.31.0.23     wirt1<br>172.31.0.24     wirt2<br>1.1.1.1         wirt1v<br>1.1.1.2         wirt2v<br><br>#---------------------------------<br>### result:  cat /etc/drbd.conf  ###<br><br>include &quot;drbd.d/global_common.conf&quot;;<br>include &quot;drbd.d/*.res&quot;;<br><br>#---------------------------------<br>### result:  cat /etc/drbd.d/global_common.conf  ###<br><br>common {<br>        protocol C;<br>        syncer {<br>                verify-alg sha1;<br>        }<br>        net {<br>                allow-two-primaries;<br>        }<br>}<br><br>#---------------------------------<br>### result:  
cat /etc/drbd.d/drbd2.res  ###<br><br>resource drbd2 {<br>        meta-disk internal;<br>        device /dev/drbd2;<br>        on wirt1v {<br>                disk /dev/vg02/drbd2;<br>                address 1.1.1.1:7782;<br>        }<br>        on wirt2v {<br>                disk /dev/vg02/drbd2;<br>                address 1.1.1.2:7782;<br>        }<br>}<br>#---------------------------------<br>### result:  cat /proc/drbd  ###<br><br>version: 8.4.7-1 (api:1/proto:86-101)<br>GIT-hash: 3a6a769340ef93b1ba2792c6461250790795db49 build by phil@Build64R7, 2016-01-12 14:29:40<br> 2: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----<br>    ns:132180 nr:44 dw:132224 dr:301948 al:33 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0<br><br>#---------------------------------<br>### result:  cat /etc/corosync/corosync.conf  ###<br><br>totem {<br>    version: 2<br>    secauth: off<br>    cluster_name: klasterek<br>    transport: udpu<br>}<br>nodelist {<br>    node {<br>        ring0_addr: wirt1v<br>        nodeid: 1<br>    }<br>    node {<br>        ring0_addr: wirt2v<br>        nodeid: 2<br>    }<br>}<br>quorum {<br>    provider: corosync_votequorum<br>    two_node: 1<br>}<br>logging {<br>    to_logfile: yes<br>    logfile: /var/log/cluster/corosync.log<br>    to_syslog: yes<br>}<br><br>#---------------------------------<br>### result:  mnt  ###<br><br>#---------------------------------<br>### result:  mount | grep virtfs2  ###<br><br>/dev/drbd2 on /virtfs2 type gfs2 (rw,relatime,seclabel)<br><br><br>#---------------------------------<br>### result:  pcs config  ###<br><br>Cluster Name: klasterek<br>Corosync Nodes:<br> wirt1v wirt2v<br>Pacemaker Nodes:<br> wirt1v wirt2v<br>Resources:<br> Clone: dlm-clone<br>  Meta Attrs: clone-max=2 clone-node-max=1<br>  Resource: dlm (class=ocf provider=pacemaker type=controld)<br>   Operations: start interval=0s timeout=90 (dlm-start-interval-0s)<br>               stop 
interval=0s timeout=100 (dlm-stop-interval-0s)<br>               monitor interval=60s (dlm-monitor-interval-60s)<br> Master: Drbd2-clone<br>  Meta Attrs: master-max=2 master-node-max=1 clone-max=2 clone-node-max=1 notify=true<br>  Resource: Drbd2 (class=ocf provider=linbit type=drbd)<br>   Attributes: drbd_resource=drbd2<br>   Operations: start interval=0s timeout=240 (Drbd2-start-interval-0s)<br>               promote interval=0s timeout=90 (Drbd2-promote-interval-0s)<br>               demote interval=0s timeout=90 (Drbd2-demote-interval-0s)<br>               stop interval=0s timeout=100 (Drbd2-stop-interval-0s)<br>               monitor interval=60s (Drbd2-monitor-interval-60s)<br> Clone: Virtfs2-clone<br>  Resource: Virtfs2 (class=ocf provider=heartbeat type=Filesystem)<br>   Attributes: device=/dev/drbd2 directory=/virtfs2 fstype=gfs2<br>   Operations: start interval=0s timeout=60 (Virtfs2-start-interval-0s)<br>               stop interval=0s timeout=60 (Virtfs2-stop-interval-0s)<br>               monitor interval=20 timeout=40 (Virtfs2-monitor-interval-20)<br>Stonith Devices:<br> Resource: fencing-idrac1 (class=stonith type=fence_idrac)<br>  Attributes: pcmk_host_list=wirt1v ipaddr=172.31.0.223 lanplus=on login=root passwd=my1secret2password3<br>  Operations: monitor interval=60 (fencing-idrac1-monitor-interval-60)<br> Resource: fencing-idrac2 (class=stonith type=fence_idrac)<br>  Attributes: pcmk_host_list=wirt2v ipaddr=172.31.0.224 lanplus=on login=root passwd=my1secret2password3<br>  Operations: monitor interval=60 (fencing-idrac2-monitor-interval-60)<br>Fencing Levels:<br>Location Constraints:<br>Ordering Constraints:<br>  start dlm-clone then start Virtfs2-clone (kind:Mandatory) (id:order-dlm-clone-Virtfs2-clone-mandatory)<br>  promote Drbd2-clone then start Virtfs2-clone (kind:Mandatory) (id:order-Drbd2-clone-Virtfs2-clone-mandatory)<br>Colocation Constraints:<br>  Virtfs2-clone with Drbd2-clone (score:INFINITY) (with-rsc-role:Master) 
(id:colocation-Virtfs2-clone-Drbd2-clone-INFINITY)<br>  Virtfs2-clone with dlm-clone (score:INFINITY) (id:colocation-Virtfs2-clone-dlm-clone-INFINITY)<br>Resources Defaults:<br> resource-stickiness: 100<br>Operations Defaults:<br> No defaults set<br>Cluster Properties:<br> cluster-infrastructure: corosync<br> cluster-name: klasterek<br> dc-version: 1.1.13-10.el7_2.4-44eb2dd<br> have-watchdog: false<br> no-quorum-policy: ignore<br> stonith-enabled: true<br> symmetric-cluster: true<br><br>#---------------------------------<br>### result:  pcs status  ###<br><br>Cluster name: klasterek<br>Last updated: Sun Sep 11 16:02:15 2016          Last change: Sun Sep 11 15:01:05 2016 by root via crm_attribute on wirt2v<br>Stack: corosync<br>Current DC: wirt1v (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum<br>2 nodes and 8 resources configured<br>Online: [ wirt1v wirt2v ]<br>Full list of resources:<br> fencing-idrac1 (stonith:fence_idrac):  Started wirt1v<br> fencing-idrac2 (stonith:fence_idrac):  Started wirt1v<br> Clone Set: dlm-clone [dlm]<br>     Started: [ wirt1v wirt2v ]<br> Master/Slave Set: Drbd2-clone [Drbd2]<br>     Masters: [ wirt1v wirt2v ]<br> Clone Set: Virtfs2-clone [Virtfs2]<br>     Started: [ wirt1v wirt2v ]<br>PCSD Status:<br>  wirt1v: Online<br>  wirt2v: Online<br>Daemon Status:<br>  corosync: active/disabled<br>  pacemaker: active/disabled<br>  pcsd: active/enabled<br><br>#---------------------------------<br>### result:  pcs property  ###<br><br>Cluster Properties:<br> cluster-infrastructure: corosync<br> cluster-name: klasterek<br> dc-version: 1.1.13-10.el7_2.4-44eb2dd<br> have-watchdog: false<br> no-quorum-policy: ignore<br> stonith-enabled: true<br> symmetric-cluster: true<br><br>#---------------------------------<br>### result:  pcs cluster cib  ###<br><br>&lt;cib crm_feature_set=&quot;3.0.10&quot; validate-with=&quot;pacemaker-2.3&quot; epoch=&quot;105&quot; num_updates=&quot;11&quot; admin_epoch=&quot;0&quot; cib-last-written=&quot;Sun Sep 
11 15:01:05 2016&quot; update-origin=&quot;wirt2v&quot; update-client=&quot;crm_attribute&quot; update-user=&quot;root&quot; have-quorum=&quot;1&quot; dc-uuid=&quot;1&quot;&gt;<br>  &lt;configuration&gt;<br>    &lt;crm_config&gt;<br>      &lt;cluster_property_set id=&quot;cib-bootstrap-options&quot;&gt;<br>        &lt;nvpair id=&quot;cib-bootstrap-options-have-watchdog&quot; name=&quot;have-watchdog&quot; value=&quot;false&quot;/&gt;<br>        &lt;nvpair id=&quot;cib-bootstrap-options-dc-version&quot; name=&quot;dc-version&quot; value=&quot;1.1.13-10.el7_2.4-44eb2dd&quot;/&gt;<br>        &lt;nvpair id=&quot;cib-bootstrap-options-cluster-infrastructure&quot; name=&quot;cluster-infrastructure&quot; value=&quot;corosync&quot;/&gt;<br>        &lt;nvpair id=&quot;cib-bootstrap-options-cluster-name&quot; name=&quot;cluster-name&quot; value=&quot;klasterek&quot;/&gt;<br>        &lt;nvpair id=&quot;cib-bootstrap-options-stonith-enabled&quot; name=&quot;stonith-enabled&quot; value=&quot;true&quot;/&gt;<br>        &lt;nvpair id=&quot;cib-bootstrap-options-symmetric-cluster&quot; name=&quot;symmetric-cluster&quot; value=&quot;true&quot;/&gt;<br>        &lt;nvpair id=&quot;cib-bootstrap-options-no-quorum-policy&quot; name=&quot;no-quorum-policy&quot; value=&quot;ignore&quot;/&gt;<br>      &lt;/cluster_property_set&gt;<br>    &lt;/crm_config&gt;<br>    &lt;nodes&gt;<br>      &lt;node id=&quot;1&quot; uname=&quot;wirt1v&quot;&gt;<br>        &lt;instance_attributes id=&quot;nodes-1&quot;/&gt;<br>      &lt;/node&gt;<br>      &lt;node id=&quot;2&quot; uname=&quot;wirt2v&quot;&gt;<br>        &lt;instance_attributes id=&quot;nodes-2&quot;/&gt;<br>      &lt;/node&gt;<br>    &lt;/nodes&gt;<br>    &lt;resources&gt;<br>      &lt;primitive class=&quot;stonith&quot; id=&quot;fencing-idrac1&quot; type=&quot;fence_idrac&quot;&gt;<br>        &lt;instance_attributes id=&quot;fencing-idrac1-instance_attributes&quot;&gt;<br>          &lt;nvpair 
id=&quot;fencing-idrac1-instance_attributes-pcmk_host_list&quot; name=&quot;pcmk_host_list&quot; value=&quot;wirt1v&quot;/&gt;<br>          &lt;nvpair id=&quot;fencing-idrac1-instance_attributes-ipaddr&quot; name=&quot;ipaddr&quot; value=&quot;172.31.0.223&quot;/&gt;<br>          &lt;nvpair id=&quot;fencing-idrac1-instance_attributes-lanplus&quot; name=&quot;lanplus&quot; value=&quot;on&quot;/&gt;<br>          &lt;nvpair id=&quot;fencing-idrac1-instance_attributes-login&quot; name=&quot;login&quot; value=&quot;root&quot;/&gt;<br>          &lt;nvpair id=&quot;fencing-idrac1-instance_attributes-passwd&quot; name=&quot;passwd&quot; value=&quot;my1secret2password3&quot;/&gt;<br>        &lt;/instance_attributes&gt;<br>        &lt;operations&gt;<br>          &lt;op id=&quot;fencing-idrac1-monitor-interval-60&quot; interval=&quot;60&quot; name=&quot;monitor&quot;/&gt;<br>        &lt;/operations&gt;<br>        &lt;meta_attributes id=&quot;fencing-idrac1-meta_attributes&quot;/&gt;<br>      &lt;/primitive&gt;<br>      &lt;primitive class=&quot;stonith&quot; id=&quot;fencing-idrac2&quot; type=&quot;fence_idrac&quot;&gt;<br>        &lt;instance_attributes id=&quot;fencing-idrac2-instance_attributes&quot;&gt;<br>          &lt;nvpair id=&quot;fencing-idrac2-instance_attributes-pcmk_host_list&quot; name=&quot;pcmk_host_list&quot; value=&quot;wirt2v&quot;/&gt;<br>          &lt;nvpair id=&quot;fencing-idrac2-instance_attributes-ipaddr&quot; name=&quot;ipaddr&quot; value=&quot;172.31.0.224&quot;/&gt;<br>          &lt;nvpair id=&quot;fencing-idrac2-instance_attributes-lanplus&quot; name=&quot;lanplus&quot; value=&quot;on&quot;/&gt;<br>          &lt;nvpair id=&quot;fencing-idrac2-instance_attributes-login&quot; name=&quot;login&quot; value=&quot;root&quot;/&gt;<br>          &lt;nvpair id=&quot;fencing-idrac2-instance_attributes-passwd&quot; name=&quot;passwd&quot; value=&quot;my1secret2password3&quot;/&gt;<br>        &lt;/instance_attributes&gt;<br>        &lt;operations&gt;<br>       
   &lt;op id=&quot;fencing-idrac2-monitor-interval-60&quot; interval=&quot;60&quot; name=&quot;monitor&quot;/&gt;<br>        &lt;/operations&gt;<br>        &lt;meta_attributes id=&quot;fencing-idrac2-meta_attributes&quot;/&gt;<br>      &lt;/primitive&gt;<br>      &lt;clone id=&quot;dlm-clone&quot;&gt;<br>        &lt;primitive class=&quot;ocf&quot; id=&quot;dlm&quot; provider=&quot;pacemaker&quot; type=&quot;controld&quot;&gt;<br>          &lt;instance_attributes id=&quot;dlm-instance_attributes&quot;/&gt;<br>          &lt;operations&gt;<br>            &lt;op id=&quot;dlm-start-interval-0s&quot; interval=&quot;0s&quot; name=&quot;start&quot; timeout=&quot;90&quot;/&gt;<br>            &lt;op id=&quot;dlm-stop-interval-0s&quot; interval=&quot;0s&quot; name=&quot;stop&quot; timeout=&quot;100&quot;/&gt;<br>            &lt;op id=&quot;dlm-monitor-interval-60s&quot; interval=&quot;60s&quot; name=&quot;monitor&quot;/&gt;<br>          &lt;/operations&gt;<br>        &lt;/primitive&gt;<br>        &lt;meta_attributes id=&quot;dlm-clone-meta_attributes&quot;&gt;<br>          &lt;nvpair id=&quot;dlm-clone-max&quot; name=&quot;clone-max&quot; value=&quot;2&quot;/&gt;<br>          &lt;nvpair id=&quot;dlm-clone-node-max&quot; name=&quot;clone-node-max&quot; value=&quot;1&quot;/&gt;<br>        &lt;/meta_attributes&gt;<br>      &lt;/clone&gt;<br>      &lt;master id=&quot;Drbd2-clone&quot;&gt;<br>        &lt;primitive class=&quot;ocf&quot; id=&quot;Drbd2&quot; provider=&quot;linbit&quot; type=&quot;drbd&quot;&gt;<br>          &lt;instance_attributes id=&quot;Drbd2-instance_attributes&quot;&gt;<br>            &lt;nvpair id=&quot;Drbd2-instance_attributes-drbd_resource&quot; name=&quot;drbd_resource&quot; value=&quot;drbd2&quot;/&gt;<br>          &lt;/instance_attributes&gt;<br>          &lt;operations&gt;<br>            &lt;op id=&quot;Drbd2-start-interval-0s&quot; interval=&quot;0s&quot; name=&quot;start&quot; timeout=&quot;240&quot;/&gt;<br>            &lt;op id=&quot;Drbd2-promote-interval-0s&quot; interval=&quot;0s&quot; 
name=&quot;promote&quot; timeout=&quot;90&quot;/&gt;<br>            &lt;op id=&quot;Drbd2-demote-interval-0s&quot; interval=&quot;0s&quot; name=&quot;demote&quot; timeout=&quot;90&quot;/&gt;<br>            &lt;op id=&quot;Drbd2-stop-interval-0s&quot; interval=&quot;0s&quot; name=&quot;stop&quot; timeout=&quot;100&quot;/&gt;<br>            &lt;op id=&quot;Drbd2-monitor-interval-60s&quot; interval=&quot;60s&quot; name=&quot;monitor&quot;/&gt;<br>          &lt;/operations&gt;<br>        &lt;/primitive&gt;<br>        &lt;meta_attributes id=&quot;Drbd2-clone-meta_attributes&quot;&gt;<br>          &lt;nvpair id=&quot;Drbd2-clone-meta_attributes-master-max&quot; name=&quot;master-max&quot; value=&quot;2&quot;/&gt;<br>          &lt;nvpair id=&quot;Drbd2-clone-meta_attributes-master-node-max&quot; name=&quot;master-node-max&quot; value=&quot;1&quot;/&gt;<br>          &lt;nvpair id=&quot;Drbd2-clone-meta_attributes-clone-max&quot; name=&quot;clone-max&quot; value=&quot;2&quot;/&gt;<br>          &lt;nvpair id=&quot;Drbd2-clone-meta_attributes-clone-node-max&quot; name=&quot;clone-node-max&quot; value=&quot;1&quot;/&gt;<br>          &lt;nvpair id=&quot;Drbd2-clone-meta_attributes-notify&quot; name=&quot;notify&quot; value=&quot;true&quot;/&gt;<br>        &lt;/meta_attributes&gt;<br>      &lt;/master&gt;<br>      &lt;clone id=&quot;Virtfs2-clone&quot;&gt;<br>        &lt;primitive class=&quot;ocf&quot; id=&quot;Virtfs2&quot; provider=&quot;heartbeat&quot; type=&quot;Filesystem&quot;&gt;<br>          &lt;instance_attributes id=&quot;Virtfs2-instance_attributes&quot;&gt;<br>            &lt;nvpair id=&quot;Virtfs2-instance_attributes-device&quot; name=&quot;device&quot; value=&quot;/dev/drbd2&quot;/&gt;<br>            &lt;nvpair id=&quot;Virtfs2-instance_attributes-directory&quot; name=&quot;directory&quot; value=&quot;/virtfs2&quot;/&gt;<br>            &lt;nvpair id=&quot;Virtfs2-instance_attributes-fstype&quot; name=&quot;fstype&quot; value=&quot;gfs2&quot;/&gt;<br>          
&lt;/instance_attributes&gt;<br>          &lt;operations&gt;<br>            &lt;op id=&quot;Virtfs2-start-interval-0s&quot; interval=&quot;0s&quot; name=&quot;start&quot; timeout=&quot;60&quot;/&gt;<br>            &lt;op id=&quot;Virtfs2-stop-interval-0s&quot; interval=&quot;0s&quot; name=&quot;stop&quot; timeout=&quot;60&quot;/&gt;<br>            &lt;op id=&quot;Virtfs2-monitor-interval-20&quot; interval=&quot;20&quot; name=&quot;monitor&quot; timeout=&quot;40&quot;/&gt;<br>          &lt;/operations&gt;<br>        &lt;/primitive&gt;<br>        &lt;meta_attributes id=&quot;Virtfs2-clone-meta_attributes&quot;/&gt;<br>      &lt;/clone&gt;<br>    &lt;/resources&gt;<br>    &lt;constraints&gt;<br>      &lt;rsc_colocation id=&quot;colocation-Virtfs2-clone-Drbd2-clone-INFINITY&quot; rsc=&quot;Virtfs2-clone&quot; score=&quot;INFINITY&quot; with-rsc=&quot;Drbd2-clone&quot; with-rsc-role=&quot;Master&quot;/&gt;<br>      &lt;rsc_colocation id=&quot;colocation-Virtfs2-clone-dlm-clone-INFINITY&quot; rsc=&quot;Virtfs2-clone&quot; score=&quot;INFINITY&quot; with-rsc=&quot;dlm-clone&quot;/&gt;<br>      &lt;rsc_order first=&quot;dlm-clone&quot; first-action=&quot;start&quot; id=&quot;order-dlm-clone-Virtfs2-clone-mandatory&quot; then=&quot;Virtfs2-clone&quot; then-action=&quot;start&quot;/&gt;<br>      &lt;rsc_order first=&quot;Drbd2-clone&quot; first-action=&quot;promote&quot; id=&quot;order-Drbd2-clone-Virtfs2-clone-mandatory&quot; then=&quot;Virtfs2-clone&quot; then-action=&quot;start&quot;/&gt;<br>    &lt;/constraints&gt;<br>    &lt;rsc_defaults&gt;<br>      &lt;meta_attributes id=&quot;rsc_defaults-options&quot;&gt;<br>        &lt;nvpair id=&quot;rsc_defaults-options-resource-stickiness&quot; name=&quot;resource-stickiness&quot; value=&quot;100&quot;/&gt;<br>      &lt;/meta_attributes&gt;<br>    &lt;/rsc_defaults&gt;<br>  &lt;/configuration&gt;<br>  &lt;status&gt;<br>    &lt;node_state id=&quot;1&quot; uname=&quot;wirt1v&quot; in_ccm=&quot;true&quot; crmd=&quot;online&quot; 
crm-debug-origin=&quot;do_update_resource&quot; join=&quot;member&quot; expected=&quot;member&quot;&gt;<br>      &lt;transient_attributes id=&quot;1&quot;&gt;<br>        &lt;instance_attributes id=&quot;status-1&quot;&gt;<br>          &lt;nvpair id=&quot;status-1-shutdown&quot; name=&quot;shutdown&quot; value=&quot;0&quot;/&gt;<br>          &lt;nvpair id=&quot;status-1-probe_complete&quot; name=&quot;probe_complete&quot; value=&quot;true&quot;/&gt;<br>          &lt;nvpair id=&quot;status-1-master-Drbd2&quot; name=&quot;master-Drbd2&quot; value=&quot;10000&quot;/&gt;<br>        &lt;/instance_attributes&gt;<br>      &lt;/transient_attributes&gt;<br>      &lt;lrm id=&quot;1&quot;&gt;<br>        &lt;lrm_resources&gt;<br>          &lt;lrm_resource id=&quot;Virtfs2&quot; type=&quot;Filesystem&quot; class=&quot;ocf&quot; provider=&quot;heartbeat&quot;&gt;<br>            &lt;lrm_rsc_op id=&quot;Virtfs2_last_0&quot; operation_key=&quot;Virtfs2_start_0&quot; operation=&quot;start&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;55:85:0:bfa57406-efca-4ca9-bdb5-01a121d172d8&quot; transition-magic=&quot;0:0;55:85:0:bfa57406-efca-4ca9-bdb5-01a121d172d8&quot; on_node=&quot;wirt1v&quot; call-id=&quot;96&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;0&quot; last-run=&quot;1473598870&quot; last-rc-change=&quot;1473598870&quot; exec-time=&quot;639&quot; queue-time=&quot;1&quot; op-digest=&quot;8dbd904c2115508ebcf3dffe8e7c6d82&quot;/&gt;<br>            &lt;lrm_rsc_op id=&quot;Virtfs2_monitor_20000&quot; operation_key=&quot;Virtfs2_monitor_20000&quot; operation=&quot;monitor&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;56:85:0:bfa57406-efca-4ca9-bdb5-01a121d172d8&quot; transition-magic=&quot;0:0;56:85:0:bfa57406-efca-4ca9-bdb5-01a121d172d8&quot; on_node=&quot;wirt1v&quot; call-id=&quot;97&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; 
interval=&quot;20000&quot; last-rc-change=&quot;1473598871&quot; exec-time=&quot;41&quot; queue-time=&quot;0&quot; op-digest=&quot;051271837d1a8eccc0af38fbd8c406e4&quot;/&gt;<br>          &lt;/lrm_resource&gt;<br>          &lt;lrm_resource id=&quot;fencing-idrac1&quot; type=&quot;fence_idrac&quot; class=&quot;stonith&quot;&gt;<br>            &lt;lrm_rsc_op id=&quot;fencing-idrac1_last_0&quot; operation_key=&quot;fencing-idrac1_start_0&quot; operation=&quot;start&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;9:58:0:bfa57406-efca-4ca9-bdb5-01a121d172d8&quot; transition-magic=&quot;0:0;9:58:0:bfa57406-efca-4ca9-bdb5-01a121d172d8&quot; on_node=&quot;wirt1v&quot; call-id=&quot;58&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;0&quot; last-run=&quot;1473588685&quot; last-rc-change=&quot;1473588685&quot; exec-time=&quot;1045&quot; queue-time=&quot;0&quot; op-digest=&quot;23a748cdf02f6f0fd03ac9823fc9bd52&quot; op-secure-params=&quot; passwd &quot; op-secure-digest=&quot;2a5376722a6d891302b4e811e4de5c5a&quot;/&gt;<br>            &lt;lrm_rsc_op id=&quot;fencing-idrac1_monitor_60000&quot; operation_key=&quot;fencing-idrac1_monitor_60000&quot; operation=&quot;monitor&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;11:59:0:bfa57406-efca-4ca9-bdb5-01a121d172d8&quot; transition-magic=&quot;0:0;11:59:0:bfa57406-efca-4ca9-bdb5-01a121d172d8&quot; on_node=&quot;wirt1v&quot; call-id=&quot;59&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;60000&quot; last-rc-change=&quot;1473588687&quot; exec-time=&quot;84&quot; queue-time=&quot;0&quot; op-digest=&quot;592f6bfb8f36e6645a6221de49f6f3b3&quot; op-secure-params=&quot; passwd &quot; op-secure-digest=&quot;2a5376722a6d891302b4e811e4de5c5a&quot;/&gt;<br>          &lt;/lrm_resource&gt;<br>          &lt;lrm_resource id=&quot;fencing-idrac2&quot; type=&quot;fence_idrac&quot; 
class=&quot;stonith&quot;&gt;<br>            &lt;lrm_rsc_op id=&quot;fencing-idrac2_last_0&quot; operation_key=&quot;fencing-idrac2_start_0&quot; operation=&quot;start&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;14:62:0:bfa57406-efca-4ca9-bdb5-01a121d172d8&quot; transition-magic=&quot;0:0;14:62:0:bfa57406-efca-4ca9-bdb5-01a121d172d8&quot; on_node=&quot;wirt1v&quot; call-id=&quot;61&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;0&quot; last-run=&quot;1473590528&quot; last-rc-change=&quot;1473590528&quot; exec-time=&quot;80&quot; queue-time=&quot;0&quot; op-digest=&quot;268b7ef79bdf7a09609aa321d3d18a61&quot; op-secure-params=&quot; passwd &quot; op-secure-digest=&quot;f22e287dc4906f866a82eac0ab75d217&quot;/&gt;<br>            &lt;lrm_rsc_op id=&quot;fencing-idrac2_monitor_60000&quot; operation_key=&quot;fencing-idrac2_monitor_60000&quot; operation=&quot;monitor&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;15:62:0:bfa57406-efca-4ca9-bdb5-01a121d172d8&quot; transition-magic=&quot;0:0;15:62:0:bfa57406-efca-4ca9-bdb5-01a121d172d8&quot; on_node=&quot;wirt1v&quot; call-id=&quot;62&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;60000&quot; last-rc-change=&quot;1473590529&quot; exec-time=&quot;75&quot; queue-time=&quot;1&quot; op-digest=&quot;40430ed0cd93e10fcba03a5e867b2af3&quot; op-secure-params=&quot; passwd &quot; op-secure-digest=&quot;f22e287dc4906f866a82eac0ab75d217&quot;/&gt;<br>          &lt;/lrm_resource&gt;<br>          &lt;lrm_resource id=&quot;dlm&quot; type=&quot;controld&quot; class=&quot;ocf&quot; provider=&quot;pacemaker&quot;&gt;<br>            &lt;lrm_rsc_op id=&quot;dlm_last_0&quot; operation_key=&quot;dlm_start_0&quot; operation=&quot;start&quot; crm-debug-origin=&quot;build_active_RAs&quot; crm_feature_set=&quot;3.0.10&quot; 
transition-key=&quot;14:15:0:29cf445c-f17d-4274-89e9-a869e4783c46&quot; transition-magic=&quot;0:0;14:15:0:29cf445c-f17d-4274-89e9-a869e4783c46&quot; on_node=&quot;wirt1v&quot; call-id=&quot;25&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;0&quot; last-run=&quot;1473544487&quot; last-rc-change=&quot;1473544487&quot; exec-time=&quot;1082&quot; queue-time=&quot;0&quot; op-digest=&quot;f2317cad3d54cec5d7d7aa7d0bf35cf8&quot;/&gt;<br>            &lt;lrm_rsc_op id=&quot;dlm_monitor_60000&quot; operation_key=&quot;dlm_monitor_60000&quot; operation=&quot;monitor&quot; crm-debug-origin=&quot;build_active_RAs&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;8:16:0:29cf445c-f17d-4274-89e9-a869e4783c46&quot; transition-magic=&quot;0:0;8:16:0:29cf445c-f17d-4274-89e9-a869e4783c46&quot; on_node=&quot;wirt1v&quot; call-id=&quot;28&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;60000&quot; last-rc-change=&quot;1473544489&quot; exec-time=&quot;38&quot; queue-time=&quot;0&quot; op-digest=&quot;4811cef7f7f94e3a35a70be7916cb2fd&quot;/&gt;<br>          &lt;/lrm_resource&gt;<br>          &lt;lrm_resource id=&quot;Drbd2&quot; type=&quot;drbd&quot; class=&quot;ocf&quot; provider=&quot;linbit&quot;&gt;<br>            &lt;lrm_rsc_op id=&quot;Drbd2_last_0&quot; operation_key=&quot;Drbd2_promote_0&quot; operation=&quot;promote&quot; crm-debug-origin=&quot;build_active_RAs&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;17:16:0:29cf445c-f17d-4274-89e9-a869e4783c46&quot; transition-magic=&quot;0:0;17:16:0:29cf445c-f17d-4274-89e9-a869e4783c46&quot; on_node=&quot;wirt1v&quot; call-id=&quot;30&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;0&quot; last-run=&quot;1473544489&quot; last-rc-change=&quot;1473544489&quot; exec-time=&quot;58&quot; queue-time=&quot;0&quot; op-digest=&quot;d0c8a735862843030d8426a5218ceb92&quot;/&gt;<br>          &lt;/lrm_resource&gt;<br>        &lt;/lrm_resources&gt;<br>      
&lt;/lrm&gt;<br>    &lt;/node_state&gt;<br>    &lt;node_state id=&quot;2&quot; uname=&quot;wirt2v&quot; in_ccm=&quot;true&quot; crmd=&quot;online&quot; crm-debug-origin=&quot;do_update_resource&quot; join=&quot;member&quot; expected=&quot;member&quot;&gt;<br>      &lt;transient_attributes id=&quot;2&quot;&gt;<br>        &lt;instance_attributes id=&quot;status-2&quot;&gt;<br>          &lt;nvpair id=&quot;status-2-shutdown&quot; name=&quot;shutdown&quot; value=&quot;0&quot;/&gt;<br>          &lt;nvpair id=&quot;status-2-probe_complete&quot; name=&quot;probe_complete&quot; value=&quot;true&quot;/&gt;<br>          &lt;nvpair id=&quot;status-2-master-Drbd2&quot; name=&quot;master-Drbd2&quot; value=&quot;10000&quot;/&gt;<br>        &lt;/instance_attributes&gt;<br>      &lt;/transient_attributes&gt;<br>      &lt;lrm id=&quot;2&quot;&gt;<br>        &lt;lrm_resources&gt;<br>          &lt;lrm_resource id=&quot;fencing-idrac1&quot; type=&quot;fence_idrac&quot; class=&quot;stonith&quot;&gt;<br>            &lt;lrm_rsc_op id=&quot;fencing-idrac1_last_0&quot; operation_key=&quot;fencing-idrac1_monitor_0&quot; operation=&quot;monitor&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;7:1:7:bfa57406-efca-4ca9-bdb5-01a121d172d8&quot; transition-magic=&quot;0:7;7:1:7:bfa57406-efca-4ca9-bdb5-01a121d172d8&quot; on_node=&quot;wirt2v&quot; call-id=&quot;5&quot; rc-code=&quot;7&quot; op-status=&quot;0&quot; interval=&quot;0&quot; last-run=&quot;1473544754&quot; last-rc-change=&quot;1473544754&quot; exec-time=&quot;1&quot; queue-time=&quot;0&quot; op-digest=&quot;23a748cdf02f6f0fd03ac9823fc9bd52&quot; op-secure-params=&quot; passwd &quot; op-secure-digest=&quot;2a5376722a6d891302b4e811e4de5c5a&quot;/&gt;<br>          &lt;/lrm_resource&gt;<br>          &lt;lrm_resource id=&quot;fencing-idrac2&quot; type=&quot;fence_idrac&quot; class=&quot;stonith&quot;&gt;<br>            &lt;lrm_rsc_op id=&quot;fencing-idrac2_last_0&quot; 
operation_key=&quot;fencing-idrac2_stop_0&quot; operation=&quot;stop&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;13:62:0:bfa57406-efca-4ca9-bdb5-01a121d172d8&quot; transition-magic=&quot;0:0;13:62:0:bfa57406-efca-4ca9-bdb5-01a121d172d8&quot; on_node=&quot;wirt2v&quot; call-id=&quot;55&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;0&quot; last-run=&quot;1473590528&quot; last-rc-change=&quot;1473590528&quot; exec-time=&quot;0&quot; queue-time=&quot;0&quot; op-digest=&quot;268b7ef79bdf7a09609aa321d3d18a61&quot; op-secure-params=&quot; passwd &quot; op-secure-digest=&quot;f22e287dc4906f866a82eac0ab75d217&quot;/&gt;<br>            &lt;lrm_rsc_op id=&quot;fencing-idrac2_monitor_60000&quot; operation_key=&quot;fencing-idrac2_monitor_60000&quot; operation=&quot;monitor&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;13:59:0:bfa57406-efca-4ca9-bdb5-01a121d172d8&quot; transition-magic=&quot;0:0;13:59:0:bfa57406-efca-4ca9-bdb5-01a121d172d8&quot; on_node=&quot;wirt2v&quot; call-id=&quot;53&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;60000&quot; last-rc-change=&quot;1473588689&quot; exec-time=&quot;63&quot; queue-time=&quot;0&quot; op-digest=&quot;40430ed0cd93e10fcba03a5e867b2af3&quot; op-secure-params=&quot; passwd &quot; op-secure-digest=&quot;f22e287dc4906f866a82eac0ab75d217&quot;/&gt;<br>          &lt;/lrm_resource&gt;<br>          &lt;lrm_resource id=&quot;dlm&quot; type=&quot;controld&quot; class=&quot;ocf&quot; provider=&quot;pacemaker&quot;&gt;<br>            &lt;lrm_rsc_op id=&quot;dlm_last_0&quot; operation_key=&quot;dlm_start_0&quot; operation=&quot;start&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;15:83:0:bfa57406-efca-4ca9-bdb5-01a121d172d8&quot; transition-magic=&quot;0:0;15:83:0:bfa57406-efca-4ca9-bdb5-01a121d172d8&quot; 
on_node=&quot;wirt2v&quot; call-id=&quot;101&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;0&quot; last-run=&quot;1473598865&quot; last-rc-change=&quot;1473598865&quot; exec-time=&quot;1116&quot; queue-time=&quot;0&quot; op-digest=&quot;f2317cad3d54cec5d7d7aa7d0bf35cf8&quot;/&gt;<br>            &lt;lrm_rsc_op id=&quot;dlm_monitor_60000&quot; operation_key=&quot;dlm_monitor_60000&quot; operation=&quot;monitor&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;16:84:0:bfa57406-efca-4ca9-bdb5-01a121d172d8&quot; transition-magic=&quot;0:0;16:84:0:bfa57406-efca-4ca9-bdb5-01a121d172d8&quot; on_node=&quot;wirt2v&quot; call-id=&quot;104&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;60000&quot; last-rc-change=&quot;1473598870&quot; exec-time=&quot;47&quot; queue-time=&quot;0&quot; op-digest=&quot;4811cef7f7f94e3a35a70be7916cb2fd&quot;/&gt;<br>          &lt;/lrm_resource&gt;<br>          &lt;lrm_resource id=&quot;Drbd2&quot; type=&quot;drbd&quot; class=&quot;ocf&quot; provider=&quot;linbit&quot;&gt;<br>            &lt;lrm_rsc_op id=&quot;Drbd2_last_0&quot; operation_key=&quot;Drbd2_promote_0&quot; operation=&quot;promote&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;27:84:0:bfa57406-efca-4ca9-bdb5-01a121d172d8&quot; transition-magic=&quot;0:0;27:84:0:bfa57406-efca-4ca9-bdb5-01a121d172d8&quot; on_node=&quot;wirt2v&quot; call-id=&quot;106&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;0&quot; last-run=&quot;1473598870&quot; last-rc-change=&quot;1473598870&quot; exec-time=&quot;69&quot; queue-time=&quot;0&quot; op-digest=&quot;d0c8a735862843030d8426a5218ceb92&quot;/&gt;<br>          &lt;/lrm_resource&gt;<br>          &lt;lrm_resource id=&quot;Virtfs2&quot; type=&quot;Filesystem&quot; class=&quot;ocf&quot; provider=&quot;heartbeat&quot;&gt;<br>            &lt;lrm_rsc_op 
id=&quot;Virtfs2_last_0&quot; operation_key=&quot;Virtfs2_start_0&quot; operation=&quot;start&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;53:85:0:bfa57406-efca-4ca9-bdb5-01a121d172d8&quot; transition-magic=&quot;0:0;53:85:0:bfa57406-efca-4ca9-bdb5-01a121d172d8&quot; on_node=&quot;wirt2v&quot; call-id=&quot;108&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;0&quot; last-run=&quot;1473598870&quot; last-rc-change=&quot;1473598870&quot; exec-time=&quot;859&quot; queue-time=&quot;0&quot; op-digest=&quot;8dbd904c2115508ebcf3dffe8e7c6d82&quot;/&gt;<br>            &lt;lrm_rsc_op id=&quot;Virtfs2_monitor_20000&quot; operation_key=&quot;Virtfs2_monitor_20000&quot; operation=&quot;monitor&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.10&quot; transition-key=&quot;54:85:0:bfa57406-efca-4ca9-bdb5-01a121d172d8&quot; transition-magic=&quot;0:0;54:85:0:bfa57406-efca-4ca9-bdb5-01a121d172d8&quot; on_node=&quot;wirt2v&quot; call-id=&quot;109&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;20000&quot; last-rc-change=&quot;1473598871&quot; exec-time=&quot;50&quot; queue-time=&quot;0&quot; op-digest=&quot;051271837d1a8eccc0af38fbd8c406e4&quot;/&gt;<br>          &lt;/lrm_resource&gt;<br>        &lt;/lrm_resources&gt;<br>      &lt;/lrm&gt;<br>    &lt;/node_state&gt;<br>  &lt;/status&gt;<br>&lt;/cib&gt;<br># =====================================================================<br># wirt1v-during_pcs-cluster-standby-wirt2v.log<br>#<br>Sep 11 14:58:06 wirt1v crmd[31951]:  notice: State transition S_IDLE -&gt; S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]<br>Sep 11 14:58:06 wirt1v pengine[31950]:  notice: On loss of CCM Quorum: Ignore<br>Sep 11 14:58:06 wirt1v pengine[31950]:  notice: Stop    dlm:1#011(wirt2v)<br>Sep 11 14:58:06 wirt1v pengine[31950]:  notice: Demote  Drbd2:1#011(Master -&gt; Stopped wirt2v)<br>Sep 
11 14:58:06 wirt1v pengine[31950]:  notice: Stop    Virtfs2:1#011(wirt2v)<br>Sep 11 14:58:06 wirt1v pengine[31950]:  notice: Calculated Transition 81: /var/lib/pacemaker/pengine/pe-input-2852.bz2<br>Sep 11 14:58:06 wirt1v crmd[31951]:  notice: Initiating action 71: notify Drbd2_pre_notify_demote_0 on wirt1v (local)<br>Sep 11 14:58:06 wirt1v crmd[31951]:  notice: Initiating action 73: notify Drbd2_pre_notify_demote_0 on wirt2v<br>Sep 11 14:58:06 wirt1v crmd[31951]:  notice: Initiating action 54: stop Virtfs2_stop_0 on wirt2v<br>Sep 11 14:58:06 wirt1v crmd[31951]:  notice: Operation Drbd2_notify_0: ok (node=wirt1v, call=86, rc=0, cib-update=0, confirmed=true)<br>Sep 11 14:58:06 wirt1v kernel: GFS2: fsid=klasterek:drbd2.0: recover generation 2 done<br>Sep 11 14:58:06 wirt1v crmd[31951]:  notice: Initiating action 17: stop dlm_stop_0 on wirt2v<br>Sep 11 14:58:06 wirt1v crmd[31951]:  notice: Initiating action 26: demote Drbd2_demote_0 on wirt2v<br>Sep 11 14:58:06 wirt1v kernel: block drbd2: peer( Primary -&gt; Secondary )<br>Sep 11 14:58:06 wirt1v crmd[31951]:  notice: Initiating action 72: notify Drbd2_post_notify_demote_0 on wirt1v (local)<br>Sep 11 14:58:06 wirt1v crmd[31951]:  notice: Initiating action 74: notify Drbd2_post_notify_demote_0 on wirt2v<br>Sep 11 14:58:06 wirt1v crmd[31951]:  notice: Operation Drbd2_notify_0: ok (node=wirt1v, call=87, rc=0, cib-update=0, confirmed=true)<br>Sep 11 14:58:06 wirt1v crmd[31951]:  notice: Initiating action 66: notify Drbd2_pre_notify_stop_0 on wirt1v (local)<br>Sep 11 14:58:06 wirt1v crmd[31951]:  notice: Initiating action 68: notify Drbd2_pre_notify_stop_0 on wirt2v<br>Sep 11 14:58:06 wirt1v crmd[31951]:  notice: Operation Drbd2_notify_0: ok (node=wirt1v, call=88, rc=0, cib-update=0, confirmed=true)<br>Sep 11 14:58:06 wirt1v crmd[31951]:  notice: Initiating action 27: stop Drbd2_stop_0 on wirt2v<br>Sep 11 14:58:06 wirt1v kernel: drbd drbd2: peer( Secondary -&gt; Unknown ) conn( Connected -&gt; TearDown ) pdsk( UpToDate 
-&gt; DUnknown )<br>Sep 11 14:58:06 wirt1v kernel: drbd drbd2: ack_receiver terminated<br>Sep 11 14:58:06 wirt1v kernel: drbd drbd2: Terminating drbd_a_drbd2<br>Sep 11 14:58:06 wirt1v kernel: block drbd2: new current UUID 12027A6DEC39CCB7:9A297D737BE3FBC7:214339307E5385FF:214239307E5385FF<br>Sep 11 14:58:06 wirt1v kernel: drbd drbd2: Connection closed<br>Sep 11 14:58:06 wirt1v kernel: drbd drbd2: conn( TearDown -&gt; Unconnected )<br>Sep 11 14:58:06 wirt1v kernel: drbd drbd2: receiver terminated<br>Sep 11 14:58:06 wirt1v kernel: drbd drbd2: Restarting receiver thread<br>Sep 11 14:58:06 wirt1v kernel: drbd drbd2: receiver (re)started<br>Sep 11 14:58:06 wirt1v kernel: drbd drbd2: conn( Unconnected -&gt; WFConnection )<br>Sep 11 14:58:06 wirt1v crmd[31951]: warning: No match for shutdown action on 2<br>Sep 11 14:58:06 wirt1v crmd[31951]:  notice: Transition aborted by deletion of nvpair[@id=&#39;status-2-master-Drbd2&#39;]: Transient attribute change (cib=0.104.3, source=abort_unless_down:333, path=/cib/status/node_state[@id=&#39;2&#39;]/transient_attributes[@id=&#39;2&#39;]/instance_attributes[@id=&#39;status-2&#39;]/nvpair[@id=&#39;status-2-master-Drbd2&#39;], 0)<br>Sep 11 14:58:06 wirt1v crmd[31951]:  notice: Initiating action 67: notify Drbd2_post_notify_stop_0 on wirt1v (local)<br>Sep 11 14:58:06 wirt1v crmd[31951]:  notice: Operation Drbd2_notify_0: ok (node=wirt1v, call=89, rc=0, cib-update=0, confirmed=true)<br>Sep 11 14:58:08 wirt1v crmd[31951]:  notice: Transition 81 (Complete=28, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-2852.bz2): Complete<br>Sep 11 14:58:08 wirt1v pengine[31950]:  notice: On loss of CCM Quorum: Ignore<br>Sep 11 14:58:08 wirt1v pengine[31950]:  notice: Calculated Transition 82: /var/lib/pacemaker/pengine/pe-input-2853.bz2<br>Sep 11 14:58:08 wirt1v crmd[31951]:  notice: Transition 82 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-2853.bz2): 
Complete<br>Sep 11 14:58:08 wirt1v crmd[31951]:  notice: State transition S_TRANSITION_ENGINE -&gt; S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]<br><br># =====================================================================<br># wirt1v-during_pcs-cluster-unstandby-wirt2v.log<br>#<br>Sep 11 15:01:01 wirt1v systemd: Started Session 59 of user root.<br>Sep 11 15:01:01 wirt1v systemd: Starting Session 59 of user root.<br>Sep 11 15:01:05 wirt1v crmd[31951]:  notice: State transition S_IDLE -&gt; S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]<br>Sep 11 15:01:05 wirt1v pengine[31950]:  notice: On loss of CCM Quorum: Ignore<br>Sep 11 15:01:05 wirt1v pengine[31950]:  notice: Start   dlm:1#011(wirt2v)<br>Sep 11 15:01:05 wirt1v pengine[31950]:  notice: Start   Drbd2:1#011(wirt2v)<br>Sep 11 15:01:05 wirt1v pengine[31950]:  notice: Restart Virtfs2:0#011(Started wirt1v)<br>Sep 11 15:01:05 wirt1v pengine[31950]:  notice: Calculated Transition 83: /var/lib/pacemaker/pengine/pe-input-2854.bz2<br>Sep 11 15:01:05 wirt1v crmd[31951]:  notice: Initiating action 15: start dlm_start_0 on wirt2v<br>Sep 11 15:01:05 wirt1v crmd[31951]:  notice: Initiating action 61: notify Drbd2_pre_notify_start_0 on wirt1v (local)<br>Sep 11 15:01:05 wirt1v crmd[31951]:  notice: Initiating action 51: stop Virtfs2_stop_0 on wirt1v (local)<br>Sep 11 15:01:06 wirt1v Filesystem(Virtfs2)[42311]: INFO: Running stop for /dev/drbd2 on /virtfs2<br>Sep 11 15:01:06 wirt1v crmd[31951]:  notice: Operation Drbd2_notify_0: ok (node=wirt1v, call=90, rc=0, cib-update=0, confirmed=true)<br>Sep 11 15:01:06 wirt1v crmd[31951]:  notice: Initiating action 25: start Drbd2_start_0 on wirt2v<br>Sep 11 15:01:06 wirt1v Filesystem(Virtfs2)[42311]: INFO: Trying to unmount /virtfs2<br>Sep 11 15:01:06 wirt1v Filesystem(Virtfs2)[42311]: ERROR: Couldn&#39;t unmount /virtfs2; trying cleanup with TERM<br>Sep 11 15:01:06 wirt1v crmd[31951]:  notice: Transition aborted by 
status-2-master-Drbd2, master-Drbd2=1000: Transient attribute change (create cib=0.105.1, source=abort_unless_down:319, path=/cib/status/node_state[@id=&#39;2&#39;]/transient_attributes[@id=&#39;2&#39;]/instance_attributes[@id=&#39;status-2&#39;], 0)<br>Sep 11 15:01:06 wirt1v Filesystem(Virtfs2)[42311]: INFO: sending signal TERM to: root     39937 39934  0 14:49 pts/0    Ss+    0:00 -bash<br>Sep 11 15:01:06 wirt1v crmd[31951]:  notice: Initiating action 62: notify Drbd2_post_notify_start_0 on wirt1v (local)<br>Sep 11 15:01:06 wirt1v crmd[31951]:  notice: Initiating action 63: notify Drbd2_post_notify_start_0 on wirt2v<br>Sep 11 15:01:06 wirt1v crmd[31951]:  notice: Operation Drbd2_notify_0: ok (node=wirt1v, call=93, rc=0, cib-update=0, confirmed=true)<br>Sep 11 15:01:06 wirt1v kernel: drbd drbd2: Handshake successful: Agreed network protocol version 101<br>Sep 11 15:01:06 wirt1v kernel: drbd drbd2: Feature flags enabled on protocol level: 0x7 TRIM THIN_RESYNC WRITE_SAME.<br>Sep 11 15:01:06 wirt1v kernel: drbd drbd2: conn( WFConnection -&gt; WFReportParams )<br>Sep 11 15:01:06 wirt1v kernel: drbd drbd2: Starting ack_recv thread (from drbd_r_drbd2 [32139])<br>Sep 11 15:01:06 wirt1v kernel: block drbd2: drbd_sync_handshake:<br>Sep 11 15:01:06 wirt1v kernel: block drbd2: self 12027A6DEC39CCB7:9A297D737BE3FBC7:214339307E5385FF:214239307E5385FF bits:0 flags:0<br>Sep 11 15:01:06 wirt1v kernel: block drbd2: peer 9A297D737BE3FBC6:0000000000000000:214339307E5385FE:214239307E5385FF bits:0 flags:0<br>Sep 11 15:01:06 wirt1v kernel: block drbd2: uuid_compare()=1 by rule 70<br>Sep 11 15:01:06 wirt1v kernel: block drbd2: peer( Unknown -&gt; Secondary ) conn( WFReportParams -&gt; WFBitMapS ) pdsk( DUnknown -&gt; Consistent )<br>Sep 11 15:01:06 wirt1v kernel: block drbd2: send bitmap stats [Bytes(packets)]: plain 0(0), RLE 23(1), total 23; compression: 100.0%<br>Sep 11 15:01:06 wirt1v kernel: block drbd2: receive bitmap stats [Bytes(packets)]: plain 0(0), RLE 23(1), total 23; 
compression: 100.0%<br>Sep 11 15:01:06 wirt1v kernel: block drbd2: helper command: /sbin/drbdadm before-resync-source minor-2<br>Sep 11 15:01:06 wirt1v kernel: block drbd2: helper command: /sbin/drbdadm before-resync-source minor-2 exit code 0 (0x0)<br>Sep 11 15:01:06 wirt1v kernel: block drbd2: conn( WFBitMapS -&gt; SyncSource ) pdsk( Consistent -&gt; Inconsistent )<br>Sep 11 15:01:06 wirt1v kernel: block drbd2: Began resync as SyncSource (will sync 0 KB [0 bits set]).<br>Sep 11 15:01:06 wirt1v kernel: block drbd2: updated sync UUID 12027A6DEC39CCB7:9A2A7D737BE3FBC7:9A297D737BE3FBC7:214339307E5385FF<br>Sep 11 15:01:06 wirt1v kernel: block drbd2: Resync done (total 1 sec; paused 0 sec; 0 K/sec)<br>Sep 11 15:01:06 wirt1v kernel: block drbd2: updated UUIDs 12027A6DEC39CCB7:0000000000000000:9A2A7D737BE3FBC7:9A297D737BE3FBC7<br>Sep 11 15:01:06 wirt1v kernel: block drbd2: conn( SyncSource -&gt; Connected ) pdsk( Inconsistent -&gt; UpToDate )<br>Sep 11 15:01:07 wirt1v Filesystem(Virtfs2)[42311]: ERROR: Couldn&#39;t unmount /virtfs2; trying cleanup with TERM<br>Sep 11 15:01:07 wirt1v Filesystem(Virtfs2)[42311]: INFO: sending signal TERM to: root     39937 39934  0 14:49 pts/0    Ss+    0:00 -bash<br>Sep 11 15:01:08 wirt1v Filesystem(Virtfs2)[42311]: ERROR: Couldn&#39;t unmount /virtfs2; trying cleanup with TERM<br>Sep 11 15:01:08 wirt1v Filesystem(Virtfs2)[42311]: INFO: sending signal TERM to: root     39937 39934  0 14:49 pts/0    Ss+    0:00 -bash<br>Sep 11 15:01:09 wirt1v Filesystem(Virtfs2)[42311]: ERROR: Couldn&#39;t unmount /virtfs2; trying cleanup with KILL<br>Sep 11 15:01:09 wirt1v Filesystem(Virtfs2)[42311]: INFO: sending signal KILL to: root     39937 39934  0 14:49 pts/0    Ss+    0:00 -bash<br>Sep 11 15:01:09 wirt1v systemd-logind: Removed session 58.<br>Sep 11 15:01:10 wirt1v Filesystem(Virtfs2)[42311]: INFO: unmounted /virtfs2 successfully<br>Sep 11 15:01:10 wirt1v lrmd[31948]:  notice: Virtfs2_stop_0:42311:stderr [ umount: /virtfs2: target is busy. 
]<br>Sep 11 15:01:10 wirt1v lrmd[31948]:  notice: Virtfs2_stop_0:42311:stderr [         (In some cases useful info about processes that use ]<br>Sep 11 15:01:10 wirt1v lrmd[31948]:  notice: Virtfs2_stop_0:42311:stderr [          the device is found by lsof(8) or fuser(1)) ]<br>Sep 11 15:01:10 wirt1v lrmd[31948]:  notice: Virtfs2_stop_0:42311:stderr [ ocf-exit-reason:Couldn&#39;t unmount /virtfs2; trying cleanup with TERM ]<br>Sep 11 15:01:10 wirt1v lrmd[31948]:  notice: Virtfs2_stop_0:42311:stderr [ umount: /virtfs2: target is busy. ]<br>Sep 11 15:01:10 wirt1v lrmd[31948]:  notice: Virtfs2_stop_0:42311:stderr [         (In some cases useful info about processes that use ]<br>Sep 11 15:01:10 wirt1v lrmd[31948]:  notice: Virtfs2_stop_0:42311:stderr [          the device is found by lsof(8) or fuser(1)) ]<br>Sep 11 15:01:10 wirt1v lrmd[31948]:  notice: Virtfs2_stop_0:42311:stderr [ ocf-exit-reason:Couldn&#39;t unmount /virtfs2; trying cleanup with TERM ]<br>Sep 11 15:01:10 wirt1v lrmd[31948]:  notice: Virtfs2_stop_0:42311:stderr [ umount: /virtfs2: target is busy. ]<br>Sep 11 15:01:10 wirt1v lrmd[31948]:  notice: Virtfs2_stop_0:42311:stderr [         (In some cases useful info about processes that use ]<br>Sep 11 15:01:10 wirt1v lrmd[31948]:  notice: Virtfs2_stop_0:42311:stderr [          the device is found by lsof(8) or fuser(1)) ]<br>Sep 11 15:01:10 wirt1v lrmd[31948]:  notice: Virtfs2_stop_0:42311:stderr [ ocf-exit-reason:Couldn&#39;t unmount /virtfs2; trying cleanup with TERM ]<br>Sep 11 15:01:10 wirt1v lrmd[31948]:  notice: Virtfs2_stop_0:42311:stderr [ umount: /virtfs2: target is busy. 
]<br>Sep 11 15:01:10 wirt1v lrmd[31948]:  notice: Virtfs2_stop_0:42311:stderr [         (In some cases useful info about processes that use ]<br>Sep 11 15:01:10 wirt1v lrmd[31948]:  notice: Virtfs2_stop_0:42311:stderr [          the device is found by lsof(8) or fuser(1)) ]<br>Sep 11 15:01:10 wirt1v lrmd[31948]:  notice: Virtfs2_stop_0:42311:stderr [ ocf-exit-reason:Couldn&#39;t unmount /virtfs2; trying cleanup with KILL ]<br>Sep 11 15:01:10 wirt1v crmd[31951]:  notice: Operation Virtfs2_stop_0: ok (node=wirt1v, call=92, rc=0, cib-update=178, confirmed=true)<br>Sep 11 15:01:10 wirt1v crmd[31951]:  notice: Transition 83 (Complete=18, Pending=0, Fired=0, Skipped=3, Incomplete=5, Source=/var/lib/pacemaker/pengine/pe-input-2854.bz2): Stopped<br>Sep 11 15:01:10 wirt1v pengine[31950]:  notice: On loss of CCM Quorum: Ignore<br>Sep 11 15:01:10 wirt1v pengine[31950]:  notice: Promote Drbd2:1#011(Slave -&gt; Master wirt2v)<br>Sep 11 15:01:10 wirt1v pengine[31950]:  notice: Start   Virtfs2:0#011(wirt1v)<br>Sep 11 15:01:10 wirt1v pengine[31950]:  notice: Start   Virtfs2:1#011(wirt2v)<br>Sep 11 15:01:10 wirt1v pengine[31950]:  notice: Calculated Transition 84: /var/lib/pacemaker/pengine/pe-input-2855.bz2<br>Sep 11 15:01:10 wirt1v crmd[31951]:  notice: Initiating action 16: monitor dlm_monitor_60000 on wirt2v<br>Sep 11 15:01:10 wirt1v crmd[31951]:  notice: Initiating action 69: notify Drbd2_pre_notify_promote_0 on wirt1v (local)<br>Sep 11 15:01:10 wirt1v crmd[31951]:  notice: Initiating action 71: notify Drbd2_pre_notify_promote_0 on wirt2v<br>Sep 11 15:01:10 wirt1v crmd[31951]:  notice: Operation Drbd2_notify_0: ok (node=wirt1v, call=94, rc=0, cib-update=0, confirmed=true)<br>Sep 11 15:01:10 wirt1v crmd[31951]:  notice: Initiating action 27: promote Drbd2_promote_0 on wirt2v<br>Sep 11 15:01:10 wirt1v kernel: block drbd2: peer( Secondary -&gt; Primary )<br>Sep 11 15:01:10 wirt1v crmd[31951]:  notice: Initiating action 70: notify Drbd2_post_notify_promote_0 on wirt1v 
(local)<br>Sep 11 15:01:10 wirt1v crmd[31951]:  notice: Initiating action 72: notify Drbd2_post_notify_promote_0 on wirt2v<br>Sep 11 15:01:10 wirt1v crmd[31951]:  notice: Transition aborted by status-2-master-Drbd2, master-Drbd2=10000: Transient attribute change (modify cib=0.105.7, source=abort_unless_down:319, path=/cib/status/node_state[@id=&#39;2&#39;]/transient_attributes[@id=&#39;2&#39;]/instance_attributes[@id=&#39;status-2&#39;]/nvpair[@id=&#39;status-2-master-Drbd2&#39;], 0)<br>Sep 11 15:01:10 wirt1v crmd[31951]:  notice: Operation Drbd2_notify_0: ok (node=wirt1v, call=95, rc=0, cib-update=0, confirmed=true)<br>Sep 11 15:01:10 wirt1v crmd[31951]:  notice: Transition 84 (Complete=13, Pending=0, Fired=0, Skipped=2, Incomplete=5, Source=/var/lib/pacemaker/pengine/pe-input-2855.bz2): Stopped<br>Sep 11 15:01:10 wirt1v pengine[31950]:  notice: On loss of CCM Quorum: Ignore<br>Sep 11 15:01:10 wirt1v pengine[31950]:  notice: Start   Virtfs2:0#011(wirt2v)<br>Sep 11 15:01:10 wirt1v pengine[31950]:  notice: Start   Virtfs2:1#011(wirt1v)<br>Sep 11 15:01:10 wirt1v pengine[31950]:  notice: Calculated Transition 85: /var/lib/pacemaker/pengine/pe-input-2856.bz2<br>Sep 11 15:01:10 wirt1v crmd[31951]:  notice: Initiating action 53: start Virtfs2_start_0 on wirt2v<br>Sep 11 15:01:10 wirt1v crmd[31951]:  notice: Initiating action 55: start Virtfs2:1_start_0 on wirt1v (local)<br>Sep 11 15:01:10 wirt1v Filesystem(Virtfs2)[42615]: INFO: Running start for /dev/drbd2 on /virtfs2<br>Sep 11 15:01:10 wirt1v kernel: GFS2: fsid=klasterek:drbd2: Trying to join cluster &quot;lock_dlm&quot;, &quot;klasterek:drbd2&quot;<br>Sep 11 15:01:10 wirt1v kernel: dlm: Using TCP for communications<br>Sep 11 15:01:10 wirt1v kernel: dlm: connecting to 2<br>Sep 11 15:01:10 wirt1v kernel: dlm: got connection from 2<br>Sep 11 15:01:11 wirt1v kernel: GFS2: fsid=klasterek:drbd2: first mounter control generation 0<br>Sep 11 15:01:11 wirt1v kernel: GFS2: fsid=klasterek:drbd2: Joined cluster. 
Now mounting FS...<br>Sep 11 15:01:11 wirt1v kernel: GFS2: fsid=klasterek:drbd2.0: jid=0, already locked for use<br>Sep 11 15:01:11 wirt1v kernel: GFS2: fsid=klasterek:drbd2.0: jid=0: Looking at journal...<br>Sep 11 15:01:11 wirt1v kernel: GFS2: fsid=klasterek:drbd2.0: jid=0: Done<br>Sep 11 15:01:11 wirt1v kernel: GFS2: fsid=klasterek:drbd2.0: jid=1: Trying to acquire journal lock...<br>Sep 11 15:01:11 wirt1v kernel: GFS2: fsid=klasterek:drbd2.0: jid=1: Looking at journal...<br>Sep 11 15:01:11 wirt1v kernel: GFS2: fsid=klasterek:drbd2.0: jid=1: Done<br>Sep 11 15:01:11 wirt1v kernel: GFS2: fsid=klasterek:drbd2.0: first mount done, others may mount<br>Sep 11 15:01:11 wirt1v crmd[31951]:  notice: Operation Virtfs2_start_0: ok (node=wirt1v, call=96, rc=0, cib-update=181, confirmed=true)<br>Sep 11 15:01:11 wirt1v crmd[31951]:  notice: Initiating action 56: monitor Virtfs2:1_monitor_20000 on wirt1v (local)<br>Sep 11 15:01:11 wirt1v crmd[31951]:  notice: Initiating action 54: monitor Virtfs2_monitor_20000 on wirt2v<br>Sep 11 15:01:11 wirt1v crmd[31951]:  notice: Transition 85 (Complete=6, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-2856.bz2): Complete<br>Sep 11 15:01:11 wirt1v crmd[31951]:  notice: State transition S_TRANSITION_ENGINE -&gt; S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]<br>Sep 11 15:01:18 wirt1v systemd-logind: New session 60 of user root.<br>Sep 11 15:01:18 wirt1v systemd: Started Session 60 of user root.<br>Sep 11 15:01:18 wirt1v systemd: Starting Session 60 of user root.<br><br># =====================================================================<br># wirt2v-during_pcs-cluster-standby-wirt2v.log<br>#<br>Sep 11 14:58:06 wirt2v crmd[27038]:  notice: Operation Drbd2_notify_0: ok (node=wirt2v, call=92, rc=0, cib-update=0, confirmed=true)<br>Sep 11 14:58:06 wirt2v Filesystem(Virtfs2)[28208]: INFO: Running stop for /dev/drbd2 on /virtfs2<br>Sep 11 14:58:06 wirt2v Filesystem(Virtfs2)[28208]: 
INFO: Trying to unmount /virtfs2<br>Sep 11 14:58:06 wirt2v Filesystem(Virtfs2)[28208]: INFO: unmounted /virtfs2 successfully<br>Sep 11 14:58:06 wirt2v crmd[27038]:  notice: Operation Virtfs2_stop_0: ok (node=wirt2v, call=94, rc=0, cib-update=58, confirmed=true)<br>Sep 11 14:58:06 wirt2v kernel: dlm: closing connection to node 2<br>Sep 11 14:58:06 wirt2v kernel: dlm: closing connection to node 1<br>Sep 11 14:58:06 wirt2v kernel: block drbd2: role( Primary -&gt; Secondary )<br>Sep 11 14:58:06 wirt2v kernel: block drbd2: bitmap WRITE of 0 pages took 0 jiffies<br>Sep 11 14:58:06 wirt2v kernel: block drbd2: 0 KB (0 bits) marked out-of-sync by on disk bit-map.<br>Sep 11 14:58:06 wirt2v systemd-udevd: error: /dev/drbd2: Wrong medium type<br>Sep 11 14:58:06 wirt2v crmd[27038]:   error: pcmkRegisterNode: Triggered assert at xml.c:594 : node-&gt;type == XML_ELEMENT_NODE<br>Sep 11 14:58:06 wirt2v crmd[27038]:  notice: Operation Drbd2_demote_0: ok (node=wirt2v, call=97, rc=0, cib-update=59, confirmed=true)<br>Sep 11 14:58:06 wirt2v systemd-udevd: error: /dev/drbd2: Wrong medium type<br>Sep 11 14:58:06 wirt2v crmd[27038]:  notice: Operation Drbd2_notify_0: ok (node=wirt2v, call=98, rc=0, cib-update=0, confirmed=true)<br>Sep 11 14:58:06 wirt2v crmd[27038]:  notice: Operation Drbd2_notify_0: ok (node=wirt2v, call=99, rc=0, cib-update=0, confirmed=true)<br>Sep 11 14:58:06 wirt2v kernel: drbd drbd2: peer( Primary -&gt; Unknown ) conn( Connected -&gt; Disconnecting ) pdsk( UpToDate -&gt; DUnknown )<br>Sep 11 14:58:06 wirt2v kernel: drbd drbd2: ack_receiver terminated<br>Sep 11 14:58:06 wirt2v kernel: drbd drbd2: Terminating drbd_a_drbd2<br>Sep 11 14:58:06 wirt2v kernel: drbd drbd2: Connection closed<br>Sep 11 14:58:06 wirt2v kernel: drbd drbd2: conn( Disconnecting -&gt; StandAlone )<br>Sep 11 14:58:06 wirt2v kernel: drbd drbd2: receiver terminated<br>Sep 11 14:58:06 wirt2v kernel: drbd drbd2: Terminating drbd_r_drbd2<br>Sep 11 14:58:06 wirt2v kernel: block drbd2: disk( UpToDate 
-&gt; Failed )<br>Sep 11 14:58:06 wirt2v kernel: block drbd2: bitmap WRITE of 0 pages took 0 jiffies<br>Sep 11 14:58:06 wirt2v kernel: block drbd2: 0 KB (0 bits) marked out-of-sync by on disk bit-map.<br>Sep 11 14:58:06 wirt2v kernel: block drbd2: disk( Failed -&gt; Diskless )<br>Sep 11 14:58:06 wirt2v kernel: drbd drbd2: Terminating drbd_w_drbd2<br>Sep 11 14:58:06 wirt2v crmd[27038]:  notice: Operation Drbd2_stop_0: ok (node=wirt2v, call=100, rc=0, cib-update=60, confirmed=true)<br>Sep 11 14:58:08 wirt2v crmd[27038]:  notice: Operation dlm_stop_0: ok (node=wirt2v, call=96, rc=0, cib-update=61, confirmed=true)<br><br># =====================================================================<br># wirt2v-during_pcs-cluster-unstandby-wirt2v.log<br>#<br>Sep 11 15:01:01 wirt2v systemd: Started Session 51 of user root.<br>Sep 11 15:01:01 wirt2v systemd: Starting Session 51 of user root.<br>Sep 11 15:01:06 wirt2v dlm_controld[28577]: 62718 dlm_controld 4.0.2 started<br>Sep 11 15:01:06 wirt2v systemd-udevd: error: /dev/drbd2: Wrong medium type<br>Sep 11 15:01:06 wirt2v kernel: drbd drbd2: Starting worker thread (from drbdsetup-84 [28610])<br>Sep 11 15:01:06 wirt2v kernel: block drbd2: disk( Diskless -&gt; Attaching )<br>Sep 11 15:01:06 wirt2v kernel: drbd drbd2: Method to ensure write ordering: flush<br>Sep 11 15:01:06 wirt2v kernel: block drbd2: max BIO size = 262144<br>Sep 11 15:01:06 wirt2v kernel: block drbd2: drbd_bm_resize called with capacity == 104854328<br>Sep 11 15:01:06 wirt2v kernel: block drbd2: resync bitmap: bits=13106791 words=204794 pages=400<br>Sep 11 15:01:06 wirt2v kernel: block drbd2: size = 50 GB (52427164 KB)<br>Sep 11 15:01:06 wirt2v kernel: block drbd2: recounting of set bits took additional 1 jiffies<br>Sep 11 15:01:06 wirt2v kernel: block drbd2: 0 KB (0 bits) marked out-of-sync by on disk bit-map.<br>Sep 11 15:01:06 wirt2v kernel: block drbd2: disk( Attaching -&gt; UpToDate )<br>Sep 11 15:01:06 wirt2v kernel: block drbd2: attached to UUIDs 
9A297D737BE3FBC6:0000000000000000:214339307E5385FE:214239307E5385FF<br>Sep 11 15:01:06 wirt2v systemd-udevd: error: /dev/drbd2: Wrong medium type<br>Sep 11 15:01:06 wirt2v kernel: drbd drbd2: conn( StandAlone -&gt; Unconnected )<br>Sep 11 15:01:06 wirt2v kernel: drbd drbd2: Starting receiver thread (from drbd_w_drbd2 [28612])<br>Sep 11 15:01:06 wirt2v kernel: drbd drbd2: receiver (re)started<br>Sep 11 15:01:06 wirt2v kernel: drbd drbd2: conn( Unconnected -&gt; WFConnection )<br>Sep 11 15:01:06 wirt2v crmd[27038]:   error: pcmkRegisterNode: Triggered assert at xml.c:594 : node-&gt;type == XML_ELEMENT_NODE<br>Sep 11 15:01:06 wirt2v crmd[27038]:  notice: Operation Drbd2_start_0: ok (node=wirt2v, call=102, rc=0, cib-update=62, confirmed=true)<br>Sep 11 15:01:06 wirt2v crmd[27038]:  notice: Operation Drbd2_notify_0: ok (node=wirt2v, call=103, rc=0, cib-update=0, confirmed=true)<br>Sep 11 15:01:06 wirt2v kernel: drbd drbd2: Handshake successful: Agreed network protocol version 101<br>Sep 11 15:01:06 wirt2v kernel: drbd drbd2: Feature flags enabled on protocol level: 0x7 TRIM THIN_RESYNC WRITE_SAME.<br>Sep 11 15:01:06 wirt2v kernel: drbd drbd2: conn( WFConnection -&gt; WFReportParams )<br>Sep 11 15:01:06 wirt2v kernel: drbd drbd2: Starting ack_recv thread (from drbd_r_drbd2 [28622])<br>Sep 11 15:01:06 wirt2v kernel: block drbd2: drbd_sync_handshake:<br>Sep 11 15:01:06 wirt2v kernel: block drbd2: self 9A297D737BE3FBC6:0000000000000000:214339307E5385FE:214239307E5385FF bits:0 flags:0<br>Sep 11 15:01:06 wirt2v kernel: block drbd2: peer 12027A6DEC39CCB7:9A297D737BE3FBC7:214339307E5385FF:214239307E5385FF bits:0 flags:0<br>Sep 11 15:01:06 wirt2v kernel: block drbd2: uuid_compare()=-1 by rule 50<br>Sep 11 15:01:06 wirt2v kernel: block drbd2: peer( Unknown -&gt; Primary ) conn( WFReportParams -&gt; WFBitMapT ) disk( UpToDate -&gt; Outdated ) pdsk( DUnknown -&gt; UpToDate )<br>Sep 11 15:01:06 wirt2v kernel: block drbd2: receive bitmap stats [Bytes(packets)]: plain 0(0), RLE 23(1), 
total 23; compression: 100.0%<br>Sep 11 15:01:06 wirt2v kernel: block drbd2: send bitmap stats [Bytes(packets)]: plain 0(0), RLE 23(1), total 23; compression: 100.0%<br>Sep 11 15:01:06 wirt2v kernel: block drbd2: conn( WFBitMapT -&gt; WFSyncUUID )<br>Sep 11 15:01:06 wirt2v kernel: block drbd2: updated sync uuid 9A2A7D737BE3FBC6:0000000000000000:214339307E5385FE:214239307E5385FF<br>Sep 11 15:01:06 wirt2v kernel: block drbd2: helper command: /sbin/drbdadm before-resync-target minor-2<br>Sep 11 15:01:06 wirt2v kernel: block drbd2: helper command: /sbin/drbdadm before-resync-target minor-2 exit code 0 (0x0)<br>Sep 11 15:01:06 wirt2v kernel: block drbd2: conn( WFSyncUUID -&gt; SyncTarget ) disk( Outdated -&gt; Inconsistent )<br>Sep 11 15:01:06 wirt2v kernel: block drbd2: Began resync as SyncTarget (will sync 0 KB [0 bits set]).<br>Sep 11 15:01:06 wirt2v kernel: block drbd2: Resync done (total 1 sec; paused 0 sec; 0 K/sec)<br>Sep 11 15:01:06 wirt2v kernel: block drbd2: updated UUIDs 12027A6DEC39CCB6:0000000000000000:9A2A7D737BE3FBC6:9A297D737BE3FBC7<br>Sep 11 15:01:06 wirt2v kernel: block drbd2: conn( SyncTarget -&gt; Connected ) disk( Inconsistent -&gt; UpToDate )<br>Sep 11 15:01:06 wirt2v kernel: block drbd2: helper command: /sbin/drbdadm after-resync-target minor-2<br>Sep 11 15:01:06 wirt2v kernel: block drbd2: helper command: /sbin/drbdadm after-resync-target minor-2 exit code 0 (0x0)<br>Sep 11 15:01:07 wirt2v crmd[27038]:  notice: Operation dlm_start_0: ok (node=wirt2v, call=101, rc=0, cib-update=63, confirmed=true)<br>Sep 11 15:01:10 wirt2v crmd[27038]:  notice: Operation Drbd2_notify_0: ok (node=wirt2v, call=105, rc=0, cib-update=0, confirmed=true)<br>Sep 11 15:01:10 wirt2v kernel: block drbd2: role( Secondary -&gt; Primary )<br>Sep 11 15:01:10 wirt2v crmd[27038]:   error: pcmkRegisterNode: Triggered assert at xml.c:594 : node-&gt;type == XML_ELEMENT_NODE<br>Sep 11 15:01:10 wirt2v crmd[27038]:  notice: Operation Drbd2_promote_0: ok (node=wirt2v, call=106, rc=0, 
cib-update=65, confirmed=true)<br>Sep 11 15:01:10 wirt2v crmd[27038]:  notice: Operation Drbd2_notify_0: ok (node=wirt2v, call=107, rc=0, cib-update=0, confirmed=true)<br>Sep 11 15:01:10 wirt2v Filesystem(Virtfs2)[28772]: INFO: Running start for /dev/drbd2 on /virtfs2<br>Sep 11 15:01:10 wirt2v kernel: GFS2: fsid=klasterek:drbd2: Trying to join cluster &quot;lock_dlm&quot;, &quot;klasterek:drbd2&quot;<br>Sep 11 15:01:10 wirt2v kernel: dlm: Using TCP for communications<br>Sep 11 15:01:10 wirt2v kernel: dlm: connecting to 1<br>Sep 11 15:01:10 wirt2v kernel: dlm: got connection from 1<br>Sep 11 15:01:11 wirt2v kernel: GFS2: fsid=klasterek:drbd2: Joined cluster. Now mounting FS...<br>Sep 11 15:01:11 wirt2v kernel: GFS2: fsid=klasterek:drbd2.1: jid=1, already locked for use<br>Sep 11 15:01:11 wirt2v kernel: GFS2: fsid=klasterek:drbd2.1: jid=1: Looking at journal...<br>Sep 11 15:01:11 wirt2v kernel: GFS2: fsid=klasterek:drbd2.1: jid=1: Done<br>Sep 11 15:01:11 wirt2v crmd[27038]:  notice: Operation Virtfs2_start_0: ok (node=wirt2v, call=108, rc=0, cib-update=66, confirmed=true)<br><br>==============================================================                                                                                       <br><br><br></div>