<div dir="ltr">Hi Klaus, <div><br></div><div>Thanks a lot, I will try deleting the stop monitor.</div><div><br></div><div>Nevertheless, I have 6 domains configured exactly the same... Is there any reason why only this domain shows this behaviour?</div><div><br></div><div>Thanks a lot.</div></div><div class="gmail_extra"><br><div class="gmail_quote">2017-02-16 11:12 GMT+01:00 Klaus Wenninger <span dir="ltr">&lt;<a href="mailto:kwenning@redhat.com" target="_blank">kwenning@redhat.com</a>&gt;</span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On 02/16/2017 11:02 AM, Oscar Segarra wrote:<br>
&gt; Hi Klaus,<br>
&gt;<br>
&gt; Which is your proposal to fix this behavior?<br>
<br>
</span>First, you can try removing the monitor op for role=Stopped.<br>
The startup probe will probably still fail, but that failure is<br>
handled differently (it is not repeated endlessly).<br>
Startup probing can be disabled globally via the cluster property<br>
enable-startup-probes, which defaults to true.<br>
But be aware that the cluster then wouldn&#39;t be able to react<br>
properly if services are already up when pacemaker is starting.<br>
IIRC it should also be possible to disable probing on a per-resource<br>
or per-node basis, but I can&#39;t recall offhand how that worked -<br>
there was a discussion about it on the list a few weeks ago.<br>
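For example, something along these lines might work (a sketch using pcs, based on the resource and node names from the status output below; the exact syntax can differ between pcs versions, so check "pcs resource op remove --help" and "pcs constraint location add --help" first):<br>

```shell
# Remove the role=Stopped monitor op from one resource
# (repeat for each of the vm-* resources):
pcs resource op remove vm-vdicone01 monitor interval=20s role=Stopped

# Or disable startup probing cluster-wide (see the caveat above about
# services that are already running when pacemaker starts):
pcs property set enable-startup-probes=false

# Per-resource/per-node probing can reportedly be controlled via the
# resource-discovery option on a location constraint, e.g.:
pcs constraint location add probe-off-vdicone01 vm-vdicone01 \
    vdicnode02-priv -INFINITY resource-discovery=never
```

These commands only change cluster configuration, so test them on one resource first and verify the result with "pcs config".<br>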
<br>
Regards,<br>
Klaus<br>
<span class=""><br>
&gt;<br>
&gt; Thanks a lot!<br>
&gt;<br>
&gt;<br>
&gt; On Feb 16, 2017 at 10:57 AM, &quot;Klaus Wenninger&quot; &lt;<a href="mailto:kwenning@redhat.com">kwenning@redhat.com</a><br>
</span>&gt; &lt;mailto:<a href="mailto:kwenning@redhat.com">kwenning@redhat.com</a>&gt;&gt; wrote:<br>
<div><div class="h5">&gt;<br>
&gt;     On 02/16/2017 09:05 AM, Oscar Segarra wrote:<br>
&gt;     &gt; Hi,<br>
&gt;     &gt;<br>
&gt;     &gt; In my environment I have deployed 5 VirtualDomains as one can<br>
&gt;     see below:<br>
&gt;     &gt; [root@vdicnode01 ~]# pcs status<br>
&gt;     &gt; Cluster name: vdic-cluster<br>
&gt;     &gt; Stack: corosync<br>
&gt;     &gt; Current DC: vdicnode01-priv (version 1.1.15-11.el7_3.2-e174ec8) -<br>
&gt;     &gt; partition with quorum<br>
&gt;     &gt; Last updated: Thu Feb 16 09:02:53 2017          Last change: Thu Feb<br>
&gt;     &gt; 16 08:20:53 2017 by root via crm_attribute on vdicnode02-priv<br>
&gt;     &gt;<br>
&gt;     &gt; 2 nodes and 14 resources configured: 5 resources DISABLED and 0<br>
&gt;     &gt; BLOCKED from being started due to failures<br>
&gt;     &gt;<br>
&gt;     &gt; Online: [ vdicnode01-priv vdicnode02-priv ]<br>
&gt;     &gt;<br>
&gt;     &gt; Full list of resources:<br>
&gt;     &gt;<br>
&gt;     &gt;  nfs-vdic-mgmt-vm-vip   (ocf::heartbeat:IPaddr):        Started<br>
&gt;     &gt; vdicnode01-priv<br>
&gt;     &gt;  Clone Set: nfs_setup-clone [nfs_setup]<br>
&gt;     &gt;      Started: [ vdicnode01-priv vdicnode02-priv ]<br>
&gt;     &gt;  Clone Set: nfs-mon-clone [nfs-mon]<br>
&gt;     &gt;      Started: [ vdicnode01-priv vdicnode02-priv ]<br>
&gt;     &gt;  Clone Set: nfs-grace-clone [nfs-grace]<br>
&gt;     &gt;      Started: [ vdicnode01-priv vdicnode02-priv ]<br>
&gt;     &gt;  vm-vdicone01   (ocf::heartbeat:VirtualDomain)<wbr>: FAILED (disabled)[<br>
&gt;     &gt; vdicnode02-priv vdicnode01-priv ]<br>
&gt;     &gt;  vm-vdicsunstone01      (ocf::heartbeat:VirtualDomain)<wbr>: FAILED<br>
&gt;     &gt; vdicnode01-priv (disabled)<br>
&gt;     &gt;  vm-vdicdb01    (ocf::heartbeat:VirtualDomain)<wbr>: FAILED (disabled)[<br>
&gt;     &gt; vdicnode02-priv vdicnode01-priv ]<br>
&gt;     &gt;  vm-vdicudsserver       (ocf::heartbeat:VirtualDomain)<wbr>: FAILED<br>
&gt;     &gt; (disabled)[ vdicnode02-priv vdicnode01-priv ]<br>
&gt;     &gt;  vm-vdicudstuneler      (ocf::heartbeat:VirtualDomain)<wbr>: FAILED<br>
&gt;     &gt; vdicnode01-priv (disabled)<br>
&gt;     &gt;  Clone Set: nfs-vdic-images-vip-clone [nfs-vdic-images-vip]<br>
&gt;     &gt;      Stopped: [ vdicnode01-priv vdicnode02-priv ]<br>
&gt;     &gt;<br>
&gt;     &gt; Failed Actions:<br>
&gt;     &gt; * vm-vdicone01_monitor_20000 on vdicnode02-priv &#39;not installed&#39; (5):<br>
&gt;     &gt; call=2322, status=complete, exitreason=&#39;Configuration file<br>
&gt;     &gt; /mnt/nfs-vdic-mgmt-vm/<wbr>vdicone01.xml does not exist or is not<br>
&gt;     readable.&#39;,<br>
&gt;     &gt;     last-rc-change=&#39;Thu Feb 16 09:02:07 2017&#39;, queued=0ms, exec=21ms<br>
&gt;     &gt; * vm-vdicsunstone01_monitor_<wbr>20000 on vdicnode02-priv &#39;not installed&#39;<br>
&gt;     &gt; (5): call=2310, status=complete, exitreason=&#39;Configuration file<br>
&gt;     &gt; /mnt/nfs-vdic-mgmt-vm/<wbr>vdicsunstone01.xml does not exist or is not<br>
&gt;     &gt; readable.&#39;,<br>
&gt;     &gt;     last-rc-change=&#39;Thu Feb 16 09:02:07 2017&#39;, queued=0ms, exec=37ms<br>
&gt;     &gt; * vm-vdicdb01_monitor_20000 on vdicnode02-priv &#39;not installed&#39; (5):<br>
&gt;     &gt; call=2320, status=complete, exitreason=&#39;Configuration file<br>
&gt;     &gt; /mnt/nfs-vdic-mgmt-vm/<wbr>vdicdb01.xml does not exist or is not<br>
&gt;     readable.&#39;,<br>
&gt;     &gt;     last-rc-change=&#39;Thu Feb 16 09:02:07 2017&#39;, queued=0ms, exec=35ms<br>
&gt;     &gt; * vm-vdicudsserver_monitor_20000 on vdicnode02-priv &#39;not installed&#39;<br>
&gt;     &gt; (5): call=2321, status=complete, exitreason=&#39;Configuration file<br>
&gt;     &gt; /mnt/nfs-vdic-mgmt-vm/<wbr>vdicudsserver.xml does not exist or is not<br>
&gt;     &gt; readable.&#39;,<br>
&gt;     &gt;     last-rc-change=&#39;Thu Feb 16 09:02:07 2017&#39;, queued=0ms, exec=42ms<br>
&gt;     &gt; * vm-vdicudstuneler_monitor_<wbr>20000 on vdicnode01-priv &#39;not installed&#39;<br>
&gt;     &gt; (5): call=1987183, status=complete, exitreason=&#39;Configuration file<br>
&gt;     &gt; /mnt/nfs-vdic-mgmt-vm/<wbr>vdicudstuneler.xml does not exist or is not<br>
&gt;     &gt; readable.&#39;,<br>
&gt;     &gt;     last-rc-change=&#39;Thu Feb 16 04:00:25 2017&#39;, queued=0ms, exec=30ms<br>
&gt;     &gt; * vm-vdicdb01_monitor_20000 on vdicnode01-priv &#39;not installed&#39; (5):<br>
&gt;     &gt; call=2550049, status=complete, exitreason=&#39;Configuration file<br>
&gt;     &gt; /mnt/nfs-vdic-mgmt-vm/<wbr>vdicdb01.xml does not exist or is not<br>
&gt;     readable.&#39;,<br>
&gt;     &gt;     last-rc-change=&#39;Thu Feb 16 08:13:37 2017&#39;, queued=0ms, exec=44ms<br>
&gt;     &gt; * nfs-mon_monitor_10000 on vdicnode01-priv &#39;unknown error&#39; (1):<br>
&gt;     &gt; call=1984009, status=Timed Out, exitreason=&#39;none&#39;,<br>
&gt;     &gt;     last-rc-change=&#39;Thu Feb 16 04:24:30 2017&#39;, queued=0ms, exec=0ms<br>
&gt;     &gt; * vm-vdicsunstone01_monitor_<wbr>20000 on vdicnode01-priv &#39;not installed&#39;<br>
&gt;     &gt; (5): call=2552050, status=complete, exitreason=&#39;Configuration file<br>
&gt;     &gt; /mnt/nfs-vdic-mgmt-vm/<wbr>vdicsunstone01.xml does not exist or is not<br>
&gt;     &gt; readable.&#39;,<br>
&gt;     &gt;     last-rc-change=&#39;Thu Feb 16 08:14:07 2017&#39;, queued=0ms, exec=22ms<br>
&gt;     &gt; * vm-vdicone01_monitor_20000 on vdicnode01-priv &#39;not installed&#39; (5):<br>
&gt;     &gt; call=2620052, status=complete, exitreason=&#39;Configuration file<br>
&gt;     &gt; /mnt/nfs-vdic-mgmt-vm/<wbr>vdicone01.xml does not exist or is not<br>
&gt;     readable.&#39;,<br>
&gt;     &gt;     last-rc-change=&#39;Thu Feb 16 09:02:53 2017&#39;, queued=0ms, exec=45ms<br>
&gt;     &gt; * vm-vdicudsserver_monitor_20000 on vdicnode01-priv &#39;not installed&#39;<br>
&gt;     &gt; (5): call=2550052, status=complete, exitreason=&#39;Configuration file<br>
&gt;     &gt; /mnt/nfs-vdic-mgmt-vm/<wbr>vdicudsserver.xml does not exist or is not<br>
&gt;     &gt; readable.&#39;,<br>
&gt;     &gt;     last-rc-change=&#39;Thu Feb 16 08:13:37 2017&#39;, queued=0ms, exec=48ms<br>
&gt;     &gt;<br>
&gt;     &gt;<br>
&gt;     &gt; All VirtualDomain resources are configured identically:<br>
&gt;     &gt;<br>
&gt;     &gt; [root@vdicnode01 cluster]# pcs resource show vm-vdicone01<br>
&gt;     &gt;  Resource: vm-vdicone01 (class=ocf provider=heartbeat<br>
&gt;     type=VirtualDomain)<br>
&gt;     &gt;   Attributes: hypervisor=qemu:///system<br>
&gt;     &gt; config=/mnt/nfs-vdic-mgmt-vm/<wbr>vdicone01.xml<br>
&gt;     &gt; migration_network_suffix=tcp:/<wbr>/ migration_transport=ssh<br>
&gt;     &gt;   Meta Attrs: allow-migrate=true target-role=Stopped<br>
&gt;     &gt;   Utilization: cpu=1 hv_memory=512<br>
&gt;     &gt;   Operations: start interval=0s timeout=90<br>
&gt;     &gt; (vm-vdicone01-start-interval-<wbr>0s)<br>
&gt;     &gt;               stop interval=0s timeout=90<br>
&gt;     (vm-vdicone01-stop-interval-<wbr>0s)<br>
&gt;     &gt;               monitor interval=20s role=Stopped<br>
&gt;     &gt; (vm-vdicone01-monitor-<wbr>interval-20s)<br>
&gt;     &gt;               monitor interval=30s<br>
&gt;     (vm-vdicone01-monitor-<wbr>interval-30s)<br>
&gt;     &gt; [root@vdicnode01 cluster]# pcs resource show vm-vdicdb01<br>
&gt;     &gt;  Resource: vm-vdicdb01 (class=ocf provider=heartbeat<br>
&gt;     type=VirtualDomain)<br>
&gt;     &gt;   Attributes: hypervisor=qemu:///system<br>
&gt;     &gt; config=/mnt/nfs-vdic-mgmt-vm/<wbr>vdicdb01.xml<br>
&gt;     &gt; migration_network_suffix=tcp:/<wbr>/ migration_transport=ssh<br>
&gt;     &gt;   Meta Attrs: allow-migrate=true target-role=Stopped<br>
&gt;     &gt;   Utilization: cpu=1 hv_memory=512<br>
&gt;     &gt;   Operations: start interval=0s timeout=90<br>
&gt;     (vm-vdicdb01-start-interval-<wbr>0s)<br>
&gt;     &gt;               stop interval=0s timeout=90<br>
&gt;     (vm-vdicdb01-stop-interval-0s)<br>
&gt;     &gt;               monitor interval=20s role=Stopped<br>
&gt;     &gt; (vm-vdicdb01-monitor-interval-<wbr>20s)<br>
&gt;     &gt;               monitor interval=30s<br>
&gt;     (vm-vdicdb01-monitor-interval-<wbr>30s)<br>
&gt;     &gt;<br>
&gt;     &gt;<br>
&gt;     &gt;<br>
&gt;     &gt; Nevertheless, one of the virtual domains is logging heavily and<br>
&gt;     &gt; filling up my hard disk:<br>
&gt;     &gt;<br>
&gt;     &gt; VirtualDomain(vm-vdicone01)[<wbr>116359]:    2017/02/16_08:52:27 INFO:<br>
&gt;     &gt; Configuration file /mnt/nfs-vdic-mgmt-vm/<wbr>vdicone01.xml not readable,<br>
&gt;     &gt; resource considered stopped.<br>
&gt;     &gt; VirtualDomain(vm-vdicone01)[<wbr>116401]:    2017/02/16_08:52:27 ERROR:<br>
&gt;     &gt; Configuration file /mnt/nfs-vdic-mgmt-vm/<wbr>vdicone01.xml does not<br>
&gt;     exist<br>
&gt;     &gt; or is not readable.<br>
&gt;     &gt; VirtualDomain(vm-vdicone01)[<wbr>116423]:    2017/02/16_08:52:27 INFO:<br>
&gt;     &gt; Configuration file /mnt/nfs-vdic-mgmt-vm/<wbr>vdicone01.xml not readable,<br>
&gt;     &gt; resource considered stopped.<br>
&gt;     &gt; VirtualDomain(vm-vdicone01)[<wbr>116444]:    2017/02/16_08:52:27 ERROR:<br>
&gt;     &gt; Configuration file /mnt/nfs-vdic-mgmt-vm/<wbr>vdicone01.xml does not<br>
&gt;     exist<br>
&gt;     &gt; or is not readable.<br>
&gt;     &gt; [the same INFO/ERROR pair repeats many more times within the same second]<br>
&gt;     &gt; [root@vdicnode01 cluster]# pcs status<br>
&gt;     &gt;<br>
&gt;     &gt;<br>
&gt;     &gt; Note: the error may be expected, as I have not mounted the NFS<br>
&gt;     &gt; resource /mnt/nfs-vdic-mgmt-vm/<wbr>vdicone01.xml yet.<br>
&gt;<br>
&gt;     Well, that is probably the explanation already:<br>
&gt;     The resource should be stopped, but the config file is not<br>
&gt;     available, and the resource agent needs the config file to<br>
&gt;     verify that the domain is really stopped. So the probe fails,<br>
&gt;     and since you have a monitor op for role=&quot;Stopped&quot;, that<br>
&gt;     monitor is run over and over again.<br>
&gt;<br>
&gt;     &gt;<br>
&gt;     &gt; Is there any explanation for this heavy logging?<br>
&gt;     &gt;<br>
&gt;     &gt; Thanks a lot!<br>
&gt;     &gt;<br>
&gt;     &gt;<br>
&gt;     &gt; ______________________________<wbr>_________________<br>
&gt;     &gt; Users mailing list: <a href="mailto:Users@clusterlabs.org">Users@clusterlabs.org</a><br>
</div></div>&gt;     &lt;mailto:<a href="mailto:Users@clusterlabs.org">Users@clusterlabs.org</a>&gt;<br>
&gt;     &gt; <a href="http://lists.clusterlabs.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.clusterlabs.org/<wbr>mailman/listinfo/users</a><br>
<span class="">&gt;     &lt;<a href="http://lists.clusterlabs.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.clusterlabs.org/<wbr>mailman/listinfo/users</a>&gt;<br>
&gt;     &gt;<br>
&gt;     &gt; Project Home: <a href="http://www.clusterlabs.org" rel="noreferrer" target="_blank">http://www.clusterlabs.org</a><br>
&gt;     &gt; Getting started:<br>
&gt;     <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" rel="noreferrer" target="_blank">http://www.clusterlabs.org/<wbr>doc/Cluster_from_Scratch.pdf</a><br>
&gt;     &lt;<a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" rel="noreferrer" target="_blank">http://www.clusterlabs.org/<wbr>doc/Cluster_from_Scratch.pdf</a>&gt;<br>
&gt;     &gt; Bugs: <a href="http://bugs.clusterlabs.org" rel="noreferrer" target="_blank">http://bugs.clusterlabs.org</a><br>
&gt;<br>
&gt;<br>
&gt;<br>
</span><div class="HOEnZb"><div class="h5">&gt;<br>
<br>
</div></div></blockquote></div><br></div>