<div style="font-family: Arial; font-size: 13;">Thanks, got it.</div><div style="font-family: Arial; font-size: 13;">So, is it better to use &quot;two_node: 1&quot; or, as suggested else where, or &quot;no-quorum-policy=stop&quot;?</div><div style="font-family: Arial; font-size: 13;"><br></div><div style="font-family: Arial; font-size: 13;">About fencing, the machine I&#39;m going to implement the 2-nodes cluster is a dual machine with shared disks backend.</div><div style="font-family: Arial; font-size: 13;">Each node has two 10Gb ethernets dedicated to the public ip and the admin console.</div><div style="font-family: Arial; font-size: 13;">Then there is a third 100Mb ethernet connecing the two machines internally.</div><div style="font-family: Arial; font-size: 13;">I was going to use this last one as fencing via ssh, but looks like this way I&#39;m not gonna have ip/pool/zone movements if one of the nodes freezes or halts without shutting down pacemaker clean.</div><div style="font-family: Arial; font-size: 13;">What should I use instead?</div><div style="font-family: Arial; font-size: 13;"><br></div><div style="font-family: Arial; font-size: 13;">Thanks for your help,</div><div style="font-family: Arial; font-size: 13;">Gabriele<br><br><div id="wt-mailcard"><div style="font-family: Arial;">----------------------------------------------------------------------------------------<br></div><div style="font-family: Arial;"><b>Sonicle S.r.l. </b>: <a href="http://www.sonicle.com/" target="_new">http://www.sonicle.com</a></div><div style="font-family: Arial;"><b>Music: </b><a href="http://www.gabrielebulfon.com/" target="_new">http://www.gabrielebulfon.com</a></div><div style="font-family: Arial;"><b>Quantum Mechanics : </b><a href="http://www.cdbaby.com/cd/gabrielebulfon" target="_new">http://www.cdbaby.com/cd/gabrielebulfon</a></div></div><tt><br><br><br>----------------------------------------------------------------------------------<br><br>Da: Ken Gaillot &lt;kgaillot@redhat.com&gt;<br>A: users@clusterlabs.org <br>Data: 31 agosto 2016 17.25.05 CEST<br>Oggetto: Re: [ClusterLabs] ip clustering strange behaviour<br><br></tt><blockquote style="BORDER-LEFT: #000080 2px solid; MARGIN-LEFT: 5px; PADDING-LEFT: 5px"><tt>On 08/30/2016 01:52 AM, Gabriele Bulfon wrote:<br>&gt; Sorry for reiterating, but my main question was:<br>&gt; <br>&gt; why does node 1 removes its own IP if I shut down node 2 abruptly?<br>&gt; I understand that it does not take the node 2 IP (because the<br>&gt; ssh-fencing has no clue about what happened on the 2nd node), but I<br>&gt; wouldn&#39;t expect it to shut down its own IP...this would kill any service<br>&gt; on both nodes...what am I wrong?<br><br>Assuming you&#39;re using corosync 2, be sure you have &quot;two_node: 1&quot; in<br>corosync.conf. That will tell corosync to pretend there is always<br>quorum, so pacemaker doesn&#39;t need any special quorum settings. See the<br>votequorum(5) man page for details. Of course, you need fencing in this<br>setup, to handle when communication between the nodes is broken but both<br>are still up.<br><br>&gt; ----------------------------------------------------------------------------------------<br>&gt; *Sonicle S.r.l. 
> ------------------------------------------------------------------------
>
> From: Gabriele Bulfon <gbulfon@sonicle.com>
> To: kwenning@redhat.com, Cluster Labs - All topics related to
> open-source clustering welcomed <users@clusterlabs.org>
> Date: 29 August 2016 17:37:36 CEST
> Subject: Re: [ClusterLabs] ip clustering strange behaviour
>
>     Ok, got it, I hadn't gracefully shut down Pacemaker on node2.
>     Now I restarted, everything came up, then I stopped the pacemaker
>     service on host2, and host1 ended up with both IPs configured. ;)
>
>     But, though I understand that if I halt host2 without a graceful
>     shutdown of Pacemaker it will not move IP2 to host1, I don't expect
>     host1 to lose its own IP! Why?
>
>     Gabriele
>
>     ----------------------------------------------------------------------------
>
>     From: Klaus Wenninger <kwenning@redhat.com>
>     To: users@clusterlabs.org
>     Date: 29 August 2016 17:26:49 CEST
>     Subject: Re: [ClusterLabs] ip clustering strange behaviour
>
>         On 08/29/2016 05:18 PM, Gabriele Bulfon wrote:
>         > Hi,
>         >
>         > now that I have IPaddr working, I see some strange behaviour
>         > in my test setup of 2 nodes; here is my configuration:
>         >
>         > ===STONITH/FENCING===
>         >
>         > primitive xstorage1-stonith stonith:external/ssh-sonicle op monitor
>         > interval="25" timeout="25" start-delay="25" params hostlist="xstorage1"
>         >
>         > primitive xstorage2-stonith stonith:external/ssh-sonicle op monitor
>         > interval="25" timeout="25" start-delay="25" params hostlist="xstorage2"
>         >
>         > location xstorage1-stonith-pref xstorage1-stonith -inf: xstorage1
>         > location xstorage2-stonith-pref xstorage2-stonith -inf: xstorage2
>         >
>         > property stonith-action=poweroff
>         >
>         > ===IP RESOURCES===
>         >
>         > primitive xstorage1_wan1_IP ocf:heartbeat:IPaddr params ip="1.2.3.4"
>         > cidr_netmask="255.255.255.0" nic="e1000g1"
>         > primitive xstorage2_wan2_IP ocf:heartbeat:IPaddr params ip="1.2.3.5"
>         > cidr_netmask="255.255.255.0" nic="e1000g1"
>         >
>         > location xstorage1_wan1_IP_pref xstorage1_wan1_IP 100: xstorage1
>         > location xstorage2_wan2_IP_pref xstorage2_wan2_IP 100: xstorage2
>         >
>         > ===================
>         >
>         > So I plumbed e1000g1 with an unconfigured IP on both machines and
>         > started corosync/pacemaker; after some time both nodes were online
>         > and the resources started, with the IPs configured as virtual
>         > interfaces (e1000g1:1 and e1000g1:2), one on host1 and one on host2.
>         >
>         > Then I halted host2, and I expected host1 to end up with both
>         > IPs configured on it.
>         > Instead, host1 had its own IP stopped and removed (only e1000g1
>         > left, unconfigured), while the halted host2 was still reported
>         > with its IP started (!?).
>         > Not exactly what I expected...
>         > What's wrong?
>
>         How did you stop host2? A graceful shutdown of Pacemaker? If not ...
>         Anyway, ssh fencing only works if the machine is still running ...
>         So host2 will stay unclean, and Pacemaker therefore thinks that the
>         IP might still be running on it. So this is actually the expected
>         behavior.
>         You might add a watchdog via sbd if you don't have other fencing
>         hardware at hand ...
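
On the sbd suggestion above: since the machine has a shared-disk backend,
sbd could also use a small shared slot device for poison-pill fencing,
assuming sbd is available on this platform. A rough sketch; the device
path /dev/rdsk/c0t0d0s0, the timeout, and the config file location
(/etc/sysconfig/sbd on many distributions) are illustrative assumptions,
not taken from this thread:

    # One-time initialization of the shared slot device, run from one node:
    sbd -d /dev/rdsk/c0t0d0s0 create

    # Watchdog/slot settings, identical on both nodes
    # (commonly /etc/sysconfig/sbd or /etc/default/sbd):
    SBD_DEVICE="/dev/rdsk/c0t0d0s0"
    SBD_WATCHDOG_DEV="/dev/watchdog"
    SBD_WATCHDOG_TIMEOUT="5"

If no shared slot device is practical, sbd can instead run watchdog-only;
in that mode Pacemaker additionally needs the stonith-watchdog-timeout
property set, so it can assume that a node it has lost contact with has
self-fenced once that interval expires.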
>         > Here is the crm status after I stopped host2:
>         >
>         > 2 nodes and 4 resources configured
>         >
>         > Node xstorage2: UNCLEAN (offline)
>         > Online: [ xstorage1 ]
>         >
>         > Full list of resources:
>         >
>         > xstorage1-stonith  (stonith:external/ssh-sonicle):  Started xstorage2 (UNCLEAN)
>         > xstorage2-stonith  (stonith:external/ssh-sonicle):  Stopped
>         > xstorage1_wan1_IP  (ocf::heartbeat:IPaddr):         Stopped
>         > xstorage2_wan2_IP  (ocf::heartbeat:IPaddr):         Started xstorage2 (UNCLEAN)
>         >
>         > Gabriele

_______________________________________________
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org