<html>
  <head>
    <meta content="text/html; charset=windows-1252"
      http-equiv="Content-Type">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <div class="moz-cite-prefix">On 09/05/2016 03:02 PM, Gabriele Bulfon
      wrote:<br>
    </div>
    <blockquote
      cite="mid:13621319.53.1473080520636.JavaMail.sonicle@www"
      type="cite">
      <div style="font-family: Arial; font-size: 13;">I read docs, looks
        like sbd fencing is more about iscsi/fc exposed storage
        resources.<br>
        Here I have real shared disks (seen from solaris with the format
        utility as normal sas disks, but on both nodes).<br>
        They are all jbod disks, that ZFS organizes in raidz/mirror
        pools, so I have 5 disks on one pool in one node, and the other
        5 disks on another pool in one node.<br>
        How can sbd work in this situation? Has it already been
        used/tested on a Solaris env with ZFS ?<br>
      </div>
    </blockquote>
    <br>
    You wouldn't need disks at all with sbd. You can use it purely to
    have pacemaker<br>
    monitored by a hardware watchdog.<br>
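    For a purely watchdog-based setup, something roughly like the following
    should do (just a sketch - file location, watchdog device and timeouts
    are assumptions and will differ on XStreamOS/illumos):<br>
    <br>
    # /etc/sysconfig/sbd (sketch, no shared disk configured)<br>
    SBD_WATCHDOG_DEV=/dev/watchdog<br>
    SBD_WATCHDOG_TIMEOUT=5<br>
    <br>
    # crm shell: tell pacemaker to rely on the watchdog via sbd<br>
    crm configure property stonith-watchdog-timeout=10s<br>
    <br>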
    But if you do want to add disks, it shouldn't really matter how they
    are accessed, as<br>
    long as both nodes can concurrently read/write the block devices.
    Configuration of<br>
    caching in the controllers might be an issue as well.<br>
    For example, I'm currently testing with a simple KVM setup, using the
    following virsh config<br>
    for the shared block device:<br>
    <br>
    &lt;disk type='file' device='disk'&gt;<br>
          &lt;driver name='qemu' type='raw' cache='none'/&gt;<br>
          &lt;source file='SHARED_IMAGE_FILE'/&gt;<br>
          &lt;target dev='vdb' bus='virtio'/&gt;<br>
          &lt;shareable/&gt;<br>
          &lt;address type='pci' domain='0x0000' bus='0x00' slot='0x15'
    function='0x0'/&gt;<br>
     &lt;/disk&gt;<br>
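    <br>
    Once both nodes see the device you would initialize and check it with
    something like this (the device name is just an example):<br>
    <br>
    sbd -d /dev/vdb create<br>
    sbd -d /dev/vdb dump<br>
    sbd -d /dev/vdb list<br>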
    <br>
    I don't know about test coverage for sbd on Solaris. It should
    actually be independent<br>
    of which file system you are using, since sbd uses a raw partition
    without any<br>
    filesystem anyway.<br>
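    <br>
    With a shared slice the config would then point sbd at that device and
    you would add a fencing resource on top; a rough sketch (device path,
    file location and availability of the external/sbd plugin on illumos
    are assumptions):<br>
    <br>
    # /etc/sysconfig/sbd (sketch, disk-based)<br>
    SBD_DEVICE=/dev/vdb1<br>
    SBD_WATCHDOG_DEV=/dev/watchdog<br>
    <br>
    # crm shell: fencing resource backed by the sbd device<br>
    crm configure primitive stonith-sbd stonith:external/sbd<br>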
    <br>
    <blockquote
      cite="mid:13621319.53.1473080520636.JavaMail.sonicle@www"
      type="cite">
      <div style="font-family: Arial; font-size: 13;"><br>
        BTW, is there any other possibility besides sbd?<br>
        <br>
      </div>
    </blockquote>
    <br>
    Probably - see Ken's suggestions.<br>
    Excuse me for thinking a little one-dimensionally at the moment,<br>
    I'm working on an sbd issue ;-)<br>
    And if you don't have a proper fencing device, a watchdog is the last
    resort to get something<br>
    working reliably. And pacemaker's way of doing watchdog fencing is sbd...<br>
    <br>
    <blockquote
      cite="mid:13621319.53.1473080520636.JavaMail.sonicle@www"
      type="cite">
      <div style="font-family: Arial; font-size: 13;">Last but not
        least, is there any way to let ssh-fencing be considered good?<br>
        At the moment, with ssh-fencing, if I shut down the second node,
        I get all second resources in UNCLEAN state, not taken by the
        first one.<br>
        If I reboot the second , I only get the node on again, but
        resources remain stopped.<br>
      </div>
    </blockquote>
    <br>
    Strange... What do the logs say about whether the fencing action
    was successful?<br>
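    Roughly, something like this on the surviving node should show whether
    the fencing attempt succeeded (the log location is an assumption, adjust
    to wherever corosync/pacemaker log on your system):<br>
    <br>
    grep -i stonith /var/log/cluster/corosync.log<br>
    stonith_admin --history xstorage2<br>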
    <br>
    <blockquote
      cite="mid:13621319.53.1473080520636.JavaMail.sonicle@www"
      type="cite">
      <div style="font-family: Arial; font-size: 13;"><br>
        I remember that my tests with heartbeat reacted differently (a halt
        would move everything to node1, and everything would come back on restart)<br>
        <br>
        Gabriele<br>
        <br>
        <div id="wt-mailcard">
          <div style="font-family: Arial;">----------------------------------------------------------------------------------------<br>
          </div>
          <div style="font-family: Arial;"><b>Sonicle S.r.l. </b>: <a
              moz-do-not-send="true" href="http://www.sonicle.com/"
              target="_new"><a class="moz-txt-link-freetext" href="http://www.sonicle.com">http://www.sonicle.com</a></a></div>
          <div style="font-family: Arial;"><b>Music: </b><a
              moz-do-not-send="true"
              href="http://www.gabrielebulfon.com/" target="_new"><a class="moz-txt-link-freetext" href="http://www.gabrielebulfon.com">http://www.gabrielebulfon.com</a></a></div>
          <div style="font-family: Arial;"><b>Quantum Mechanics : </b><a
              moz-do-not-send="true"
              href="http://www.cdbaby.com/cd/gabrielebulfon"
              target="_new"><a class="moz-txt-link-freetext" href="http://www.cdbaby.com/cd/gabrielebulfon">http://www.cdbaby.com/cd/gabrielebulfon</a></a></div>
        </div>
        <tt><br>
          <br>
          <br>
----------------------------------------------------------------------------------<br>
          <br>
          From: Klaus Wenninger <a class="moz-txt-link-rfc2396E" href="mailto:kwenning@redhat.com">&lt;kwenning@redhat.com&gt;</a><br>
          To: <a class="moz-txt-link-abbreviated" href="mailto:users@clusterlabs.org">users@clusterlabs.org</a><br>
          Date: 5 September 2016 12.21.25 CEST<br>
          Subject: Re: [ClusterLabs] ip clustering strange behaviour<br>
          <br>
        </tt>
        <blockquote style="BORDER-LEFT: #000080 2px solid; MARGIN-LEFT:
          5px; PADDING-LEFT: 5px"><tt>On 09/05/2016 11:20 AM, Gabriele
            Bulfon wrote:<br>
            &gt; The dual machine is equipped with a syncro controller
            LSI 3008 MPT SAS3.<br>
            &gt; Both nodes can see the same jbod disks (10 at the
            moment, up to 24).<br>
            &gt; Systems are XStreamOS / illumos, with ZFS.<br>
            &gt; Each system has one ZFS pool of 5 disks, with different
            pool names<br>
            &gt; (data1, data2).<br>
            &gt; When in active / active, the two machines run different
            zones and<br>
            &gt; services on their pools, on their networks.<br>
            &gt; I have custom resource agents (tested on
            pacemaker/heartbeat, now<br>
            &gt; porting to pacemaker/corosync) for ZFS pools and zones
            migration.<br>
            &gt; When I was testing pacemaker/heartbeat, once
            ssh-fencing discovered<br>
            &gt; the other node to be down (clean shutdown or abrupt halt),
            it<br>
            &gt; automatically used IPaddr and our ZFS agents to take
            control of<br>
            &gt; everything, mounting the other pool and running any
            configured zone in it.<br>
            &gt; I would like to do the same with pacemaker/corosync.<br>
            &gt; The two nodes of the dual machine have an internal LAN
            connecting them,<br>
            &gt; a 100Mb ethernet: maybe this is reliable enough to
            trust ssh-fencing?<br>
            &gt; Or is there anything I can do to ensure at the
            controller level that<br>
            &gt; the pool is not in use on the other node?<br>
            <br>
            It is not just the reliability of the network connection that
            makes<br>
            ssh-fencing<br>
            suboptimal. Something in the IP stack config (dynamic due
            to moving<br>
            resources)<br>
            might have gone wrong. And resources might be hanging
            somehow, so that<br>
            the node<br>
            can't be brought down gracefully. Hence my suggestion to add
            a watchdog<br>
            (if available)<br>
            via sbd.<br>
            <br>
            &gt;<br>
            &gt; Gabriele<br>
            &gt;<br>
            &gt;<br>
            &gt;<br>
            &gt;<br>
            &gt;
----------------------------------------------------------------------------------<br>
            &gt;<br>
            &gt; From: Ken Gaillot <a class="moz-txt-link-rfc2396E" href="mailto:kgaillot@redhat.com">&lt;kgaillot@redhat.com&gt;</a><br>
            &gt; To: <a class="moz-txt-link-abbreviated" href="mailto:gbulfon@sonicle.com">gbulfon@sonicle.com</a> Cluster Labs - All topics
            related to<br>
            &gt; open-source clustering welcomed
            <a class="moz-txt-link-rfc2396E" href="mailto:users@clusterlabs.org">&lt;users@clusterlabs.org&gt;</a><br>
            &gt; Date: 1 September 2016 15.49.04 CEST<br>
            &gt; Subject: Re: [ClusterLabs] ip clustering strange
            behaviour<br>
            &gt;<br>
            &gt; On 08/31/2016 11:50 PM, Gabriele Bulfon wrote:<br>
            &gt; &gt; Thanks, got it.<br>
            &gt; &gt; So, is it better to use "two_node: 1" or, as
            suggested<br>
            &gt; elsewhere,<br>
            &gt; &gt; "no-quorum-policy=stop"?<br>
            &gt;<br>
            &gt; I'd prefer "two_node: 1" and letting pacemaker's
            options default. But<br>
            &gt; see the votequorum(5) man page for what two_node
            implies -- most<br>
            &gt; importantly, both nodes have to be available when the
            cluster starts<br>
            &gt; before it will start any resources. Node failure is
            handled fine once<br>
            &gt; the cluster has started, but at start time, both nodes
            must be up.<br>
            &gt;<br>
            &gt; &gt; About fencing, the machine I'm going to implement
            the 2-nodes<br>
            &gt; cluster is<br>
            &gt; &gt; a dual machine with shared disks backend.<br>
            &gt; &gt; Each node has two 10Gb ethernets dedicated to the
            public ip and the<br>
            &gt; &gt; admin console.<br>
            &gt; &gt; Then there is a third 100Mb ethernet connecting the
            two machines<br>
            &gt; internally.<br>
            &gt; &gt; I was going to use this last one for fencing via
            ssh, but it looks<br>
            &gt; like this<br>
            &gt; &gt; way I'm not going to get ip/pool/zone movements if
            one of the nodes<br>
            &gt; &gt; freezes or halts without shutting down pacemaker
            cleanly.<br>
            &gt; &gt; What should I use instead?<br>
            &gt;<br>
            &gt; I'm guessing as a dual machine, they share a power
            supply, so that<br>
            &gt; rules<br>
            &gt; out a power switch. If the box has IPMI that can
            individually power<br>
            &gt; cycle each host, you can use fence_ipmilan. If the
            disks are<br>
            &gt; shared via<br>
            &gt; iSCSI, you could use fence_scsi. If the box has a
            hardware watchdog<br>
            &gt; device that can individually target the hosts, you
            could use sbd. If<br>
            &gt; none of those is an option, probably the best you could
            do is run the<br>
            &gt; cluster nodes as VMs on each host, and use fence_xvm.<br>
            &gt;<br>
            &gt; &gt; Thanks for your help,<br>
            &gt; &gt; Gabriele<br>
            &gt; &gt;<br>
            &gt; &gt;<br>
            &gt; &gt;<br>
            &gt; &gt;<br>
            &gt; &gt;<br>
            &gt; &gt;<br>
            &gt;
----------------------------------------------------------------------------------<br>
            &gt; &gt;<br>
            &gt; &gt; From: Ken Gaillot <a class="moz-txt-link-rfc2396E" href="mailto:kgaillot@redhat.com">&lt;kgaillot@redhat.com&gt;</a><br>
            &gt; &gt; To: <a class="moz-txt-link-abbreviated" href="mailto:users@clusterlabs.org">users@clusterlabs.org</a><br>
            &gt; &gt; Date: 31 August 2016 17.25.05 CEST<br>
            &gt; &gt; Subject: Re: [ClusterLabs] ip clustering strange
            behaviour<br>
            &gt; &gt;<br>
            &gt; &gt; On 08/30/2016 01:52 AM, Gabriele Bulfon wrote:<br>
            &gt; &gt; &gt; Sorry for reiterating, but my main question
            was:<br>
            &gt; &gt; &gt;<br>
            &gt; &gt; &gt; why does node 1 remove its own IP if I shut
            down node 2 abruptly?<br>
            &gt; &gt; &gt; I understand that it does not take the node 2
            IP (because the<br>
            &gt; &gt; &gt; ssh-fencing has no clue about what happened
            on the 2nd node),<br>
            &gt; but I<br>
            &gt; &gt; &gt; wouldn't expect it to shut down its own
            IP...this would kill any<br>
            &gt; &gt; service<br>
            &gt; &gt; &gt; on both nodes...what am I doing wrong?<br>
            &gt; &gt;<br>
            &gt; &gt; Assuming you're using corosync 2, be sure you have
            "two_node: 1" in<br>
            &gt; &gt; corosync.conf. That will tell corosync to pretend
            there is always<br>
            &gt; &gt; quorum, so pacemaker doesn't need any special
            quorum settings.<br>
            &gt; See the<br>
            &gt; &gt; votequorum(5) man page for details. Of course, you
            need fencing<br>
            &gt; in this<br>
            &gt; &gt; setup, to handle when communication between the
            nodes is broken<br>
            &gt; but both<br>
            &gt; &gt; are still up.<br>
            &gt; &gt;<br>
            &gt; &gt; &gt;<br>
            &gt; &gt;<br>
            &gt; &gt; &gt;<br>
            &gt; &gt; &gt;<br>
            &gt; &gt;<br>
            &gt;
            ------------------------------------------------------------------------<br>
            &gt; &gt; &gt;<br>
            &gt; &gt; &gt;<br>
            &gt; &gt; &gt; *From:* Gabriele Bulfon
            <a class="moz-txt-link-rfc2396E" href="mailto:gbulfon@sonicle.com">&lt;gbulfon@sonicle.com&gt;</a><br>
            &gt; &gt; &gt; *To:* <a class="moz-txt-link-abbreviated" href="mailto:kwenning@redhat.com">kwenning@redhat.com</a> Cluster Labs - All
            topics related to<br>
            &gt; &gt; &gt; open-source clustering welcomed
            <a class="moz-txt-link-rfc2396E" href="mailto:users@clusterlabs.org">&lt;users@clusterlabs.org&gt;</a><br>
            &gt; &gt; &gt; *Date:* 29 August 2016 17.37.36 CEST<br>
            &gt; &gt; &gt; *Subject:* Re: [ClusterLabs] ip clustering
            strange behaviour<br>
            &gt; &gt; &gt;<br>
            &gt; &gt; &gt;<br>
            &gt; &gt; &gt; Ok, got it, I hadn't gracefully shut
            pacemaker on node2.<br>
            &gt; &gt; &gt; Now I restarted, everything was up, stopped
            pacemaker service on<br>
            &gt; &gt; &gt; host2 and I got host1 with both IPs
            configured. ;)<br>
            &gt; &gt; &gt;<br>
            &gt; &gt; &gt; But, though I understand that if I halt host2
            with no graceful<br>
            &gt; shutdown of<br>
            &gt; &gt; &gt; pacemaker, it will not move IP2 to host1,
            I don't expect host1<br>
            &gt; &gt; &gt; to lose its own IP! Why?<br>
            &gt; &gt; &gt;<br>
            &gt; &gt; &gt; Gabriele<br>
            &gt; &gt; &gt;<br>
            &gt; &gt; &gt;<br>
            &gt; &gt;<br>
            &gt; &gt; &gt;<br>
            &gt; &gt; &gt;<br>
            &gt; &gt; &gt;<br>
            &gt; &gt; &gt;<br>
            &gt; &gt;<br>
            &gt;
----------------------------------------------------------------------------------<br>
            &gt; &gt; &gt;<br>
            &gt; &gt; &gt; From: Klaus Wenninger
            <a class="moz-txt-link-rfc2396E" href="mailto:kwenning@redhat.com">&lt;kwenning@redhat.com&gt;</a><br>
            &gt; &gt; &gt; To: <a class="moz-txt-link-abbreviated" href="mailto:users@clusterlabs.org">users@clusterlabs.org</a><br>
            &gt; &gt; &gt; Date: 29 August 2016 17.26.49 CEST<br>
            &gt; &gt; &gt; Subject: Re: [ClusterLabs] ip clustering
            strange behaviour<br>
            &gt; &gt; &gt;<br>
            &gt; &gt; &gt; On 08/29/2016 05:18 PM, Gabriele Bulfon
            wrote:<br>
            &gt; &gt; &gt; &gt; Hi,<br>
            &gt; &gt; &gt; &gt;<br>
            &gt; &gt; &gt; &gt; now that I have IPaddr work, I have a
            strange behaviour on<br>
            &gt; my test<br>
            &gt; &gt; &gt; &gt; setup of 2 nodes, here is my
            configuration:<br>
            &gt; &gt; &gt; &gt;<br>
            &gt; &gt; &gt; &gt; ===STONITH/FENCING===<br>
            &gt; &gt; &gt; &gt;<br>
            &gt; &gt; &gt; &gt; primitive xstorage1-stonith
            stonith:external/ssh-sonicle op<br>
            &gt; &gt; &gt; monitor<br>
            &gt; &gt; &gt; &gt; interval="25" timeout="25"
            start-delay="25" params<br>
            &gt; &gt; &gt; hostlist="xstorage1"<br>
            &gt; &gt; &gt; &gt;<br>
            &gt; &gt; &gt; &gt; primitive xstorage2-stonith
            stonith:external/ssh-sonicle op<br>
            &gt; &gt; &gt; monitor<br>
            &gt; &gt; &gt; &gt; interval="25" timeout="25"
            start-delay="25" params<br>
            &gt; &gt; &gt; hostlist="xstorage2"<br>
            &gt; &gt; &gt; &gt;<br>
            &gt; &gt; &gt; &gt; location xstorage1-stonith-pref
            xstorage1-stonith -inf:<br>
            &gt; xstorage1<br>
            &gt; &gt; &gt; &gt; location xstorage2-stonith-pref
            xstorage2-stonith -inf:<br>
            &gt; xstorage2<br>
            &gt; &gt; &gt; &gt;<br>
            &gt; &gt; &gt; &gt; property stonith-action=poweroff<br>
            &gt; &gt; &gt; &gt;<br>
            &gt; &gt; &gt; &gt;<br>
            &gt; &gt; &gt; &gt;<br>
            &gt; &gt; &gt; &gt; ===IP RESOURCES===<br>
            &gt; &gt; &gt; &gt;<br>
            &gt; &gt; &gt; &gt;<br>
            &gt; &gt; &gt; &gt; primitive xstorage1_wan1_IP
            ocf:heartbeat:IPaddr params<br>
            &gt; &gt; &gt; ip="1.2.3.4"<br>
            &gt; &gt; &gt; &gt; cidr_netmask="255.255.255.0"
            nic="e1000g1"<br>
            &gt; &gt; &gt; &gt; primitive xstorage2_wan2_IP
            ocf:heartbeat:IPaddr params<br>
            &gt; &gt; &gt; ip="1.2.3.5"<br>
            &gt; &gt; &gt; &gt; cidr_netmask="255.255.255.0"
            nic="e1000g1"<br>
            &gt; &gt; &gt; &gt;<br>
            &gt; &gt; &gt; &gt; location xstorage1_wan1_IP_pref
            xstorage1_wan1_IP 100: xstorage1<br>
            &gt; &gt; &gt; &gt; location xstorage2_wan2_IP_pref
            xstorage2_wan2_IP 100: xstorage2<br>
            &gt; &gt; &gt; &gt;<br>
            &gt; &gt; &gt; &gt; ===================<br>
            &gt; &gt; &gt; &gt;<br>
            &gt; &gt; &gt; &gt; So I plumbed e1000g1 with unconfigured
            IP on both machines and<br>
            &gt; &gt; &gt; started<br>
            &gt; &gt; &gt; &gt; corosync/pacemaker, and after some time
            I got all nodes<br>
            &gt; online and<br>
            &gt; &gt; &gt; &gt; started, with IP configured as virtual
            interfaces (e1000g1:1 and<br>
            &gt; &gt; &gt; &gt; e1000g1:2) one in host1 and one in
            host2.<br>
            &gt; &gt; &gt; &gt;<br>
            &gt; &gt; &gt; &gt; Then I halted host2, and I expected to
            have host1 started with<br>
            &gt; &gt; &gt; both<br>
            &gt; &gt; &gt; &gt; IPs configured on host1.<br>
            &gt; &gt; &gt; &gt; Instead, I got host1 started with its IP
            stopped and removed<br>
            &gt; (only<br>
            &gt; &gt; &gt; &gt; e1000g1, unconfigured), and host2 stopped but
            still reporting its IP as started (!?).<br>
            &gt; &gt; &gt; &gt; Not exactly what I expected...<br>
            &gt; &gt; &gt; &gt; What's wrong?<br>
            &gt; &gt; &gt;<br>
            &gt; &gt; &gt; How did you stop host2? Graceful shutdown of
            pacemaker? If not ...<br>
            &gt; &gt; &gt; Anyway, ssh-fencing only works if the
            machine is still<br>
            &gt; &gt; &gt; running ...<br>
            &gt; &gt; &gt; So it will stay unclean, and thus pacemaker
            thinks that<br>
            &gt; &gt; &gt; the IP might still be running on it. So this
            is actually the<br>
            &gt; &gt; &gt; expected<br>
            &gt; &gt; &gt; behavior.<br>
            &gt; &gt; &gt; You might add a watchdog via sbd if you don't
            have other fencing<br>
            &gt; &gt; &gt; hardware at hand ...<br>
            &gt; &gt; &gt; &gt;<br>
            &gt; &gt; &gt; &gt; Here is the crm status after I stopped
            host 2:<br>
            &gt; &gt; &gt; &gt;<br>
            &gt; &gt; &gt; &gt; 2 nodes and 4 resources configured<br>
            &gt; &gt; &gt; &gt;<br>
            &gt; &gt; &gt; &gt; Node xstorage2: UNCLEAN (offline)<br>
            &gt; &gt; &gt; &gt; Online: [ xstorage1 ]<br>
            &gt; &gt; &gt; &gt;<br>
            &gt; &gt; &gt; &gt; Full list of resources:<br>
            &gt; &gt; &gt; &gt;<br>
            &gt; &gt; &gt; &gt; xstorage1-stonith
            (stonith:external/ssh-sonicle): Started<br>
            &gt; &gt; &gt; xstorage2<br>
            &gt; &gt; &gt; &gt; (UNCLEAN)<br>
            &gt; &gt; &gt; &gt; xstorage2-stonith
            (stonith:external/ssh-sonicle): Stopped<br>
            &gt; &gt; &gt; &gt; xstorage1_wan1_IP
            (ocf::heartbeat:IPaddr): Stopped<br>
            &gt; &gt; &gt; &gt; xstorage2_wan2_IP
            (ocf::heartbeat:IPaddr): Started xstorage2<br>
            &gt; &gt; &gt; (UNCLEAN)<br>
            &gt; &gt; &gt; &gt;<br>
            &gt; &gt; &gt; &gt;<br>
            &gt; &gt; &gt; &gt; Gabriele<br>
            &gt; &gt; &gt; &gt;<br>
            &gt; &gt; &gt; &gt;<br>
            &gt; &gt; &gt;<br>
            &gt; &gt;<br>
            <br>
            <br>
            _______________________________________________<br>
            Users mailing list: <a class="moz-txt-link-abbreviated" href="mailto:Users@clusterlabs.org">Users@clusterlabs.org</a><br>
            <a class="moz-txt-link-freetext" href="http://clusterlabs.org/mailman/listinfo/users">http://clusterlabs.org/mailman/listinfo/users</a><br>
            <br>
            Project Home: <a class="moz-txt-link-freetext" href="http://www.clusterlabs.org">http://www.clusterlabs.org</a><br>
            Getting started:
            <a class="moz-txt-link-freetext" href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>
            Bugs: <a class="moz-txt-link-freetext" href="http://bugs.clusterlabs.org">http://bugs.clusterlabs.org</a><br>
            <br>
            <br>
          </tt></blockquote>
      </div>
    </blockquote>
    <br>
  </body>
</html>