[ClusterLabs] Beginner lost with promotable "group" design

Adam Cecile acecile at le-vert.net
Wed Jan 31 09:41:28 EST 2024


On 1/17/24 16:33, Ken Gaillot wrote:
> On Wed, 2024-01-17 at 14:23 +0100, Adam Cécile wrote:
>> Hello,
>>
>>
>> I'm trying to achieve the following setup with 3 hosts:
>>
>> * One master gets a shared IP, removes the default gw, adds another
>> gw, and starts a service
>>
>> * The two slaves should have none of these but should add a different
>> default gw
>>
>> I managed quite easily to get the master workflow running with
>> ordering constraints, but I don't understand how I should move
>> forward with the slave configuration.
>>
>> I think I must create a promotable resource first, then assign my
>> other resources a started/stopped setting depending on the promotion
>> status of the node. Is that correct? How do I create a promotable
>> "placeholder" to which I can later attach my existing resources?
> A promotable resource would be appropriate if the service should run on
> all nodes, but one node runs with a special setting. That doesn't sound
> like what you have.
>
> If you just need the service to run on one node, the shared IP,
> service, and both gateways can be regular resources. You just need
> colocation constraints between them:
>
> - colocate service and external default route with shared IP
> - clone the internal default route and anti-colocate it with shared IP
>
> If you want the service to be able to run even if the IP can't, make
> its colocation score finite (or colocate the IP and external route with
> the service).
>
> Ordering is separate. You can order the shared IP, service, and
> external route however needed. Alternatively, you can put the three of
> them in a group (which does both colocation and ordering, in sequence),
> and anti-colocate the cloned internal route with the group.
>
>> Sorry for the stupid question, but I really don't understand what
>> types of elements I should create...
>>
>>
>> Thanks in advance,
>>
>> Regards, Adam.
>>
>>
>> PS: Bonus question: should I use "pcs" or "crm"? Both commands seem
>> to be equivalent, and the documentation sometimes uses one or the
>> other.
>>
> They are equivalent -- it's a matter of personal preference (and often
> what choices your distro gives you).
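
For reference, the group variant described above could look roughly like
this with pcs; the resource and group names are placeholders and the
sketch is untested:

pcs resource group add Active-Gateway Shared-IP My-Service External-Route
pcs resource clone Internal-Route
pcs constraint colocation add Internal-Route-clone with Active-Gateway score=-INFINITY

The group takes care of both colocation and ordering of its three members,
so only the anti-colocation of the cloned internal route needs a separate
constraint.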

Hello,

Thanks a lot for your suggestion. It seems I have something that works
correctly now; the final configuration is:


pcs property set stonith-enabled=false

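# Floating addresses and the external default route, held by one node at a time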
pcs resource create Internal-IPv4 ocf:heartbeat:IPaddr2 ip=10.0.0.254 nic=eth0 cidr_netmask=24 op monitor interval=30
pcs resource create Public-IPv4 ocf:heartbeat:IPaddr2 ip=1.2.3.4 nic=eth1 cidr_netmask=28 op monitor interval=30
pcs resource create Public-IPv4-Gateway ocf:heartbeat:Route destination=0.0.0.0/0 device=eth1 gateway=1.2.3.14 op monitor interval=30

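# Keep the two addresses and the external route together, started in order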
pcs constraint colocation add Internal-IPv4 with Public-IPv4 score=+INFINITY
pcs constraint colocation add Public-IPv4 with Public-IPv4-Gateway score=+INFINITY

pcs constraint order Internal-IPv4 then Public-IPv4
pcs constraint order Public-IPv4 then Public-IPv4-Gateway

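# Internal default route cloned on all nodes except the one holding the addresses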
pcs resource create Internal-IPv4-Gateway ocf:heartbeat:Route destination=0.0.0.0/0 device=eth0 gateway=10.0.0.254 op monitor interval=30 --force
pcs resource clone Internal-IPv4-Gateway

pcs constraint colocation add Internal-IPv4-Gateway-clone with Internal-IPv4 score=-INFINITY

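# Fence the gw-1/gw-2/gw-3 VMs through the VMware REST API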
pcs stonith create vmfence fence_vmware_rest pcmk_host_map="gw-1:gw-1;gw-2:gw-2;gw-3:gw-3" ip=10.1.2.3 ssl=1 username=corosync@vsphere.local password=p4ssw0rd ssl_insecure=1

pcs property set stonith-enabled=true
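
The resulting placement and constraints can be checked with (output
omitted here):

pcs status
pcs constraint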


Any comments regarding this configuration?


I have a quick question regarding fencing. I disconnected eth0 from gw-3
and the VM was restarted automatically, so I guess the fencing agent
kicked in. However, I left the VM in that state (so it is seen as offline
by the other nodes) and I expected it to end up powered off for good;
instead, the fencing agent seems to keep it powered on. Is that expected?
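
I wonder if this is because the cluster-wide stonith-action property
defaults to "reboot"; if I wanted fenced nodes to stay powered off for
good, I suppose something like this would be the way (untested):

pcs property set stonith-action=off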


Best regards, Adam.