[ClusterLabs] Multi site pgsql

ProfiVPS Support support at profivps.hu
Wed Jul 19 05:50:02 EDT 2023


Hello everyone,

   I'm stuck with a multi-site cluster configuration and I'd really 
appreciate some help/pointers.

   The scenario: I'd like to build two (whole) clusters in two different 
DCs, one of them acting as Master and the other as standby/Slave. Both 
clusters have a proxy, some web nodes and only one db node. I'm not 
planning to add HA for pgsql inside the clusters; failing over to the 
other cluster if something goes south is acceptable.

   So I ended up with pcs + booth.
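
   For reference, the booth side is just the stock pcs setup, roughly 
the following; the addresses are placeholders (two site IPs plus the 
booth arbitrator) and the ticket options are left at their defaults:

pcs booth setup sites 10.0.0.10 10.1.0.10 arbitrators 10.2.0.10
pcs booth ticket add sqlticket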

     - Currently both clusters start up; the main one starts a master 
pgsql service when the ticket is granted, and the slave starts a 
replicating pgsql.

     - When the ticket is revoked, the master DB is stopped.

     - The slave DB is not getting promoted, and I understand it doesn't 
even have a reason to do so.

   Basically what I need is the slave to be promoted to master when the 
ticket is granted to it.
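
   For completeness: the grant/revoke above is done by hand with pcs for 
now, something like

pcs booth ticket grant sqlticket
pcs booth ticket revoke sqlticket

run from a node of the site that should take over or give up the ticket.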

  I'm not entirely sure I'm doing this right; to achieve it I had to use 
two different pgsql resource configurations on the two clusters.

  Master cluster:

pcs -f cib.xml resource create pgsql ocf:heartbeat:pgsql \
pgctl="/usr/lib/postgresql/15/bin/pg_ctl" \
psql="/usr/bin/psql" \
pgdata="/var/lib/postgresql/15/main/" \
node_list="sa_psql" \
logfile="/var/log/postgresql/postgresql-15-main.log" \
socketdir="/var/run/postgresql/" \
op monitor interval="11s" \
op monitor interval="10s" role="Master" \
op start timeout="60s" \
op stop timeout="60s" \
--group pgsql_group

# Force pgsql to reside only on the psql node
pcs -f cib.xml constraint location pgsql prefers la_psql=INFINITY
pcs -f cib.xml constraint location pgsql prefers la_worker1=-INFINITY
pcs -f cib.xml constraint location pgsql prefers la_hagw=-INFINITY

# Fence on ticket loss if we were promoted
pcs -f cib.xml constraint ticket set pgsql role=Promoted setoptions \
loss-policy=fence ticket=sqlticket

  Slave cluster:

pcs -f cib.xml resource create pgsql ocf:heartbeat:pgsql \
pgctl="/usr/lib/postgresql/15/bin/pg_ctl" \
psql="/usr/bin/psql" \
pgdata="/var/lib/postgresql/15/main/" \
node_list="sa_psql" \
logfile="/var/log/postgresql/postgresql-15-main.log" \
socketdir="/var/run/postgresql/" \
restore_command="cp /var/lib/pgsql/pg_archive/%f %p" \
master_ip="_master_ip_" \
repuser="repuser" \
rep_mode="slave" \
replication_slot_name="replica_1_slot" \
primary_conninfo_opt="password=***** keepalives_idle=60 keepalives_interval=5 keepalives_count=5" \
op monitor interval="31s" \
op monitor interval="30s" role="Promoted" \
op start timeout="60s" \
op stop timeout="60s" \
op promote timeout="120s" \
--group pgsql_group

# Force pgsql to only run on the sql node
pcs -f cib.xml constraint location pgsql prefers sa_psql=INFINITY
pcs -f cib.xml constraint location pgsql prefers sa_worker1=-INFINITY
pcs -f cib.xml constraint location pgsql prefers sa_hagw=-INFINITY

# Without this the service wouldn't start
pcs -f cib.xml constraint ticket set pgsql role=Promoted setoptions \
loss-policy=demote ticket=sqlticket
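
   For completeness, pushing and checking is the usual cib-push routine; 
a sketch, assuming cib.xml was created beforehand with "pcs cluster cib 
cib.xml":

pcs cluster cib-push cib.xml
pcs status --full
pcs booth status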

When the configuration is pushed and the ticket granted, both sides 
start up in master/slave streaming replication mode. However, 
debug-promote on the slave pgsql returns:
Operation force-promote for pgsql (ocf:heartbeat:pgsql) returned 6 (not 
configured: Not in a replication mode.)
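
For reference, the call is essentially this, run on the standby site's 
db node:

pcs resource debug-promote pgsql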

That error seems strange to me, because is_replication() in the agent is:
is_replication() {
    if [ "$OCF_RESKEY_rep_mode" != "none" -a "$OCF_RESKEY_rep_mode" != "slave" ]; then
        return 0
    fi
    return 1
}
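
Plugging my rep_mode value into that test by hand (just a throwaway 
shell check, not the agent itself):

OCF_RESKEY_rep_mode="slave"
[ "$OCF_RESKEY_rep_mode" != "none" -a "$OCF_RESKEY_rep_mode" != "slave" ]; echo $?
# prints 1, i.e. rep_mode=slave does not count as a replication mode

So unless I'm misreading it, rep_mode="slave" is deliberately excluded 
and only async/sync would pass that check, which would explain the 
force-promote error; but then I don't see how the standby side is ever 
supposed to be promoted.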

  So I'm pretty much stuck.
  I'm also not sure that booth is a definite must here; sometimes I feel 
I would have been better off putting everything into one big cluster 
with an external tie-breaker. But I've invested so much time by now that 
I'd love to see it through.

All help is greatly appreciated!

Thank you,
András

---
Cheap virtual servers:
http://www.ProfiVPS.hu

Support: Support at ProfiVPS.hu