3. Cluster-Wide Configuration
3.1. Configuration Layout
The cluster is defined by the Cluster Information Base (CIB), which uses XML notation. The simplest CIB, an empty one, looks like this:
An empty configuration
<cib crm_feature_set="3.6.0" validate-with="pacemaker-3.5" epoch="1" num_updates="0" admin_epoch="0">
  <configuration>
    <crm_config/>
    <nodes/>
    <resources/>
    <constraints/>
  </configuration>
  <status/>
</cib>
The empty configuration above contains the major sections that make up a CIB:
cib
: The entire CIB is enclosed with a cib element. Certain fundamental settings are defined as attributes of this element.
configuration
: This section – the primary focus of this document – contains traditional configuration information such as what resources the cluster serves and the relationships among them.
  crm_config
  : cluster-wide configuration options
  nodes
  : the machines that host the cluster
  resources
  : the services run by the cluster
  constraints
  : indications of how resources should be placed
status
: This section contains the history of each resource on each node. Based on this data, the cluster can construct the complete current state of the cluster. The authoritative source for this section is the local executor (pacemaker-execd process) on each cluster node, and the cluster will occasionally repopulate the entire section. For this reason, it is never written to disk, and administrators are advised against modifying it in any way.
In this document, configuration settings will be described as properties or options based on how they are defined in the CIB:
Properties are XML attributes of an XML element.
Options are name-value pairs expressed as nvpair child elements of an XML element.
Normally, you will use command-line tools that abstract the XML, so the distinction will be unimportant; both properties and options are cluster settings you can tweak.
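To make the distinction concrete, here is a minimal sketch. The names admin_epoch and no-quorum-policy are taken from the tables later in this chapter; the nvpair id is illustrative, and in practice the command-line tools generate such ids for you.
A property versus an option
<!-- a property: an XML attribute of the cib element -->
<cib admin_epoch="1" epoch="0" num_updates="0"> ... </cib>

<!-- an option: a name/value pair expressed as an nvpair child element
     (here, one that would live inside crm_config) -->
<nvpair id="opt-no-quorum-policy" name="no-quorum-policy" value="stop"/>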
3.2. CIB Properties
Certain settings are defined by CIB properties (that is, attributes of the cib tag) rather than with the rest of the cluster configuration in the configuration section.
The reason is simply a matter of parsing. These options are used by the configuration database which is, by design, mostly ignorant of the content it holds. So the decision was made to place them in an easy-to-find location.
Name | Default | Description
---|---|---
admin_epoch | 0 | When a node joins the cluster, the cluster asks the node with the highest (admin_epoch, epoch, num_updates) tuple to replace the configuration on all the nodes, which makes setting this correctly very important. admin_epoch is never modified by the cluster; you can use this to make the configurations on any inactive nodes obsolete.
epoch | 0 | The cluster increments this every time the CIB’s configuration section is updated.
num_updates | 0 | The cluster increments this every time the CIB’s configuration or status sections are updated, and resets it to 0 when epoch changes.
validate-with | | Determines the type of XML validation that will be done on the configuration. Allowed values are none (no validation) and the base name of an installed schema file, such as the pacemaker-3.5 used in the example above.
remote-tls-port | | If set, the CIB manager will listen for anonymously encrypted remote connections on this port, to allow CIB administration from hosts not in the cluster. No key is used, so this should be used only on a protected network where man-in-the-middle attacks can be avoided.
remote-clear-port | | If set to a TCP port number, the CIB manager will listen for remote connections on this port, to allow for CIB administration from hosts not in the cluster. No encryption is used, so this should be used only on a protected network.
cib-last-written | | Indicates when the configuration was last written to disk. Maintained by the cluster; for informational purposes only.
have-quorum | | Indicates whether the cluster has quorum. If false, the cluster’s response is determined by no-quorum-policy (see below). Maintained by the cluster.
dc-uuid | | Node ID of the cluster’s current designated controller (DC). Used and maintained by the cluster.
execution-date | | Time to use when evaluating rules.
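For instance, because these are properties rather than options, enabling remote CIB administration is a matter of adding the relevant attribute directly to the cib element. The sketch below assumes an arbitrary port of 5560; as the table notes, the unencrypted remote-clear-port should only be used on a protected network.
Enabling remote CIB administration (sketch; the port number is illustrative)
<cib crm_feature_set="3.6.0" validate-with="pacemaker-3.5"
     epoch="2" num_updates="0" admin_epoch="0"
     remote-clear-port="5560">
  <configuration>
    <!-- configuration sections unchanged -->
  </configuration>
  <status/>
</cib>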
3.3. Cluster Options
Cluster options, as you might expect, control how the cluster behaves when confronted with various situations.
They are grouped into sets within the crm_config
section. In advanced
configurations, there may be more than one set. (This will be described later
in the chapter on Rules where we will show how to have the cluster use
different sets of options during working hours than during weekends.) For now,
we will describe the simple case where each option is present at most once.
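A minimal sketch of what that simple case looks like in the CIB, using two options from the table below (the set and nvpair ids are illustrative; command-line tools such as crm_attribute normally generate them for you):
Cluster options in a single set
<crm_config>
  <cluster_property_set id="cib-bootstrap-options">
    <!-- in the simple case, each option appears at most once, in one set -->
    <nvpair id="opt-cluster-name" name="cluster-name" value="example-cluster"/>
    <nvpair id="opt-stonith-enabled" name="stonith-enabled" value="true"/>
  </cluster_property_set>
</crm_config>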
You can obtain an up-to-date list of cluster options, including their default values, by running the man pacemaker-schedulerd and man pacemaker-controld commands.
Name | Default | Description
---|---|---
cluster-name | | An (optional) name for the cluster as a whole. This is mostly for users’ convenience for use as desired in administration, but can also be used in the Pacemaker configuration in Rules (as the #cluster-name node attribute).
dc-version | detected | Version of Pacemaker on the cluster’s designated controller (DC). Maintained by the cluster, and intended for diagnostic purposes.
cluster-infrastructure | detected | The messaging layer with which Pacemaker is currently running. Maintained by the cluster, and intended for informational and diagnostic purposes.
no-quorum-policy | stop | What to do when the cluster does not have quorum. Allowed values: stop (stop all resources in the affected cluster partition), demote (demote promotable resources and stop all other resources in the affected partition), freeze (continue resource management, but don’t recover resources from nodes not in the affected partition), ignore (continue all resource management), and fence (fence all nodes in the affected cluster partition).
batch-limit | 0 | The maximum number of actions that the cluster may execute in parallel across all nodes. The ideal value will depend on the speed and load of your network and cluster nodes. If zero, the cluster will impose a dynamically calculated limit only when any node has high load. If -1, the cluster will not impose any limit.
migration-limit | -1 | The number of live migration actions that the cluster is allowed to execute in parallel on a node. A value of -1 means unlimited.
load-threshold | 80% | Maximum amount of system load that should be used by cluster nodes. The cluster will slow down its recovery process when the amount of system resources used (currently CPU) approaches this limit.
node-action-limit | 0 | Maximum number of jobs that can be scheduled per node. If nonpositive or invalid, double the number of cores is used as the maximum number of jobs per node. PCMK_node_action_limit overrides this option on a per-node basis.
symmetric-cluster | true | If true, resources can run on any node by default. If false, a resource is allowed to run on a node only if a location constraint enables it.
stop-all-resources | false | Whether all resources should be disallowed from running (can be useful during maintenance or troubleshooting).
stop-orphan-resources | true | Whether resources that have been deleted from the configuration should be stopped. This value takes precedence over is-managed (that is, even unmanaged resources will be stopped when orphaned if this value is true).
stop-orphan-actions | true | Whether recurring operations that have been deleted from the configuration should be cancelled.
start-failure-is-fatal | true | Whether a failure to start a resource on a particular node prevents further start attempts on that node. If false, the cluster will decide whether the node is still eligible based on the resource’s current failure count and migration-threshold.
enable-startup-probes | true | Whether the cluster should check the pre-existing state of resources when the cluster starts.
maintenance-mode | false | If true, the cluster will not start or stop any resource in the cluster, and any recurring operations (except those specifying role as Stopped) will be paused.
stonith-enabled | true | Whether the cluster is allowed to fence nodes (for example, failed nodes and nodes with resources that can’t be stopped). If true, at least one fence device must be configured before resources are allowed to run. If false, unresponsive nodes are immediately assumed to be running no resources, and resource recovery on online nodes starts without any further protection (which can mean data loss if the unresponsive node still accesses shared storage, for example). See also the requires resource meta-attribute.
stonith-action | reboot | Action the cluster should send to the fence agent when a node must be fenced. Allowed values are reboot and off.
stonith-timeout | 60s | How long to wait for fence actions to complete by default.
stonith-max-attempts | 10 | How many times fencing can fail for a target before the cluster will no longer immediately re-attempt it. Any value below 1 will be ignored, and the default will be used instead.
have-watchdog | detected | Whether watchdog integration is enabled. This is set automatically by the cluster according to whether SBD is detected to be in use. User-configured values are ignored. The value true is meaningful if diskless SBD is used and stonith-watchdog-timeout is nonzero. In that case, if fencing is required, watchdog-based self-fencing will be performed via SBD without requiring a fencing resource explicitly configured.
stonith-watchdog-timeout | 0 | If this is set to a positive value, lost nodes are assumed to achieve watchdog-based self-fencing within this much time. This does not require a fencing resource to be explicitly configured, though a fence_watchdog resource can be configured, to limit use to specific nodes. If this is set to 0 (the default), the cluster will never assume watchdog-based self-fencing. If this is set to a negative value, the cluster will use twice the local value of the SBD_WATCHDOG_TIMEOUT environment variable if that is positive, or otherwise treat this as 0. Warning: When used, this timeout must be larger than SBD_WATCHDOG_TIMEOUT on all nodes that use watchdog-based SBD.
concurrent-fencing | false | Whether the cluster is allowed to initiate multiple fence actions concurrently. Fence actions initiated externally, such as via the stonith_admin tool, are not limited by this option.
fence-reaction | stop | How should a cluster node react if notified of its own fencing? A cluster node may receive notification of a “succeeded” fencing that targeted it if fencing is misconfigured, or if fabric fencing is in use that doesn’t cut cluster communication. Allowed values are stop (attempt to immediately stop Pacemaker and stay stopped) and panic (attempt to immediately reboot the local node, falling back to stop on failure).
priority-fencing-delay | 0 | Apply this delay to any fencing targeting the lost nodes with the highest total resource priority in case we don’t have the majority of the nodes in our cluster partition, so that the more significant nodes potentially win any fencing match (especially meaningful in a split-brain of a 2-node cluster). A promoted resource instance takes the resource’s priority plus 1 if the resource’s priority is not 0. Any static or random delays introduced by pcmk_delay_base and pcmk_delay_max configured for the corresponding fencing resources will be added to this delay.
node-pending-timeout | 0 | Fence nodes that do not join the controller process group within this much time after joining the cluster, to allow the cluster to continue managing resources. A value of 0 means never fence pending nodes. Setting the value to 2h means fence nodes after 2 hours. (since 2.1.7)
cluster-delay | 60s | If the DC requires an action to be executed on another node, it will consider the action failed if it does not get a response from the other node within this time (beyond the action’s own timeout). The ideal value will depend on the speed and load of your network and cluster nodes.
dc-deadtime | 20s | How long to wait for a response from other nodes when electing a DC. The ideal value will depend on the speed and load of your network and cluster nodes.
cluster-ipc-limit | 500 | The maximum IPC message backlog before one cluster daemon will disconnect another. This is of use in large clusters, for which a good value is the number of resources in the cluster multiplied by the number of nodes. The default of 500 is also the minimum. Raise this if you see “Evicting client” log messages for cluster daemon process IDs.
pe-error-series-max | -1 | The number of scheduler inputs resulting in errors to save. These inputs can be helpful during troubleshooting and when reporting issues. A negative value means save all inputs, and 0 means save none.
pe-warn-series-max | 5000 | The number of scheduler inputs resulting in warnings to save. These inputs can be helpful during troubleshooting and when reporting issues. A negative value means save all inputs, and 0 means save none.
pe-input-series-max | 4000 | The number of “normal” scheduler inputs to save. These inputs can be helpful during troubleshooting and when reporting issues. A negative value means save all inputs, and 0 means save none.
enable-acl | false | Whether access control lists should be used to authorize CIB modifications.
placement-strategy | default | How the cluster should assign resources to nodes (see Utilization and Placement Strategy). Allowed values are default, utilization, balanced, and minimal.
node-health-strategy | none | How the cluster should react to node health attributes. Allowed values are none, migrate-on-red, only-green, progressive, and custom.
node-health-base | 0 | The base health score assigned to a node. Only used when node-health-strategy is progressive.
node-health-green | 0 | The score to use for a node health attribute whose value is green. Only used when node-health-strategy is progressive or custom.
node-health-yellow | 0 | The score to use for a node health attribute whose value is yellow. Only used when node-health-strategy is progressive or custom.
node-health-red | -INFINITY | The score to use for a node health attribute whose value is red. Only used when node-health-strategy is progressive or custom.
cluster-recheck-interval | 15min | Pacemaker is primarily event-driven, and looks ahead to know when to recheck the cluster for failure-timeout settings and most time-based rules (since 2.0.3). However, it will also recheck the cluster after this amount of inactivity. This has two goals: rules with date_spec are only guaranteed to be checked this often, and it also serves as a fail-safe for some kinds of scheduler bugs. A value of 0 disables this polling.
shutdown-lock | false | The default of false allows active resources to be recovered elsewhere when their node is cleanly shut down, which is what the vast majority of users will want. However, some users prefer to make resources highly available only for failures, with no recovery for clean shutdowns. If this option is true, resources active on a node when it is cleanly shut down are kept “locked” to that node (not allowed to run elsewhere) until they start again on that node after it rejoins (or for at most shutdown-lock-limit, if set).
shutdown-lock-limit | 0 | If shutdown-lock is true and this is set to a nonzero time duration, shutdown locks will expire after this much time has passed since the shutdown was initiated, even if the node has not rejoined.
remove-after-stop | false | Deprecated: Whether the cluster should remove resources from Pacemaker’s executor after they are stopped. Values other than the default are, at best, poorly tested and potentially dangerous. This option is deprecated and will be removed in a future release.
startup-fencing | true | Advanced Use Only: Whether the cluster should fence unseen nodes at start-up. Setting this to false is unsafe, because the unseen nodes could be active and running resources but unreachable.
election-timeout | 2min | Advanced Use Only: If a winner is not declared within this much time of starting an election, the node that initiated the election will declare itself the winner.
shutdown-escalation | 20min | Advanced Use Only: The controller will exit immediately if a shutdown does not complete within this much time.
join-integration-timeout | 3min | Advanced Use Only: If you need to adjust this value, it probably indicates the presence of a bug.
join-finalization-timeout | 30min | Advanced Use Only: If you need to adjust this value, it probably indicates the presence of a bug.
transition-delay | 0s | Advanced Use Only: Delay cluster recovery for the configured interval to allow for additional or related events to occur. This can be useful if your configuration is sensitive to the order in which ping updates arrive. Enabling this option will slow down cluster recovery under all conditions.
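As one example of how options from this table interact (the values here are purely illustrative), a cluster that should keep a cleanly shut-down node’s resources locked to it, but only for up to 30 minutes, would set shutdown-lock together with shutdown-lock-limit:
Combining shutdown-lock and shutdown-lock-limit (sketch)
<crm_config>
  <cluster_property_set id="cib-bootstrap-options">
    <!-- do not recover resources elsewhere after a clean shutdown of their node -->
    <nvpair id="opt-shutdown-lock" name="shutdown-lock" value="true"/>
    <!-- but release the lock if the node has not rejoined within 30 minutes -->
    <nvpair id="opt-shutdown-lock-limit" name="shutdown-lock-limit" value="30min"/>
  </cluster_property_set>
</crm_config>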