Cluster options are grouped into sets within the crm_config section, and, in advanced configurations, there may be more than one set. (This will be described later in Chapter 8, Rules, where we will show how to have the cluster use different sets of options during working hours than during weekends.) For now, we will describe the simple case where each option is present at most once.
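For illustration, a minimal crm_config section with a single set of options might look like the following sketch (the option values shown are examples only; the id values follow the cib-bootstrap-options naming convention that the command-line tools use by default):

<crm_config>
  <cluster_property_set id="cib-bootstrap-options">
    <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="true"/>
    <nvpair id="cib-bootstrap-options-cluster-delay" name="cluster-delay" value="60s"/>
  </cluster_property_set>
</crm_config>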
You can obtain an up-to-date list of cluster options, including their default values, by running the man pengine and man crmd commands.
Table 3.2. Cluster Options

| Option | Default | Description |
|---|---|---|
| dc-version | | |
| cluster-infrastructure | | |
| expected-quorum-votes | | |
| no-quorum-policy | stop | |
| batch-limit | 0 (30 before version 1.1.11) | |
| migration-limit | -1 | |
| symmetric-cluster | TRUE | |
| stop-all-resources | FALSE | |
| stop-orphan-resources | TRUE | |
| stop-orphan-actions | TRUE | |
| start-failure-is-fatal | TRUE | Should a failure to start a resource on a particular node prevent further start attempts on that node? If FALSE, the cluster will decide whether the same node is still eligible based on the resource’s current failure count and migration-threshold (see Section 9.3, “Handling Resource Failure”). |
| enable-startup-probes | TRUE | |
| maintenance-mode | FALSE | |
| stonith-enabled | TRUE | Should failed nodes and nodes with resources that can’t be stopped be shot? If you value your data, set up a STONITH device and enable this. If true, or unset, the cluster will refuse to start resources unless one or more STONITH resources have been configured. If false, unresponsive nodes are immediately assumed to be running no resources, and resource takeover to online nodes starts without any further protection (which means data loss if the unresponsive node still accesses shared storage, for example). See also the requires meta-attribute in Section 5.4, “Resource Options”. |
| stonith-action | reboot | |
| stonith-timeout | 60s | |
| stonith-max-attempts | 10 | |
| concurrent-fencing | FALSE | |
| cluster-delay | 60s | Estimated maximum round-trip delay over the network (excluding action execution). If the TE requires an action to be executed on another node, it will consider the action failed if it does not get a response from the other node in this time (after considering the action’s own timeout). The "correct" value will depend on the speed and load of your network and cluster nodes. |
| dc-deadtime | 20s | How long to wait for a response from other nodes during start-up. The "correct" value will depend on the speed/load of your network and the type of switches used. |
| cluster-recheck-interval | 15min | The cluster is primarily event-driven, but your configuration can have elements that take effect based on the time of day. To ensure these changes take effect, we can optionally poll the cluster’s status for changes. A value of 0 disables polling. Positive values are an interval (in seconds unless other SI units are specified, e.g. 5min). |
| cluster-ipc-limit | 500 | The maximum IPC message backlog before one cluster daemon will disconnect another. This is of use in large clusters, for which a good value is the number of resources in the cluster multiplied by the number of nodes. The default of 500 is also the minimum. Raise this if you see "Evicting client" messages for cluster daemon PIDs in the logs. |
| pe-error-series-max | -1 | |
| pe-warn-series-max | -1 | |
| pe-input-series-max | -1 | |
| placement-strategy | default | How the cluster should allocate resources to nodes (see Chapter 12, Utilization and Placement Strategy). Allowed values are default, utilization, balanced, and minimal. (since 1.1.0) |
| node-health-strategy | none | How the cluster should react to node health attributes (see Section 9.5, “Tracking Node Health”). Allowed values are none, migrate-on-red, only-green, progressive, and custom. |
| node-health-base | 0 | |
| node-health-green | 0 | |
| node-health-yellow | 0 | |
| node-health-red | 0 | |
| remove-after-stop | FALSE | |
| startup-fencing | TRUE | |
| election-timeout | 2min | |
| shutdown-escalation | 20min | |
| crmd-integration-timeout | 3min | |
| crmd-finalization-timeout | 30min | |
| crmd-transition-delay | 0s | |
| default-resource-stickiness | 0 | |
| is-managed-default | TRUE | |
| default-action-timeout | 20s | |
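As a worked example of the sizing rule for cluster-ipc-limit above: a cluster running 200 resources across 10 nodes suggests a value of 200 × 10 = 2000, which could be set with the crm_attribute syntax described below (the resource and node counts here are illustrative):

# crm_attribute --name cluster-ipc-limit --update 2000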
Cluster options can be queried and modified using the crm_attribute tool. To get the current value of cluster-delay, you can run:

# crm_attribute --query --name cluster-delay

which is more simply written as

# crm_attribute -G -n cluster-delay

If a value is found, you’ll see a result like this:

# crm_attribute -G -n cluster-delay
scope=crm_config name=cluster-delay value=60s

If no value is found, the tool will display an error:

# crm_attribute -G -n clusta-deway
scope=crm_config name=clusta-deway value=(null)
Error performing operation: No such device or address

To use a different value (e.g. 30 seconds), run:

# crm_attribute --name cluster-delay --update 30s

To go back to the cluster’s default value, you can delete the value you set, for example:

# crm_attribute --name cluster-delay --delete
Deleted crm_config option: id=cib-bootstrap-options-cluster-delay name=cluster-delay
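Higher-level management shells expose the same options as cluster properties. For example, assuming pcs or crmsh is installed (exact command forms vary between versions), either of the following is a sketch of how to set cluster-delay:

# pcs property set cluster-delay=30s
# crm configure property cluster-delay=30s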