Example 4.6. An OCF resource with a non-default start timeout
```xml
<primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
  <operations>
    <op id="Public-IP-start" name="start" timeout="60s"/>
  </operations>
  <instance_attributes id="params-public-ip">
    <nvpair id="public-ip-addr" name="ip" value="192.0.2.2"/>
  </instance_attributes>
</primitive>
```
Operation properties may be specified directly in the op element as XML attributes, or in a separate meta_attributes block as nvpair elements. XML attributes take precedence over nvpair elements if both are specified.
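For instance, in a sketch like the following (the id values are illustrative, not from the example above), the 60s timeout set as an XML attribute would win over the 30s nvpair:

```xml
<op id="Public-IP-start" name="start" timeout="60s">
  <meta_attributes id="Public-IP-start-meta">
    <!-- Ignored here: the timeout XML attribute above takes precedence -->
    <nvpair id="Public-IP-start-timeout" name="timeout" value="30s"/>
  </meta_attributes>
</op>
```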
Table 4.3. Properties of an Operation

| Field | Default | Description |
|---|---|---|
| id | | A unique name for the operation. |
| name | | The action to perform. This can be any action supported by the agent; common values include monitor, start, and stop. |
| interval | 0 | How frequently (in seconds) to perform the operation. A value of 0 means "when needed". A positive value defines a recurring action, which is typically used with monitor. |
| timeout | | How long to wait before declaring the action has failed. |
| on-fail | Varies by action: stop defaults to fence if STONITH is enabled and block otherwise; demote defaults to the on-fail of the Master-role monitor, if present and set to a value other than demote, and restart otherwise; all other actions default to restart. | The action to take if this action ever fails. Allowed values: ignore (pretend the resource did not fail), block (don't perform any further operations on the resource), stop (stop the resource and do not start it elsewhere), demote (demote the resource, without a full restart; valid only for promote actions and for recurring monitor actions with role set to Master), restart (stop the resource and start it again, possibly on a different node), fence (STONITH the node on which the resource failed), standby (move all resources away from the node on which the resource failed). |
| enabled | TRUE | If false, ignore this operation definition. This is typically used to pause a particular recurring monitor operation; for instance, it can complement the respective resource being unmanaged (is-managed=false), as that alone will not block any configured monitoring. Disabling the operation does not suppress all actions of the given type. Allowed values: true, false. |
| record-pending | TRUE | If true, the intention to perform the operation is recorded so that GUIs and CLI tools can indicate that an operation is in progress. This is best set as an operation default (see Section 4.5.4, "Setting Global Defaults for Operations"). Allowed values: true, false. |
| role | | Run the operation only on node(s) that the cluster thinks should be in the specified role. This only makes sense for recurring monitor operations. Allowed (case-sensitive) values: Stopped, Started, and in the case of promotable clone resources, Slave and Master. |
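As an illustrative sketch combining several of these properties (the resource and id names here are hypothetical, not from the examples above), the following defines a start with a generous timeout, a recurring monitor that fences the node on failure, and a deeper hourly check that is temporarily paused:

```xml
<primitive id="my-db" class="ocf" provider="heartbeat" type="pgsql">
  <operations>
    <op id="my-db-start" name="start" timeout="120s"/>
    <!-- Recurring health check; a failure fences the node -->
    <op id="my-db-monitor" name="monitor" interval="30s" timeout="60s"
        on-fail="fence"/>
    <!-- Same action name needs a different interval; paused via enabled="false" -->
    <op id="my-db-monitor-deep" name="monitor" interval="1h" timeout="120s"
        enabled="false"/>
  </operations>
</primitive>
```

Similarly, record-pending is typically set once for all operations via the op_defaults section rather than per-operation (see Section 4.5.4); a minimal sketch, assuming the id values shown:

```xml
<op_defaults>
  <meta_attributes id="op-defaults-meta">
    <nvpair id="op-defaults-record-pending" name="record-pending" value="true"/>
  </meta_attributes>
</op_defaults>
```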
Note

When on-fail is set to demote, recovery from failure by a successful demote causes the cluster to recalculate whether and where a new instance should be promoted. The node with the failure is eligible, so if master scores have not changed, it will be promoted again.

There is no direct equivalent of migration-threshold for the master role, but the same effect can be achieved with a location constraint using a rule with a node attribute expression for the resource's fail count. For example, to immediately ban the master role from a node with any failed promote or master monitor:
```xml
<rsc_location id="loc1" rsc="my_primitive">
  <rule id="rule1" score="-INFINITY" role="Master" boolean-op="or">
    <expression id="expr1" attribute="fail-count-my_primitive#promote_0"
      operation="gte" value="1"/>
    <expression id="expr2" attribute="fail-count-my_primitive#monitor_10000"
      operation="gte" value="1"/>
  </rule>
</rsc_location>
```
This example assumes that there is a promotable clone of the my_primitive resource (note that the primitive name, not the clone name, is used in the rule), and that there is a recurring 10-second-interval monitor configured for the master role (fail count attributes specify the interval in milliseconds).
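For context, a minimal sketch of the configuration this constraint assumes might look like the following (the clone id and timeouts are illustrative; ocf:pacemaker:Stateful is a demonstration agent shipped with Pacemaker):

```xml
<clone id="my_clone">
  <meta_attributes id="my_clone-meta">
    <nvpair id="my_clone-promotable" name="promotable" value="true"/>
  </meta_attributes>
  <primitive id="my_primitive" class="ocf" provider="pacemaker" type="Stateful">
    <operations>
      <!-- 10s interval, so failures count in fail-count-my_primitive#monitor_10000 -->
      <op id="my_primitive-monitor-master" name="monitor" interval="10s"
          role="Master" timeout="20s"/>
      <!-- A second recurring monitor must use a different interval -->
      <op id="my_primitive-monitor-slave" name="monitor" interval="11s"
          role="Slave" timeout="20s"/>
    </operations>
  </primitive>
</clone>
```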