By default, the cluster will not ensure your resources stay healthy. To instruct the cluster to do so, you need to add a monitor operation to the resource's definition.
Example 5.6. An OCF resource with a recurring health check
<primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat"> <operations> <op id="public-ip-check" name="monitor" interval="60s"/> </operations> <instance_attributes id="params-public-ip"> <nvpair id="public-ip-addr" name="ip" value="192.0.2.2"/> </instance_attributes> </primitive>
Table 5.3. Properties of an Operation
| Field | Default | Description |
|---|---|---|
| id | | A unique name for the operation. |
| name | | The action to perform. Common values: monitor, start, stop. |
| interval | 0 | How frequently (in seconds) to perform the operation. A value of 0 means never. A positive value defines a recurring action, which is typically used with monitor. |
| timeout | | How long to wait before declaring the action has failed. |
| on-fail | restart (except for stop operations, which default to fence when STONITH is enabled and block otherwise) | The action to take if this action ever fails. Allowed values: ignore (pretend the resource did not fail), block (don't perform any further operations on the resource), stop (stop the resource and do not start it elsewhere), restart (stop the resource and start it again, possibly on a different node), fence (STONITH the node on which the resource failed), standby (move all resources away from the node on which the resource failed). |
| enabled | TRUE | If false, ignore this operation definition. This is typically used to pause a particular recurring monitor operation; for instance, it can complement the respective resource being unmanaged (is-managed=false), as this alone will not block any configured monitoring. Disabling the operation does not suppress all actions of the given type. Allowed values: true, false. |
| record-pending | FALSE | If true, the intention to perform the operation is recorded so that GUIs and CLI tools can indicate that an operation is in progress. Allowed values: true, false. |
| role | | Run the operation only on node(s) that the cluster thinks should be in the specified role. This only makes sense for recurring monitor operations. Allowed (case-sensitive) values: Stopped, Started, and in the case of multi-state resources, Slave and Master. |
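To make these properties concrete, here is a sketch extending Example 5.6 with a timeout, an explicit on-fail policy, and enabled; the 20s timeout is an illustrative choice, not a value taken from the example above:

<primitive id="Public-IP" class="ocf" type="IPaddr" provider="heartbeat">
  <operations>
    <!-- recurring health check: run every 60s, declare failure if the agent
         does not answer within 20s, and restart the resource on failure -->
    <op id="public-ip-check" name="monitor" interval="60s" timeout="20s"
        on-fail="restart" enabled="true"/>
  </operations>
  <instance_attributes id="params-public-ip">
    <nvpair id="public-ip-addr" name="ip" value="192.0.2.2"/>
  </instance_attributes>
</primitive>

Since restart and true are already the defaults for on-fail and enabled, spelling them out changes nothing here; it only makes the intended behavior explicit.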
When Pacemaker first starts a resource, it runs one-time monitor operations (referred to as probes) to ensure the resource is running where it is expected to be, and not running where it should not be. (This behavior can be affected by the resource-discovery location constraint property.)

By default, a monitor operation ensures that the resource is running where it is supposed to be. The target-role property can be used for further checking.
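As a rough illustration (the constraint id and node name below are placeholders, not part of the examples above), resource-discovery is set on a location constraint; the following sketch would keep the cluster from probing Public-IP on node3 at all:

<constraints>
  <!-- do not perform resource discovery (probes) for Public-IP on node3 -->
  <rsc_location id="loc-public-ip-no-discovery" rsc="Public-IP"
                node="node3" score="-INFINITY" resource-discovery="never"/>
</constraints>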
For example, if a resource has one monitor operation with interval=10 role=Started and a second monitor operation with interval=11 role=Stopped, the cluster will run the first monitor on any nodes it thinks should be running the resource, and the second monitor on any nodes that it thinks should not be running the resource (for the truly paranoid, who want to know when an administrator manually starts a service by mistake).
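A minimal sketch of that two-monitor configuration, inside the resource's operations block (the op ids are placeholders):

<operations>
  <!-- runs on nodes where the cluster expects the resource to be active -->
  <op id="public-ip-monitor-started" name="monitor" interval="10" role="Started"/>
  <!-- runs on nodes where the resource should be stopped, catching copies
       started outside cluster control -->
  <op id="public-ip-monitor-stopped" name="monitor" interval="11" role="Stopped"/>
</operations>

The intervals must differ (10 versus 11 here) because the cluster distinguishes recurring operations on a resource by their name and interval.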