12.4. Limitations and Workarounds
The type of problem Pacemaker is dealing with here is known as the knapsack problem, which falls into the NP-complete category of computer science problems: a fancy way of saying "it takes a really long time to solve".
Clearly, in an HA cluster it's not acceptable to spend minutes, let alone hours or days, finding an optimal solution while services remain unavailable.
So, instead of trying to solve the problem completely, Pacemaker uses a best-effort algorithm to determine which node should host a particular service. This means it arrives at a solution much faster than traditional linear programming algorithms would, but at the price of potentially leaving some services stopped.
In the contrived example at the start of this section:

- rsc-small would be allocated to node1
- rsc-medium would be allocated to node2
- rsc-large would remain inactive

Which is not ideal.
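To make this outcome concrete, here is a minimal sketch of the kind of utilization configuration that could produce it. The specific values (two nodes each advertising 2 cpus, and resources requiring 1, 2 and 3 cpus respectively) are assumptions for illustration, as are the Dummy resource agents; note that utilization only influences placement when the placement-strategy cluster option is set to something other than its default.

    <nodes>
      <node id="node1" uname="node1">
        <utilization id="node1-utilization">
          <!-- assumed capacity: 2 cpus per node -->
          <nvpair id="node1-utilization-cpu" name="cpu" value="2"/>
        </utilization>
      </node>
      <node id="node2" uname="node2">
        <utilization id="node2-utilization">
          <nvpair id="node2-utilization-cpu" name="cpu" value="2"/>
        </utilization>
      </node>
    </nodes>
    ...
    <primitive id="rsc-small" class="ocf" provider="heartbeat" type="Dummy">
      <utilization id="rsc-small-utilization">
        <nvpair id="rsc-small-utilization-cpu" name="cpu" value="1"/>
      </utilization>
    </primitive>
    <primitive id="rsc-medium" class="ocf" provider="heartbeat" type="Dummy">
      <utilization id="rsc-medium-utilization">
        <nvpair id="rsc-medium-utilization-cpu" name="cpu" value="2"/>
      </utilization>
    </primitive>
    <primitive id="rsc-large" class="ocf" provider="heartbeat" type="Dummy">
      <utilization id="rsc-large-utilization">
        <!-- rsc-large needs 3 cpus, but neither node advertises
             more than 2, so it is left stopped -->
        <nvpair id="rsc-large-utilization-cpu" name="cpu" value="3"/>
      </utilization>
    </primitive>

With these numbers, placing rsc-small and rsc-medium consumes all of the advertised capacity, leaving no node able to host the 3 cpus that rsc-large requires.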
There are various approaches to dealing with the limitations of Pacemaker's placement strategy:
- Ensure you have sufficient physical capacity.
It might sound obvious, but if the physical capacity of your nodes is (close to) maxed out by the cluster under normal conditions, then failover isn’t going to go well. Even without the utilization feature, you’ll start hitting timeouts and getting secondary failures.
- Build some buffer into the capabilities advertised by the nodes.
Advertise slightly more resources than you physically have, on the (usually valid) assumption that a resource will not use 100% of its configured amount of CPU, memory and so forth all the time. This practice is sometimes called overcommit (see the first sketch after this list).
- Specify resource priorities.
If the cluster is going to sacrifice services, it should be the ones you care about (comparatively) the least. Ensure that resource priorities are properly set, so that your most important resources are scheduled first (see the second sketch after this list).
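As a sketch of the overcommit approach, a node with four physical cores might advertise five, relying on the assumption that resources rarely use their full configured amounts at the same time. The numbers here are illustrative:

    <node id="node1" uname="node1">
      <utilization id="node1-utilization">
        <!-- node1 physically has 4 cores but advertises 5 (overcommit) -->
        <nvpair id="node1-utilization-cpu" name="cpu" value="5"/>
      </utilization>
    </node>

How much buffer is safe depends on how bursty your workloads are; overcommitting too aggressively just reintroduces the timeouts and secondary failures described above.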
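Priorities are set with the priority resource meta-attribute: resources with higher priority are allocated to nodes first, and the default is 0. A minimal sketch, using a hypothetical resource name:

    <primitive id="rsc-important" class="ocf" provider="heartbeat" type="Dummy">
      <meta_attributes id="rsc-important-meta">
        <!-- hypothetical example: with priority 100, this resource is
             scheduled before (and kept running in preference to)
             default-priority resources -->
        <nvpair id="rsc-important-meta-priority" name="priority" value="100"/>
      </meta_attributes>
    </primitive>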