Avoid two groups of servers making independent decisions by requiring a majority for every decision.
In a distributed system, whenever a server takes an action,
it needs to ensure that, in the event of a crash, the results of that action remain available to clients.
This can be achieved by replicating the result to other servers in the cluster. But that leads to the question: how many other servers need to confirm the replication before the original server can be confident
that the update is fully recognized? If the original server waits for too many replications, it will respond slowly, reducing liveness.
But if it doesn't have enough replications, the update could be lost, a failure of safety.
It is critical to balance overall system performance against system continuity.
A cluster agrees that it has received an update when a majority of the nodes in the cluster have acknowledged the update. We call this number a quorum. So if we have a cluster of five nodes, we need a quorum of three. (For a cluster of n nodes, the quorum is ⌊n/2⌋ + 1.)
The quorum also determines how many failures can be tolerated: the size of the cluster minus the quorum. A cluster of five nodes can tolerate two of them failing. In general, if we want to tolerate f failures we need a cluster of size 2f + 1.
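This arithmetic is small enough to sketch directly; `quorum` and `tolerated_failures` are illustrative names, not part of any particular system:

```python
def quorum(n: int) -> int:
    """Smallest majority of an n-node cluster: floor(n/2) + 1."""
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    """Failures the cluster can survive: cluster size minus the quorum."""
    return n - quorum(n)

for n in range(1, 8):
    print(f"{n} servers: quorum {quorum(n)}, tolerates {tolerated_failures(n)} failures")
```

Note that `tolerated_failures(3)` and `tolerated_failures(4)` both return 1, which is why adding a single server to a three-node cluster buys no extra fault tolerance.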
Consider the following two examples that need a quorum:
- Updating data in a cluster of servers. High-Water Mark is used to ensure only data which is guaranteed to be available on a majority of servers is visible to clients.
- Leader election. In Leader and Followers, a leader is selected only if it gets votes from a majority of the servers.
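The first example can be sketched as a write that is reported successful only once a majority of the cluster has it. This is a minimal sketch, not any particular system's implementation; `Replica`, `ack`, and `quorum_write` are hypothetical names, with `ack` standing in for a real replication call over the network:

```python
class Replica:
    """Stand-in for a follower server; ack() plays the role of a network call."""
    def __init__(self, up: bool = True):
        self.up = up

    def ack(self, update: str) -> bool:
        return self.up  # a real follower would persist the update first

def quorum_write(update: str, replicas: list) -> bool:
    """Report success only once a majority of the full cluster has the update."""
    cluster_size = len(replicas) + 1      # the originating server is part of the cluster
    quorum = cluster_size // 2 + 1
    acks = 1                              # the originating server's own copy counts
    for r in replicas:
        if r.ack(update):
            acks += 1
        if acks >= quorum:
            return True                   # safe: the update survives quorum-1 failures
    return False                          # not enough acks; the update could be lost

# Five-node cluster with two followers down: the write still succeeds.
replicas = [Replica(), Replica(), Replica(up=False), Replica(up=False)]
print(quorum_write("x=1", replicas))  # True
```

With three of the four followers down, only two copies exist, so `quorum_write` returns `False` rather than risk losing the update.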
Deciding on the number of servers in a cluster
The cluster can function only if a majority of its servers are up and running.
In systems doing data replication, there are two things to consider:
- The throughput of write operations.
- The number of failures which need to be tolerated.
Every time data is written to the cluster, it needs to be copied to multiple servers. Every additional server adds some overhead to complete this write.
The latency of a data write is directly proportional to the number of servers forming the quorum.
As we will see below, doubling the number of servers in a cluster will reduce throughput to half of the value for the original cluster.
The number of server failures tolerated depends on the size of the cluster. But just adding one more server to an existing cluster
doesn't always give more fault tolerance: adding one server to a three-server cluster doesn't increase failure tolerance.
Considering these two factors, most practical quorum-based systems have cluster sizes of three or five. A five-server cluster tolerates two server failures and
has tolerable data-write throughput of a few thousand requests per second.
Here is an example of how to choose the number of servers, based on the number of tolerated failures
and the approximate impact on throughput.
The throughput column shows approximate relative throughput to highlight how throughput degrades as the number of servers grows.
The numbers will vary from system to system.
As an example, readers can refer to the actual throughput data published in the Raft thesis and the original Zookeeper paper.
| Number of Servers | Quorum | Number of Tolerated Failures | Representative Throughput |
| ----------------- | ------ | ---------------------------- | ------------------------- |
- All consensus implementations such as Zab, Raft, and Paxos are quorum-based.
- Even in systems which don't use consensus, a quorum is used to make sure the latest update is available to at least one server in case of failures or a network partition.
For instance, in databases like Cassandra, a database update can be configured to return success only
after a majority of the servers have updated the record successfully.
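The Cassandra-style configuration mentioned above amounts to mapping a consistency level to a required number of acknowledgments. The sketch below uses level names that mirror Cassandra's (ONE, QUORUM, ALL), but the function itself is a simplified illustration, not Cassandra's implementation:

```python
def required_acks(level: str, replication_factor: int) -> int:
    """Acknowledgments required before a write is reported successful.

    A simplified sketch of consistency-level handling: ONE trades safety
    for latency, ALL trades latency for safety, and QUORUM (a majority
    of the replicas) balances the two.
    """
    if level == "ONE":
        return 1
    if level == "QUORUM":
        return replication_factor // 2 + 1
    if level == "ALL":
        return replication_factor
    raise ValueError(f"unknown consistency level: {level}")

print(required_acks("QUORUM", 3))  # 2
```

With a replication factor of three, a QUORUM write waits for two acknowledgments, so it still succeeds with one replica down but can never silently lose an acknowledged update to a single failure.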
This page is part of:
Patterns of Distributed Systems