Quorum: Node Majority

July 15, 2024
Ivan Ischenko
StarWind Pre-Sales Team Lead. Ivan has a deep knowledge of virtualization, strong background in storage technologies, and solution architecture.

Prevents split-brain scenarios and ensures data consistency by maintaining quorum during a cluster node failure.

Intro 

Nowadays, most business operations depend directly on applications and, as a result, on the underlying IT infrastructure. Providing 24/7/365 uptime and high availability for these applications is therefore essential for smooth business operation. In addition, more businesses are deploying geographically distributed (stretched) clusters, which are usually limited in connectivity between locations, and this can lead to storage outages and data inconsistency.

Problem 

A clustered environment is a unified group of servers (nodes) that appears as a single entity to applications and provides high availability, load balancing, and scalability. However, in clusters with an even number of nodes, if half of the nodes fail, the cluster goes offline, leaving you with no access to data or applications. In stretched clusters, even a simple network connectivity issue can render the cluster inoperable.

In addition, when clustered nodes lack a proper voting mechanism and lose network communication with each other, a split-brain may occur, leading to data corruption. In that scenario, the only way to recover the data is to restore it from backups. When cluster nodes are distributed between different locations, redundant network connections between the sites would avoid a single point of failure; however, running several fiber connections is not always possible and often becomes costly.
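To see why an even number of voters is fragile, the strict-majority rule behind quorum voting can be sketched in a few lines. This is an illustrative model, not StarWind's actual implementation; the function name `has_quorum` is an assumption for the example.

```python
def has_quorum(votes_visible: int, total_votes: int) -> bool:
    # A partition may keep serving only if it sees a strict
    # majority of all votes in the cluster.
    return votes_visible > total_votes // 2

# 2-node cluster, network split: each node sees only its own vote.
print(has_quorum(1, 2))  # False on both sides -> the cluster goes offline
```

With two voters, neither side of a split ever holds a strict majority, so the cluster must stop to avoid split-brain, which is exactly the outage described above.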

Solution 

To ensure the availability of a cluster and all of its VMs and applications when there is an even number of nodes, StarWind introduces a so-called Witness node as part of its Node Majority mechanism. The Witness node is a separate instance that takes part in quorum voting yet requires minimal resources to deploy. Introducing a Witness node into a StarWind environment ensures that a 2-node cluster continues running if either node fails.

With a separate StarWind Witness node, the possibility of a split-brain is eliminated: the extra instance provides the tie-breaking vote that forms a quorum and keeps the cluster available. Moreover, the Witness instance can be deployed in the cloud, so a cluster distributed across locations can achieve maximum uptime without additional hardware to maintain.
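Extending the earlier majority sketch shows how the Witness breaks the tie. Again, this is a simplified illustration of node-majority voting in general, not StarWind's internal logic; the names and vote counts are assumptions for the example.

```python
def has_quorum(votes_visible: int, total_votes: int) -> bool:
    # A partition keeps serving only with a strict majority of votes.
    return votes_visible > total_votes // 2

TOTAL = 3  # two data nodes plus one Witness vote

# Network split: node A still reaches the Witness, node B is isolated.
a_side_votes = 2  # node A + Witness
b_side_votes = 1  # node B alone

print(has_quorum(a_side_votes, TOTAL))  # True  -> keeps serving
print(has_quorum(b_side_votes, TOTAL))  # False -> steps down, no split-brain
```

Because the total vote count is now odd, exactly one side of any split can hold a majority, so one partition always stays online and the other safely stands down.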

Conclusion 

The StarWind Node Majority mechanism ensures data consistency. Moreover, StarWind provides flexible options for setting up a Witness, including a cloud witness, making it exceptionally simple to implement. Whatever your IT infrastructure configuration, StarWind maximizes its performance, maintaining constant application uptime and high availability regardless of any disaster.