An availability group is a set of leaves that store data redundantly to ensure high availability. Each availability group contains a copy of every partition in the system, some as primaries and some as replicas. Currently, SingleStore supports up to two availability groups. You can set the number of availability groups via the `redundancy_level` variable on the Master Aggregator. From this point forward, we'll discuss the redundancy-2 case.
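As a sketch, enabling two availability groups is a one-line change on the Master Aggregator (for an existing cluster the full workflow may involve additional steps, so consult the high availability documentation):

```sql
-- Run on the Master Aggregator: switch the cluster to redundancy-2.
SET @@GLOBAL.redundancy_level = 2;

-- Verify the current setting.
SELECT @@GLOBAL.redundancy_level;
```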
The placement of replica partitions in a cluster can be specified via the `leaf_failover_fanout` variable. SingleStore supports two modes for partition placement: `paired` and `load_balanced`. In `paired` mode, each leaf in an availability group has a corresponding pair node in the other availability group. Each leaf has its own primary partitions, which SingleStore synchronizes to its pair as replica partitions. In other words, each leaf backs up its pair and vice versa, so every leaf stores both primary and replica partitions. In the event of a failure, SingleStore automatically promotes the replica partitions on the failed leaf's pair.
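One way to observe this layout is to inspect partition placement for a database. A sketch, assuming a database named `db` (the role labels in the output vary by version):

```sql
-- List each partition of the hypothetical database `db` along with the
-- leaf that hosts it and its role. In paired mode, every leaf should
-- appear with both primary and replica partitions (roles may be labeled
-- Master/Slave or Primary/Replica depending on the SingleStore version).
SHOW PARTITIONS ON db;
```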
In `load_balanced` mode, primary partitions are placed evenly across the leaves. The primary partitions on each leaf in an availability group have their replica partitions spread evenly among a set of leaves in the opposite availability group.
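Selecting a placement mode is likewise a single variable change on the Master Aggregator; a sketch (the mode typically governs placement for databases created after it is set):

```sql
-- Run on the Master Aggregator. Valid values are 'paired' and
-- 'load_balanced'; the setting controls how replica partitions are placed.
SET @@GLOBAL.leaf_failover_fanout = 'load_balanced';
```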
For more information, see Managing High Availability.
By default, the `ADD LEAF` command adds a leaf into the smaller of the two groups. However, if you know your cluster's topology in advance, you can specify the group explicitly with the `INTO GROUP N` suffix; see the sketch below. By grouping together machines that share resources like a network switch or power supply, you can isolate common hardware failures to a single group and dramatically improve the cluster's uptime.
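For example (the hostnames and credentials below are hypothetical, but the shape of the command is real):

```sql
-- Default placement: the new leaf joins the smaller availability group.
ADD LEAF root:'secret'@'leaf-3.example.internal':3306;

-- Explicit placement: put the leaf into availability group 2, e.g. because
-- it shares a rack (switch/power supply) with the other group-2 machines.
ADD LEAF root:'secret'@'leaf-4.example.internal':3306 INTO GROUP 2;
```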
The `SHOW LEAVES` command displays which availability group each leaf belongs to.
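A sketch of what to expect, with columns abbreviated and hypothetical hostnames (actual output varies by version and topology):

```sql
SHOW LEAVES;
-- +-------------------------+------+--------------------+-------------------------+-----------+--------+
-- | Host                    | Port | Availability_Group | Pair_Host               | Pair_Port | State  |
-- +-------------------------+------+--------------------+-------------------------+-----------+--------+
-- | leaf-1.example.internal | 3306 |                  1 | leaf-2.example.internal |      3306 | online |
-- | leaf-2.example.internal | 3306 |                  2 | leaf-1.example.internal |      3306 | online |
-- +-------------------------+------+--------------------+-------------------------+-----------+--------+
```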