High Availability Architecture
SingleStoreDB ensures high availability by storing data redundantly across sets of leaves called availability groups.
With redundancy-2, SingleStoreDB handles node failures by promoting the appropriate replica partitions to masters so that your databases remain online.
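As a quick check, you can confirm which redundancy mode the cluster is running in by reading the `redundancy_level` engine variable from the Master Aggregator. This is a minimal sketch; the variable name is standard in SingleStoreDB, but verify it against your version's documentation:

```sql
-- Run on the Master Aggregator.
-- A value of 1 means redundancy-1 (no replicas);
-- a value of 2 means redundancy-2 (each partition has a replica).
SHOW VARIABLES LIKE 'redundancy_level';
```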
When you recover a leaf in redundancy-1, by default it is reintroduced to the cluster with all the data that was on it restored.
Similarly, when you recover a leaf in redundancy-2, by default it will be reintroduced to the cluster.
If that is the only instance of that partition (in other words, if the pair of this leaf is also down), the partition will be reintroduced to the cluster as the master.
If there is another instance of that partition on the pair of the leaf, the newly recovered instance will become a replica of the existing partition instance.
If there is any data divergence between the two instances, the partition instance on the newly recovered leaf will be discarded, and a new replica partition instance will be introduced, with data replicated from the existing copy.
If there is another instance of that partition, but it is on a leaf that is not the pair of the recovered leaf, then the recovered partition instance will be marked as an orphan.
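To see how the recovery scenarios above played out, that is, which instance of each partition is currently a master, a replica, or detached, you can inspect partition metadata on the Master Aggregator. A minimal sketch, where `db` is a placeholder for your database name:

```sql
-- Run on the Master Aggregator.
-- Lists each partition instance of `db` with its host, port, and role,
-- which shows where masters and replicas currently live after recovery.
SHOW PARTITIONS ON db;
```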
You can disable the master aggregator from automatically attaching leaves that become visible by setting the global variable `auto_attach` to `OFF`.
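A sketch of turning this behavior off, using the `auto_attach` global variable; after this, recovered leaves must be attached manually:

```sql
-- Run on the Master Aggregator.
-- Stops the master aggregator from automatically reattaching
-- leaves that come back online.
SET GLOBAL auto_attach = OFF;
```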
Every high availability command in SingleStoreDB is online.
High availability commands can only be run on the Master Aggregator.
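For illustration, a few common high availability commands, all issued on the Master Aggregator; `leaf-host`, the port, and `db` are placeholders for your own topology:

```sql
-- Manually reintroduce a recovered leaf that was not attached automatically.
ATTACH LEAF 'leaf-host':3306;

-- Recreate any missing replica partitions for database `db` (redundancy-2).
RESTORE REDUNDANCY ON db;

-- Redistribute partitions of `db` evenly across the available leaves.
REBALANCE PARTITIONS ON db;
```

Because every high availability command is online, these can be run while the cluster continues to serve queries.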
Last modified: April 3, 2023