Disabling High Availability

To disable High Availability, all leaves in availability group 2 must be removed from the cluster. They can then either be left out of the cluster or re-added into availability group 1.

Before You Start

If the cluster is configured with leaf_failover_fanout set to load_balanced, change it to paired before removing the leaves:

SHOW VARIABLES LIKE '%fanout%';
+----------------------+---------------+
| Variable_name        | Value         |
+----------------------+---------------+
| leaf_failover_fanout | load_balanced |
+----------------------+---------------+
1 row in set (0.00 sec)
SET GLOBAL leaf_failover_fanout='paired';
Query OK, 0 rows affected (0.01 sec)
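
To confirm the change took effect, re-run the query; the value should now read paired:

SHOW VARIABLES LIKE 'leaf_failover_fanout';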

Step 1. Remove all leaves in availability group 2

List the leaves in the cluster to find the ones in availability group 2:

sdb-admin list-nodes -r leaf
+------------+------------+-------+------+---------------+--------------+---------+----------------+--------------------+
| MemSQL ID  |    Role    | Host  | Port | Process State | Connectable? | Version | Recovery State | Availability Group |
+------------+------------+-------+------+---------------+--------------+---------+----------------+--------------------+
| 4DAE7D1F54 | Leaf       | node1 | 3310 | Running       | True         | 6.7.7   | Recovering     | 2                  |
| 52CA34CD0C | Leaf       | node1 | 3311 | Running       | True         | 6.7.7   | Online         | 2                  |
| 74BBE83C45 | Leaf       | node1 | 3308 | Running       | True         | 6.7.7   | Online         | 1                  |
| A6D82670D8 | Leaf       | node1 | 3309 | Running       | True         | 6.7.7   | Online         | 1                  |
+------------+------------+-------+------+---------------+--------------+---------+----------------+--------------------+
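
Alternatively, if you are connected to the master aggregator with a SQL client, the SHOW LEAVES command also reports each leaf's availability group:

SHOW LEAVES;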

Then run sdb-admin remove-leaf on each leaf in availability group 2:

sdb-admin remove-leaf --memsql-id 4DAE7D1F54 -y
sdb-admin remove-leaf --memsql-id 52CA34CD0C -y
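
If availability group 2 contains more than a couple of leaves, a small shell loop over their MemSQL IDs saves repetition. This is a sketch using the two IDs from the example output above; substitute your own:

# Remove each availability group 2 leaf by its MemSQL ID
# (IDs taken from the example output above).
for id in 4DAE7D1F54 52CA34CD0C; do
  sdb-admin remove-leaf --memsql-id "$id" -y
done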

Run sdb-admin list-nodes again to verify only the two leaves in availability group 1 are listed.

sdb-admin list-nodes -r leaf
+------------+------+-------+------+---------------+--------------+---------+----------------+--------------------+
| MemSQL ID  | Role | Host  | Port | Process State | Connectable? | Version | Recovery State | Availability Group |
+------------+------+-------+------+---------------+--------------+---------+----------------+--------------------+
| 74BBE83C45 | Leaf | node1 | 3308 | Running       | True         | 6.7.7   | Online         | 1                  |
| A6D82670D8 | Leaf | node1 | 3309 | Running       | True         | 6.7.7   | Online         | 1                  |
+------------+------+-------+------+---------------+--------------+---------+----------------+--------------------+

Step 2. Update the redundancy_level value

On the master aggregator, run:

SET @@GLOBAL.redundancy_level = 1;

This updates the running configuration and sets the cluster to operate in redundancy-1 mode.

In addition, update the SingleStore configuration file on the master aggregator so that the change is not lost when the node is restarted:

sdb-admin list-nodes --role master -q | xargs -I % sdb-admin update-config --memsql-id % --key redundancy_level --value 1 --set-global -y
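
To confirm the new value, query the variable on the master aggregator:

SELECT @@GLOBAL.redundancy_level;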

Step 3. Re-add leaves into availability group 1 (optional)

Optionally, you can re-add the removed leaves into availability group 1. Doing so also requires rebalancing the partitions and clearing the orphaned ones.

List the nodes in your cluster so you know which ones to add back in:

sdb-admin list-nodes
+------------+------------+-------+------+---------------+--------------+---------+----------------+--------------------+
| MemSQL ID  |    Role    | Host  | Port | Process State | Connectable? | Version | Recovery State | Availability Group |
+------------+------------+-------+------+---------------+--------------+---------+----------------+--------------------+
| 43F1B836D3 | Master     | node1 | 3306 | Running       | True         | 6.7.7   | Online         |                    |
| E4921A995C | Aggregator | node1 | 3307 | Running       | True         | 6.7.7   | Online         |                    |
| 74BBE83C45 | Leaf       | node1 | 3308 | Running       | True         | 6.7.7   | Online         | 1                  |
| A6D82670D8 | Leaf       | node1 | 3309 | Running       | True         | 6.7.7   | Online         | 1                  |
| 4DAE7D1F54 | Unknown    | node1 | 3310 | Running       | True         | 6.7.7   | Recovering     |                    |
| 52CA34CD0C | Unknown    | node1 | 3311 | Running       | True         | 6.7.7   | Recovering     |                    |
+------------+------------+-------+------+---------------+--------------+---------+----------------+--------------------+

To re-add the leaves into availability group 1, run the following for each leaf node you previously removed:

sdb-admin add-leaf --memsql-id 4DAE7D1F54
sdb-admin add-leaf --memsql-id 52CA34CD0C
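
As with removal, a short shell loop handles several leaves at once. Again, the IDs below are the two from the example output and are placeholders for your own:

# Re-add each previously removed leaf by its MemSQL ID.
for id in 4DAE7D1F54 52CA34CD0C; do
  sdb-admin add-leaf --memsql-id "$id"
done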

Run sdb-admin list-nodes one more time to confirm that the leaves have been added back into the cluster, now in availability group 1.

sdb-admin list-nodes
+------------+------------+-------+------+---------------+--------------+---------+----------------+--------------------+
| MemSQL ID  |    Role    | Host  | Port | Process State | Connectable? | Version | Recovery State | Availability Group |
+------------+------------+-------+------+---------------+--------------+---------+----------------+--------------------+
| 43F1B836D3 | Master     | node1 | 3306 | Running       | True         | 6.7.7   | Online         |                    |
| E4921A995C | Aggregator | node1 | 3307 | Running       | True         | 6.7.7   | Online         |                    |
| 4DAE7D1F54 | Leaf       | node1 | 3310 | Running       | True         | 6.7.7   | Recovering     | 1                  |
| 52CA34CD0C | Leaf       | node1 | 3311 | Running       | True         | 6.7.7   | Recovering     | 1                  |
| 74BBE83C45 | Leaf       | node1 | 3308 | Running       | True         | 6.7.7   | Online         | 1                  |
| A6D82670D8 | Leaf       | node1 | 3309 | Running       | True         | 6.7.7   | Online         | 1                  |
+------------+------------+-------+------+---------------+--------------+---------+----------------+--------------------+

Next, rebalance the cluster by running REBALANCE ALL DATABASES.

REBALANCE ALL DATABASES;
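
If you want to preview the planned move operations before running them, SingleStore also supports an EXPLAIN form of the rebalance command on a per-database basis; db_name below is a placeholder for one of your databases:

EXPLAIN REBALANCE PARTITIONS ON db_name;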

The cluster will now contain orphan partitions (the former replica partitions). To list them, run SHOW CLUSTER STATUS; to delete them, run the following on the master aggregator:

CLEAR ORPHAN DATABASES;
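
Afterwards, run SHOW CLUSTER STATUS again to confirm that no orphan partitions remain:

SHOW CLUSTER STATUS;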
