Cluster Expansion Steps

If your SingleStore workload changes over time, you can scale your cluster to:

  • Improve query performance: Adding database partitions generally allows a query to run faster because the data in each partition is processed in parallel. To take full advantage of parallel processing for a single query, you must have at least one CPU core per partition. Depending on your cluster topology, you may want to add more leaf nodes to accommodate the increased number of partitions. Additionally, to improve load balancing across multiple queries, you can add aggregator nodes.

  • Meet increasing memory and storage requirements: You can add additional leaf nodes to increase RAM and disk storage.

The following sections explain how to add database partitions and additional nodes to meet performance, memory, and storage scaling requirements.

Add Database Partitions

You can add database partitions using one of the following methods:

  • Use the split partitions feature to double the number of partitions in the database.

  • Export all data, drop the database, recreate it with the desired number of partitions using the CREATE DATABASE...PARTITIONS... command, recreate all of its tables, and load the data back in.

    Note: Data written after you export your data or take a backup with partition split will not be included in that snapshot. Because a restore only recovers the data included in the snapshot, any data written after the snapshot but before the restore will be lost. To avoid losing data, it is highly recommended that you pause the write workload from the time the snapshot begins until the restore completes.

Caution

To maintain an equal number of partitions per leaf, the number of partitions should be divisible by the number of leaves. For example, suppose a four-leaf cluster was created with four partitions per leaf (16 partitions total). If only two leaf nodes are added, there are six leaves in total; 16 / 6 is not an integer, so there cannot be an equal number of partitions on each leaf, and this will cause data skew. If four additional leaves are added instead, the 16 partitions divide evenly across the 8 leaves (two per leaf).

Also, if you keep the existing number of partitions, no reloading is needed; only a rebalance is required.

Use BACKUP WITH SPLIT PARTITIONS

Use the BACKUP DATABASE WITH SPLIT PARTITIONS command to double the number of partitions in your database backup, so that when the backup is restored, the database has twice as many partitions as before.

SingleStore recommends creating backups with split partitions under resource governance. Running a split-partition backup without resource limits risks overwhelming your cluster.
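As a hedged sketch of what that might look like (the pool name and limits below are illustrative, and the available resource pool options vary by version):

```sql
-- Create a resource pool that caps what the backup can consume.
CREATE RESOURCE POOL backup_pool WITH
    MEMORY_PERCENTAGE = 50,
    SOFT_CPU_LIMIT_PERCENTAGE = 50;

-- Run the split backup inside the pool, then switch back.
SET resource_pool = backup_pool;
BACKUP DATABASE db_name WITH SPLIT PARTITIONS TO '/path/to/backup_location';
SET resource_pool = default_pool;
```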

To back up to a local drive or NFS:

BACKUP [DATABASE] db_name WITH SPLIT PARTITIONS [BY 2] TO "backup_path"

To back up to S3, Azure, or Google Cloud Storage (GCS):

BACKUP [DATABASE] db_name WITH SPLIT PARTITIONS [BY 2] TO [S3 | AZURE | GCS] "backup_path"
[CONFIG configuration_json]
[CREDENTIALS credentials_json]

Note: For more information on CONFIG and CREDENTIALS for S3, Azure, and GCS, refer to the BACKUP DATABASE topic.
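For example, a split backup to S3 might look like the following (the bucket path, region, and credential values are placeholders):

```sql
BACKUP DATABASE CustomerDb WITH SPLIT PARTITIONS
    TO S3 "my-bucket/backups/CustomerDb"
    CONFIG '{"region": "us-east-1"}'
    CREDENTIALS '{"aws_access_key_id": "<key-id>", "aws_secret_access_key": "<secret>"}';
```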

Warning

Partition splitting on columnstore tables can be a CPU-intensive operation. SingleStore recommends that partition splitting be performed while there is no workload or a minimal workload running on the cluster.

Remarks

  • Partitions can only be split by 2. The BY 2 clause is optional.

  • The BACKUP WITH SPLIT PARTITIONS command supports all target locations supported by the normal BACKUP command, including the local filesystem, NFS, S3, Azure, and GCS.

  • Verify that your backup location temporarily has enough disk space, since a partition split doubles the columnstore disk usage within the backup files. For rowstore tables, backup space usage doesn't change significantly: 1R of rowstore data still produces ~1R of data in your backup location after the split. However, 1N of columnstore data produces ~2N of columnstore data in your backup location, because the split copies all the blobs of one partition into the backups of both resulting partitions.

  • You must have SUPER privileges to execute the BACKUP WITH SPLIT PARTITIONS command.

  • You cannot make a backup of an unlimited storage database with BACKUP WITH SPLIT PARTITIONS.

Use Case

Suppose you have a database CustomerDb and you want to double the number of its partitions by performing a backup to a local drive. Follow these steps:

  1. Stop the workload on the database before splitting the partitions. Pause the write workload to avoid losing writes that are not included in the backup. Note: If you want to add leaves, add them before stopping the workload and starting the split-backup process. This ensures the restore runs on the final, larger number of leaves and the restored database starts with balanced partitions. See step 5 for details on adding leaves.

  2. Perform a backup database with split partitions. To take a split backup locally or to an NFS location, run the following command:

    BACKUP DATABASE CustomerDb WITH SPLIT PARTITIONS TO '/path/to/backup_location';
  3. After the backup has completed successfully, drop the database. Note: If you have sufficient disk space, then you may optionally restore the backup to a different database first, for example testDB, to verify that the backup can be restored. You can then drop the original database CustomerDb, restore it with its original name from the new backup, and then drop testDB.

    DROP DATABASE CustomerDb;
  4. Restore the split backup database.

    RESTORE DATABASE CustomerDb FROM '/path/to/backup_location';
  5. You may add more leaves to increase capacity or parallelism. If you did not add leaves before taking the split backup, wait for the restore to complete, then add the leaves and run the REBALANCE PARTITIONS command to spread the partitions evenly across the leaves. To add a leaf, run the following command:

    ADD LEAF user[:'password']@'host'[:port] [INTO GROUP {1|2}]
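As a sketch (the host, password, and database name below are placeholders), adding one leaf and then rebalancing might look like:

```sql
ADD LEAF root:'<secure-password>'@'192.168.1.110':3307;
REBALANCE PARTITIONS ON CustomerDb;
```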

Backing up to S3, Azure, or GCS follows the same procedure, using the cloud-storage syntax shown in the previous section.

Adding Nodes (Leaves and Aggregators)

Before You Begin

Confirm host and leaf configurations are the same across new and existing leaves. Configurations for sizing (RAM, cores) and OS tuning (ulimit settings, THP settings, etc.) should match to ensure uniform query performance and cluster health.

See System Requirements and Recommendations for more details.

It's also important to consider which non-sync engine variables have been modified when increasing your cluster size. Modified non-sync variable settings are not automatically applied to new nodes when they are added to the cluster; therefore, when scaling your cluster, each new node's non-sync variables must be set to the same values as on the existing nodes.
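One way to compare settings, sketched here, is to list the variables on an existing node and on each new node and diff the two result sets:

```sql
-- Run on an existing node and on the new node, then compare the outputs.
SHOW GLOBAL VARIABLES;

-- Or spot-check an individual variable (maximum_memory used here as an example).
SHOW GLOBAL VARIABLES LIKE 'maximum_memory';
```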

Also, generate a cluster report using the sdb-report collect command before adding nodes. This report is useful for troubleshooting if any issues arise during the cluster expansion process.

Add Additional Leaf Nodes to Your Cluster

  1. Log into the host that has the SingleStore management tools installed on it. You will use this host to add new nodes to your cluster.

  2. If your new nodes will reside on a new host (or set of hosts) that wasn’t previously part of the cluster, then you must register the host(s). If you use a certificate to SSH into your hosts, you can use it here with the optional --identity-file flag and sdb-toolbox-config will use it when connecting to the host.

    sdb-toolbox-config register-host --host <hostname|IP>
  3. Download and install the singlestoredb-server package (containing the SingleStore engine and memsqlctl) onto the target host(s). Ensure the new leaf runs the same singlestoredb-server version as the nodes already in your cluster.

    sdb-deploy install --host <hostname-or-IP-address> --version <singlestoredb-server-version>

    For example:

    sdb-deploy install --host 192.168.111.18 --version 7.3.19

    Note

    • If you don’t specify a value for the --version flag, the latest released version of the singlestoredb-server package will be installed.

    • If you added more than one host to your cluster, you can pass in additional --host flags (e.g. --host 172.17.0.1 --host 172.17.0.2).

  4. Run sdb-admin create-node. Specify the hostname that you registered earlier and set a secure password for the root database user. If the default port of 3306 is already being used by a SingleStore node on the target host, specify another port with --port PORT.

    sdb-admin create-node --host <hostname|IP> --password <secure-password>
  5. Assign the new node a role such as aggregator or leaf using either add-aggregator or add-leaf, respectively. If you are running with high availability (i.e., your master aggregator is set to --redundancy-level 2), you can specify an availability group for the new leaf nodes with --availability-group {1,2}, or let it be auto-assigned to the smaller of the two groups.

    sdb-admin add-leaf --memsql-id <MemSQL_ID>
  6. Verify the new node has been added successfully by running sdb-admin list-nodes again.
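You can also verify from the SQL interface on the master aggregator; the new node should be listed with an online state:

```sql
SHOW LEAVES;        -- the new leaf should appear with State = online
SHOW AGGREGATORS;   -- use this instead if you added an aggregator
```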

Redistribute Data Across the Cluster

  1. Once you have successfully added your new nodes, connect to the host running the master aggregator. The singlestore-client package contains a client application that you can use to connect to your database by simply running singlestore at the command prompt.

  2. As a best practice, run the SNAPSHOT DATABASE command to bring transaction logs up to date. As a general rule, taking a snapshot is helpful before any maintenance or clustering operation where data is moved or recovered, as it compacts the accumulated transaction logs into a snapshot.

  3. Run REBALANCE ALL DATABASES.

  4. Check memory and disk usage through SingleStore Studio.

  5. On any aggregator, enter the SQL interface and output SHOW CLUSTER STATUS into a file you can examine to make sure partitions are evenly distributed.

  6. Check for data skew by running the following query in the SQL interface on any aggregator:

SELECT
DATABASE_NAME,
TABLE_NAME,
FLOOR(AVG(ROWS)) AS avg_rows,
ROUND(STDDEV(ROWS)/AVG(ROWS),3) * 100 AS row_skew,
FLOOR(AVG(MEMORY_USE)) AS avg_memory,
ROUND(STDDEV(MEMORY_USE)/AVG(MEMORY_USE),3) * 100 AS memory_skew
FROM INFORMATION_SCHEMA.TABLE_STATISTICS
GROUP BY 1, 2
HAVING SUM(ROWS) > 10000
ORDER BY row_skew DESC;
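Steps 2 through 5 above can be sketched as a single SQL session (db_name is a placeholder; the EXPLAIN REBALANCE line is an optional preview of the planned partition moves before anything is executed):

```sql
SNAPSHOT DATABASE db_name;                -- step 2: bring transaction logs up to date
EXPLAIN REBALANCE PARTITIONS ON db_name;  -- optional: preview the planned moves
REBALANCE ALL DATABASES;                  -- step 3: redistribute partitions
SHOW CLUSTER STATUS;                      -- step 5: confirm partitions are evenly spread
```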

Refer to the Detecting and Resolving Data Skew topic for more information.

Note

For more information, refer to the blog Backup With Split Partitions: Robust Partition Split via Backup.

Cluster Expansion in DR Replication

Expanding either the primary or secondary cluster in disaster recovery (DR) replication is the same process as if DR replication were not enabled. Either cluster can be expanded without impacting the other.

Note

These steps apply to clusters that are managed with SingleStore Tools.

In this example, the primary cluster has 2 hosts, 2 leaves, and 8 partitions. Note that the following commands are run from the command line on the Master Aggregator host (54.90.20.80 in this example cluster).

Expand the Primary Cluster

sdb-admin list-nodes
+------------+--------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| MemSQL ID  |  Role  |      Host      | Port | Process State | Connectable? | Version | Recovery State | Availability Group | Bind Address |
+------------+--------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| D3F1BD9E0E | Master | 54.90.20.80    | 3306 | Running       | True         | 7.0.19  | Online         |                    | 0.0.0.0      |
| 5FF108F2F0 | Leaf   | 100.26.238.187 | 3307 | Running       | True         | 7.0.19  | Online         | 2                  | 0.0.0.0      |
| 2B397AE7A1 | Leaf   | 54.90.20.80    | 3307 | Running       | True         | 7.0.19  | Online         | 1                  | 0.0.0.0      |
+------------+--------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
show leaves;
+----------------+------+--------------------+----------------+-----------+--------+--------------------+------------------------------+--------+-------------------------+
| Host           | Port | Availability_Group | Pair_Host      | Pair_Port | State  | Opened_Connections | Average_Roundtrip_Latency_ms | NodeId | Grace_Period_In_seconds |
+----------------+------+--------------------+----------------+-----------+--------+--------------------+------------------------------+--------+-------------------------+
| 100.26.238.187 | 3307 |                  2 | 54.90.20.80    |      3307 | online |                 15 |                        0.213 |      3 |                    NULL |
| 54.90.20.80    | 3307 |                  1 | 100.26.238.187 |      3307 | online |                  9 |                        0.177 |      4 |                    NULL |
+----------------+------+--------------------+----------------+-----------+--------+--------------------+------------------------------+--------+-------------------------+
show partitions on memsql_demo;
+---------+----------------+------+---------+--------+
| Ordinal | Host           | Port | Role    | Locked |
+---------+----------------+------+---------+--------+
|       0 | 54.90.20.80    | 3307 | Master  |      0 |
|       0 | 100.26.238.187 | 3307 | Replica |      0 |
|       1 | 54.90.20.80    | 3307 | Master  |      0 |
|       1 | 100.26.238.187 | 3307 | Replica |      0 |
|       2 | 54.90.20.80    | 3307 | Master  |      0 |
|       2 | 100.26.238.187 | 3307 | Replica |      0 |
|       3 | 54.90.20.80    | 3307 | Master  |      0 |
|       3 | 100.26.238.187 | 3307 | Replica |      0 |
|       4 | 54.90.20.80    | 3307 | Master  |      0 |
|       4 | 100.26.238.187 | 3307 | Replica |      0 |
|       5 | 54.90.20.80    | 3307 | Master  |      0 |
|       5 | 100.26.238.187 | 3307 | Replica |      0 |
|       6 | 54.90.20.80    | 3307 | Master  |      0 |
|       6 | 100.26.238.187 | 3307 | Replica |      0 |
|       7 | 54.90.20.80    | 3307 | Master  |      0 |
|       7 | 100.26.238.187 | 3307 | Replica |      0 |
+---------+----------------+------+---------+--------+

Add New Nodes

Add two new nodes to the primary cluster.

sdb-admin create-node --host 54.90.20.80 -p<secure-password> --port 3308
Toolbox is about to perform the following actions on host 54.90.20.80:
  · Run 'memsqlctl create-node --port 3308 --password ●●●●●●'

Would you like to continue? [y/N]: y
+------------------------------------------+-------------+
|                MemSQL ID                 |     Host    |
+------------------------------------------+-------------+
| EC862275668F1529AC8F9D78F87896E04BD5DD84 | 54.90.20.80 |
+------------------------------------------+-------------+
sdb-admin create-node --host 100.26.238.187 -p<secure-password> --port 3308
Toolbox is about to perform the following actions on host 100.26.238.187:
  · Run 'memsqlctl create-node --port 3308 --password ●●●●●●'

Would you like to continue? [y/N]: y
+------------------------------------------+----------------+
|                MemSQL ID                 |      Host      |
+------------------------------------------+----------------+
| B8C6534BAE86981FDA49C337390CCA00F89478C9 | 100.26.238.187 |
+------------------------------------------+----------------+

Designate the New Nodes as Leaves

sdb-admin list-nodes
+------------+---------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| MemSQL ID  |  Role   |      Host      | Port | Process State | Connectable? | Version | Recovery State | Availability Group | Bind Address |
+------------+---------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| D3F1BD9E0E | Master  | 54.90.20.80    | 3306 | Running       | True         | 7.0.19  | Online         |                    | 0.0.0.0      |
| 5FF108F2F0 | Leaf    | 100.26.238.187 | 3307 | Running       | True         | 7.0.19  | Online         | 2                  | 0.0.0.0      |
| 2B397AE7A1 | Leaf    | 54.90.20.80    | 3307 | Running       | True         | 7.0.19  | Online         | 1                  | 0.0.0.0      |
| B8C6534BAE | Unknown | 100.26.238.187 | 3308 | Running       | True         | 7.0.19  | Online         |                    | 0.0.0.0      |
| EC86227566 | Unknown | 54.90.20.80    | 3308 | Running       | True         | 7.0.19  | Online         |                    | 0.0.0.0      |
+------------+---------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
sdb-admin add-leaf -p<secure-password> --memsql-id B8C6534BAE
Toolbox will perform the following actions on host 54.90.20.80:
  · Run 'memsqlctl add-leaf --host 100.26.238.187 --port 3308 --user root --password ●●●●●●'

Would you like to continue? [y/N]: y
✓ Successfully ran 'memsqlctl add-leaf'
Operation completed successfully
sdb-admin add-leaf -p<secure-password> --memsql-id EC86227566
Toolbox will perform the following actions on host 54.90.20.80:
  · Run 'memsqlctl add-leaf --host 54.90.20.80 --port 3308 --user root --password ●●●●●●'

Would you like to continue? [y/N]: y
✓ Successfully ran 'memsqlctl add-leaf'
Operation completed successfully

There are now two additional leaves on the primary cluster, bringing the total to four. The two original leaves still hold all of the cluster's partitions; the two new leaves don't yet contain any data.

sdb-admin list-nodes
+------------+--------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| MemSQL ID  |  Role  |      Host      | Port | Process State | Connectable? | Version | Recovery State | Availability Group | Bind Address |
+------------+--------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| D3F1BD9E0E | Master | 54.90.20.80    | 3306 | Running       | True         | 7.0.19  | Online         |                    | 0.0.0.0      |
| 5FF108F2F0 | Leaf   | 100.26.238.187 | 3307 | Running       | True         | 7.0.19  | Online         | 2                  | 0.0.0.0      |
| B8C6534BAE | Leaf   | 100.26.238.187 | 3308 | Running       | True         | 7.0.19  | Online         | 1                  | 0.0.0.0      |
| 2B397AE7A1 | Leaf   | 54.90.20.80    | 3307 | Running       | True         | 7.0.19  | Online         | 1                  | 0.0.0.0      |
| EC86227566 | Leaf   | 54.90.20.80    | 3308 | Running       | True         | 7.0.19  | Online         | 2                  | 0.0.0.0      |
+------------+--------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
show leaves;
+----------------+------+--------------------+----------------+-----------+--------+--------------------+------------------------------+--------+-------------------------+
| Host           | Port | Availability_Group | Pair_Host      | Pair_Port | State  | Opened_Connections | Average_Roundtrip_Latency_ms | NodeId | Grace_Period_In_seconds |
+----------------+------+--------------------+----------------+-----------+--------+--------------------+------------------------------+--------+-------------------------+
| 100.26.238.187 | 3307 |                  2 | 54.90.20.80    |      3307 | online |                 13 |                        0.238 |      3 |                    NULL |
| 54.90.20.80    | 3307 |                  1 | 100.26.238.187 |      3307 | online |                  7 |                        0.180 |      4 |                    NULL |
| 100.26.238.187 | 3308 |                  1 | 54.90.20.80    |      3308 | online |                  2 |                        0.167 |      5 |                    NULL |
| 54.90.20.80    | 3308 |                  2 | 100.26.238.187 |      3308 | online |                  2 |                        0.134 |      6 |                    NULL |
+----------------+------+--------------------+----------------+-----------+--------+--------------------+------------------------------+--------+-------------------------+
show partitions on memsql_demo;
+---------+----------------+------+---------+--------+
| Ordinal | Host           | Port | Role    | Locked |
+---------+----------------+------+---------+--------+
|       0 | 100.26.238.187 | 3307 | Master  |      0 |
|       0 | 54.90.20.80    | 3307 | Replica |      0 |
|       1 | 100.26.238.187 | 3307 | Master  |      0 |
|       1 | 54.90.20.80    | 3307 | Replica |      0 |
|       2 | 100.26.238.187 | 3307 | Master  |      0 |
|       2 | 54.90.20.80    | 3307 | Replica |      0 |
|       3 | 100.26.238.187 | 3307 | Master  |      0 |
|       3 | 54.90.20.80    | 3307 | Replica |      0 |
|       4 | 100.26.238.187 | 3307 | Master  |      0 |
|       4 | 54.90.20.80    | 3307 | Replica |      0 |
|       5 | 100.26.238.187 | 3307 | Master  |      0 |
|       5 | 54.90.20.80    | 3307 | Replica |      0 |
|       6 | 100.26.238.187 | 3307 | Master  |      0 |
|       6 | 54.90.20.80    | 3307 | Replica |      0 |
|       7 | 100.26.238.187 | 3307 | Master  |      0 |
|       7 | 54.90.20.80    | 3307 | Replica |      0 |
+---------+----------------+------+---------+--------+

Deploy Data to the New Leaves

As a best practice, run the SNAPSHOT DATABASE command to bring transaction logs up to date. As a general rule, taking a snapshot is helpful before any maintenance or clustering operation where data is moved or recovered, as it compacts the accumulated transaction logs into a snapshot.

Rebalance the partitions on the primary cluster to deploy them to the new leaves.

While the primary cluster is rebalancing, the secondary cluster is connected, but replication is momentarily paused as the primary cluster switches masters. After this pause, writes and replication will proceed as usual through the remainder of the rebalance. This is the same behavior as if the primary cluster were restarted.

rebalance partitions on memsql_demo;
Query OK, 1 row affected (22.43 sec)
show partitions on memsql_demo;
+---------+----------------+------+---------+--------+
| Ordinal | Host           | Port | Role    | Locked |
+---------+----------------+------+---------+--------+
|       0 | 100.26.238.187 | 3308 | Master  |      0 |
|       0 | 54.90.20.80    | 3308 | Replica |      0 |
|       1 | 54.90.20.80    | 3308 | Master  |      0 |
|       1 | 100.26.238.187 | 3308 | Replica |      0 |
|       2 | 100.26.238.187 | 3308 | Master  |      0 |
|       2 | 54.90.20.80    | 3308 | Replica |      0 |
|       3 | 54.90.20.80    | 3308 | Master  |      0 |
|       3 | 100.26.238.187 | 3308 | Replica |      0 |
|       4 | 100.26.238.187 | 3307 | Replica |      0 |
|       4 | 54.90.20.80    | 3307 | Master  |      0 |
|       5 | 100.26.238.187 | 3307 | Replica |      0 |
|       5 | 54.90.20.80    | 3307 | Master  |      0 |
|       6 | 100.26.238.187 | 3307 | Master  |      0 |
|       6 | 54.90.20.80    | 3307 | Replica |      0 |
|       7 | 100.26.238.187 | 3307 | Master  |      0 |
|       7 | 54.90.20.80    | 3307 | Replica |      0 |
+---------+----------------+------+---------+--------+

This has no effect on the secondary cluster: the database still has 8 partitions, and they remain on the same leaves of the secondary cluster.

Cluster Before

show partitions on memsql_repl;
+---------+----------------+------+---------+--------+
| Ordinal | Host           | Port | Role    | Locked |
+---------+----------------+------+---------+--------+
|       0 | 54.236.46.17   | 3307 | Master  |      0 |
|       0 | 18.234.106.201 | 3307 | Replica |      0 |
|       1 | 54.236.46.17   | 3307 | Master  |      0 |
|       1 | 18.234.106.201 | 3307 | Replica |      0 |
|       2 | 54.236.46.17   | 3307 | Master  |      0 |
|       2 | 18.234.106.201 | 3307 | Replica |      0 |
|       3 | 54.236.46.17   | 3307 | Master  |      0 |
|       3 | 18.234.106.201 | 3307 | Replica |      0 |
|       4 | 54.236.46.17   | 3307 | Master  |      0 |
|       4 | 18.234.106.201 | 3307 | Replica |      0 |
|       5 | 54.236.46.17   | 3307 | Master  |      0 |
|       5 | 18.234.106.201 | 3307 | Replica |      0 |
|       6 | 54.236.46.17   | 3307 | Master  |      0 |
|       6 | 18.234.106.201 | 3307 | Replica |      0 |
|       7 | 54.236.46.17   | 3307 | Master  |      0 |
|       7 | 18.234.106.201 | 3307 | Replica |      0 |
+---------+----------------+------+---------+--------+

Cluster After

show partitions on memsql_repl;
+---------+----------------+------+---------+--------+
| Ordinal | Host           | Port | Role    | Locked |
+---------+----------------+------+---------+--------+
|       0 | 54.236.46.17   | 3307 | Master  |      0 |
|       0 | 18.234.106.201 | 3307 | Replica |      0 |
|       1 | 54.236.46.17   | 3307 | Master  |      0 |
|       1 | 18.234.106.201 | 3307 | Replica |      0 |
|       2 | 54.236.46.17   | 3307 | Master  |      0 |
|       2 | 18.234.106.201 | 3307 | Replica |      0 |
|       3 | 54.236.46.17   | 3307 | Master  |      0 |
|       3 | 18.234.106.201 | 3307 | Replica |      0 |
|       4 | 54.236.46.17   | 3307 | Master  |      0 |
|       4 | 18.234.106.201 | 3307 | Replica |      0 |
|       5 | 54.236.46.17   | 3307 | Master  |      0 |
|       5 | 18.234.106.201 | 3307 | Replica |      0 |
|       6 | 54.236.46.17   | 3307 | Master  |      0 |
|       6 | 18.234.106.201 | 3307 | Replica |      0 |
|       7 | 54.236.46.17   | 3307 | Master  |      0 |
|       7 | 18.234.106.201 | 3307 | Replica |      0 |
+---------+----------------+------+---------+--------+

Expand the Secondary Cluster

As a rule, if you expand the primary cluster’s storage capacity and/or memory, you must do the same to the secondary cluster.

As the primary cluster has been expanded, the secondary cluster will require additional space to store the additional data from the primary cluster.

Expanding the secondary cluster also provides more cores for read/analytics workloads.

In the example above, memsql_demo from the primary cluster was replicated to memsql_repl on the secondary cluster. The secondary cluster has 2 hosts and 2 leaves. Since memsql_demo has 8 partitions, memsql_repl also has 8 partitions.

sdb-admin list-nodes
+------------+--------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| MemSQL ID  |  Role  |      Host      | Port | Process State | Connectable? | Version | Recovery State | Availability Group | Bind Address |
+------------+--------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| A5B544AC0D | Master | 18.234.106.201 | 3306 | Running       | True         | 7.0.19  | Online         |                    | 0.0.0.0      |
| CAE7ABC3BF | Leaf   | 18.234.106.201 | 3307 | Running       | True         | 7.0.19  | Online         | 1                  | 0.0.0.0      |
| 6CF5699EE0 | Leaf   | 54.236.46.17   | 3307 | Running       | True         | 7.0.19  | Online         | 2                  | 0.0.0.0      |
+------------+--------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
show leaves;
+----------------+------+--------------------+----------------+-----------+--------+--------------------+------------------------------+--------+-------------------------+
| Host           | Port | Availability_Group | Pair_Host      | Pair_Port | State  | Opened_Connections | Average_Roundtrip_Latency_ms | NodeId | Grace_Period_In_seconds |
+----------------+------+--------------------+----------------+-----------+--------+--------------------+------------------------------+--------+-------------------------+
| 54.236.46.17   | 3307 |                  2 | 18.234.106.201 |      3307 | online |                  4 |                        0.193 |      3 |                    NULL |
| 18.234.106.201 | 3307 |                  1 | 54.236.46.17   |      3307 | online |                  2 |                        0.136 |      4 |                    NULL |
+----------------+------+--------------------+----------------+-----------+--------+--------------------+------------------------------+--------+-------------------------+
show partitions on memsql_repl;
+---------+----------------+------+---------+--------+
| Ordinal | Host           | Port | Role    | Locked |
+---------+----------------+------+---------+--------+
|       0 | 54.236.46.17   | 3307 | Master  |      0 |
|       0 | 18.234.106.201 | 3307 | Replica |      0 |
|       1 | 54.236.46.17   | 3307 | Master  |      0 |
|       1 | 18.234.106.201 | 3307 | Replica |      0 |
|       2 | 54.236.46.17   | 3307 | Master  |      0 |
|       2 | 18.234.106.201 | 3307 | Replica |      0 |
|       3 | 54.236.46.17   | 3307 | Master  |      0 |
|       3 | 18.234.106.201 | 3307 | Replica |      0 |
|       4 | 54.236.46.17   | 3307 | Master  |      0 |
|       4 | 18.234.106.201 | 3307 | Replica |      0 |
|       5 | 54.236.46.17   | 3307 | Master  |      0 |
|       5 | 18.234.106.201 | 3307 | Replica |      0 |
|       6 | 54.236.46.17   | 3307 | Master  |      0 |
|       6 | 18.234.106.201 | 3307 | Replica |      0 |
|       7 | 54.236.46.17   | 3307 | Master  |      0 |
|       7 | 18.234.106.201 | 3307 | Replica |      0 |
+---------+----------------+------+---------+--------+

Add New Nodes

Add two new nodes to the secondary cluster.

sdb-admin create-node --host 18.234.106.201 -p<secure-password> --port 3308
Toolbox is about to perform the following actions on host 18.234.106.201:
  · Run 'memsqlctl create-node --port 3308 --password ●●●●●●'

Would you like to continue? [y/N]: y
+------------------------------------------+----------------+
|                MemSQL ID                 |      Host      |
+------------------------------------------+----------------+
| C43AAB03E4174A660CA1AE2B98C60A4D2CD59D68 | 18.234.106.201 |
+------------------------------------------+----------------+
sdb-admin create-node --host 18.234.106.201 -p<secure-password> --port 3309
Toolbox is about to perform the following actions on host 18.234.106.201:
  · Run 'memsqlctl create-node --port 3309 --password ●●●●●●'

Would you like to continue? [y/N]: y
+------------------------------------------+----------------+
|                MemSQL ID                 |      Host      |
+------------------------------------------+----------------+
| 183E23A7577F113A7918B0E87299832D83A96F9B | 18.234.106.201 |
+------------------------------------------+----------------+
sdb-admin list-nodes
+------------+---------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| MemSQL ID  |  Role   |      Host      | Port | Process State | Connectable? | Version | Recovery State | Availability Group | Bind Address |
+------------+---------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| A5B544AC0D | Master  | 18.234.106.201 | 3306 | Running       | True         | 7.0.19  | Online         |                    | 0.0.0.0      |
| CAE7ABC3BF | Leaf    | 18.234.106.201 | 3307 | Running       | True         | 7.0.19  | Online         | 1                  | 0.0.0.0      |
| 6CF5699EE0 | Leaf    | 54.236.46.17   | 3307 | Running       | True         | 7.0.19  | Online         | 2                  | 0.0.0.0      |
| C43AAB03E4 | Unknown | 18.234.106.201 | 3308 | Running       | True         | 7.0.19  | Online         |                    | 0.0.0.0      |
| 183E23A757 | Unknown | 18.234.106.201 | 3309 | Running       | True         | 7.0.19  | Online         |                    | 0.0.0.0      |
+------------+---------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+

Designate the New Nodes as Leaves

sdb-admin add-leaf -p<secure-password> --memsql-id C43AAB03E4
Toolbox will perform the following actions on host 18.234.106.201:
  · Run 'memsqlctl add-leaf --host 18.234.106.201 --port 3308 --user root --password ●●●●●●'

Would you like to continue? [y/N]: y
✓ Successfully ran 'memsqlctl add-leaf'
Operation completed successfully
sdb-admin add-leaf -p<secure-password> --memsql-id 183E23A757
Toolbox will perform the following actions on host 18.234.106.201:
  · Run 'memsqlctl add-leaf --host 18.234.106.201 --port 3309 --user root --password ●●●●●●'

Would you like to continue? [y/N]: y
✓ Successfully ran 'memsqlctl add-leaf'
Operation completed successfully
sdb-admin list-nodes
+------------+--------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| MemSQL ID  |  Role  |      Host      | Port | Process State | Connectable? | Version | Recovery State | Availability Group | Bind Address |
+------------+--------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| A5B544AC0D | Master | 18.234.106.201 | 3306 | Running       | True         | 7.0.19  | Online         |                    | 0.0.0.0      |
| CAE7ABC3BF | Leaf   | 18.234.106.201 | 3307 | Running       | True         | 7.0.19  | Online         | 1                  | 0.0.0.0      |
| C43AAB03E4 | Leaf   | 18.234.106.201 | 3308 | Running       | True         | 7.0.19  | Online         | 1                  | 0.0.0.0      |
| 183E23A757 | Leaf   | 18.234.106.201 | 3309 | Running       | True         | 7.0.19  | Online         | 2                  | 0.0.0.0      |
| 6CF5699EE0 | Leaf   | 54.236.46.17   | 3307 | Running       | True         | 7.0.19  | Online         | 2                  | 0.0.0.0      |
+------------+--------+----------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
show leaves;
+----------------+------+--------------------+----------------+-----------+--------+--------------------+------------------------------+--------+-------------------------+
| Host           | Port | Availability_Group | Pair_Host      | Pair_Port | State  | Opened_Connections | Average_Roundtrip_Latency_ms | NodeId | Grace_Period_In_seconds |
+----------------+------+--------------------+----------------+-----------+--------+--------------------+------------------------------+--------+-------------------------+
| 54.236.46.17   | 3307 |                  2 | 18.234.106.201 |      3307 | online |                  2 |                        0.199 |      3 |                    NULL |
| 18.234.106.201 | 3307 |                  1 | 54.236.46.17   |      3307 | online |                  1 |                        0.133 |      4 |                    NULL |
| 18.234.106.201 | 3308 |                  1 | 18.234.106.201 |      3309 | online |                  1 |                        0.257 |      5 |                    NULL |
| 18.234.106.201 | 3309 |                  2 | 18.234.106.201 |      3308 | online |                  2 |                        1.290 |      6 |                    NULL |
+----------------+------+--------------------+----------------+-----------+--------+--------------------+------------------------------+--------+-------------------------+

Even though the primary cluster has been rebalanced and the secondary cluster now has new leaves, the new leaves do not yet hold any partitions and therefore cannot store data.

show partitions on memsql_repl;
+---------+----------------+------+---------+--------+
| Ordinal | Host           | Port | Role    | Locked |
+---------+----------------+------+---------+--------+
|       0 | 54.236.46.17   | 3307 | Master  |      0 |
|       0 | 18.234.106.201 | 3307 | Replica |      0 |
|       1 | 54.236.46.17   | 3307 | Master  |      0 |
|       1 | 18.234.106.201 | 3307 | Replica |      0 |
|       2 | 54.236.46.17   | 3307 | Master  |      0 |
|       2 | 18.234.106.201 | 3307 | Replica |      0 |
|       3 | 54.236.46.17   | 3307 | Master  |      0 |
|       3 | 18.234.106.201 | 3307 | Replica |      0 |
|       4 | 54.236.46.17   | 3307 | Master  |      0 |
|       4 | 18.234.106.201 | 3307 | Replica |      0 |
|       5 | 54.236.46.17   | 3307 | Master  |      0 |
|       5 | 18.234.106.201 | 3307 | Replica |      0 |
|       6 | 54.236.46.17   | 3307 | Master  |      0 |
|       6 | 18.234.106.201 | 3307 | Replica |      0 |
|       7 | 54.236.46.17   | 3307 | Master  |      0 |
|       7 | 18.234.106.201 | 3307 | Replica |      0 |
+---------+----------------+------+---------+--------+

Deploy Data to the New Leaves

As a best practice, run the SNAPSHOT DATABASE command to bring the transaction logs up to date. In general, taking a snapshot is helpful before any maintenance or clustering operation that moves or recovers data, because it compacts the transaction log by consolidating many logged statements into a single snapshot file.
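For example, using the database name from this guide, the snapshot is taken on the secondary cluster's master aggregator:

```sql
SNAPSHOT DATABASE memsql_repl;
```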

Rebalance the partitions on the secondary cluster to deploy them to the new leaves.

While the secondary cluster is rebalancing, replication is paused momentarily. After this pause, writes and replication proceed as usual for the remainder of the rebalance. This is the same behavior as restarting the secondary cluster, but the operation will likely take longer, depending on how much columnstore data is in the database.

rebalance partitions on memsql_repl;
Query OK, 1 row affected (8.44 sec)
show partitions on memsql_repl;
+---------+----------------+------+---------+--------+
| Ordinal | Host           | Port | Role    | Locked |
+---------+----------------+------+---------+--------+
|       0 | 18.234.106.201 | 3308 | Master  |      0 |
|       0 | 18.234.106.201 | 3309 | Replica |      0 |
|       1 | 18.234.106.201 | 3309 | Master  |      0 |
|       1 | 18.234.106.201 | 3308 | Replica |      0 |
|       2 | 18.234.106.201 | 3308 | Master  |      0 |
|       2 | 18.234.106.201 | 3309 | Replica |      0 |
|       3 | 18.234.106.201 | 3309 | Master  |      0 |
|       3 | 18.234.106.201 | 3308 | Replica |      0 |
|       4 | 54.236.46.17   | 3307 | Replica |      0 |
|       4 | 18.234.106.201 | 3307 | Master  |      0 |
|       5 | 54.236.46.17   | 3307 | Replica |      0 |
|       5 | 18.234.106.201 | 3307 | Master  |      0 |
|       6 | 54.236.46.17   | 3307 | Master  |      0 |
|       6 | 18.234.106.201 | 3307 | Replica |      0 |
|       7 | 54.236.46.17   | 3307 | Master  |      0 |
|       7 | 18.234.106.201 | 3307 | Replica |      0 |
+---------+----------------+------+---------+--------+
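The even distribution can be confirmed at a glance, or programmatically. This minimal Python sketch (the tuples are transcribed from the `show partitions` output above; it does not query the cluster) counts master partitions per leaf:

```python
# Count master partitions per leaf, transcribed from the
# "show partitions on memsql_repl" output after the rebalance.
from collections import Counter

partitions = [
    ("18.234.106.201", 3308, "Master"), ("18.234.106.201", 3309, "Replica"),
    ("18.234.106.201", 3309, "Master"), ("18.234.106.201", 3308, "Replica"),
    ("18.234.106.201", 3308, "Master"), ("18.234.106.201", 3309, "Replica"),
    ("18.234.106.201", 3309, "Master"), ("18.234.106.201", 3308, "Replica"),
    ("54.236.46.17",   3307, "Replica"), ("18.234.106.201", 3307, "Master"),
    ("54.236.46.17",   3307, "Replica"), ("18.234.106.201", 3307, "Master"),
    ("54.236.46.17",   3307, "Master"), ("18.234.106.201", 3307, "Replica"),
    ("54.236.46.17",   3307, "Master"), ("18.234.106.201", 3307, "Replica"),
]

# Group master partitions by (host, port); each leaf should own
# 8 partitions / 4 leaves = 2 masters.
masters = Counter((host, port) for host, port, role in partitions
                  if role == "Master")
for leaf, count in sorted(masters.items()):
    print(leaf, count)
```

With eight partitions and four leaves, each leaf ends up with exactly two master partitions, matching the divisibility guidance above.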

Note that the total partition count (eight) remains the same: rebalancing redistributes existing partitions across the leaves rather than creating new ones.

Last modified: April 29, 2024
