Upgrade SingleStore

Note

SingleStore betas and release candidates cannot be upgraded unless explicitly stated in the release notes.

SingleStore cannot be upgraded via tarball if it was initially installed/deployed using a package manager and vice versa.

Step 1: Upgrade Toolbox

SingleStore recommends upgrading to the latest version of Toolbox prior to upgrading your cluster.

With Internet Access

Red Hat

sudo yum install singlestoredb-toolbox -y

Debian

  1. As SingleStore packages are signed to ensure integrity, the GPG key must be added to this host.

    To add the GPG key and verify that the SingleStore signing key has been added, run either of the following commands:

    wget -O - 'https://release.memsql.com/release-aug2018.gpg' 2>/dev/null | sudo apt-key add - && apt-key list

    Without using apt-key:

    wget -q -O - 'https://release.memsql.com/release-aug2018.gpg' | sudo tee /etc/apt/trusted.gpg.d/memsql.asc 1>/dev/null
  2. Upgrade Toolbox.

    sudo apt install singlestoredb-toolbox -y

Tarball

  1. Download the latest version of the singlestoredb-toolbox tarball and extract it via tar xzvf.

  2. Change to the singlestoredb-toolbox directory and run all Toolbox commands from this directory.

Without Internet Access

Download the latest RPM, Debian, or tarball singlestoredb-toolbox file to a location accessible by your cluster.

Red Hat

sudo yum install /path/to/singlestoredb-toolbox.rpm -y

Debian

sudo apt install /path/to/singlestoredb-toolbox.deb -y

Tarball

  1. Extract the singlestoredb-toolbox tarball via tar xzvf.

  2. Change to the singlestoredb-toolbox directory and run all Toolbox commands from this directory.

Step 2: Upgrade SingleStore

Warning

Critical cluster operations, such as an upgrade, must not be interrupted.

Do not shut down your cluster prior to starting the upgrade. If the cluster or individual nodes are offline when the upgrade is started, the upgrade will fail.

If upgrading from SingleStore 7.0 and later with DR clusters created via Replication, SingleStore recommends that you upgrade your DR secondary cluster(s) one at a time, and then upgrade your primary cluster last so that replication will continue to work after each upgrade.

If an SSH connection to a server is interrupted or lost during an upgrade, the cluster can be left in a non-standard state. SingleStore therefore recommends running the upgrade session in a terminal multiplexer such as tmux or screen. This decouples the upgrade (or any other operation) from the SSH connection and allows you to reattach to the running session if the connection drops.
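For example, the upgrade can be run inside a detachable tmux session (the session name upgrade is an arbitrary choice):

```shell
# Start a named tmux session (the name "upgrade" is arbitrary)
tmux new-session -s upgrade

# Inside the session, run the upgrade as usual
sdb-deploy upgrade

# If the SSH connection drops, reconnect to the server and reattach
tmux attach-session -t upgrade
```

Because the upgrade runs inside the tmux session rather than the SSH session itself, a dropped connection does not interrupt it.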

You cannot downgrade from your current version.

There are two available options for upgrading a cluster:

  • Offline Upgrade

    The simplest and preferred upgrade option is an offline upgrade. It is the least error-prone and easiest to execute; however, it requires downtime as all of the nodes in the cluster are upgraded at the same time. Your cluster will be shut down and restarted over the course of the upgrade.

    If the cluster is running with high availability (HA), you also have the option to perform an incremental online upgrade, which maintains cluster availability throughout the upgrade process.

  • Online Upgrade

    For high availability (HA) clusters only. With this option, the cluster will not be shut down over the course of the upgrade. Nodes will be restarted in a specific sequence to ensure that DML-based workloads will still function.

    An online upgrade may fail if either a long-running workload that writes to the database or a workload that manipulates SingleStore files (such as an automated backup or maintenance script) is running on the target cluster. SingleStore recommends performing an online upgrade only after these workloads have completed.

Toolbox versions 1.11.7 and later provide the option to retry a failed online upgrade. If the retried online upgrade also fails, an offline upgrade will be attempted.

When upgrading your cluster:

  • If you do not specify a version, your cluster will be upgraded to the latest version and patch release of SingleStore.

  • If you specify a major version, your cluster will be upgraded to the latest patch release of that version.

  • To upgrade to a specific version and patch release, use the --version option.

Refer to SingleStore release notes for available patch versions and sdb-deploy upgrade for more information.
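The version-targeting behavior above corresponds to the following invocations (the version numbers shown are illustrative):

```shell
# No version specified: upgrade to the latest version and patch release
sdb-deploy upgrade

# Major version only: upgrade to the latest patch release of that version
sdb-deploy upgrade --version 8.9

# Specific version and patch release
sdb-deploy upgrade --version 8.9.1
```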

Prior to upgrading your cluster, SingleStore recommends that you take a backup as a standard precautionary measure. Refer to Back Up and Restore Data for more information. The sdb-deploy upgrade command will perform a snapshot of all databases prior to upgrade.
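As a sketch, a pre-upgrade backup of a single database can be taken with the BACKUP DATABASE command; the database name (test) and backup path used here are placeholders:

```shell
# Back up the "test" database to a local directory before upgrading
# (run on the master aggregator; substitute your database name and path)
sudo memsqlctl query --sql 'BACKUP DATABASE test TO "/backups/pre-upgrade/";'
```

Repeat for each database you need to protect, or refer to Back Up and Restore Data for other backup targets.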

Note

When upgrading the cluster to SingleStore 8.1.26 and later, the exporter process may fail to start with the following message:

memsql_exporter.go:1001 failed reading ini file: open /etc/memsql/memsql_exporter.cnf: permission denied

This occurs because the root user owns the memsql_exporter.cnf file, whereas Toolbox commands run as the memsql user. Changing the ownership of the memsql_exporter.cnf file to memsql:memsql properly configures monitoring and allows the exporter process to start.
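For example, using the configuration file path from the error message above:

```shell
# Transfer ownership of the exporter configuration file to the memsql user
sudo chown memsql:memsql /etc/memsql/memsql_exporter.cnf
```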

Pre-Upgrade Confirmation

Note: The cluster will not be upgraded when running this command.

Red Hat & Debian

Confirm that the cluster can be upgraded.

sdb-deploy upgrade --precheck-only

Tarball

  1. Change to the singlestoredb-toolbox directory.

  2. Confirm that the cluster can be upgraded.

    ./sdb-deploy upgrade --precheck-only

Typical output from a cluster that is ready to be upgraded:

Toolbox will perform the following actions:
  · Download singlestoredb-server x.yy.zx

Would you like to continue? [y/N]: y
✓ Downloaded singlestoredb-server production:latest
Toolbox is about to perform following checks:
  · Cluster health checks:
    - Check that all nodes are online and healthy
    - Check that all partitions are healthy
  · Check that there are no pending rebalance operations
  · Take snapshots of all databases

Would you like to continue? [y/N]: y
Checking cluster status
✓ Nodes are online
✓ Partitions are healthy

✓ Snapshots completed
✓ All checks passed successfully
Operation completed successfully

Clusters with Internet Access

Offline Upgrade

Red Hat & Debian

sdb-deploy upgrade --version 8.9

Tarball

  1. Change to the singlestoredb-toolbox directory.

  2. Upgrade the cluster.

    ./sdb-deploy upgrade --version 8.9

Online Upgrade

Red Hat & Debian

sdb-deploy upgrade --online --version 8.9

Tarball

  1. Change to the singlestoredb-toolbox directory.

  2. Upgrade the cluster.

    ./sdb-deploy upgrade --online --version 8.9

Clusters without Internet Access

Download the latest RPM, Debian, or tarball singlestoredb-server file to a location accessible by your cluster. This file contains both the SingleStore binary and the low-level management tool, memsqlctl.

Run the sdb-deploy upgrade command and reference the appropriate file in the --file-path option. Running sdb-deploy upgrade (versus upgrading the package via the package manager) will perform an offline restart of all the nodes to ensure the cluster is using the new version.

Offline Upgrade

Red Hat

sdb-deploy upgrade --file-path /path/to/singlestoredb-server.rpm

Debian

sdb-deploy upgrade --file-path /path/to/singlestoredb-server.deb

Tarball

  1. Change to the singlestoredb-toolbox directory.

  2. Upgrade the cluster.

    ./sdb-deploy upgrade --file-path /path/to/singlestoredb-server.tar.gz

Online Upgrade

Red Hat

sdb-deploy upgrade --online --file-path /path/to/singlestoredb-server.rpm

Debian

sdb-deploy upgrade --online --file-path /path/to/singlestoredb-server.deb

Tarball

  1. Change to the singlestoredb-toolbox directory.

  2. Upgrade the cluster.

    ./sdb-deploy upgrade --online --file-path /path/to/singlestoredb-server.tar.gz

Confirm that the Upgrade Succeeded

Toolbox displays the progress of the upgrade and reports whether the upgrade succeeded. While typically not required, you may also perform an in-depth review of the post-upgrade cluster to reaffirm that the upgrade succeeded.

  1. Confirm that all nodes are online and healthy.

    The State column should display online for each node.

    sdb-admin show-cluster
    ✓ Successfully ran 'memsqlctl show-cluster'
    +---------------------+-----------+------+--------------------+-----------+-----------+--------+--------------------+------------------------------+--------+-------------------+
    |        Role         |   Host    | Port | Availability Group | Pair Host | Pair Port | State  | Opened Connections | Average Roundtrip Latency ms | NodeId | Master Aggregator |
    +---------------------+-----------+------+--------------------+-----------+-----------+--------+--------------------+------------------------------+--------+-------------------+
    | Leaf                | 127.0.0.1 | 3307 | 1                  | null      | null      | online | 2                  |                              | 2      |                   |
    | Aggregator (Leader) | 127.0.0.1 | 3306 |                    | null      | null      | online | 0                  | null                         | 1      | Yes               |
    +---------------------+-----------+------+--------------------+-----------+-----------+--------+--------------------+------------------------------+--------+-------------------+
  2. Confirm that all databases are healthy.

    The summary column should display healthy for each database.

    sudo memsqlctl query --sql "SELECT * FROM information_schema.MV_DISTRIBUTED_DATABASES_STATUS;"
    +---------------+----------------+--------------------+---------+--------+-------------+------------+---------+------------+---------------+---------+---------------+
    | database_name | num_partitions | num_sub_partitions | summary | online | replicating | recovering | pending | transition | unrecoverable | offline | sync_mismatch |
    +---------------+----------------+--------------------+---------+--------+-------------+------------+---------+------------+---------------+---------+---------------+
    | test          | 16             | 64                 | healthy | 16     | 16          | 0          | 0       | 0          | 0             | 0       | 0             |
    +---------------+----------------+--------------------+---------+--------+-------------+------------+---------+------------+---------------+---------+---------------+
  3. Confirm that all nodes reflect the version specified in the sdb-deploy upgrade command.

    The Version column displays the version that each node is running.

    sdb-admin list-nodes
    +------------+--------+-----------+------+---------------+--------------+---------+----------------+--------------------+--------------+
    | MemSQL ID  |  Role  |   Host    | Port | Process State | Connectable? | Version | Recovery State | Availability Group | Bind Address |
    +------------+--------+-----------+------+---------------+--------------+---------+----------------+--------------------+--------------+
    | CBDC2807B7 | Master | 127.0.0.1 | 3306 | Running       | True         | 8.9.1   | Online         |                    | 127.0.0.1    |
    | EC33CC5A08 | Leaf   | 127.0.0.1 | 3307 | Running       | True         | 8.9.1   | Online         | 1                  | 127.0.0.1    |
    +------------+--------+-----------+------+---------------+--------------+---------+----------------+--------------------+--------------+

Roll Back from a Failed Upgrade

Currently, SingleStore does not support downgrading directly. Use the following steps to roll back to an earlier version of SingleStore using the backup made at the beginning of this upgrade guide.

Note that a backup created from a given version of the SingleStore engine can only be restored to the same engine version or later.

Note: For tarball-based deployments, first change to the singlestoredb-toolbox directory and prefix the following commands with ./.

  1. Make a backup of the cluster configuration.

    sdb-deploy generate-cluster-file
  2. Delete all of the nodes in the cluster.

    sdb-admin delete-node --stop --all
  3. Use the following command to roll back to an earlier version of the SingleStore engine by removing the engine version(s) you do not want.

    For example, if upgrading to SingleStore 8.0 fails, remove 8.0.

    sdb-deploy uninstall --version 8.0
  4. Unregister all hosts in the cluster.

    sdb-toolbox-config unregister-host --all
  5. Recreate the cluster using the cluster configuration captured in the cluster file. Note that the cluster file may contain the engine version, so be sure to update the cluster file with the engine version you wish to restore.

    sdb-deploy setup-cluster --cluster-file /path/to/cluster/file
  6. Restore the cluster's data from the backup that was made earlier. Refer to Back Up and Restore Data for more information.
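As a sketch, the backup made earlier can be restored with the RESTORE DATABASE command; the database name (test) and backup path are placeholders:

```shell
# Restore the "test" database from the pre-upgrade backup directory
# (run on the master aggregator; substitute your database name and path)
sudo memsqlctl query --sql 'RESTORE DATABASE test FROM "/backups/pre-upgrade/";'
```

Remember that a backup can only be restored to the same engine version it was taken from, or later.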

Last modified: November 15, 2024
