Upgrade to SingleStore 8.9

Note

Please note the following before upgrading your cluster:

  • SingleStore betas and release candidates cannot be upgraded unless explicitly stated in the release notes.

  • As of SingleStore 7.8, a new reserved prefix for column names has been introduced: _$!$.

    • This prefix is reserved for internal use only. Before upgrading SingleStore, SingleStore recommends renaming existing columns prefixed with _$!$ to avoid potential issues in the future.

    • While this is not mandatory, and DML commands will still allow access to existing objects using this prefix, DDL commands will no longer allow column names to be created with this prefix.

  • If any host in the cluster is near or at disk capacity, please increase available storage before upgrading.

  • During the upgrade to SingleStore 8.9, if the value of the fts2_max_connections engine variable is equal to 100000, the value is set to 32.
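The column-rename recommendation above can be sketched in SQL. The table and column names here are hypothetical; verify the ALTER TABLE ... CHANGE syntax against your SingleStore version:

```sql
-- Hypothetical example: rename a column that uses the reserved _$!$ prefix
-- before upgrading. Table and column names are illustrative only.
ALTER TABLE events CHANGE `_$!$legacy_id` legacy_id;
```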

Important Notes About Upgrading

This topic describes how to upgrade SingleStore. Please read the following information thoroughly before upgrading.

Once the upgrade is complete, refer to the Post-Upgrade Considerations section for additional information on behavioral changes that you should be aware of.

Backups

MemSQL 6.8 and earlier backups cannot be restored in SingleStore 7.8 or later.

If you need to keep older backups for an extended period of time, consider implementing one of the following recommendations:

  1. Maintain a 7.5 or earlier installation in a test or pre-production environment to be used in case you ever need to restore 6.8 and earlier backups.

  2. Upgrade the backups by restoring them into a 7.0 to 7.5 test or pre-production environment, then backing up the data again. Validate the new backups by restoring them, and then discard the older release backups.
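Sketched in SQL, the second option might look like the following; the database name and paths are hypothetical, and the BACKUP/RESTORE syntax should be checked against your target version:

```sql
-- On the 7.0 to 7.5 test environment (names and paths are illustrative):
RESTORE DATABASE app FROM "/backups/app_v68";  -- restore the old backup
BACKUP DATABASE app TO "/backups/app_v75";     -- re-create it on the newer version
-- Validate by restoring the new backup, then discard the old one.
```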

Plancache

Plans in the plancache are dependent upon the specific SingleStore patch version, so when you upgrade to a new SingleStore version, all previously compiled plans will be invalidated. This means that any queries run against the upgraded cluster will force a one-time plan compilation, which results in slower query times the first time those queries are run. After the plans have been recompiled, they will be stored again in the plancache and query latency will return to nominal values.
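One way to watch recompilation progress after the upgrade, assuming the information_schema.PLANCACHE view is available in your version:

```sql
-- Count compiled plans per database; expect counts to start low after an
-- upgrade and grow as queries are re-run and recompiled.
SELECT database_name, COUNT(*) AS compiled_plans
FROM information_schema.PLANCACHE
GROUP BY database_name;
```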

Non-Sync Variables

By default, convert_nonunique_hash_to_skiplist is set to TRUE as of SingleStore 7.3. This means that any non-unique hash index will be recovered as a skiplist index, and any newly created table will also have its non-unique hash indexes created as skiplists. For more information about this engine variable, see the Non-Sync Variables List.
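To check the current value of this variable on your cluster before upgrading:

```sql
-- TRUE means non-unique hash indexes are recovered and created as skiplists.
SHOW VARIABLES LIKE 'convert_nonunique_hash_to_skiplist';
```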

Secure Connections

As of SingleStore 8.1, OpenSSL 3.0 is now used to establish secure connections to SingleStore. As a consequence, a client certificate that uses SHA or MD5 hash functions in its signature must be replaced with a certificate that uses SHA256 at a minimum, or a secure connection to SingleStore cannot be established. While SingleStore supports TLS v1, TLS v1.1, and TLS v1.2, using TLS v1.2 is recommended. When FIPS is enabled, only TLS v1.2 is supported. Refer to Troubleshoot OpenSSL 3.0 Connections for more information.
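To find out whether a client certificate needs replacing, you can inspect its signature algorithm with openssl; the commands below generate a throwaway self-signed certificate purely for illustration, and the paths are examples:

```shell
# Generate a throwaway self-signed certificate (illustration only).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=example" \
    -keyout /tmp/client-key.pem -out /tmp/client-cert.pem 2>/dev/null
# Inspect the signature algorithm. A certificate reporting MD5 or SHA-1 here
# must be replaced with SHA-256 or stronger before the upgrade; modern
# OpenSSL signs with SHA-256 by default.
openssl x509 -in /tmp/client-cert.pem -noout -text | grep "Signature Algorithm" | head -n 1
```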

Verify Your Cluster is Ready for Upgrade

Warning

Only clusters that are running SingleStore 7.5 and later can upgrade directly to SingleStore 8.9.

If upgrading from SingleStore 7.0 and later with DR clusters created via Replication, SingleStore recommends that you upgrade your DR secondary cluster(s) one at a time, and then upgrade your primary cluster last so that replication will continue to work after each upgrade.

To upgrade from MemSQL 6.7 or 6.8, or from SingleStore 7.0, a three-step upgrade process is recommended.

  1. Upgrade to SingleStore 7.3.

  2. Upgrade to SingleStore 7.5.

  3. Use this guide to upgrade to SingleStore 8.9.

Prior to upgrading your cluster, SingleStore recommends that you take a backup as a standard precautionary measure. See Back Up and Restore Data.

In addition, run the following SQL commands from the Master Aggregator to confirm that the following are true.

  • All leaf nodes ("leaves") are online:

    SHOW LEAVES;

  • All aggregators are online:

    SHOW AGGREGATORS;

  • There are no partitions with an Orphan role:

    SHOW CLUSTER STATUS;

  • No rebalance or restore redundancy is necessary:

    EXPLAIN REBALANCE PARTITIONS;
    EXPLAIN RESTORE REDUNDANCY;

After you have backed up your data and verified your cluster is ready, you are ready to upgrade your cluster to the latest version of SingleStore using the SingleStore management tools.

Upgrade Your Cluster

Upgrade Versions and Methods

The tables below depict which versions of SingleStore can be upgraded to SingleStore 8.9 and the method by which the cluster can be upgraded.

  • Offline upgrade: Your SingleStore cluster will be shut down and restarted over the course of the upgrade

  • Online upgrade: Your SingleStore cluster will not be shut down over the course of the upgrade

Upgrade via SingleStore Toolbox

  • Upgrade from 7.5 and later: both offline and online upgrades are supported.

Step 1: Upgrade SingleStore Toolbox

You must have Toolbox 1.5.3 or later installed prior to the SingleStore upgrade process. SingleStore recommends that you use the latest version of Toolbox when upgrading your cluster.

With Internet Access

Red Hat

sudo yum install singlestoredb-toolbox -y

Debian

  1. As SingleStore packages are signed to ensure integrity, the GPG key must be added to this host.

    To add the GPG key and verify that the SingleStore signing key has been added, run either of the following commands:

    wget -O - 'https://release.memsql.com/release-aug2018.gpg' 2>/dev/null | sudo apt-key add - && apt-key list

    Without using apt-key:

    wget -q -O - 'https://release.memsql.com/release-aug2018.gpg' | sudo tee /etc/apt/trusted.gpg.d/memsql.asc 1>/dev/null
  2. Upgrade Toolbox.

    sudo apt install singlestoredb-toolbox -y

Tarball

  1. Download the latest version of the singlestoredb-toolbox tarball and extract it via tar xzvf.

  2. Change to the singlestoredb-toolbox directory and run all Toolbox commands from this directory.

Without Internet Access

Download the latest RPM, Debian, or tarball singlestoredb-toolbox file to a location accessible by your cluster.

Red Hat

sudo yum install /path/to/singlestoredb-toolbox.rpm -y

Debian

sudo apt install /path/to/singlestoredb-toolbox.deb -y

Tarball

  1. Extract the singlestoredb-toolbox tarball via tar xzvf.

  2. Change to the singlestoredb-toolbox directory and run all Toolbox commands from this directory.

Step 2: Upgrade SingleStore

Warning

Critical cluster operations, such as an upgrade, must not be interrupted.

Do not shut down your cluster prior to starting the upgrade. If the cluster or individual nodes are offline when the upgrade is started, the upgrade will fail.

If upgrading from SingleStore 7.0 and later with DR clusters created via Replication, SingleStore recommends that you upgrade your DR secondary cluster(s) one at a time, and then upgrade your primary cluster last so that replication will continue to work after each upgrade.

If an SSH connection to a server is interrupted or lost during an upgrade, it can leave the cluster in a non-standard state. Therefore, SingleStore recommends running upgrade sessions inside a terminal multiplexer such as tmux or screen. This keeps the upgrade (or any other operation) from depending on the SSH session and allows you to reattach to a running session.
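As a sketch, running the upgrade inside tmux might look like the following; the session name is arbitrary, and the upgrade command itself is covered in detail later in this guide:

```shell
tmux new-session -s upgrade        # start a named session
sdb-deploy upgrade --version 8.9   # run the upgrade inside the session
# If the SSH connection drops, the session keeps running.
# Reattach to it from a new connection with:
tmux attach-session -t upgrade
```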

You cannot downgrade from your current version.

There are two available options for upgrading a cluster:

  • Offline Upgrade

    The simplest and preferred upgrade option is an offline upgrade. It is the least error-prone and easiest to execute; however, it requires downtime as all of the nodes in the cluster are upgraded at the same time. Your cluster will be shut down and restarted over the course of the upgrade.

    If the cluster is running with high availability (HA), you also have the option to perform an incremental online upgrade, which maintains cluster availability throughout the upgrade process.

  • Online Upgrade

    For high availability (HA) clusters only. With this option, the cluster will not be shut down over the course of the upgrade. Nodes will be restarted in a specific sequence to ensure that DML-based workloads will still function.

    An online upgrade may fail if either a long-running workload that writes to the database or a workload that manipulates SingleStore files (such as an automated backup or maintenance script) is running on the target cluster. SingleStore recommends performing an online upgrade only after these workloads have completed.

Toolbox versions 1.11.7 and later provide the option to retry a failed online upgrade. Should an online upgrade fail, an offline upgrade will be attempted.

When upgrading your cluster:

  • If you do not specify a version, your cluster will be upgraded to the latest version and patch release of SingleStore.

  • If you specify a major version, your cluster will be upgraded to the latest patch release of that version.

  • To upgrade to a specific version and patch release, use the --version option.

Refer to SingleStore release notes for available patch versions and sdb-deploy upgrade for more information.

Prior to upgrading your cluster, SingleStore recommends that you take a backup as a standard precautionary measure. Refer to Back Up and Restore Data for more information. The sdb-deploy upgrade command will perform a snapshot of all databases prior to upgrade.

Note

When upgrading the cluster to SingleStore 8.1.26 and later, the exporter process may fail to start with the following message:

memsql_exporter.go:1001 failed reading ini file: open /etc/memsql/memsql_exporter.cnf: permission denied

This is due to the root user owning the memsql_exporter.cnf file whereas Toolbox commands run as the memsql user. Changing the ownership of the memsql_exporter.cnf file to the memsql:memsql user will properly configure monitoring and allow the exporter process to start.
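For example, assuming the default file location shown in the error message:

```shell
# Make the memsql user the owner of the exporter configuration so Toolbox
# commands (which run as memsql) can read it and the exporter can start.
sudo chown memsql:memsql /etc/memsql/memsql_exporter.cnf
```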

Pre-Upgrade Confirmation

Note: The cluster will not be upgraded when running this command.

Red Hat & Debian

Confirm that the cluster can be upgraded.

sdb-deploy upgrade --precheck-only

Tarball

  1. Change to the singlestoredb-toolbox directory.

  2. Confirm that the cluster can be upgraded.

    ./sdb-deploy upgrade --precheck-only

Typical output from a cluster that is ready to be upgraded:

Toolbox will perform the following actions:
  · Download singlestoredb-server x.y.z

Would you like to continue? [y/N]: y
✓ Downloaded singlestoredb-server production:latest
Toolbox is about to perform following checks:
  · Cluster health checks:
    - Check that all nodes are online and healthy
    - Check that all partitions are healthy
  · Check that there are no pending rebalance operations
  · Take snapshots of all databases

Would you like to continue? [y/N]: y
Checking cluster status
✓ Nodes are online
✓ Partitions are healthy

✓ Snapshots completed
✓ All checks passed successfully
Operation completed successfully

Clusters with Internet Access

Offline Upgrade

Red Hat & Debian

sdb-deploy upgrade --version 8.9

Tarball

  1. Change to the singlestoredb-toolbox directory.

  2. Upgrade the cluster.

    ./sdb-deploy upgrade --version 8.9

Online Upgrade

Red Hat & Debian

sdb-deploy upgrade --online --version 8.9

Tarball

  1. Change to the singlestoredb-toolbox directory.

  2. Upgrade the cluster.

    ./sdb-deploy upgrade --online --version 8.9

Clusters without Internet Access

Download the latest RPM, Debian, or tarball singlestoredb-server file to a location accessible by your cluster. This file contains both the SingleStore binary and the low-level management tool, memsqlctl.

Run the sdb-deploy upgrade command and reference the appropriate file in the --file-path option. Running sdb-deploy upgrade (versus upgrading the package via the package manager) will perform an offline restart of all the nodes to ensure the cluster is using the new version.

Offline Upgrade

Red Hat

sdb-deploy upgrade --file-path /path/to/singlestoredb-server.rpm

Debian

sdb-deploy upgrade --file-path /path/to/singlestoredb-server.deb

Tarball

  1. Change to the singlestoredb-toolbox directory.

  2. Upgrade the cluster.

    ./sdb-deploy upgrade --file-path /path/to/singlestoredb-server.tar.gz

Online Upgrade

Red Hat

sdb-deploy upgrade --online --file-path /path/to/singlestoredb-server.rpm

Debian

sdb-deploy upgrade --online --file-path /path/to/singlestoredb-server.deb

Tarball

  1. Change to the singlestoredb-toolbox directory.

  2. Upgrade the cluster.

    ./sdb-deploy upgrade --online --file-path /path/to/singlestoredb-server.tar.gz

Confirm that the Upgrade Succeeded

Toolbox displays the progress of the upgrade and reports whether the upgrade succeeded. While typically not required, you may also perform an in-depth review of the post-upgrade cluster to reaffirm that the upgrade succeeded.

  1. Confirm that all nodes are online and healthy.

    The State column should display online for each node.

    sdb-admin show-cluster
    ✓ Successfully ran 'memsqlctl show-cluster'
    +---------------------+-----------+------+--------------------+-----------+-----------+--------+--------------------+------------------------------+--------+-------------------+
    |        Role         |   Host    | Port | Availability Group | Pair Host | Pair Port | State  | Opened Connections | Average Roundtrip Latency ms | NodeId | Master Aggregator |
    +---------------------+-----------+------+--------------------+-----------+-----------+--------+--------------------+------------------------------+--------+-------------------+
    | Leaf                | 127.0.0.1 | 3307 | 1                  | null      | null      | online | 2                  |                              | 2      |                   |
    | Aggregator (Leader) | 127.0.0.1 | 3306 |                    | null      | null      | online | 0                  | null                         | 1      | Yes               |
    +---------------------+-----------+------+--------------------+-----------+-----------+--------+--------------------+------------------------------+--------+-------------------+
  2. Confirm that all databases are healthy.

    The summary column should display healthy for each database.

    sudo memsqlctl query --sql "SELECT * FROM information_schema.MV_DISTRIBUTED_DATABASES_STATUS;"
    +---------------+----------------+--------------------+---------+--------+-------------+------------+---------+------------+---------------+---------+---------------+
    | database_name | num_partitions | num_sub_partitions | summary | online | replicating | recovering | pending | transition | unrecoverable | offline | sync_mismatch |
    +---------------+----------------+--------------------+---------+--------+-------------+------------+---------+------------+---------------+---------+---------------+
    | test          | 16             | 64                 | healthy | 16     | 16          | 0          | 0       | 0          | 0             | 0       | 0             |
    +---------------+----------------+--------------------+---------+--------+-------------+------------+---------+------------+---------------+---------+---------------+
  3. Confirm that all nodes reflect the version specified in the sdb-deploy upgrade command.

    The Version column displays the version that each node is running.

    sdb-admin list-nodes
    +------------+--------+-----------+------+---------------+--------------+---------+----------------+--------------------+--------------+
    | MemSQL ID  |  Role  |   Host    | Port | Process State | Connectable? | Version | Recovery State | Availability Group | Bind Address |
    +------------+--------+-----------+------+---------------+--------------+---------+----------------+--------------------+--------------+
    | CBDC2807B7 | Master | 127.0.0.1 | 3306 | Running       | True         | 8.9.1   | Online         |                    | 127.0.0.1    |
    | EC33CC5A08 | Leaf   | 127.0.0.1 | 3307 | Running       | True         | 8.9.1   | Online         | 1                  | 127.0.0.1    |
    +------------+--------+-----------+------+---------------+--------------+---------+----------------+--------------------+--------------+

Roll Back from a Failed Upgrade

Currently, SingleStore does not support downgrading directly. Use the following steps to roll back to an earlier version of SingleStore using the backup made at the beginning of this upgrade guide.

Note that a backup created from a given version of the SingleStore engine can only be restored to the same engine version or later.

  1. Make a backup of the cluster configuration.

    sdb-deploy generate-cluster-file
  2. Delete all of the nodes in the cluster.

    sdb-admin delete-node --stop --all
  3. Use the following command to roll back to an earlier version of the SingleStore engine by removing the engine version(s) you do not want.

    For example, if upgrading to SingleStore 8.9 fails, remove 8.9.

    sdb-deploy uninstall --version 8.9
  4. Unregister all hosts in the cluster.

    sdb-toolbox-config unregister-host --all
  5. Recreate the cluster using the cluster configuration captured in the cluster file. Note that the cluster file may contain the engine version, so be sure to update the cluster file with the engine version you wish to restore.

    sdb-deploy setup-cluster --cluster-file /path/to/cluster/file
  6. Restore the cluster's data from the backup that was made earlier. Refer to Back Up and Restore Data for more information.

Post-Upgrade Considerations

Collect Event Traces

Existing cluster monitoring instances can be configured to collect event traces after upgrading a cluster to SingleStore v8.5 or later. Refer to Query History for more information on how to fully enable this feature.

Run the following command to restart monitoring and collect event traces.

HTTP Connections

sdb-admin start-monitoring \
--database-name metrics \
--collect-event-traces \
--exporter-host <exporter-hostname-or-IP-address> \
--user root \
--password <secure-password> \
--retention-period 10

HTTPS Connections

sdb-admin start-monitoring \
--database-name metrics \
--collect-event-traces \
--exporter-host=<exporter-hostname-or-IP-address> \
--user root \
--password=<secure-password> \
--retention-period 10 \
--ssl-ca=/path/to/ca-cert.pem

As the final option, specify either --ssl-ca=/path/to/ca-cert.pem (a CA certificate file) or --ssl-capath=/ca-directory/including/path (a directory containing CA certificates).

System Behavior

When upgrading to SingleStore 8.9, you should be aware of the following changes to system behavior or default configuration settings. The behavior of a cluster upgraded from an earlier version to SingleStore 8.9 may differ compared to a newly installed cluster on SingleStore 8.9 as described below. Most of the changes fall into two categories:

  • In some versions, the default value for a configuration variable was changed compared to previous versions, but clusters upgraded from earlier versions retain their previous setting, whether it was explicitly set to a specific value or was left unset and hence used the previous default. In some of these cases, SingleStore recommends updating your configuration to the new default, after appropriate testing, if you were previously using the old default.

  • Some new features are automatically enabled by default on newly installed SingleStore 8.9 clusters, but are not automatically enabled on clusters upgraded from an earlier version to 8.9. In some of these cases, SingleStore recommends enabling the new features after appropriate testing.

Upgrades to 8.9

  • To reduce your total cost of ownership (TCO), you may be able to store data in Universal Storage instead of rowstores. This is because rowstores store their data in RAM, which can be costly. Universal Storage now supports upserts, which were previously supported only in rowstores.

  • You may want to run the command REBALANCE ALL DATABASES. This command rebalances each database in the cluster, in alphabetical order of the database name. When a rebalance runs on a database d, it first considers the placement of the partitions of the other databases in the cluster before rebalancing the partitions of d.

  • You may want to set the cardinality_estimation_level engine variable to '7.3'. This setting uses sampling and histograms together (when both are available) to improve selectivity estimation. The default setting is '7.1'.

  • Changing the value of the data_conversion_compatibility_level engine variable can change the behavior of expressions in computed columns. Refer to the Data Type Conversion section of Data Types for more information.

  • sp_query_dynamic_param should be turned off if an application breaks post-upgrade due to a change in type conversion behavior. See the Example: Changes in Type Conversion Behavior for more information.

  • Upgrading the cluster, with json_extract_string_collation set to auto (default setting), changes the collation settings for JSON_EXTRACT_STRING from json to server. Refer to In-Depth Variable Definitions for information on json_extract_string_collation settings.
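Several of the recommendations above are engine-variable or maintenance commands that can be run from the Master Aggregator after appropriate testing; for example:

```sql
-- Apply only after appropriate testing; these mirror the bullets above.
SET GLOBAL cardinality_estimation_level = '7.3';  -- use sampling + histograms
-- Rebalance each database in the cluster, in alphabetical order:
REBALANCE ALL DATABASES;
```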

Last modified: November 15, 2024
