Upgrade to SingleStore 7.3
Note
Please note the following before upgrading your cluster:
- SingleStore betas and release candidates cannot be upgraded unless explicitly stated in the release notes.
- If any host in the cluster is near or at disk capacity, increase available storage before upgrading.
Important Notes About Upgrading
This topic describes how to upgrade SingleStore.
Once the upgrade is complete, refer to the Post-Upgrade Considerations section for additional information on behavioral changes that you should be aware of.
Upgrade Duration and Behavior
Anticipate a longer upgrade time for each node.
Plancache
Plans in the plancache are dependent upon the specific SingleStore patch version, so when you upgrade to a new SingleStore version, all previously compiled plans will be invalidated.
Non-Sync Variables
By default, `convert_normal_to_sync` is set to TRUE as of SingleStore 7.3, so non-sync (normal) engine variables are converted to sync variables during the upgrade.
Verify Your Cluster is Ready for Upgrade
Warning
If upgrading from MemSQL 6.x, review the release notes for your current version for any version-specific upgrade requirements before proceeding.
Prior to upgrading your cluster, SingleStore recommends that you take a backup as a standard precautionary measure.
In addition, run SQL commands from the Master Aggregator to confirm that the following are true:

- All leaf nodes ("leaves") are online.
- All aggregators are online.
- There are no partitions in an unhealthy state.
- No rebalance or restore redundancy is necessary.
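As a sketch, the first two checks can be run from the Master Aggregator as shown below; `SHOW LEAVES` and `SHOW AGGREGATORS` are standard SingleStore commands, while the `EXPLAIN REBALANCE` form and the database name are illustrative assumptions:

```sql
-- Verify all leaves are online: the State column should read 'online' for every row.
SHOW LEAVES;

-- Verify all aggregators are online.
SHOW AGGREGATORS;

-- Check whether a rebalance would move any partitions for a given database
-- (an empty plan means no rebalance is necessary; database name is a placeholder).
EXPLAIN REBALANCE PARTITIONS ON your_database;
```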
After you have backed up your data and verified your cluster is ready, you are ready to upgrade your cluster to the latest version of SingleStore using the SingleStore management tools.
Upgrade Your Cluster
Upgrade Versions and Methods
The table below depicts which version(s) of SingleStore can be upgraded to SingleStore 7.3, and whether an offline or online upgrade is supported:

- Offline upgrade: Your SingleStore cluster will be shut down and restarted over the course of the upgrade.
- Online upgrade: Your SingleStore cluster will not be shut down over the course of the upgrade.
Upgrade via SingleStore Toolbox
| Upgrade from | Offline upgrade | Online upgrade |
|---|---|---|
| 7.x | ✔ | From later 7.x patch releases |
| 6.x | ✔ | From later 6.x patch releases |
| 6.x | ✔ | From later 6.x patch releases |
| 6.x (older releases) | ✘ | ✘ |
Step 1: Upgrade SingleStore Toolbox
To upgrade to SingleStore 7.3, first upgrade SingleStore Toolbox to the latest version.
With Internet Access

Upgrade the singlestoredb-toolbox package using the method appropriate for your platform (Red Hat, Debian, or tarball).
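On package-based deployments, the Toolbox upgrade typically looks like the following sketch. The package name `singlestoredb-toolbox` comes from this guide; the commands assume the SingleStore package repository is already configured on each host:

```shell
# Red Hat-based hosts (assumes the SingleStore yum repository is configured)
sudo yum update singlestoredb-toolbox

# Debian-based hosts (assumes the SingleStore apt repository is configured)
sudo apt-get update && sudo apt-get install --only-upgrade singlestoredb-toolbox
```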
Without Internet Access

Download the latest RPM, Debian, or tarball singlestoredb-toolbox file to a location accessible by your cluster, then install it on each host using the method appropriate for your platform (Red Hat, Debian, or tarball).
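A minimal sketch of the offline install; the downloaded file names below are placeholders, and the tarball destination directory is an assumption:

```shell
# Red Hat-based hosts: install the downloaded RPM
sudo rpm -Uvh singlestoredb-toolbox-<version>.rpm

# Debian-based hosts: install the downloaded .deb package
sudo dpkg -i singlestoredb-toolbox-<version>.deb

# Tarball-based deployments: unpack into the desired install directory
tar -xzf singlestoredb-toolbox-<version>.tar.gz -C /opt
```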
Step 2: Upgrade SingleStore
Warning
Critical cluster operations, such as an upgrade, must not be interrupted. Do not shut down your cluster prior to starting the upgrade.

If upgrading from an earlier SingleStore 7.x release, review the release notes for any version-specific upgrade guidance.

If an SSH connection to a server is interrupted or lost during an upgrade, it can leave a cluster in a non-standard state. SingleStore recommends using tmux or screen to run an upgrade session so that the session survives a lost connection.

You cannot downgrade from your current version.
There are two available options for upgrading a cluster:

- Offline Upgrade

  The simplest and preferred upgrade option is an offline upgrade. It is the least error-prone and easiest to execute; however, it requires downtime, as all of the nodes in the cluster are upgraded at the same time. Your cluster will be shut down and restarted over the course of the upgrade. If the cluster is running with high availability (HA), you also have the option to perform an incremental online upgrade, which maintains cluster availability throughout the upgrade process.

- Online Upgrade

  For high availability (HA) clusters only. With this option, the cluster will not be shut down over the course of the upgrade. Nodes will be restarted in a specific sequence to ensure that DML-based workloads will still function. An online upgrade may fail if either a long-running workload that writes to the database or a workload that manipulates SingleStore files (such as an automated backup or maintenance script) is running on the target cluster. SingleStore recommends performing an online upgrade only after these workloads have completed.
Note that online upgrades require a minimum Toolbox version; refer to the Toolbox release notes for details.
When upgrading your cluster:

- If you do not specify a version, your cluster will be upgraded to the latest version and patch release of SingleStore.
- If you specify a major version, your cluster will be upgraded to the latest patch release of that version.
- To upgrade to a specific version and patch release, use the `--version` option.

As of SingleStore 7.3, synchronous replication is enabled by default on all new databases. This provides an extra layer of resiliency in clusters with high availability enabled. During the upgrade process, you will be prompted to enable synchronous replication on your existing databases, or to leave those databases using the previous asynchronous replication behavior.
Refer to SingleStore release notes for available patch versions and sdb-deploy upgrade for more information.
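The version-selection behavior above can be sketched as follows; the target version strings are illustrative:

```shell
# Upgrade to the latest available version and patch release
sdb-deploy upgrade

# Upgrade to the latest patch release of a specific major version
sdb-deploy upgrade --version 7.3

# Upgrade to a specific version and patch release (version string is a placeholder)
sdb-deploy upgrade --version 7.3.x
```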
Prior to upgrading your cluster, SingleStore recommends that you take a backup as a standard precautionary measure. In addition, the sdb-deploy upgrade command will take a snapshot of all databases prior to upgrade.
Pre-Upgrade Confirmation

Confirm that the cluster can be upgraded by running the pre-upgrade check for your deployment type (Red Hat & Debian, or tarball). Note: The cluster will not be upgraded when running this command.
Typical output from a cluster that is ready to be upgraded:

```
Toolbox will perform the following actions:
  · Download singlestoredb-server x.yy.zx

Would you like to continue? [y/N]: y
✓ Downloaded singlestoredb-server production:latest

Toolbox is about to perform following checks:
  · Cluster health checks:
    - Check that all nodes are online and healthy
    - Check that all partitions are healthy
  · Check that there are no pending rebalance operations
  · Take snapshots of all databases

Would you like to continue? [y/N]: y
Checking cluster status
✓ Nodes are online
✓ Partitions are healthy
✓ Snapshots completed
✓ All checks passed successfully
Operation completed successfully
```

Clusters with Internet Access
Offline Upgrade

Run the offline upgrade command for your deployment type (Red Hat & Debian, or tarball).

Online Upgrade

Run the online upgrade command for your deployment type (Red Hat & Debian, or tarball).
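A minimal sketch of the two upgrade modes; the `--online` flag in the second command is an assumption, so check `sdb-deploy upgrade --help` for the exact option on your Toolbox version:

```shell
# Offline upgrade: all nodes are shut down, upgraded, and restarted together
sdb-deploy upgrade

# Online upgrade (HA clusters only): nodes are restarted in sequence
# (--online flag is an assumption)
sdb-deploy upgrade --online
```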
Clusters without Internet Access

Download the latest RPM, Debian, or tarball singlestoredb-server file to a location accessible by your cluster. The singlestoredb-server package includes memsqlctl.
Run the sdb-deploy upgrade command and reference the downloaded file in the `--file-path` option. Running sdb-deploy upgrade (versus upgrading the package via the package manager) will perform an offline restart of all the nodes to ensure the cluster is using the new version.
Offline Upgrade

Run the offline upgrade command for your deployment type (Red Hat, Debian, or tarball), referencing the downloaded file.

Online Upgrade

Run the online upgrade command for your deployment type (Red Hat, Debian, or tarball), referencing the downloaded file.
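A sketch of both modes using a downloaded package; file paths are placeholders, and the `--online` flag in the second command is an assumption:

```shell
# Offline upgrade from a downloaded package
sdb-deploy upgrade --file-path /tmp/singlestoredb-server-<version>.rpm

# Online upgrade from a downloaded package (HA clusters only; --online is an assumption)
sdb-deploy upgrade --online --file-path /tmp/singlestoredb-server-<version>.deb
```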
Confirm that the Upgrade Succeeded
Toolbox displays the progress of the upgrade and reports whether the upgrade succeeded.
- Confirm that all nodes are online and healthy. The `State` column should display `online` for each node.

  ```
  sdb-admin show-cluster
  ✓ Successfully ran 'memsqlctl show-cluster'
  +---------------------+-----------+------+--------------------+-----------+-----------+--------+--------------------+------------------------------+--------+-------------------+
  | Role                | Host      | Port | Availability Group | Pair Host | Pair Port | State  | Opened Connections | Average Roundtrip Latency ms | NodeId | Master Aggregator |
  +---------------------+-----------+------+--------------------+-----------+-----------+--------+--------------------+------------------------------+--------+-------------------+
  | Leaf                | 127.0.0.1 | 3307 | 1                  | null      | null      | online | 2                  |                              | 2      |                   |
  | Aggregator (Leader) | 127.0.0.1 | 3306 |                    | null      | null      | online | 0                  | null                         | 1      | Yes               |
  +---------------------+-----------+------+--------------------+-----------+-----------+--------+--------------------+------------------------------+--------+-------------------+
  ```

- Confirm that all databases are healthy. The `summary` column should display `healthy` for each database.

  ```
  sudo memsqlctl query --sql "SELECT * FROM information_schema.MV_DISTRIBUTED_DATABASES_STATUS;"
  +---------------+----------------+--------------------+---------+--------+-------------+------------+---------+------------+---------------+---------+---------------+
  | database_name | num_partitions | num_sub_partitions | summary | online | replicating | recovering | pending | transition | unrecoverable | offline | sync_mismatch |
  +---------------+----------------+--------------------+---------+--------+-------------+------------+---------+------------+---------------+---------+---------------+
  | test          | 16             | 64                 | healthy | 16     | 16          | 0          | 0       | 0          | 0             | 0       | 0             |
  +---------------+----------------+--------------------+---------+--------+-------------+------------+---------+------------+---------------+---------+---------------+
  ```

- Confirm that all nodes reflect the version specified in the sdb-deploy upgrade command. The `Version` column displays the version that each node is running.

  ```
  sdb-admin list-nodes
  +------------+--------+-----------+------+---------------+--------------+---------+----------------+--------------------+--------------+
  | MemSQL ID  | Role   | Host      | Port | Process State | Connectable? | Version | Recovery State | Availability Group | Bind Address |
  +------------+--------+-----------+------+---------------+--------------+---------+----------------+--------------------+--------------+
  | CBDC2807B7 | Master | 127.0.0.1 | 3306 | Running       | True         | 7.3.xx  | Online         |                    | 127.0.0.1    |
  | EC33CC5A08 | Leaf   | 127.0.0.1 | 3307 | Running       | True         | 7.3.xx  | Online         | 1                  | 127.0.0.1    |
  +------------+--------+-----------+------+---------------+--------------+---------+----------------+--------------------+--------------+
  ```
Roll Back from a Failed Upgrade
Currently, SingleStore does not support downgrading directly. To roll back from a failed upgrade, recreate the cluster on the earlier engine version and restore from backup. Note that a backup created from a given version of the SingleStore engine can only be restored to the same engine version or later.
- Make a backup of the cluster configuration.

  `sdb-deploy generate-cluster-file`

- Delete all of the nodes in the cluster.

  `sdb-admin delete-node --stop --all`

- Roll back to an earlier version of the SingleStore engine by removing the engine version(s) you do not want. For example, if upgrading to SingleStore 7.3 fails, remove 7.3.

  `sdb-deploy uninstall --version 7.3`

- Unregister all hosts in the cluster.

  `sdb-toolbox-config unregister-host --all`

- Recreate the cluster using the cluster configuration captured in the cluster file. Note that the cluster file may contain the engine version, so be sure to update the cluster file with the engine version you wish to restore.

  `sdb-deploy setup-cluster --cluster-file /path/to/cluster/file`

- Restore the cluster's data from the backup that was made earlier. Refer to Back Up and Restore Data for more information.
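Taken together, the rollback steps above can be sketched as a single session; the 7.3 version and cluster-file path come from the steps above, and the final data restore uses whatever backup procedure you followed earlier:

```shell
# Capture the current cluster topology before tearing it down
sdb-deploy generate-cluster-file

# Stop and delete every node, then remove the failed engine version
sdb-admin delete-node --stop --all
sdb-deploy uninstall --version 7.3

# Unregister hosts and recreate the cluster from the (edited) cluster file
sdb-toolbox-config unregister-host --all
sdb-deploy setup-cluster --cluster-file /path/to/cluster/file
```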
Post-Upgrade Considerations
When upgrading to SingleStore 7.3, note the following:
- In some versions, the default value for a configuration variable changed compared to previous versions, but clusters upgraded from earlier versions retain their previous setting, whether it was set to a specific value or was not explicitly set and hence used the previous default. In some of these cases, SingleStore recommends updating your configuration to the new default, after appropriate testing, if you were previously using the old default.

- Some new features are automatically enabled by default on newly installed SingleStore 7.3 clusters but not automatically enabled on clusters upgraded from an earlier version to 7.3. In some of these cases, SingleStore recommends enabling the new features, after appropriate testing.
Upgrades to 7.3
- To reduce your total cost of ownership (TCO), you may be able to store data in Universal Storage instead of rowstores. This is because rowstores store their data in RAM, which can be costly. Universal Storage now supports upserts, which were previously only supported in rowstores.

- You may want to run the command REBALANCE ALL DATABASES. This command rebalances each database in the cluster, in alphabetical order of the database name. When a rebalance runs on a database `d`, it first considers the placement of the partitions of the other databases in the cluster before rebalancing the partitions of `d`.

- You may want to set the `cardinality_estimation_level` engine variable to `'7.3'`. This setting uses sampling and histograms together (when both are available) to improve selectivity estimation. The default setting is `'7.1'`.

- Changing the value of the `data_conversion_compatibility_level` engine variable can change the behavior of expressions in computed columns. Refer to the Data Type Conversion section of Data Types for more information.

- `sp_query_dynamic_param` should be turned off if an application breaks post-upgrade due to a change in type conversion behavior. See the Example: Changes in Type Conversion Behavior for more information.

- Upgrading the cluster with `json_extract_string_collation` set to `auto` (the default setting) changes the collation settings for `JSON_EXTRACT_STRING` from `json` to `server`. Refer to In-Depth Variable Definitions for information on `json_extract_string_collation` settings.
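The recommendations above can be applied from the Master Aggregator roughly as follows. This is a sketch: test on a non-production cluster first, and verify the variable names against your engine version before applying:

```sql
-- Rebalance every database in the cluster (alphabetical order by database name)
REBALANCE ALL DATABASES;

-- Opt in to the improved selectivity estimation
SET GLOBAL cardinality_estimation_level = '7.3';

-- Inspect the current values of the variables discussed above
SELECT @@cardinality_estimation_level, @@data_conversion_compatibility_level;
```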
Upgrades from 6.8 and Earlier to 7.0 and Later
Synchronous Replication On by Default
In previous versions of SingleStore, in clusters with high availability enabled, replication between primary partitions and replica partitions happened asynchronously. Beginning with SingleStore 7.0, synchronous replication is the default for newly created databases.
Security Change for Resource Pools
Between MemSQL 6.x and SingleStore 7.0, the security model for resource pools changed when `sync_permissions` is enabled.

To ensure current users will be able to access pools immediately after upgrading to 7.0, USAGE permissions are granted to all existing and future resource pools if `sync_permissions` was enabled prior to upgrade (i.e., `GRANT USAGE ON RESOURCE POOL '*' TO <user>@<host>` is run internally on upgrade to 7.0). To restrict this access for a given user, `REVOKE USAGE ON RESOURCE POOL '*' FROM <user>@<host>` is run; then grant USAGE permissions to specific resource pools for those users and any other new users created.
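For example, tightening access for a single user after the upgrade might look like the following sketch; the user, host, and pool names are placeholders:

```sql
-- Remove the blanket grant applied during the upgrade (user/host are placeholders)
REVOKE USAGE ON RESOURCE POOL '*' FROM 'app_user'@'%';

-- Grant access only to the specific pool the user needs (pool name is a placeholder)
GRANT USAGE ON RESOURCE POOL 'reporting_pool' TO 'app_user'@'%';
```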
Many Existing Engine Variables are Now Sync Variables
The following engine variables from 6.x are now sync variables:
Global variables

- auditlog_disk_sync
- columnstore_disk_insert_threshold
- columnstore_flush_bytes
- columnstore_ingest_management_queue_timeout
- columnstore_segment_rows
- disk_plan_expiration_minutes
- enable_columnstore_ingest_management
- enable_disk_plan_expiration
- explain_expression_limit
- forward_aggregator_plan_hash
- geo_query_info
- geo_sphere_radius
- internal_columnstore_window_minimum_blob_size
- load_data_internal_compression
- load_data_max_buffer_size
- load_data_read_size
- load_data_write_size
- materialize_ctes
- max_connect_errors
- max_prepared_stmt_count
- multi_insert_tuple_count
- pipelines_batches_metadata_to_keep
- pipelines_extractor_debug_logging
- pipelines_kafka_version
- pipelines_max_concurrent
- pipelines_max_concurrent_batch_partitions
- pipelines_max_errors_per_partition
- pipelines_stderr_bufsize
- plan_expiration_minutes
- read_advanced_counters
- replication_timeout_ms
- snapshot_trigger_size
- sync2_timeout
- synchronize_database_timeout
Session Variables

- character_set_server
- collation_connection
- collation_database
- collation_server
- enable_binary_protocol
- enable_broadcast_left_join
- enable_local_shuffle_group_by
- enable_multipartition_queries
- enable_skiplist_sampling_for_selectivity
- explain_joinplan_costs
- ignore_insert_into_computed_column
- inlist_precision_limit
- leaf_pushdown_default
- leaf_pushdown_enable_rowcount
- lock_wait_timeout
- max_broadcast_tree_rowcount
- max_subselect_aggregator_rowcount
- optimize_constants
- optimize_expressions_larger_than
- optimize_huge_expressions
- optimize_stmt_threshold
- optimizer_warnings
- report_mpl_optimizations
- reshuffle_group_by_base_cost
- sampling_estimates_for_complex_filters
- sql_select_limit
- statistics_warnings
See the List of Engine Variables for more information on these variables.
Last modified: May 6, 2025