Upgrade to SingleStore 7.3
Note
Please note the following before upgrading your cluster:

- SingleStore betas and release candidates cannot be upgraded unless explicitly stated in the release notes.
- If any host in the cluster is near or at disk capacity, please increase available storage before upgrading.
Important Notes About Upgrading
This topic describes how to upgrade SingleStore.
Once the upgrade is complete, refer to the Post-Upgrade Considerations section for additional information on behavioral changes that you should be aware of.
Upgrade Duration and Behavior
Anticipate a longer upgrade time for each node.
Plancache
Plans in the plancache are tied to the specific SingleStore patch version, so when you upgrade to a new SingleStore version, all previously compiled plans are invalidated and will be recompiled the first time they run on the new version.
Non-Sync Variables
As of SingleStore 7.3, convert_ is set to TRUE by default.
Verify Your Cluster is Ready for Upgrade
Warning
If upgrading from MemSQL 6.
If upgrading from MemSQL version 6.
If upgrading from MemSQL 6.
Prior to upgrading your cluster, SingleStore recommends that you take a backup as a standard precautionary measure.
In addition, run the following SQL commands from the Master Aggregator to confirm that the following are true.
| Step | SQL Command |
|---|---|
| All leaf nodes ("leaves") are online. | |
| All aggregators are online. | |
| There are no partitions with an | |
| No rebalance or restore redundancy is necessary. | |
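The SQL command cells in the table above were lost in extraction. A plausible set of checks, run from the Master Aggregator, using standard SingleStore commands (the database name mydb is a placeholder; verify the exact commands for your version):

```sql
-- Every leaf and aggregator should report State = 'online':
SHOW LEAVES;
SHOW AGGREGATORS;
-- An empty plan indicates no rebalance or redundancy restore is needed:
EXPLAIN REBALANCE PARTITIONS ON mydb;
EXPLAIN RESTORE REDUNDANCY ON mydb;
```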
After you have backed up your data and verified your cluster is ready, you are ready to upgrade your cluster to the latest version of SingleStore using the SingleStore management tools.
Upgrade Your Cluster
Upgrade Versions and Methods
The table below depicts which version(s) of SingleStore can be upgraded to SingleStore 7.3, and whether each supports an offline or online upgrade:

- Offline upgrade: Your SingleStore cluster will be shut down and restarted over the course of the upgrade.
- Online upgrade: Your SingleStore cluster will not be shut down over the course of the upgrade.
Upgrade via SingleStore Toolbox
| Upgrade from | Offline upgrade | Online upgrade |
|---|---|---|
| 7. | ✔ | From 7. |
| 6. | ✔ | From 6. |
| 6. | ✔ | From 6. |
| 6. | ✘ | ✘ |
Step 1: Upgrade SingleStore Toolbox
To upgrade to SingleStore 7.3, first upgrade SingleStore Toolbox to the latest version.
With Internet Access
| Platform | Command |
|---|---|
| Red Hat | |
| Debian | |
| Tarball | |
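The command cells above are empty in this copy of the page. Plausible package-manager invocations for the singlestoredb-toolbox package, assuming the SingleStore yum/apt repositories are already configured on the host:

```shell
# Red Hat (yum repository assumed configured):
sudo yum update singlestoredb-toolbox
# Debian (apt repository assumed configured):
sudo apt-get update && sudo apt-get install --only-upgrade singlestoredb-toolbox
# Tarball: download and unpack the latest tarball in place of the old one.
```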
Without Internet Access
Download the latest RPM, Debian, or tarball singlestoredb-toolbox file to a location accessible by your cluster, then install it.

| Platform | Command |
|---|---|
| Red Hat | |
| Debian | |
| Tarball | |
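The install command cells above are empty in this copy. Plausible invocations for installing the downloaded package locally (file names are illustrative):

```shell
sudo rpm -Uvh singlestoredb-toolbox-1.x.y-1.x86_64.rpm   # Red Hat
sudo dpkg -i singlestoredb-toolbox_1.x.y_amd64.deb       # Debian
tar xzf singlestoredb-toolbox-1.x.y.tar.gz               # Tarball
```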
Step 2: Upgrade SingleStore
Warning
Critical cluster operations, such as an upgrade, must not be interrupted.
Do not shut down your cluster prior to starting the upgrade.
If upgrading from SingleStore 7.
If an SSH connection to a server is interrupted or lost during an upgrade, it can leave a cluster in a non-standard state. SingleStore recommends using tmux or screen to run an upgrade session.
You cannot downgrade from your current version.
There are two available options for upgrading a cluster:
- Offline Upgrade

  The simplest and preferred upgrade option is an offline upgrade. It is the least error-prone and easiest to execute; however, it requires downtime, as all of the nodes in the cluster are upgraded at the same time and the cluster is shut down and restarted over the course of the upgrade. If the cluster is running with high availability (HA), you also have the option to perform an incremental online upgrade, which maintains cluster availability throughout the upgrade process.

- Online Upgrade

  For high availability (HA) clusters only. With this option, the cluster will not be shut down over the course of the upgrade. Nodes will be restarted in a specific sequence to ensure that DML-based workloads will still function. An online upgrade may fail if either a long-running workload that writes to the database or a workload that manipulates SingleStore files (such as an automated backup or maintenance script) is running on the target cluster. SingleStore recommends performing an online upgrade only after these workloads have completed.
Toolbox versions 1.
When upgrading your cluster:
- If you do not specify a version, your cluster will be upgraded to the latest version and patch release of SingleStore.
- If you specify a major version, your cluster will be upgraded to the latest patch release of that version.
- To upgrade to a specific version and patch release, use the --version option.
As of SingleStore 7.3, synchronous replication is enabled by default on all new databases. This provides an extra layer of resiliency in clusters with high availability enabled. During the upgrade process, you will be prompted to enable synchronous replication on your existing databases, or to leave those databases using the previous asynchronous replication behavior.
Refer to SingleStore release notes for available patch versions and sdb-deploy upgrade for more information.
Prior to upgrading your cluster, SingleStore recommends that you take a backup as a standard precautionary measure. In addition, the sdb-deploy upgrade command will take a snapshot of all databases prior to the upgrade.
Pre-Upgrade Confirmation
Confirm that the cluster can be upgraded. Note: the cluster will not be upgraded when running this command.

| Platform | Command |
|---|---|
| Red Hat & Debian | |
| Tarball | |
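The command cells above are empty in this copy. A sketch of the confirmation step, assuming Toolbox's --precheck flag (check sdb-deploy upgrade --help for the exact spelling on your Toolbox version):

```shell
# Run the upgrade checks only; the cluster is not upgraded.
sdb-deploy upgrade --version 7.3 --precheck
```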
Typical output from a cluster that is ready to be upgraded:
Toolbox will perform the following actions:
· Download singlestoredb-server x.yy.zx
Would you like to continue? [y/N]: y
✓ Downloaded singlestoredb-server production:latest
Toolbox is about to perform following checks:
· Cluster health checks:
- Check that all nodes are online and healthy
- Check that all partitions are healthy
· Check that there are no pending rebalance operations
· Take snapshots of all databases
Would you like to continue? [y/N]: y
Checking cluster status
✓ Nodes are online
✓ Partitions are healthy
✓ Snapshots completed
✓ All checks passed successfully
Operation completed successfully
Clusters with Internet Access
Offline Upgrade

| Platform | Command |
|---|---|
| Red Hat & Debian | |
| Tarball | |

Online Upgrade

| Platform | Command |
|---|---|
| Red Hat & Debian | |
| Tarball | |
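The command cells above are empty in this copy. A sketch of the upgrade invocations, assuming current Toolbox flags (--online for an online upgrade; verify with sdb-deploy upgrade --help):

```shell
# Offline upgrade (cluster is shut down and restarted):
sdb-deploy upgrade --version 7.3
# Online upgrade (HA clusters only; nodes restart in sequence):
sdb-deploy upgrade --version 7.3 --online
```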
Clusters without Internet Access
Download the latest RPM, Debian, or tarball singlestoredb-server file to a location accessible by your cluster.
Run the sdb-deploy upgrade command and reference the appropriate file in the --file-path option. Running sdb-deploy upgrade (versus upgrading the package via the package manager) will perform an offline restart of all the nodes to ensure the cluster is using the new version.
Offline Upgrade

| Platform | Command |
|---|---|
| Red Hat | |
| Debian | |
| Tarball | |

Online Upgrade

| Platform | Command |
|---|---|
| Red Hat | |
| Debian | |
| Tarball | |
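The command cells above are empty in this copy. A sketch of the invocations, referencing a downloaded package via --file-path (file names illustrative; the --online flag is an assumption to verify against sdb-deploy upgrade --help):

```shell
# Offline upgrade from a local package:
sdb-deploy upgrade --file-path /tmp/singlestoredb-server-7.3.x-1.x86_64.rpm
# Online upgrade from a local package (HA clusters only):
sdb-deploy upgrade --file-path /tmp/singlestoredb-server-7.3.x-1.x86_64.rpm --online
```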
Confirm that the Upgrade Succeeded
Toolbox displays the progress of the upgrade and reports whether the upgrade succeeded.
- Confirm that all nodes are online and healthy. The State column should display online for each node.

  sdb-admin show-cluster

  ✓ Successfully ran 'memsqlctl show-cluster'
  +---------------------+-----------+------+--------------------+-----------+-----------+--------+--------------------+------------------------------+--------+-------------------+
  | Role                | Host      | Port | Availability Group | Pair Host | Pair Port | State  | Opened Connections | Average Roundtrip Latency ms | NodeId | Master Aggregator |
  +---------------------+-----------+------+--------------------+-----------+-----------+--------+--------------------+------------------------------+--------+-------------------+
  | Leaf                | 127.0.0.1 | 3307 | 1                  | null      | null      | online | 2                  |                              | 2      |                   |
  | Aggregator (Leader) | 127.0.0.1 | 3306 |                    | null      | null      | online | 0                  | null                         | 1      | Yes               |
  +---------------------+-----------+------+--------------------+-----------+-----------+--------+--------------------+------------------------------+--------+-------------------+
- Confirm that all databases are healthy. The summary column should display healthy for each database.

  sudo memsqlctl query --sql "SELECT * FROM information_schema.MV_DISTRIBUTED_DATABASES_STATUS;"

  +---------------+----------------+--------------------+---------+--------+-------------+------------+---------+------------+---------------+---------+---------------+
  | database_name | num_partitions | num_sub_partitions | summary | online | replicating | recovering | pending | transition | unrecoverable | offline | sync_mismatch |
  +---------------+----------------+--------------------+---------+--------+-------------+------------+---------+------------+---------------+---------+---------------+
  | test          | 16             | 64                 | healthy | 16     | 16          | 0          | 0       | 0          | 0             | 0       | 0             |
  +---------------+----------------+--------------------+---------+--------+-------------+------------+---------+------------+---------------+---------+---------------+
- Confirm that all nodes reflect the version specified in the sdb-deploy upgrade command. The Version column displays the version that each node is running.

  sdb-admin list-nodes

  +------------+--------+-----------+------+---------------+--------------+---------+----------------+--------------------+--------------+
  | MemSQL ID  | Role   | Host      | Port | Process State | Connectable? | Version | Recovery State | Availability Group | Bind Address |
  +------------+--------+-----------+------+---------------+--------------+---------+----------------+--------------------+--------------+
  | CBDC2807B7 | Master | 127.0.0.1 | 3306 | Running       | True         | 7.3.xx  | Online         |                    | 127.0.0.1    |
  | EC33CC5A08 | Leaf   | 127.0.0.1 | 3307 | Running       | True         | 7.3.xx  | Online         | 1                  | 127.0.0.1    |
  +------------+--------+-----------+------+---------------+--------------+---------+----------------+--------------------+--------------+
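To spot-check the Version column across many nodes, the sdb-admin list-nodes output can be parsed programmatically. This is a sketch, not part of Toolbox; the sample table and version values below are illustrative:

```python
def node_versions(list_nodes_output: str) -> dict:
    """Map each node's MemSQL ID to the engine version it reports,
    parsed from the ASCII table printed by `sdb-admin list-nodes`."""
    rows = [line for line in list_nodes_output.splitlines()
            if line.startswith("|") and "MemSQL ID" not in line]
    versions = {}
    for row in rows:
        cells = [c.strip() for c in row.strip("|").split("|")]
        # Columns: MemSQL ID, Role, Host, Port, Process State,
        # Connectable?, Version, ...
        versions[cells[0]] = cells[6]
    return versions

# Illustrative sample; feed in the real output from your cluster.
SAMPLE = """\
+------------+--------+-----------+------+---------------+--------------+---------+
| MemSQL ID  | Role   | Host      | Port | Process State | Connectable? | Version |
+------------+--------+-----------+------+---------------+--------------+---------+
| CBDC2807B7 | Master | 127.0.0.1 | 3306 | Running       | True         | 7.3.2   |
| EC33CC5A08 | Leaf   | 127.0.0.1 | 3307 | Running       | True         | 7.3.2   |
+------------+--------+-----------+------+---------------+--------------+---------+
"""

versions = node_versions(SAMPLE)
assert all(v == "7.3.2" for v in versions.values()), versions
print("all nodes report", sorted(set(versions.values())))
```

If the assertion fails, at least one node is still running the old engine version and the upgrade did not complete cleanly.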
Roll Back from a Failed Upgrade
Currently, SingleStore does not support downgrading directly.
Note that a backup created from a given version of the SingleStore engine can only be restored to the same engine version or later.
- Make a backup of the cluster configuration: sdb-deploy generate-cluster-file
- Delete all of the nodes in the cluster: sdb-admin delete-node --stop --all
- Roll back to an earlier version of the SingleStore engine by removing the engine version(s) you do not want. For example, if upgrading to SingleStore 7.3 fails, remove 7.3: sdb-deploy uninstall --version 7.3
- Unregister all hosts in the cluster: sdb-toolbox-config unregister-host --all
- Recreate the cluster using the cluster configuration captured in the cluster file. Note that the cluster file may contain the engine version, so be sure to update the cluster file with the engine version you wish to restore: sdb-deploy setup-cluster --cluster-file /path/to/cluster/file
- Restore the cluster's data from the backup that was made earlier. Refer to Back Up and Restore Data for more information.
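The steps above can be run as one sequence. This is destructive, so run it only after the backups described above exist; the path and target version are examples:

```shell
sdb-deploy generate-cluster-file           # capture cluster configuration
sdb-admin delete-node --stop --all         # delete all nodes in the cluster
sdb-deploy uninstall --version 7.3         # remove the unwanted engine version
sdb-toolbox-config unregister-host --all   # unregister all hosts
# Edit the generated cluster file so it references the engine version you
# wish to restore, then recreate the cluster and restore data from backup:
sdb-deploy setup-cluster --cluster-file /path/to/cluster/file
```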
Post-Upgrade Considerations
When upgrading to SingleStore 7.3, note the following:

- In some versions, the default value for a configuration variable changed compared to previous versions, but clusters upgraded from earlier versions retain their previous setting, whether it was set to a specific value or was not explicitly set and hence used the previous default. In some of these cases, SingleStore recommends updating your configuration to the new default, after appropriate testing, if you were previously using the old default.
- Some new features are automatically enabled by default on newly installed SingleStore 7.3 clusters but are not automatically enabled on clusters upgraded from an earlier version to 7.3. In some of these cases, SingleStore recommends enabling the new features, after appropriate testing.
Upgrades to 7.3
- To reduce your total cost of ownership (TCO), you may be able to store data in Universal Storage instead of rowstores. This is because rowstores store their data in RAM, which can be costly. Universal Storage now supports upserts, which were previously only supported in rowstores.
- You may want to run the command REBALANCE ALL DATABASES. This command rebalances each database in the cluster, in alphabetical order of the database name. When a rebalance runs on a database d, it first considers the placement of the partitions of the other databases in the cluster before rebalancing the partitions of d.
- You may want to set the cardinality_estimation_level engine variable to '7.3'. This setting uses sampling and histograms together (when both are available) to improve selectivity estimation. The default setting is '7.1'.
- Changing the value of the data_conversion_compatibility_level engine variable can change the behavior of expressions in computed columns. Refer to the Data Type Conversion section of Data Types for more information.
- sp_query_dynamic_param should be turned off if an application breaks post-upgrade due to a change in type conversion behavior. See the Example: Changes in Type Conversion Behavior for more information.
- Upgrading the cluster with json_extract_string_collation set to auto (the default setting) changes the collation settings for JSON_EXTRACT_STRING from json to server. Refer to In-Depth Variable Definitions for information on json_extract_string_collation settings.
Upgrades from 6.8 and Earlier to 7.0 and Later
Synchronous Replication On by Default
In previous versions of SingleStore, in clusters with high availability enabled, replication between primary partitions and replica partitions happened asynchronously. In newer versions, synchronous replication is enabled by default on new databases, as described earlier in this topic.
Security Change for Resource Pools
Between MemSQL 6.8 and SingleStore 7.0, access to resource pools became governed by USAGE permissions when sync_ is enabled. To ensure current users will be able to access pools immediately after upgrading to 7.0, USAGE permissions are granted to all existing and future resource pools if sync_ was enabled prior to upgrade (i.e., GRANT USAGE ON RESOURCE POOL '*' TO <user>@<host> is run internally on upgrade to 7.0). To restrict a user's access again, REVOKE USAGE ON RESOURCE POOL '*' FROM <user>@<host> is run; then grant USAGE permissions to specific resource pools for those users and any other new users created.
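The revoke-then-grant sequence described above looks like the following; the user, host, and pool names are hypothetical:

```sql
-- Remove the blanket grant applied during the upgrade:
REVOKE USAGE ON RESOURCE POOL '*' FROM 'app_user'@'%';
-- Then grant access only to the pools the user needs:
GRANT USAGE ON RESOURCE POOL 'reporting_pool' TO 'app_user'@'%';
```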
Many Existing Engine Variables are Now Sync Variables
The following engine variables from 6.8 and earlier are now sync variables.
Global Variables

- auditlog_disk_sync
- columnstore_disk_insert_threshold
- columnstore_flush_bytes
- columnstore_ingest_management_queue_timeout
- columnstore_segment_rows
- disk_plan_expiration_minutes
- enable_columnstore_ingest_management
- enable_disk_plan_expiration
- explain_expression_limit
- forward_aggregator_plan_hash
- geo_query_info
- geo_sphere_radius
- internal_columnstore_window_minimum_blob_size
- load_data_internal_compression
- load_data_max_buffer_size
- load_data_read_size
- load_data_write_size
- materialize_ctes
- max_connect_errors
- max_prepared_stmt_count
- multi_insert_tuple_count
- pipelines_batches_metadata_to_keep
- pipelines_extractor_debug_logging
- pipelines_kafka_version
- pipelines_max_concurrent
- pipelines_max_concurrent_batch_partitions
- pipelines_max_errors_per_partition
- pipelines_stderr_bufsize
- plan_expiration_minutes
- read_advanced_counters
- replication_timeout_ms
- snapshot_trigger_size
- sync2_timeout
- synchronize_database_timeout
Session Variables
- character_set_server
- collation_connection
- collation_database
- collation_server
- enable_binary_protocol
- enable_broadcast_left_join
- enable_local_shuffle_group_by
- enable_multipartition_queries
- enable_skiplist_sampling_for_selectivity
- explain_joinplan_costs
- ignore_insert_into_computed_column
- inlist_precision_limit
- leaf_pushdown_default
- leaf_pushdown_enable_rowcount
- lock_wait_timeout
- max_broadcast_tree_rowcount
- max_subselect_aggregator_rowcount
- optimize_constants
- optimize_expressions_larger_than
- optimize_huge_expressions
- optimize_stmt_threshold
- optimizer_warnings
- report_mpl_optimizations
- reshuffle_group_by_base_cost
- sampling_estimates_for_complex_filters
- sql_select_limit
- statistics_warnings
See the List of Engine Variables for more information on these variables.
Last modified: May 6, 2025