Workspace Scaling
Resizing and Scaling Compute
Overview
SingleStore Helios has a unique architecture that offers the flexibility to scale resources dynamically for both read and write workloads.
This is because SingleStore is built on a clustered architecture that is distributed across compute resources.
Compute workspaces can be scaled up or down to accommodate changing workloads.
Scaling operations are always online, so connectivity to the database is not affected during scaling.
How Scaling Works
A SingleStore compute “workspace” is made up of individual nodes, which allow an even distribution of jobs across the underlying cloud resources.
There are multiple ways to scale resources depending on the workload requirements: deployments can be resized, scaled, or autoscaled.
Resizing
Resizing is performed by changing the base size of the compute deployment (for example, from S-12 to S-24).
Because data is redistributed during a resize, the time required for a full resize depends on the cluster size and the size of the data working set.
Resizing is ideal for workloads which have grown or shrunk over time and are expected to continue operation at the new compute size.
Scaling
Note
This is a Preview feature.
Scaling operations are performed by changing the scaleFactor of the deployment, for example from "1" to "2" or "4".
This operation occurs quickly (minutes) and is designed to be used when resources need to be rapidly scaled up and down to handle dynamic changes in workload needs.
Autoscaling
Note
This is a Preview feature.
Autoscaling is designed to track the active compute workload and automatically scale the deployment based on compute and memory usage.
While many databases limit autoscaling to read-replicas, SingleStore Helios has implemented autoscaling to provide both enhanced write and read performance.
When configuring autoscaling, users can turn the feature on or off and set the maximum amount of vCPU and memory to be provisioned (2x or 4x of the base amount).
Autoscaling is ideal for dynamic workloads where the user does not know when peaks may occur, and it can be turned on or off independently for each compute deployment.
Cache Configuration
Setting the Cache Configuration allows compute deployments to leverage greater volumes of Persistent Cache to increase the amount of data (the working set) that can be accessed with extremely low latency.
Increasing the cache configuration (for example, from “1x” to “2x” or “4x”) increases the overall volume of the cache and automatically distributes data within it.
This operation runs online, so data remains available for reads and writes throughout the reconfiguration process. Cache configurations can be increased or decreased as desired.
Resizing and Scaling
Scaling up or down can be triggered through the Cloud Portal or Management API.
Using Cloud Portal
To scale a workspace through the Cloud Portal, navigate to Deployments > Overview, select the workspace card, open the workspace options menu (⋮), and select Resize Workspace.
Using Management API
Resizing up or down through the Management API can be done with the WorkspaceUpdate size parameter, scaling can be performed with the WorkspaceUpdate scaleFactor parameter, and the cache configuration can be updated with the WorkspaceUpdate cacheConfig parameter.
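As a minimal sketch, such an update could be issued programmatically as shown below. The PATCH /v1/workspaces/{workspaceID} endpoint, the https://api.singlestore.com base URL, and the bearer-token authentication are assumptions not documented on this page; the size, scaleFactor, and cacheConfig fields follow the WorkspaceUpdate schema referenced above, and all keys and values shown are placeholders.

# A sketch of updating a workspace through the Management API.
# Assumptions: the PATCH /v1/workspaces/{workspaceID} endpoint, the
# https://api.singlestore.com base URL, and bearer-token authentication
# with a Management API key; the payload fields follow WorkspaceUpdate.
import requests

API_KEY = "YOUR_MANAGEMENT_API_KEY"   # placeholder
WORKSPACE_ID = "YOUR_WORKSPACE_ID"    # placeholder
BASE_URL = "https://api.singlestore.com/v1/workspaces"

HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

def update_workspace(payload):
    # Send a WorkspaceUpdate request and return the parsed response.
    resp = requests.patch(f"{BASE_URL}/{WORKSPACE_ID}", headers=HEADERS, json=payload)
    resp.raise_for_status()
    return resp.json()

# Resize the base deployment size (for example, S-12 to S-24).
update_workspace({"size": "S-24"})

# Scale by changing the scaleFactor (Preview feature).
update_workspace({"scaleFactor": 2})

# Increase the cache configuration (for example, 1x to 2x).
update_workspace({"cacheConfig": 2})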
High Availability
Critical workloads need to stay online, even when scaling the underlying resources.
Changing the Database Partition Count
You can use either of the following methods to change the partition count in a database:
- Use the BACKUP WITH SPLIT PARTITIONS command:
BACKUP [DATABASE] db_name WITH SPLIT PARTITIONS [BY 2] TO [S3 | AZURE | GCS] "backup_path" [CONFIG configuration_json] [CREDENTIALS credentials_json]
For more information about the syntax options, refer to BACKUP DATABASE.
- Use the INSERT…SELECT command.
In this method, you must first create a new database with the desired number of partitions and then use INSERT…SELECT to copy the tables from the existing database to the new database. For huge tables that take more than a few minutes to copy (this depends on the amount of data and your system's scale), you should move the rows of the table over in large batches instead of all at once, as shown in the sketch below.
For example, suppose you have a monitoring cluster of size S-8, for which the recommended partition count is 64. In this case, create a new database with the recommended partition count and use the INSERT…SELECT command to copy the data from the existing database to the new database.
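As a minimal sketch of the batched approach, the loop below copies a hypothetical events table (with a numeric id column to range over) from old_db into new_db, which is assumed to already exist with the desired partition count. The connection details are placeholders, and any MySQL-compatible driver can be used; pymysql is shown here for illustration.

# A sketch of copying a large table in batches with INSERT ... SELECT.
# Assumptions: a hypothetical table `events` with a numeric `id` column,
# a source database `old_db`, and a target database `new_db` that was
# already created with the desired partition count.
import pymysql

conn = pymysql.connect(
    host="your-workspace-host",   # placeholder
    user="admin",                 # placeholder
    password="your-password",     # placeholder
    autocommit=True,
)

BATCH_SIZE = 1_000_000  # rows per batch; tune to the data volume

with conn.cursor() as cur:
    # Determine the id range of the source table.
    cur.execute("SELECT MIN(id), MAX(id) FROM old_db.events")
    min_id, max_id = cur.fetchone()

    # Copy the table range by range into the new database.
    start = min_id
    while min_id is not None and start <= max_id:
        end = start + BATCH_SIZE - 1
        cur.execute(
            "INSERT INTO new_db.events SELECT * FROM old_db.events "
            "WHERE id BETWEEN %s AND %s",
            (start, end),
        )
        start = end + 1

conn.close()

Batching by ranges on a numeric key keeps each INSERT…SELECT statement short, so progress is incremental and an interrupted copy can be resumed from the last completed range.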
Scaling Impact on Performance
Resizing operations trigger the online addition or removal of compute resources, as well as a redistribution of data to ensure even performance across the compute workspace.
For large deployments with heavy active workloads, the time required to complete the resizing operation may increase, as a larger volume of active data needs to be redistributed within the deployment.
Billing
Compute consumes compute credits while running. The rate of credit consumption is determined by the size and scaleFactor of the workspace.
Resizing and scaling do not affect the storage costs, as storage is charged based on the average number of monthly GB stored, which does not change when deployments are scaled up or down.
Last modified: November 27, 2024