Important
The SingleStore 9.1 release candidate (RC) gives you the opportunity to preview, evaluate, and provide feedback on new and upcoming features prior to their general availability. Until then, SingleStore 9.0 is recommended for production workloads; a 9.0 deployment can later be upgraded to SingleStore 9.1.
Storage Configuration
Prerequisites
Before deploying your SingleStore cluster, ensure that an appropriate StorageClass is available in your Kubernetes cluster. The `storageClass` value specified in `sdb-cluster.yaml` must match an existing StorageClass resource.
Run the following command to list available storage classes:
kubectl get storageclass
When selecting a StorageClass for SingleStore, verify the following properties:

- `allowVolumeExpansion`
  - Must be `true` for production deployments to enable online volume expansion.
  - If `false`, storage size cannot be increased after deployment.
- `reclaimPolicy`
  - `Delete` automatically removes `PersistentVolume`s when the cluster is deleted.
  - `Retain` preserves data after cluster deletion. Select the option that meets your operational requirements and data retention policies.
- `volumeBindingMode`
  - `Immediate` provisions the volume when the `PersistentVolumeClaim` is created.
  - `WaitForFirstConsumer` delays provisioning until a pod is scheduled. SingleStore recommends this value for multi-zone deployments.
To inspect a specific StorageClass:
kubectl describe storageclass <class-name>
Confirm that allowVolumeExpansion is set to true for production deployments.
If a suitable StorageClass is not available, create one with volume expansion enabled.
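For example, a minimal StorageClass manifest might look like the following. The provisioner and its parameters are assumptions that depend on your environment; this sketch uses the AWS EBS CSI driver and the `premium-rwo` class name referenced later in this guide:

```yaml
# storageclass.yaml: illustrative sketch; adjust the provisioner and
# parameters for your cloud or on-premises CSI driver
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: premium-rwo
provisioner: ebs.csi.aws.com     # assumption: AWS EBS CSI driver
parameters:
  type: gp3
reclaimPolicy: Retain
allowVolumeExpansion: true       # required for online volume expansion
volumeBindingMode: WaitForFirstConsumer
```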
Apply the configuration:
kubectl apply -f storageclass.yaml
Before deploying the cluster, ensure the following:

- The `StorageClass` exists: `kubectl get storageclass <class-name>`
- The `StorageClass` has `allowVolumeExpansion` set to `true` for production deployments.
- RBAC permissions are correctly configured in `sdb-rbac.yaml`.
Note
The StorageClass must exist before running `kubectl apply -f sdb-cluster.yaml`. Changing the StorageClass after deployment requires backing up data, deleting the cluster, and redeploying it.
Fault-Tolerant Storage Configuration
To deploy a fault-tolerant SingleStore cluster, configure the redundancyLevel and appropriate storage resources.
- `redundancyLevel: 1` disables replication and is intended for development or testing.
- `redundancyLevel: 2` enables data replication and is required for production deployments.
With redundancyLevel: 2, each partition has two replicas distributed across availability groups.
The following is a fault-tolerant configuration example:
```yaml
apiVersion: memsql.com/v1alpha1
kind: MemsqlCluster
metadata:
  name: sdb-cluster-ha
spec:
  license: license_key
  adminHashedPassword: "hashed_password"
  nodeImage:
    repository: singlestore/node
    tag: alma-9.1.0
  redundancyLevel: 2
  aggregatorSpec:
    count: 3  # Use odd numbers such as 3, 5, or 7
    cores: 8
    coresLimit: 8
    memoryMB: 32768
    memoryLimitMB: 32768
    storageGB: 256
    storageClass: premium-rwo
  leafSpec:
    count: 4  # Four leaves create two availability groups
    cores: 16
    coresLimit: 16
    memoryMB: 131072
    memoryLimitMB: 131072
    storageGB: 1024
    storageClass: premium-rwo
```
Use an odd number of voting member aggregators - typically one Master Aggregator (MA) and two or more child aggregators (CAs) configured as voting members at the time the cluster is created - to maintain quorum for cluster consensus.
- Three voting aggregators tolerate one failure.
- Five voting aggregators tolerate two failures.
For child aggregators, SingleStore recommends running at least two to maintain connectivity during cluster maintenance operations.
With redundancyLevel: 2, data is replicated across leaf nodes.
- Total usable storage is approximately (leafSpec.count × leafSpec.storageGB) / 2.
- For example, four leaf nodes with 1 TB each provide approximately 2 TB of usable storage.
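As a quick sanity check, the usable-storage estimate can be computed directly in the shell, using the values from the example configuration above:

```shell
# Approximate usable storage with redundancyLevel: 2
leaf_count=4              # leafSpec.count
storage_gb_per_leaf=1024  # leafSpec.storageGB (1 TB per leaf)
usable_gb=$(( leaf_count * storage_gb_per_leaf / 2 ))
echo "${usable_gb} GB usable"   # prints "2048 GB usable"
```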
Refer to Managing High Availability and Disaster Recovery for more information.