System Requirements and Recommendations
Follow these requirements and recommendations when provisioning and setting up your hosts to optimize the performance of your cluster.
Cloud Deployment Recommendations
For cloud deployments, all instances should be deployed within a single geographic region.
Here are some recommendations to help optimize for performance:
- Cross-AZ / multi-AZ failover is not recommended. See Recommended Configurations to Tolerate Failure of a Cloud AZ or Nearby Data Center.
- Network throughput: look for guaranteed throughput instead of a bursting "up to" amount. For example, favor "10 Gbps" over "Up to 10 Gbps".
- For memory-intensive workloads, consider memory-optimized SKUs, typically with a ratio of 8 GB of memory per vCPU.
- Optimize for NUMA (Non-Uniform Memory Access). Queries that cross NUMA nodes can be expensive, so favor VM types whose CPUs present a single NUMA node.
- For storage, use SSD disks at a minimum. If storage performance is an issue with standard SSDs, use provisioned SSDs with higher IOPS throughput, though there is a cost trade-off.
- Each leaf node should map to a separate SSD disk. Parallel I/O is important due to limitations in disk I/O.
- Use a Network Load Balancer (NLB) for TCP (Layer 4) connectivity. Do not use the Classic Load Balancer option in AWS.
Here are some platform-specific recommendations:
| Platform | Compute | Storage |
|---|---|---|
| AWS | Memory Optimized: r5 series | EBS volumes with SSD: gp3. For higher throughput (at higher cost), use provisioned IOPS: io2 |
| Azure | Memory Optimized SKUs: Eds series | Managed Disks: LRS only. Premium SSD; Ultra SSD for more performance at a higher cost |
| GCP | General Purpose SKU with an 8:1 ratio: N2 series. For n2-highmem-16, minimum CPU platform: Intel Ice Lake | SSD storage type: pd-ssd. pd-extreme is a more expensive option for higher-throughput provisioned storage |
Recommended Hardware
The following are additional hardware recommendations for optimal performance:
| Component | Recommendation |
|---|---|
| CPU | For each host, an x86_64 CPU. SingleStore is optimized for architectures supporting SSE4.2. Refer to Recommended CPU Settings (below) for more information. |
| Memory | A minimum of 8 GB of RAM available for each aggregator node and a minimum of 32 GB of RAM available for each leaf node. It is strongly recommended to run leaf nodes on hosts that have the same hardware and software specifications. |
| Storage | Provide a storage system for each node with at least 3x the capacity of main memory. SSD storage is recommended for columnstore workloads. Refer to Storage Requirements (below) for more information. |
Recommended Operating System Settings
Platform and Kernel
Our recommended platforms are:
- Red Hat Enterprise Linux (RHEL) / AlmaLinux 7 or later
- Debian 8 or 9 (version 9 is preferred)

CentOS 6, 7, and 8 are supported but are not recommended for new deployments.

When provisioning your hosts, the minimum required Linux kernel version is 3.10.
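You can confirm the kernel version on each host before deploying:

```
uname -r
```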
Toolbox Platform and Kernel Checkers
The following Toolbox checkers are run prior to deploying SingleStore.
Note that the associated sdb-report collect and sdb-report check Toolbox commands may also be run from the command line.
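For example (additional flags and output paths vary by Toolbox version; see sdb-report --help):

```
# Collect a report from the hosts, then run all checkers against it
sdb-report collect
sdb-report check
```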
| Host Setting | Checker | Description | Host Configuration |
|---|---|---|---|
| cgroups | | Checks if control groups (cgroups) are disabled | Disabled in non-VM deployments. Run the kernel with boot arguments that disable cgroups. |
| Defunct (Zombie) Processes | | Checks if there are defunct (zombie) processes on each host | Kill all zombie processes on each host before deploying SingleStore |
| Filesystem Type | | Checks if a host's filesystem can support SingleStore | |
| Kernel Version | | Checks for kernel version consistency | Kernel versions must be the same for all hosts |
| Linux Out-of-Memory Killer | | Checks dmesg for invocations of the Linux out-of-memory killer | The Linux out-of-memory killer is not running on any host |
| Major Page Faults | | Checks the number of major page faults per second on each host and determines if it is acceptable | Major page faults per second on each host |
| Orchestrator Processes | | Checks if any orchestrator process is found on any host | Not a requirement; no action required |
| Proc Filesystem (/proc) | | Collects diagnostic files from /proc | |
Configure File Descriptor and Maximum Process Limits
A SingleStore cluster uses a substantial number of client and server connections between aggregators and leaf nodes to run queries and cluster operations. The associated limits can be raised in the /etc/security/limits.conf file, or directly via shell commands.

Permanently increase the open files limit and the max user processes limit for the memsql user by editing the /etc/security/limits.conf file as the root user and adding the following lines:

```
memsql soft nofile 1024000
memsql hard nofile 1024000
memsql soft nproc 128000
memsql hard nproc 128000
```
Note
Each node must be restarted for the changed ulimit settings to take effect.
The file-max setting configures the maximum number of file handles (file descriptor limit) for the entire system; its value must be higher than the nofile setting above. The current value can be read from /proc/sys/fs/file-max, and it can be set persistently via an fs.file-max line in the /etc/sysctl.conf file.
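A minimal sketch of checking and raising this limit (the value shown is illustrative; it just needs to exceed the nofile setting):

```
# Read the current system-wide file handle limit
cat /proc/sys/fs/file-max

# Persist a higher limit (illustrative value) and apply it
echo "fs.file-max = 2097152" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```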
Configure the Linux nice Setting
Given how the Linux kernel calculates the maximum nice limit, SingleStore recommends that you modify the /etc/security/limits.conf file and set the maximum nice limit to -10 on each Linux host in the cluster. To apply this new nice limit, restart each SingleStore node in the cluster.

Alternatively, you may set the default nice limit to -10 on each Linux host in the cluster prior to deploying SingleStore.
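For example, assuming the SingleStore nodes run as the memsql user, a limits.conf entry along these lines would set the limit (a sketch; adjust the user name to your deployment):

```
# /etc/security/limits.conf: permit the memsql user a nice limit of -10
memsql soft nice -10
memsql hard nice -10
```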
Configure Linux ulimit Settings
Most Linux operating systems provide ways to control the usage of system resources such as threads, files, and network connections at an individual user or process level. These per-user limits can be set in the /etc/security/limits.conf file, in files under the /etc/security/limits.d directory, or directly via shell commands.
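You can inspect the limits currently in effect for a shell session with ulimit:

```
# Open-files and max-user-processes limits for the current session
ulimit -n
ulimit -u
```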
Configure Linux vm Settings
SingleStore recommends letting first-party tools, such as sdb-admin and memsqlctl, configure your vm settings to minimize the likelihood of getting memory errors on your hosts.

- vm.max_map_count: set to 1000000000
- vm.overcommit_memory: set to 0. WARNING: vm.overcommit_memory should be set to 0. Using values other than 0 is recommended only for systems with swap areas larger than their physical memory. Please consult your distribution documentation.
- vm.overcommit_ratio: ignore, unless vm.overcommit_memory is set to 2, in which case set this to 99. See the warning above for vm.overcommit_memory values other than 0.
- vm.min_free_kbytes: set to either 1% of system RAM or 4 GB, whichever is smaller
- vm.swappiness: set between 1 and 10
If the SingleStore Tools cannot set these values for you, you will get an error message stating what the value should be and how to set it. You can also set these values manually with the /sbin/sysctl command, as shown below.
```
sudo sysctl -w vm.max_map_count=1000000000
sudo sysctl -w vm.min_free_kbytes=<either 1% of system RAM or 4 GB, whichever is smaller>
```
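Note that sysctl -w changes do not survive a reboot. To persist them, add the corresponding lines to /etc/sysctl.conf (a sketch; the swappiness value shown is one choice within the recommended 1-10 range):

```
vm.max_map_count = 1000000000
vm.overcommit_memory = 0
vm.min_free_kbytes = <either 1% of system RAM or 4 GB, whichever is smaller>
vm.swappiness = 5
```

Then apply the file with sudo sysctl -p.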
Toolbox vm Checkers
The following Toolbox checkers are run prior to deploying SingleStore.
| Host Setting | Checker | Description | Host Configuration |
|---|---|---|---|
| Max Map Count | | Checks the value of vm.max_map_count | The value of vm.max_map_count should be 1000000000 |
| Minimum Free Kilobytes | | Checks if vm.min_free_kbytes is set appropriately | The value of vm.min_free_kbytes should be either 1% of system RAM or 4 GB, whichever is smaller |
| Swappiness | | Checks the value of vm.swappiness | The swappiness value should be between 1 and 10. When set to lower values, the kernel will use less swap space. Recommended: swappiness should never be set to 0 |
| vm.overcommit_memory | | Checks the vm.overcommit_memory setting. By design, Linux kills processes that are consuming large amounts of memory when the amount of free memory is deemed to be too low. Overcommit settings that are set too low may cause frequent and unnecessary failures. Refer to Configuring System Memory Capacity for more information. | Overcommit: providing virtual memory without guaranteeing physical storage for it |
Configure Swap Space
SingleStore recommends that you create a swap partition (or swap file on a dedicated device) to serve as an emergency backing store for RAM.
For more information about setting up and configuring swap space, refer to your distribution's documentation.
After enabling these settings, your hosts will be configured for optimal performance when running one or more SingleStore nodes.
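As a sketch (sizes are illustrative; the checker below expects total swap >= 10% of total RAM), a swap file can be created as follows:

```
# Create and enable a 4 GB swap file (size illustrative)
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Persist across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```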
Toolbox Swap Space Checkers
The following Toolbox checkers are run prior to deploying SingleStore.
| Host Setting | Checker | Description | Host Configuration |
|---|---|---|---|
| Swap | | Checks if swapping is enabled | Recommended: Enabled. The total swap memory on each host should be >= 10% of the total RAM (total physical memory) |
| Swap Usage | | Checks if the swap space that is actively being used is less than 5% | |
Configure Transparent Huge Pages
Linux organizes RAM into pages that are usually 4 KB in size.
As SingleStore uses a lot of memory, SingleStore recommends that you disable THP at boot time on all nodes (master aggregator, child aggregators, and leaf nodes) in the cluster.
To disable THP, add the following lines to the end of /etc/rc.local before the exit line (if present), and reboot the host:
```
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo 0 > /sys/kernel/mm/transparent_hugepage/khugepaged/defrag
echo no > /sys/kernel/mm/transparent_hugepage/khugepaged/defrag
echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
echo 0 > /sys/kernel/mm/redhat_transparent_hugepage/khugepaged/defrag
echo no > /sys/kernel/mm/redhat_transparent_hugepage/khugepaged/defrag
```
On Red Hat distributions, the THP settings are under redhat_transparent_hugepage; on other Linux distributions they are under transparent_hugepage. You can check which one applies to your hosts by running ls /sys/kernel/mm/.

The khugepaged/defrag option will be 1 or 0 on newer Linux versions, and yes or no on older versions. To see which form your kernel uses, run cat /sys/kernel/mm/*transparent_hugepage/khugepaged/defrag. If you see 1 or 0, keep the line with echo 0; if you see yes or no, keep the line with echo no.
You should end up with at least three lines of settings, for enabled, defrag, and khugepaged/defrag. For example, on a Red Hat distribution:

```
echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
echo 0 > /sys/kernel/mm/redhat_transparent_hugepage/khugepaged/defrag
```

You may also include all eight settings; typically, THP will still be disabled as expected.
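After rebooting, you can confirm that THP is disabled by reading the settings back; the active value appears in brackets:

```
cat /sys/kernel/mm/*transparent_hugepage/enabled
# Expected output once disabled: always madvise [never]
```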
Note
Refer to the documentation for your operating system for more information on how to disable THP.
Toolbox THP Checkers
The following Toolbox checkers are run prior to deploying SingleStore.
| Host Setting | Checker | Description | Host Configuration |
|---|---|---|---|
| Transparent Huge Pages (THP) | | Checks if transparent huge pages are disabled on each host in the cluster | Recommended: Disabled. The value of the enabled and defrag settings must be never |
Configure Network Time Protocol Service
Install and run ntpd to ensure that system time is in sync across all nodes in the cluster.

For Debian distributions (like Ubuntu):

```
sudo apt-get install ntp
```

For Red Hat / CentOS / AlmaLinux:

```
sudo yum install ntp
```
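After installation, verify that the daemon is running and syncing with its peers (the service name may be ntp or ntpd, depending on the distribution):

```
# Enable and start the NTP daemon, then list its peers
sudo systemctl enable --now ntpd
ntpq -p
```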
Recommended CPU Settings
Configure Cluster-on-Die Mode
If you are installing SingleStore natively and have access to the BIOS, you should enable Cluster-on-Die in the system BIOS for hosts with Haswell-EP and later x86_64 CPUs.
Configure NUMA
If the CPU(s) on your host supports Non-Uniform Memory Access (NUMA), SingleStore can take advantage of that and bind SingleStore nodes to NUMA nodes.
If you do not configure SingleStore this way, performance will be greatly degraded due to expensive cross-NUMA-node memory access.
Note
Linux and numactl cannot detect when virtual environment hosts have NUMA. This configuration is nonetheless recommended, as it allows for VM portability along with the performance improvements afforded by optimizing your system for NUMA.
SingleStore Tools can do the NUMA binding for you; however, you must have numactl installed first.

- Log into each host and install the numactl package. For example, for a Debian-based OS:

  ```
  sudo apt-get install numactl
  ```

- For Red Hat / CentOS / AlmaLinux, run the following:

  ```
  sudo yum install numactl
  ```

- Check the number of NUMA nodes on your hosts by running numactl --hardware. For example:

  ```
  numactl --hardware
  available: 2 nodes (0-1)
  ```

  The output shows that there are 2 NUMA nodes on this host, numbered 0 and 1.
For additional information, see Configuring SingleStore for NUMA.
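To illustrate what this binding does (SingleStore Tools perform it for you; the command below is a hypothetical sketch, not a deployment step), binding a process to a single NUMA node keeps its CPU and memory accesses local:

```
# Run a command with CPU and memory both bound to NUMA node 0
numactl --cpunodebind=0 --membind=0 <command>
```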
Toolbox CPU Checkers
The following Toolbox checkers are run prior to deploying SingleStore.
| Host Setting | Checker | Description | Host Configuration |
|---|---|---|---|
| CPU Features | | Reads the content of /proc/cpuinfo and checks that the flags field contains the sse4_2 flag | Recommended: sse4_2 is present. SingleStore is optimized for architectures supporting SSE4.2. Refer to AVX2 Instruction Set Verification for more information on how to verify if your system supports AVX2. |
| CPU Frequency | | Collects information about CPU frequency configuration | The expected CPU frequency scaling folders should exist; CPU frequency scaling data is collected from them |
| CPU Hyperthreading | | Checks that hyperthreading is enabled on each host via the lscpu command (available on each host) | Hyperthreading is enabled |
| CPU Threading Configuration | | Collects information about CPU threading configuration via lscpu (available on each host) | Hyperthreading is enabled if the number of threads per core > 1 |
| CPU Idle and Utilization | | Checks CPU utilization and idle time, including whether the CPU is frequently more than 5% idle. If not, this typically indicates that your workload will not have room to grow, and more cores will likely be required | Percentage of time the CPU is idle |
| CPU and Memory Bandwidth | | Checks that CPU and memory bandwidth is appropriate for safe performance on your hosts | |
| CPU Model | | Checks if all CPU models are the same on each host | |
| CPU Power Control | | Checks that power saving and turbo mode settings on all hosts are disabled | |
| NUMA Configuration | | Checks if SingleStore with NUMA is configured via numactl for optimal performance, as described in Configuring SingleStore for NUMA | Enabled on each leaf node host and configured via numactl |
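To spot-check the instruction-set flags manually (assuming the standard /proc/cpuinfo layout):

```
# List the SingleStore-relevant flags reported by the first CPU
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -E -x 'sse4_2|avx2'
```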
Recommended Memory Settings
Toolbox Memory Checkers
The following Toolbox checkers are run prior to deploying SingleStore.
| Host Setting | Checker | Description | Host Configuration |
|---|---|---|---|
| Committed Memory | | Checks the committed memory on each host and determines if it is acceptable | Committed memory on each host |
| Maximum Memory Settings | | Checks the host's maximum memory settings | Maximum memory settings are recommended to be a percentage of the host's total memory, with a ceiling of 90% |
Recommended Storage Settings
POSIX-Compliant
To maintain data durability and resiliency, SingleStore's data directory (as defined by the datadir engine variable, which holds database snapshots, transaction logs, and columnstore segments) must reside on a POSIX-compliant filesystem.
SingleStore officially supports ext4 and XFS file systems.
Most Linux filesystems, including ext3, ext4, XFS, and ZFS are POSIX-compliant when mounted on a POSIX-compliant host.
Storage Requirements
Storage guidelines include:
- The storage location must be on a single contiguous volume, and should never be more than 60% utilized.
- Throughput: 30 MB/s per physical core or vCPU.
- 300 input/output operations per second (IOPS) per physical core or vCPU. For example, the disk for a 16-core host should be capable of roughly 500 MB/s (16 × 30 MB/s = 480 MB/s) and 5000 IOPS (16 × 300 = 4800), rounding up.
- For rowstore, the amount of required disk space should be about 5x the amount of RAM. Rowstore storage capacity is limited by the amount of RAM on the host; increasing RAM increases the amount of available data storage.
- For columnstore, the amount of required disk space should be about the size of the raw data you are planning to load into the database. For concurrent loads on columnstore tables, SSD storage will improve performance significantly compared to HDD storage.
- When using high availability (HA), the amount of disk space required will be 2x the size of the raw data.
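To check the 60% utilization guideline, inspect the volume that backs each node's data directory (the path below is the common default for self-managed deployments; substitute your configured datadir):

```
# Show utilization of the filesystem backing the SingleStore data directory
df -h /var/lib/memsql
```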
Toolbox Disk Checkers
The following Toolbox checkers are run prior to deploying SingleStore.
| Host Setting | Checker | Description | Host Configuration |
|---|---|---|---|
| Presence of an SSD | | Verifies if a host is using an SSD for storage | Recommended: Each host is using an SSD for storage |
| Disk Bandwidth | | Checks that disk bandwidth allows safe operation with a SingleStore cluster | Read, write, and sync-read speed >= 128 MB/s |
| Disk Latency | | Checks the read and write latency of the disk to determine overall disk performance | Read/write latency |
| Disk Storage in Use | | Checks the amount of free disk space and determines if it is approaching the capacity limit | Free disk space |
Recommended Network Settings
Configure Network Settings
Note: Perform the following steps on each host in the cluster.
- As root, display the current sysctl settings and review the values of rmem_max and wmem_max:

  ```
  sudo sysctl -a | grep mem_max
  ```

- Confirm that the receive buffer size (rmem_max) is 8 MB for all connection types. If not, add the following line to the /etc/sysctl.conf file:

  ```
  net.core.rmem_max = 8388608
  ```

- Confirm that the send buffer size (wmem_max) is 8 MB for all connection types. If not, add the following line to the /etc/sysctl.conf file:

  ```
  net.core.wmem_max = 8388608
  ```

- Confirm that the maximum number of connections that can be queued for a socket (net.core.somaxconn) is at least 1024. SingleStore will attempt to update this value to 1024 on a node's host when the node starts. After the cluster is up and running, run the following on each host to confirm that this value has been set:

  ```
  sudo sysctl -a | grep net.core.somaxconn
  ```

  If this value could not be set by SingleStore, add the following line to the host's /etc/sysctl.conf file. Note that values lower than 1024 could allow connection requests to overwhelm the host.

  ```
  net.core.somaxconn = 1024
  ```

- Persist these updates across reboots:

  ```
  sudo sysctl -p /etc/sysctl.conf
  ```

- At the next system boot, confirm that the above values have persisted.
Default Network Ports
Depending on the host and its function in deployment, some or all of the following port settings should be enabled on hosts in your cluster.
- Allow database clients (such as your application) to connect to the SingleStore aggregators
- Allow all nodes in the cluster to talk to each other over the SingleStore protocol (3306)
- Allow you to connect to management and monitoring tools
| Protocol | Port | Direction | Description |
|---|---|---|---|
| TCP | 3306 | Inbound and Outbound | Default port used by SingleStore. |
| TCP | 22 | Inbound and Outbound | For host access (e.g., via SSH). |
| TCP | 443 | Outbound | To retrieve public repo key(s) for package verification. |
| TCP | 8080 | Inbound and Outbound | Default port for Studio. |
The service port values are configurable if the default values cannot be used in your deployment environment.
We also highly recommend configuring your firewall to prevent other hosts on the Internet from connecting to SingleStore.
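For example, with firewalld (one of several firewall frontends; adapt the port list to your deployment, including any non-default ports you configured):

```
# Allow the default SingleStore port and reload the firewall rules
sudo firewall-cmd --permanent --add-port=3306/tcp
sudo firewall-cmd --reload
```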
Toolbox Network Checkers
The following Toolbox checkers are run prior to deploying SingleStore.
| Host Setting | Checker | Description | Host Configuration |
|---|---|---|---|
| Network Settings | | Checks the network kernel settings, such as net.core.rmem_max and net.core.wmem_max | Recommended: Set each of these values to a minimum of 8 MB (>= 8 MB) |
Self-Managed Columnstore Performance Recommendations
SingleStore supports the ext4 and xfs filesystems.
If you use NVMe drives, set the following parameters in Linux (make the settings permanent in /etc/rc.local):
```
# Set ${DEVICE_NUMBER} for each device
echo 0 > /sys/block/nvme${DEVICE_NUMBER}n1/queue/add_random
echo 1 > /sys/block/nvme${DEVICE_NUMBER}n1/queue/rq_affinity
echo none > /sys/block/nvme${DEVICE_NUMBER}n1/queue/scheduler
echo 1023 > /sys/block/nvme${DEVICE_NUMBER}n1/queue/nr_requests
```
Last modified: November 18, 2024