SingleStore DB

System Requirements and Recommendations

The following are some requirements and recommendations you should follow when provisioning and setting up your host machines to optimize the performance of your cluster.

With the exception of the hardware and software requirements, all other settings are optional.

Universal Requirements

Each SingleStore DB node requires a host machine with an x86_64 CPU with at least four CPU cores and eight GB of RAM available per node.

When provisioning your host machines, use Linux kernel version 3.10 or later.

Our recommended platforms are the following:

  • RHEL/CentOS 6, 7, or 8

  • Debian 8 or 9 (version 9 is preferred)

Network Settings

Note: Perform the following steps on each host in the cluster.

  1. As root, display the current sysctl settings and review the values of rmem_max and wmem_max.

    sysctl -a | grep mem_max
  2. Confirm that the receive buffer size (rmem_max) is 8MB for all connection types. If not, add the following line to the /etc/sysctl.conf file.

    net.core.rmem_max = 8388608
  3. Confirm that the send buffer size (wmem_max) is 8MB for all connection types. If not, add the following line to the /etc/sysctl.conf file.

    net.core.wmem_max = 8388608
  4. Persist these updates across reboots.

    sysctl -p /etc/sysctl.conf
  5. At the next system boot, confirm that the above values have persisted.
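The steps above can be sketched as a short script. This is a minimal sketch: the computed value is 8 MB in bytes, and the privileged lines are left as comments so you can review them before applying as root.

```shell
# 8 MB expressed in bytes, as required for net.core.rmem_max / wmem_max
buf_bytes=$((8 * 1024 * 1024))
echo "net.core.rmem_max = $buf_bytes"
echo "net.core.wmem_max = $buf_bytes"

# To persist and apply (run as root):
#   echo "net.core.rmem_max = $buf_bytes" >> /etc/sysctl.conf
#   echo "net.core.wmem_max = $buf_bytes" >> /etc/sysctl.conf
#   sysctl -p /etc/sysctl.conf
```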

Network Ports

Depending on the host machine and its function in deployment, some or all of the following port settings should be enabled on machines in your cluster. These routing and firewall settings must be configured to:

  • Allow database clients (e.g. your application) to connect to the SingleStore DB aggregators

  • Allow all nodes in the cluster to talk to each other over the SingleStore DB protocol (3306)

  • Allow you to connect to management and monitoring tools


  • Port 3306 (Inbound and Outbound) - Default port used by SingleStore DB. Required on all nodes for intra-cluster communication. Also required on aggregators for client connections.

  • Port 22 (Inbound and Outbound) - For host machine access via SSH. Required between nodes in SingleStore DB tool deployment scenarios. Also useful for remote administration and troubleshooting on the main deployment machine.

  • Port 443 (Outbound) - To get the public repo key for package verification. Required for nodes downloading SingleStore APT or YUM packages.

  • Port 8080 (Inbound and Outbound) - Default port for SingleStore DB Studio. (Only required for the host machine running Studio.)

The service port values are configurable if the default values cannot be used in your deployment environment. For more information on how to change them, see the SingleStore DB configuration file, the sdb-toolbox-config register-host command, and SingleStore DB Studio Installation Guide.

We also highly recommend configuring your firewall to prevent other hosts on the Internet from connecting to SingleStore DB.
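As one example of the firewall configuration described above, the default SingleStore DB port can be opened with firewalld. This is a sketch that assumes your hosts use firewalld; substitute ufw or iptables commands as appropriate for your distribution.

```shell
# Open the default SingleStore DB port (3306) with firewalld (requires root)
PORT=3306
if command -v firewall-cmd >/dev/null 2>&1; then
    sudo firewall-cmd --permanent --add-port="${PORT}/tcp" || echo "need root to change firewall rules"
    sudo firewall-cmd --reload || true
else
    echo "firewall-cmd not found; open TCP port ${PORT} with your firewall tool"
fi
```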

Cloud Deployment Recommendations

For cloud deployments, deploy all instances within a single region to minimize network latency between nodes.

Here are some general recommendations to help optimize for performance:

  • Network throughput - look for guaranteed throughput rather than burstable "up to" amounts. For example, favor "10 Gbps" over "Up to 10 Gbps".

  • For memory intensive workloads, consider the memory optimized SKUs, typically with a ratio of 8 GB of memory per vCPU.

  • Optimize for NUMA (Non-Uniform Memory Access). Queries that span NUMA nodes can be expensive, so prefer VM types whose CPUs expose a single NUMA node. Typically, more recent Intel CPUs are better optimized for NUMA than AMD CPUs, which have smaller NUMA node sizes.

  • For storage, use SSD disks at a minimum. If SSD storage performance is an issue, provisioned SSDs with higher IOPS throughput are available, though at a higher cost.

  • Each leaf node should map to a separate SSD disk. Parallel I/O is important because disk I/O is often the limiting factor.

Here are some platform-specific recommendations:

AWS

  • Compute: Memory Optimized instances, e.g., r5.4xlarge - 16 vCPU and 128 GB of RAM

  • Storage: EBS volumes with SSD - gp2 (For higher throughput and cost, use provisioned IOPS - io2)

Azure

  • Compute: Memory Optimized SKUs in the Eds_v5 series, e.g., Standard_E16ds_v5 - 16 vCPU and 128 GB of RAM

  • Storage: Managed Disks - LRS only. Premium SSD (Ultra SSD for more performance/cost)

GCP

  • Compute: General purpose SKUs with an 8:1 memory-to-vCPU ratio in the N2 series, e.g., n2-highmem-16 - 16 vCPU and 128 GB of RAM

  • Storage: SSD storage type - pd-ssd (pd-extreme is a more expensive option for higher throughput provisioned storage)

Hardware Recommendations

The following are additional hardware recommendations for optimal performance:

  • CPU: At least 8 vCPU per host machine.

  • Memory: At least 4 GB per core, with a 32 GB minimum per leaf node.

  • Storage: Provide a storage system for each node with at least 3 times the capacity of main memory. SSD storage is recommended for columnstore workloads.

Here are some considerations when deciding on your hardware:

  • SingleStore DB rowstore storage capacity is limited by the amount of RAM on the host machine. Increasing RAM increases the amount of available data storage.

  • It is strongly recommended to run SingleStore DB leaf nodes on machines that have the same hardware and software specifications.

  • SingleStore DB is optimized for architectures supporting SSE4.2 and AVX2 instruction set extensions, but it will run successfully on x64 systems without these extensions. See our AVX2 Instruction Set Verification for more information on how to verify if your system supports AVX2.

  • For concurrent loads on columnstore tables, SSD storage will improve performance significantly compared to HDD storage.
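The AVX2 and SSE4.2 support mentioned above can be checked from the CPU flags that Linux reports in /proc/cpuinfo; a minimal sketch:

```shell
# Check for AVX2 and SSE4.2 support via CPU flags in /proc/cpuinfo
flags=$(grep -m1 '^flags' /proc/cpuinfo 2>/dev/null || true)
case "$flags" in
    *avx2*) avx2=yes ;;
    *)      avx2=no  ;;   # SingleStore DB still runs on x86_64 without AVX2
esac
case "$flags" in
    *sse4_2*) sse42=yes ;;
    *)        sse42=no  ;;
esac
echo "AVX2: $avx2, SSE4.2: $sse42"
```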

Enabling Cluster-on-Die (if supported)

If you are installing SingleStore DB natively and have access to the BIOS, you should enable Cluster-on-Die in the system BIOS for machines with Haswell-EP and later x86_64 CPUs. When enabled, this will result in multiple NUMA regions being exposed per processor. SingleStore DB can take advantage of NUMA nodes by binding specific SingleStore DB nodes to those NUMA nodes, which in turn will result in higher SingleStore DB performance.

Software Recommendations

In addition to these basic OS requirements, it is helpful to configure the underlying Linux OS in the following areas to get the most performance using SingleStore DB.

These tuning instructions should be done on each host machine in your cluster.

Configure Linux vm Settings

SingleStore recommends letting first-party tools, such as sdb-admin and memsqlctl, configure your vm settings to minimize the likelihood of getting memory errors on your host machines. The default values used by the tools are the following:

  • vm.max_map_count set to 1000000000

  • vm.overcommit_memory set to 0

  • vm.overcommit_ratio left at its default, unless vm.overcommit_memory is set to 2, in which case it is set to 99

  • vm.min_free_kbytes set to either 1% of system RAM or 4 GB, whichever is smaller

  • vm.swappiness set between 1 and 10

If the SingleStore Tools cannot set the values for you, you will get an error message stating what the value should be and how to set it. You can set the values manually using the /sbin/sysctl command, as shown below.

sudo sysctl -w vm.max_map_count=1000000000
sudo sysctl -w vm.min_free_kbytes=658096
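The vm.min_free_kbytes value (the smaller of 1% of system RAM or 4 GB, as noted above) can be computed from /proc/meminfo; a sketch, with the privileged apply step left as a comment:

```shell
# Compute min(1% of RAM, 4 GB) in kilobytes for vm.min_free_kbytes
total_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
one_percent=$(( total_kb / 100 ))
four_gb_kb=$(( 4 * 1024 * 1024 ))
if [ "$one_percent" -lt "$four_gb_kb" ]; then
    min_free=$one_percent
else
    min_free=$four_gb_kb
fi
echo "vm.min_free_kbytes should be: $min_free"
# Apply as root:  sysctl -w vm.min_free_kbytes=$min_free
```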
Enabling NUMA Support

If the CPU(s) on your host machines supports Non-Uniform Memory Access (NUMA), SingleStore DB can take advantage of that and bind SingleStore DB nodes to NUMA nodes. Binding SingleStore DB nodes to NUMA nodes allows faster access to in-memory data since individual SingleStore DB nodes only access data that’s collocated with their corresponding CPU.

If you do not configure SingleStore DB this way, performance will be greatly degraded due to expensive cross-NUMA-node memory access. Configuring for NUMA should be done as part of the installation process; however, you can reconfigure your deployment later, if necessary.

SingleStore Tools can do the NUMA binding for you; however, you must have numactl installed first. Perform the following steps on each host machine:

  1. Log into each host and install the numactl package. For example, for Debian-based OSes:

    sudo apt-get install numactl

    For Red Hat/CentOS, run the following:

    sudo yum install numactl
  2. Check the number of NUMA nodes on your machines by running numactl --hardware. For example:

    numactl --hardware
    available: 2 nodes (0-1)

    The output shows that there are 2 NUMA nodes on this machine, numbered 0 and 1.

For additional information, see Configuring SingleStore DB for NUMA.

Disable Transparent Huge Pages

Linux organizes RAM into pages, typically 4KB in size. With transparent huge pages (THP), Linux can instead use 2MB pages or larger. As a background process, THP transparently reorganizes the memory a process uses inside the kernel, merging small pages into huge pages and splitting huge pages back into small pages. This reorganization can block memory operations in the memory manager for several seconds, preventing the process from accessing its memory. Because SingleStore DB uses a lot of memory, we recommend disabling THP at boot time on all nodes (master aggregator, child aggregators, and leaves) in the cluster. THP-induced lag can result in inconsistent query run times or high system CPU (also known as red CPU).


For information on how to disable THP, see the documentation for your operating system.
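One common approach on most distributions is to write "never" to the standard THP sysfs files; a sketch (the writes require root, and they only last until the next reboot, so add the same writes to your boot scripts or kernel command line to persist them):

```shell
# Disable THP until the next reboot (the writes require root)
THP=/sys/kernel/mm/transparent_hugepage
for f in "$THP/enabled" "$THP/defrag"; do
    if [ -w "$f" ]; then
        echo never > "$f"
    else
        echo "not writable (run as root): $f"
    fi
done
```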

Install and Run Network Time Protocol Service

Install and run ntpd to ensure that system time is in sync across all nodes in the cluster.

For Debian-based distributions like Ubuntu:

sudo apt-get install ntp

For RedHat/CentOS distributions:

sudo yum install ntp
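After installing, you can verify that the daemon is syncing by listing its peers with ntpq (which ships with the ntp package); a sketch:

```shell
# Verify time synchronization by listing NTP peers
if command -v ntpq >/dev/null 2>&1; then
    ntpq -p || true   # non-zero if the daemon is not yet running
    status=present
else
    status=absent
    echo "ntpq not found; install the ntp package first"
fi
echo "ntpq: $status"
```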
Recommendations for Optimal On-Premise Columnstore Performance

SingleStore supports the EXT4 and XFS filesystems. Many improvements have also been made in recent Linux kernels for NVMe devices, so we recommend using a 3.0+ series kernel. For example, CentOS 7.2 uses the 3.10 kernel.

If you use NVMe drives, set the following parameters in Linux (make it permanent in /etc/rc.local):

# Set ${DEVICE_NUMBER} for each device
echo 0 > /sys/block/nvme${DEVICE_NUMBER}n1/queue/add_random
echo 1 > /sys/block/nvme${DEVICE_NUMBER}n1/queue/rq_affinity
echo none > /sys/block/nvme${DEVICE_NUMBER}n1/queue/scheduler
echo 1023 > /sys/block/nvme${DEVICE_NUMBER}n1/queue/nr_requests
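The settings above can be applied to every NVMe device in one pass; a sketch (the sysfs paths are standard, and the writes require root):

```shell
# Apply the recommended queue settings to each NVMe device present
count=0
for q in /sys/block/nvme*n1/queue; do
    [ -d "$q" ] || continue          # skip if no NVMe devices are present
    echo 0    > "$q/add_random"
    echo 1    > "$q/rq_affinity"
    echo none > "$q/scheduler"
    echo 1023 > "$q/nr_requests"
    count=$((count + 1))
done
echo "configured $count NVMe device(s)"
```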
Increase File Descriptor and Maximum Process Limits

A SingleStore DB cluster uses a substantial number of client and server connections between aggregators and leaves to run queries and cluster operations. We recommend setting the Linux file descriptor and maximum process limits to the values listed below to account for these connections. Failing to increase this limit can significantly degrade performance and even cause connection limit errors. The ulimit settings can be configured in the /etc/security/limits.conf file, or directly via shell commands.

Permanently increase the open files limit and the max user processes limit for the memsql user by editing the /etc/security/limits.conf file as the root user and adding the following lines:

memsql    soft    NOFILE    1024000
memsql    hard    NOFILE    1024000
memsql    soft    nproc     128000
memsql    hard    nproc     128000


A SingleStore DB node must be restarted for the changed ulimit settings to take effect.

The file-max setting configures the maximum number of file handles (file descriptor limit) for the entire system. In contrast, ulimit settings are enforced at the process level. Consequently, the file-max value must be higher than the NOFILE setting. Increase the maximum number of file handles configured for the entire system in /proc/sys/fs/file-max. To make the change permanent, append or modify the fs.file-max line in the /etc/sysctl.conf file.
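You can check the current system-wide limit against the per-process NOFILE value; a sketch (the 2097152 in the comment is an illustrative value, chosen only because it exceeds 1024000):

```shell
# The system-wide handle limit must exceed the per-process NOFILE (1024000)
file_max=$(cat /proc/sys/fs/file-max)
nofile=1024000
if [ "$file_max" -gt "$nofile" ]; then
    echo "fs.file-max ($file_max) is above NOFILE ($nofile)"
else
    echo "too low; as root, add e.g. 'fs.file-max = 2097152' to /etc/sysctl.conf"
fi
```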

Configure the Linux ulimit Settings

Most Linux operating systems provide ways to control the usage of system resources such as threads, files and network at an individual user or process level. The per-user limitations for resources are called ulimits, and they prevent single users from consuming too much system resources. For optimal performance, SingleStore recommends setting ulimits to higher values than the default Linux settings. The ulimit settings can be configured in the /etc/security/limits.conf file, or directly via shell commands.

Configure the Linux nice Setting

Given how the Linux kernel calculates the maximum nice limit, we recommend that you modify the /etc/security/limits.conf file and set the maximum nice limit to -10 on each Linux host in the cluster. This will allow the SingleStore DB engine to run some threads at higher priority, such as the garbage collection threads.

To apply this new nice limit, restart each SingleStore DB node in the cluster.

Alternatively, you may set the default nice limit to -10 on each Linux host in the cluster prior to deploying SingleStore DB.
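The corresponding /etc/security/limits.conf entries might look like the following (the memsql user name is assumed, matching the ulimit examples earlier in this document):

```
memsql    soft    nice    -10
memsql    hard    nice    -10
```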

Create Swap Space

It is recommended that you create a swap partition (or swap file on a dedicated device) to serve as an emergency backing store for RAM. SingleStore DB makes extensive use of RAM (especially with rowstore tables), so it is important that the operating system does not immediately start killing processes if SingleStore DB runs out of memory. Because typical machines running SingleStore DB have a large amount of RAM (>32 GB/node), the swap space can be small (<10% of physical RAM).

For more information on setting up and configuring swap space, please refer to your distribution’s documentation.
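A minimal sketch of creating a swap file follows; the 4 GB size and the /swapfile path are illustrative assumptions (keep swap under ~10% of physical RAM, as noted above), and the function must be run as root.

```shell
# Wraps the swap-file creation steps; call create_swap as root on each host
create_swap() {
    fallocate -l 4G /swapfile                        # reserve space
    chmod 600 /swapfile                              # restrict access
    mkswap /swapfile                                 # format as swap
    swapon /swapfile                                 # enable immediately
    echo '/swapfile none swap sw 0 0' >> /etc/fstab  # persist across reboots
}
```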

After enabling these settings, your machines will be configured for optimal performance when running one or more SingleStore DB nodes.

Disable cgroups in Non-VM Deployments

For non-VM deployments, disable cgroups by running the kernel with the following boot arguments:

    intel_pstate=disable cgroup_disable=memory