Memory Errors
ERROR 1712: Not enough memory available to complete the current request. The request was not processed.
For potential causes and solutions, see Identifying and Reducing Memory Usage.
ERROR: 1720 - Memory usage by SingleStore for tables (XXXXX MB) has reached the value of maximum_table_memory global variable (YYYYY MB). This query cannot be executed.
For potential causes and solutions, see Identifying and Reducing Memory Usage.
ERROR 2373: Code generation for new statements is disabled because the total number of license units of capacity used on all leaf nodes is XX, which is above the limit of 4 for the SingleStore free license.
Issue
Error 2373 is caused when the total combined RAM allocated to the nodes in your cluster exceeds the 128 GB limit imposed by the free license.
This error can occur when using a RAM-based free license.
Solutions
There are three potential solutions depending on what your needs are:
- If you need a cluster with more than 128 GB of RAM, you will need an Enterprise license. Sign up for a free 30-day trial and create an Enterprise License trial key before deploying a larger cluster.
- Redeploy the cluster on a smaller set of machines that will keep you under the 128 GB limit.
- Deploy your cluster manually using the Comprehensive Install Guide, and reduce the memory limits on all nodes with the instructions below.
Reduce Memory Limits
By default, each SingleStore node will use 90% of the host’s physical memory, configurable by the maximum_memory setting. If the sum of maximum_memory across all nodes exceeds 128 GB, the installation will fail when adding nodes to the cluster. If you reduce maximum_memory after creating the nodes and before adding them with their role, the installation will succeed.
Before adding a node with either sdb-admin add-leaf or sdb-admin add-aggregator, change the maximum amount of memory that will be allocated to that node:
sdb-admin update-config --set-global --key maximum_memory --value <value> [--all|--memsql-id <MEMSQL-ID>]
Set value to a number that will keep the total combined RAM size of the cluster lower than 128 GB. Use --set-global to apply the maximum_memory setting change for all running sessions without restarting nodes, and choose either the --all or --memsql-id <MEMSQL-ID> flag depending on whether the setting should be applied to all nodes or just one.
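For example, to cap every node in the cluster at the same value in one step, the change can be applied with the --all flag (the 57344 MB value below is only an illustration of 56 GB per node; choose a value that fits your hosts):
sdb-admin update-config --set-global --key maximum_memory --value 57344 --all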
Note: Memory calculations should always be rounded down.
Consider the following cluster:
- One host with 32 GB RAM for the master aggregator
- Two hosts with 64 GB RAM for the leaves
When SingleStore sets maximum_memory to the default 90% of host memory, it will allocate 32 GB * 0.9 = 28.8 GB for the master aggregator and 64 GB * 0.9 = 57.6 GB for each leaf, or 144 GB in total, which is over the 128 GB limit.
To put this cluster under the 128 GB limit, set the limit for the leaves to 80% of the host memory capacity and round down (64 GB = 65536 MB; 65536 MB * 0.8 = 52428.8 MB, rounded down to 52428 MB):
sdb-admin update-config --set-global --key maximum_memory --value 52428 --memsql-id <Leaf-MEMSQL-ID>
Then, reduce the aggregator’s memory limit to 80% of its host memory capacity and round down (32 GB = 32768 MB; 32768 MB * 0.8 = 26214.4 MB, rounded down to 26214 MB):
sdb-admin update-config --set-global --key maximum_memory --value 26214 --memsql-id <MA-MEMSQL-ID>
Now the memory limits sum to (52428 MB * 2 leaves + 26214 MB) / 1024 MB per GB = 127.99 GB, which is under the 128 GB limit.
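As a quick sanity check, the same round-down arithmetic can be reproduced with shell integer division, which truncates toward zero (a minimal sketch using the example host sizes above):
# 80% of a 64 GB (65536 MB) leaf host, rounded down by integer division
echo $(( 65536 * 80 / 100 ))    # 52428
# 80% of a 32 GB (32768 MB) aggregator host, rounded down
echo $(( 32768 * 80 / 100 ))    # 26214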
Finish the installation by adding the roles with sdb-admin add-leaf or sdb-admin add-aggregator (if there are any child aggregators).
NUMA CPUs and Multiple Nodes per Host
If you have NUMA-capable CPUs on your host machine and wish to run more than one node per host machine, then your maximum_memory value would have to be reduced according to the number of nodes per host.
Memory allocation is calculated as follows:
- One node, default settings: maximum_memory = 90% of physical memory
- Two nodes: set maximum_memory = (90% of physical memory) / 2 per node
- Four nodes: set maximum_memory = (90% of physical memory) / 4 per node
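For instance, on a hypothetical 128 GB (131072 MB) host running two nodes, the per-node value could be derived as follows (a sketch only; substitute your own host size and node count):
# 90% of 131072 MB, split across 2 nodes, rounded down
echo $(( 131072 * 90 / 100 / 2 ))    # 58982
The result is then applied to each node on that host with sdb-admin update-config as shown above.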
ERROR 2374: Leaf or aggregator node could not be added because you are using the SingleStore free license which has a limit of 4 license units and after adding the node you would be using XX license units.
Issue
Error 2374 is caused when the total combined RAM allocated to the nodes in your cluster exceeds the 128 GB limit imposed by the free license.
This error can occur when using a RAM-based free license.
Solutions
There are three potential solutions depending on what your needs are:
- If you need a cluster with more than 128 GB of RAM, you will need an Enterprise license. Sign up for a free 30-day trial and create an Enterprise License trial key before deploying a larger cluster.
- Redeploy the cluster on a smaller set of machines that will keep you under the 128 GB limit.
- Deploy your cluster manually using the Comprehensive Install Guide, and reduce the memory limits on all nodes with the instructions below.
Reduce Memory Limits
By default, each SingleStore node will use 90% of the host’s physical memory, configurable by the maximum_memory setting. If the sum of maximum_memory across all nodes exceeds 128 GB, the installation will fail when adding nodes to the cluster. If you reduce maximum_memory after creating the nodes and before adding them with their role, the installation will succeed.
Before adding a node with either sdb-admin add-leaf or sdb-admin add-aggregator, change the maximum amount of memory that will be allocated to that node:
sdb-admin update-config --set-global --key maximum_memory --value <value> [--all|--memsql-id <MEMSQL-ID>]
Set value to a number that will keep the total combined RAM size of the cluster lower than 128 GB. Use --set-global to apply the maximum_memory setting change for all running sessions without restarting nodes, and choose either the --all or --memsql-id <MEMSQL-ID> flag depending on whether the setting should be applied to all nodes or just one.
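If you are unsure of a node's MemSQL ID, you can list the nodes registered with Toolbox before running the command above (assuming the sdb-admin list-nodes command is available in your Toolbox version):
sdb-admin list-nodes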
Note: Memory calculations should always be rounded down.
Consider the following cluster:
- One host with 32 GB RAM for the master aggregator
- Two hosts with 64 GB RAM for the leaves
When SingleStore sets maximum_memory to the default 90% of host memory, it will allocate 32 GB * 0.9 = 28.8 GB for the master aggregator and 64 GB * 0.9 = 57.6 GB for each leaf, or 144 GB in total, which is over the 128 GB limit.
To put this cluster under the 128 GB limit, set the limit for the leaves to 80% of the host memory capacity and round down (64 GB = 65536 MB; 65536 MB * 0.8 = 52428.8 MB, rounded down to 52428 MB):
sdb-admin update-config --set-global --key maximum_memory --value 52428 --memsql-id <Leaf-MEMSQL-ID>
Then, reduce the aggregator’s memory limit to 80% of its host memory capacity and round down (32 GB = 32768 MB; 32768 MB * 0.8 = 26214.4 MB, rounded down to 26214 MB):
sdb-admin update-config --set-global --key maximum_memory --value 26214 --memsql-id <MA-MEMSQL-ID>
Now the memory limits sum to (52428 MB * 2 leaves + 26214 MB) / 1024 MB per GB = 127.99 GB, which is under the 128 GB limit.
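You can also verify the total directly in MB: 128 GB is 131072 MB, so the configured values must sum to less than that (a quick check using the values above):
# 2 leaves at 52428 MB plus one master aggregator at 26214 MB
echo $(( 52428 * 2 + 26214 ))    # 131070 MB, just under 131072 MB (128 GB)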
Finish the installation by adding the roles with sdb-admin add-leaf or sdb-admin add-aggregator (if there are any child aggregators).
NUMA CPUs and Multiple Nodes per Host
If you have NUMA-capable CPUs on your host machine and wish to run more than one node per host machine, then your maximum_memory value would have to be reduced according to the number of nodes per host.
Memory allocation is calculated as follows:
- One node, default settings: maximum_memory = 90% of physical memory
- Two nodes: set maximum_memory = (90% of physical memory) / 2 per node
- Four nodes: set maximum_memory = (90% of physical memory) / 4 per node
ERROR: "Nonfatal buffer manager memory allocation failure. The maximum_ memory parameter (XXXXX MB) has been reached.
For potential causes and solutions, see Identifying and Reducing Memory Usage.
Failed to allocate XXXXX bytes of memory from the operating system (Error 12: Cannot allocate memory). This is usually due to a misconfigured operating system or virtualization technology.
This error message indicates a host-level misconfiguration in how the kernel is allowed to allocate memory. Common causes include:
- low vm.max_map_count
- high vm.min_free_kbytes
- low vm.swappiness
- vm.overcommit_memory
- low vm.overcommit_ratio
- inadequate swap space
For more information about the recommended configuration of these settings, see System Requirements and Recommendations.
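As a starting point, you can inspect the current kernel settings on each host before changing anything (sysctl and free are standard Linux utilities; the parameters listed are the ones above):
# Print the current values of the relevant kernel parameters
sysctl vm.max_map_count vm.min_free_kbytes vm.swappiness vm.overcommit_memory vm.overcommit_ratio
# Check total and free swap space, in megabytes
free -m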