Memory Errors

ERROR 1712: Not enough memory available to complete the current request. The request was not processed.

For potential causes and solutions, see Identifying and Reducing Memory Usage.

ERROR: 1720 - Memory usage by SingleStore for tables (XXXXX MB) has reached the value of maximum_table_memory global variable (YYYYY MB). This query cannot be executed.

For potential causes and solutions, see Identifying and Reducing Memory Usage.

ERROR 2373: Code generation for new statements is disabled because the total number of license units of capacity used on all leaf nodes is XX, which is above the limit of 4 for the SingleStore free license.

Issue

Error 2373 is raised when the total combined RAM allocated to the nodes in your cluster exceeds the 128 GB limit imposed by the free license. It is returned when generating plans for new queries after the memory settings have been increased beyond the free tier capacity.

These errors can occur when using a RAM-based free license. Newer licenses are defined in terms of license units, with free licenses supporting four units. To understand how to handle capacity limit errors with these newer licenses, see the Capacity Limit Error topic.

Solutions

There are three potential solutions, depending on your needs:

  • If you need a cluster with more than 128 GB of RAM, you will need an Enterprise license. Sign up for a free 30-day trial and create an Enterprise License trial key before deploying a larger cluster.

  • Redeploy the cluster on a smaller set of machines that will keep you under the 128 GB limit.

  • Deploy your cluster manually using the Comprehensive Install Guide, and reduce the memory limits on all nodes with the instructions below.

Reduce Memory Limits

By default, each SingleStore node will use 90% of the host’s physical memory; this is configurable via the maximum_memory setting. If the sum of maximum_memory across all nodes exceeds 128 GB, the installation will fail when the nodes are added to the cluster. However, if you lower maximum_memory after creating the nodes and before assigning them their roles, the installation will succeed.

Before adding a node with either sdb-admin add-leaf or sdb-admin add-aggregator, change the maximum amount of memory that will be allocated to that node. This can be done by running the following:

sdb-admin update-config --set-global --key maximum_memory --value <value> [--all|--memsql-id <MEMSQL-ID>]

Set <value> to a number that will keep the total combined RAM size of the cluster under 128 GB. The --set-global flag applies the maximum_memory change to all running sessions without requiring a node restart; choose either the --all or the --memsql-id <MEMSQL-ID> flag depending on whether the setting should be applied to all nodes or to just one.

Note: Memory calculations should always be rounded down.
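For example, to apply a single hypothetical limit of 25600 MB (25 GB) to every node in the cluster at once:

sdb-admin update-config --set-global --key maximum_memory --value 25600 --all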

Consider the following cluster:

  • One host with 32 GB RAM for the master aggregator

  • Two hosts with 64 GB RAM for the leaves

With maximum_memory at the default 90% of host memory, SingleStore will allocate 32 GB * 0.90 = 28 GB (rounded down) for the master aggregator host, and 64 GB * 0.90 = 57 GB (rounded down) for each of the leaves. In total, SingleStore would try to use 28 + 57 + 57 = 142 GB of memory in this cluster, which exceeds the 128 GB available with the free license. The leaves receive more memory because they store the data, while the master aggregator handles only a small query load. All leaves should have the same memory limit, but aggregators can have a comparatively lower one.

To bring this cluster under the 128 GB limit, set the limit for the leaves to 80% of host memory capacity, rounded down (64 GB * 0.80 * 1024 MB per GB = 52428 MB). On the main deployment machine, run the following once for each leaf, substituting that leaf's MemSQL ID:

sdb-admin update-config --set-global --key maximum_memory --value 52428 --memsql-id <Leaf-MEMSQL-ID>

Then, reduce the aggregator’s memory limit to 80% of its host memory capacity, rounded down (32 GB * 0.80 * 1024 MB per GB = 26214 MB). On the main deployment machine, run:

sdb-admin update-config --set-global --key maximum_memory --value 26214 --memsql-id <MA-MEMSQL-ID>

Now the memory limits sum as: (52428 MB * 2 leaves + 26214 MB) / 1024 MB per GB = 127.99 GB. This is just under the 128 GB limit.
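These calculations can be reproduced with shell arithmetic, where integer division rounds down automatically, in line with the note above. A minimal sketch using the example cluster's host sizes:

# Hypothetical host sizes from the example above
LEAF_RAM_GB=64
MA_RAM_GB=32

# 80% of host RAM, converted to MB; integer division rounds down
LEAF_LIMIT_MB=$(( LEAF_RAM_GB * 1024 * 80 / 100 ))   # 52428
MA_LIMIT_MB=$(( MA_RAM_GB * 1024 * 80 / 100 ))       # 26214

# Two leaves plus the master aggregator, against the 128 GB (131072 MB) limit
TOTAL_MB=$(( 2 * LEAF_LIMIT_MB + MA_LIMIT_MB ))      # 131070
echo "total=${TOTAL_MB} MB, limit=$(( 128 * 1024 )) MB"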

Finish the installation by assigning the roles with sdb-admin add-leaf or sdb-admin add-aggregator (if there are any child aggregators). This yields a three-node cluster with just under 128 GB of total available RAM.
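Continuing the example, the role-assignment step might look as follows (a sketch: the --memsql-id flag is assumed here by analogy with update-config; confirm the exact flags with sdb-admin add-leaf --help):

sdb-admin add-leaf --memsql-id <Leaf1-MEMSQL-ID>
sdb-admin add-leaf --memsql-id <Leaf2-MEMSQL-ID>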

NUMA CPUs and Multiple Nodes per Host

If your host machines have NUMA-capable CPUs and you wish to run more than one node per host, then the maximum_memory value must be reduced in proportion to the number of nodes per host.

Memory allocation is calculated as follows (a worked sketch appears after this list):

  • One node, default settings: maximum_memory = 90% of physical memory

  • Two nodes, set maximum_memory = (90% of physical memory) / 2 per node

  • Four nodes, set maximum_memory = (90% of physical memory) / 4 per node
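A minimal sketch of the per-node arithmetic (the host size and node count here are hypothetical):

HOST_RAM_GB=128     # hypothetical physical RAM on a NUMA host
NODES_PER_HOST=4    # for example, one node per NUMA region
# 90% of host RAM in MB, split across the nodes; integer division rounds down
PER_NODE_MB=$(( HOST_RAM_GB * 1024 * 90 / 100 / NODES_PER_HOST ))  # 29491
echo "set maximum_memory to ${PER_NODE_MB} MB on each node"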

ERROR 2374: Leaf or aggregator node could not be added because you are using the SingleStore free license which has a limit of 4 license units and after adding the node you would be using XX license units.

Issue

Error 2374 is raised when the total combined RAM allocated to the nodes in your cluster exceeds the 128 GB limit imposed by the free license. It is returned when you try to add a new node to the cluster.

These errors can occur when using a RAM-based free license. Newer licenses are defined in terms of license units, with free licenses supporting four units. To understand how to handle capacity limit errors with these newer licenses, see the Capacity Limit Error topic.

Solutions

The solutions are the same as for ERROR 2373: obtain an Enterprise license, redeploy the cluster on a smaller set of machines, or reduce the memory limits on all nodes. For detailed steps, see Solutions, Reduce Memory Limits, and NUMA CPUs and Multiple Nodes per Host under ERROR 2373 above.

ERROR: "Nonfatal buffer manager memory allocation failure. The maximum_memory parameter (XXXXX MB) has been reached.

For potential causes and solutions, see Identifying and Reducing Memory Usage.

Failed to allocate XXXXX bytes of memory from the operating system (Error 12: Cannot allocate memory). This is usually due to a misconfigured operating system or virtualization technology.

This error message indicates a host-level misconfiguration of how the kernel is allowed to allocate memory, which causes the kernel to refuse memory allocation requests from the database even when the database has not exceeded its own limit. It can be caused by a misconfiguration of any of the following:

  • low vm.max_map_count

  • high vm.min_free_kbytes

  • low vm.swappiness

  • vm.overcommit_memory

  • low vm.overcommit_ratio

  • inadequate swap space

For more information about the recommended configuration of these settings, see System Requirements and Recommendations.
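To inspect the current values on a host, query them with the standard Linux utilities sysctl and free:

sysctl vm.max_map_count vm.min_free_kbytes vm.swappiness vm.overcommit_memory vm.overcommit_ratio
free -m    # verify that adequate swap space is configured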

ERROR: "Memory usage by SingleStore for vector indexes (.. Mb) has reached the value of ‘max_vector_index_cache_memory_mb’ global variable (.. Mb)."

Issue

This error indicates that there is not enough room in the vector index cache to insert another index. That is, either there are not enough evictable items in the cache to make room for the new index, or the new index is larger than the cache itself.

Solutions

To address this error, there are two potential solutions:

  • Use a more compact index type, such as IVFPQ_FS, which compresses vectors and therefore has a smaller memory footprint.

  • Increase the value of max_vector_index_cache_memory_percent to enlarge the vector index cache, as shown in the sketch after this list.
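Following the update-config pattern shown earlier, the cache size could be raised as follows (a sketch: the value 30 is hypothetical, and you should confirm that max_vector_index_cache_memory_percent can be set through update-config in your version):

sdb-admin update-config --set-global --key max_vector_index_cache_memory_percent --value 30 --all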
