Addressing the Maximum Table Memory Error

Note

SingleStore does not support setting the engine variables that are discussed in this topic.

If you see the error message below, the leaf node at leafhost:leafport has reached maximum_table_memory, which is the maximum cumulative memory that all tables on a node may use. Once maximum_table_memory is reached, SingleStore stops allowing writes to any table on that node (the node becomes read-only).

Leaf Error (leafhost:leafport): Memory usage by SingleStore for tables (##### MB) has reached the value of 'maximum_table_memory' global variable (##### MB). This query cannot be executed.
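To see how close each node is to the limit, you can compare per-node table memory usage against the variable. The following is a sketch only, assuming your version exposes the Alloc_table_memory status counter and, for the cluster-wide form, the MV_GLOBAL_STATUS management view:

    -- Run on each node: reports total memory allocated to tables.
    -- Assumes the Alloc_table_memory status counter is available.
    SHOW STATUS EXTENDED LIKE 'Alloc_table_memory';

    -- Or, from an aggregator, view the counter for every node at once
    -- (assumes the MV_GLOBAL_STATUS management view is available).
    SELECT NODE_ID, VARIABLE_VALUE AS table_memory
    FROM INFORMATION_SCHEMA.MV_GLOBAL_STATUS
    WHERE VARIABLE_NAME = 'Alloc_table_memory';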

You can resolve this issue with the steps below.

Check for Data Skew

First, determine which nodes are reaching their memory capacity by checking for data skew across your nodes. If skew is present, the Data Skew topic discusses how to address it.
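One way to check is to compare per-partition row counts. The following is a sketch only, assuming the INFORMATION_SCHEMA.TABLE_STATISTICS view is available in your version; a high relative standard deviation in row counts suggests skew:

    -- Sketch: compare row counts across partitions for each table.
    -- Assumes the INFORMATION_SCHEMA.TABLE_STATISTICS view is available.
    SELECT DATABASE_NAME, TABLE_NAME,
           FLOOR(AVG(ROWS)) AS avg_rows_per_partition,
           ROUND(STDDEV(ROWS) / AVG(ROWS), 3) AS row_skew
    FROM INFORMATION_SCHEMA.TABLE_STATISTICS
    GROUP BY DATABASE_NAME, TABLE_NAME
    ORDER BY row_skew DESC;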

If Data Skew Is Not Present

If nodes are reaching maximum_table_memory without skew, the cluster itself is nearing its maximum memory capacity. To resolve this, either add capacity or delete data. To add capacity, see Administering a Cluster. If additional leaf nodes are not available, use the instructions below to delete data in batches. A DELETE query uses approximately 50 bytes of memory per deleted record until it commits, so maximum_table_memory must be temporarily increased on all nodes for the duration of the DELETE.
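For example, deleting 20 million rows in a single statement holds roughly 20,000,000 × 50 bytes ≈ 1 GB of additional table memory until the commit, while a 100,000-row batch holds only about 5 MB at a time.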

On each leaf node in your cluster, do the following:

  1. Pause write workloads from an aggregator or your application.

  2. Check the current values of maximum_memory and maximum_table_memory.

    select @@global.maximum_memory;
    +-------------------------+
    | @@global.maximum_memory |
    +-------------------------+
    |                   65536 |
    +-------------------------+
    1 row in set (0.00 sec)

    select @@global.maximum_table_memory;
    +-------------------------------+
    | @@global.maximum_table_memory |
    +-------------------------------+
    |                         57927 |
    +-------------------------------+
    1 row in set (0.00 sec)
  3. Increase maximum_table_memory to 95% of maximum_memory. The default value is 90% of maximum_memory. In this example, 95% of 65536 MB is 62259 MB (rounded down).

    set @@global.maximum_table_memory=62259;
    Query OK, 0 rows affected (0.00 sec)
  4. Execute DELETE queries in small batches. Limit each batch to 100,000 records or fewer so that, at roughly 50 bytes per record, each batch holds only about 5 MB of memory until it commits. A repeatable batching pattern is sketched after these steps.

    DELETE FROM mytable LIMIT 100000;
    Query OK, 0 rows affected (1.64 sec)
  5. Restore maximum_table_memory to its original value.

    set @@global.maximum_table_memory=57927;
    Query OK, 0 rows affected (0.00 sec)
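
For reference, the batching in step 4 usually deletes by a predicate rather than trimming arbitrary rows. A sketch, using a hypothetical mytable with a hypothetical event_time column:

    -- Sketch: delete old rows in bounded batches. The table name and
    -- event_time predicate are hypothetical; substitute your own.
    DELETE FROM mytable
    WHERE event_time < '2023-01-01'
    LIMIT 100000;
    -- Re-run the statement until it reports 0 rows affected, then
    -- restore maximum_table_memory as in step 5.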

Last modified: April 26, 2023
