
maximum_table_memory

SingleStoreDB will not allow writes to any table once the cumulative memory in use by all tables reaches maximum_table_memory; at that point, SingleStoreDB becomes read-only. SELECT and DELETE queries are still allowed even after the limit is reached, but UPDATE, INSERT, CREATE TABLE, ALTER TABLE, CREATE INDEX, and DROP INDEX statements fail with an error message. Query compilations are also disallowed once the limit is reached.
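To see which limits are currently in effect, you can inspect both variables with SHOW VARIABLES; this is a quick sanity check of the configured values, not a diagnostic for any particular failure:

    SHOW VARIABLES LIKE 'maximum_table_memory';
    SHOW VARIABLES LIKE 'maximum_memory';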

This setting is designed to leave headroom for SELECT queries to allocate temporary memory for sorting, hash group-by, and so on. maximum_table_memory must be set to a value lower than maximum_memory. By default, maximum_table_memory is set to 90% of maximum_memory, which translates to about 80% of physical memory on the host machine.
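As a sketch of how you might lower the ceiling, assuming your deployment allows changing this engine variable at runtime (on some versions it must instead be set in the configuration file before startup), and noting that the value shown here is illustrative:

    -- Illustrative only: the accepted formats (a percent string such
    -- as '85%' versus an absolute size) may vary by version.
    SET GLOBAL maximum_table_memory = '85%';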

If the maximum_table_memory limit has been reached, DELETE queries can still be executed to remove data from a table; however, large DELETE queries may fail if the memory used by SingleStoreDB reaches maximum_memory.

Use caution, as DELETE queries allocate extra memory to mark rows as deleted. For rowstore tables, this equates to roughly 40 + 8 * number_of_indexes bytes per deleted row. For columnstore tables, the memory usage is lower because of how rows are marked as deleted (roughly num_rows_in_table / 8 bytes if you delete a row in every segment file in the table).
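To make the rowstore arithmetic concrete, here is a back-of-the-envelope estimate; the table shape and row count are hypothetical:

    -- Hypothetical rowstore table with 2 indexes:
    --   per-row overhead ~= 40 + 8 * 2 = 56 bytes
    -- Deleting 10,000,000 rows in a single statement:
    --   ~= 56 * 10,000,000 bytes ~= 560 MB of extra memory,
    -- held until the transaction commits and the garbage
    -- collector frees it.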

If the table is narrow (for example, it contains only a few int columns), a DELETE query can show up as a relatively large spike in memory usage compared to the size of the table, as in the batching sketch below.
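One common way to bound that spike is to delete in batches, so only a limited number of rows are marked as deleted per transaction. A minimal sketch, assuming a hypothetical table t with a ts column and MySQL-style DELETE ... LIMIT support:

    -- Repeat until the statement affects zero rows; each batch
    -- commits separately, so delete-marker memory stays bounded.
    DELETE FROM t
    WHERE ts < '2020-01-01'
    LIMIT 100000;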

The memory for a deleted row is reclaimed after the transaction commits; it is freed asynchronously by the garbage collector. For more information, see the DELETE reference topic.

Replicating databases will pause if memory use reaches maximum_table_memory while replicating data. If memory becomes available again (for example, because some data is deleted), replication automatically continues.