maximum_table_memory
SingleStore will not allow writes to any table once the cumulative memory in use by all tables in SingleStore reaches maximum_table_memory (SingleStore will become read-only). SELECT and DELETE queries will still be allowed even once the limit is reached. UPDATE, INSERT, CREATE TABLE, ALTER TABLE, CREATE INDEX, or DROP INDEX statements will fail with an error message once the limit has been reached.
This setting is designed to allow SELECT queries to allocate temporary memory for sorting, hash group-by, and so on. maximum_table_memory must be set to a value lower than maximum_memory. By default, maximum_table_memory is set to 90% of maximum_memory, which translates to about 80% of physical memory on the host machine.
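As a quick check, the current values of both variables can be inspected with a SHOW VARIABLES pattern match; the sketch below is illustrative only, and the SET GLOBAL form assumes the variable can be changed at runtime on your version (on some deployments it must instead be changed through the cluster configuration tooling):

    -- Show the current limits (pattern matches both variables).
    SHOW VARIABLES LIKE 'maximum%memory';

    -- Sketch of lowering the table-memory limit at runtime.
    -- The value is assumed to be in megabytes; verify the unit
    -- and whether runtime changes are allowed for your release.
    SET GLOBAL maximum_table_memory = 10240;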
If the maximum_table_memory limit has been reached, DELETE queries can still be executed to remove data from the table; however, large DELETE queries may fail if the memory used by SingleStore reaches maximum_memory.
Caution should be taken as DELETE queries allocate extra memory to mark rows as deleted.
If the table is narrow (for example, containing only a few INT columns), DELETE queries will show up as a relatively large spike in memory usage compared to the size of the table.
The memory for a deleted row is reclaimed after the transaction commits; it is freed asynchronously by the garbage collector.
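Because of this spike, one way to reduce peak memory use during a large delete is to remove rows in batches, giving the garbage collector a chance to reclaim memory between transactions. A minimal sketch, using a hypothetical table named events with an event_time column and assuming DELETE ... LIMIT is supported on your version:

    -- Hypothetical table and column names; repeat the statement
    -- until it affects zero rows.
    DELETE FROM events
    WHERE event_time < '2022-01-01'
    LIMIT 100000;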
Replicating databases will pause if memory use reaches maximum_table_memory while replicating data.