List of Engine Variables

The following tables list the sync engine variables and non-sync engine variables. Some variables have an in-depth description that supplements the description found in the table.

When reading or setting engine variables, keep in mind that the values ON and 1 are synonymous, as are the values OFF and 0.

Sync Variables Lists

See the engine variables overview to learn how sync variables work.

Note

All of the sync variables listed in the following table are global variables, except those noted in the Description column as session variables that can also be set globally.

You can set sync variables only on the master aggregator. After you set a sync variable, the nodes affected by your update (all aggregators, all leaves, or both) depend on the variable’s type and the command you use to set the variable. Each variable’s type is indicated in the last sentence of its Description column in the following table.
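As a minimal sketch of this workflow (the variable name is taken from the table below; the value is illustrative):

```sql
-- Run on the master aggregator; the update propagates to all aggregators,
-- all leaves, or both, depending on the variable's type.
SET GLOBAL auto_attach = ON;   -- ON and 1 are synonymous, as are OFF and 0
SHOW VARIABLES LIKE 'auto_attach';
```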

Name

Description

Default

active_query_users_blacklist

Shows a JSON array of users whose queries are not counted as activity when measuring the cluster's idle time. This variable can sync to all nodes, including aggregator and leaf nodes.

aes_default_encryption_mode

This variable controls the default encryption mode and supported key size. It takes a value in aes-keylen-mode format, where keylen is the key length in bits and mode is the encryption mode. The value is not case-sensitive. Permitted keylen values are 128 and 256. Permitted mode values are ECB, GCM, and CBC.

The allowed values are: aes-128-ecb, aes-256-ecb, aes-128-gcm, aes-256-gcm, aes-128-cbc, and aes-256-cbc.

Optionally, an encryption mode string in the same format can be provided to the AES_ENCRYPT and AES_DECRYPT functions. If a mode is not provided, the default value from this variable is used. In addition to the key, an initialization vector can be provided to the encryption/decryption function in the GCM and CBC encryption modes.

This variable can sync to all nodes, including aggregator and leaf nodes.

aes-128-ecb
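A hedged sketch of the behavior described above; the key, plaintext, and initialization vector are placeholders, and the exact argument order should be confirmed against the AES_ENCRYPT/AES_DECRYPT reference:

```sql
-- Set the cluster-wide default mode.
SET GLOBAL aes_default_encryption_mode = 'aes-256-cbc';

-- An explicit mode string overrides the default; the CBC and GCM modes
-- additionally accept an initialization vector.
SELECT AES_ENCRYPT('plaintext', 'encryption_key', 'example_init_vector', 'aes-256-cbc');
```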

aggregator_failure_detection

Deprecated in MemSQL 6.0. Controls whether aggregator failure detection is enabled. This variable can sync to all aggregators.

ON

allow_materialize_cte_with_union

Allows the query optimizer to materialize and reuse common table expressions without recomputing them when the query contains UNION, UNION ALL, or other set operations. It is a session variable that is only available in versions 8.1.20 and newer.

FALSE

allow_modifying_sysvars_with_computed_shard_key

Allows modification of global variables even in the presence of computed shard keys. Changes to the behavior-changing sysvars listed below are allowed in the presence of a computed shard key only when this variable is set to ON.

The behavior-changing sysvars include:

  • data_conversion_compatibility_level

  • json_compatibility_level

  • highlight_allow_ascii_binary_strings

  • regexp_output_validation_mode

  • json_extract_string_collation

OFF

allow_proc_cpuinfo_errors

Determines whether to allow errors while consuming '/proc/cpuinfo' to append CPU flags to the code generation's hash. This variable can sync to all nodes, including aggregator and leaf nodes.

FALSE

assume_udfs_deterministic

Controls whether SingleStore does extra work to avoid issues caused by UDFs that return different values when repeatedly invoked (i.e., are non-deterministic). This variable defaults to FALSE, preserving the existing behavior (since SingleStore version 8.0), where SingleStore ensures these computations take place once on an aggregator node. When set to TRUE, SingleStore avoids this extra layer of coordination (and bottleneck), allowing leaf nodes to compute independently. This variable can sync to all nodes, including aggregator and leaf nodes.

FALSE

auditlog_disk_sync

Specifies if every audit log record is synchronously written and persisted to the disk. By default, it delays the audit log writes to the disk. For more information on this variable, see Configure Audit Logging. This variable can sync to all nodes, including aggregator and leaf nodes.

OFF

auditlog_retention_period

Indicates the retention period (in days) for audit log files. This variable can sync to all nodes, including aggregator and leaf nodes.

0 (store files indefinitely)

auditlog_retention_size

Indicates the retention size (in megabytes) for audit log files.

0

auto_attach

Specifies if the aggregator will try to attach a node automatically after it has been marked offline, but starts responding to heartbeats again. This variable can sync to all aggregators.

ON

auto_profile_type

Determines the mode of auto profiling. There are two values: LITE and FULL. Auto profiling must be enabled (see the enable_auto_profile engine variable below) before setting this variable. This is a session variable that can also be set globally. This variable can sync to all nodes, including aggregator and leaf nodes.

LITE

autostats_flush_interval_secs

Number of seconds a table's incremental autostats can go unused before being flushed to disk. 0 means they will never be flushed. This variable can sync to all nodes, including aggregator and leaf nodes. 

600

attach_rebalance_delay_seconds

Number of seconds to wait after a new node has attached to the cluster before running a rebalance. This mechanism is used to batch up rebalancing if many nodes fail and then come back online within a short period of time. This variable can sync to all aggregators.

120

background_statistics_collection_interval

How often (in seconds) background statistics will check for out-of-date statistics (rowstore only). This variable can sync to all aggregators.

60

background_statistics_collection_threshold

Threshold as a fraction of table row count triggering the collection of autostats. This variable can sync to all aggregators.

0.5

backup_multipart_upload_concurrency

Helps tune backup performance. It controls the number of subprocesses concurrently uploading the parts of a multipart upload done by a backup. A backup of a leaf uses one thread per partition and there is no restriction on the number of threads created (aside from system limitations).

This variable's configuration depends on the backup_max_threads setting.

When backup_max_threads is zero, implying all backups run at full parallelism, this variable can be set in the range 1 to 5, inclusive.

When backup_max_threads is not zero, implying a throttle is applied to the number of concurrent backups per leaf, this variable is treated as 1, irrespective of the value set.

Increasing the number of goroutines improves backup performance, but at an increased cost in CPU, memory, and network throughput.

This variable can sync to all nodes, including aggregator and leaf nodes.

3

batch_external_functions

Determines if external functions are batched. The engine variable accepts ALWAYS, AUTO, and NEVER values. See CREATE [OR REPLACE] EXTERNAL FUNCTION for more information. This variable can sync to all nodes, including aggregator and leaf nodes.

ALWAYS

batch_external_functions_size

Deprecated. Use external_functions_batch_size instead.

512

cardinality_estimation_level

Possible values are '6.0', '6.5', '7.0', '7.1', and '7.3'. The behavior of each setting follows: '6.0': Use legacy histograms; '6.5': Use advanced histograms; '7.0': Same use of histograms as 6.5; '7.1': Use histograms for join estimation; '7.3': Use sampling and histograms together (when both are available), to improve selectivity estimation.

This variable can sync to all nodes, including aggregator and leaf nodes.

This engine variable is configurable on SingleStore Helios.

'7.3'

change_count_write_interval

Maximum frequency (in seconds) that change count metadata will be written. Will never occur more frequently than background_statistics_collection_interval. This variable can sync to all aggregators.

1200

character_set_server

The character set of the node. This is a session variable that can also be set globally. This variable can sync to all nodes, including aggregator and leaf nodes.

For more information, see Specifying Character Set and Collation for Clusters.

utf8

clamp_histogram_date_estimates

Specifies whether to use a heuristic to more accurately estimate date predicates that fall outside the histogram's range. Available values are AUTO, OFF, and ON. The AUTO setting is equivalent to ON. This variable can sync to all nodes, including aggregator and leaf nodes.

AUTO

collation_connection

Sets the collation that is used on the node. When you set this variable, collation_database and collation_server are automatically set to the same value. Although the collation variables are session variables, you must set the variable globally to avoid undesired behavior. An error is generated if any of these variables is set in a session to a value that does not match the value set globally. Change the value globally first, then change it in the session to match. This variable can sync to all nodes, including aggregator and leaf nodes.

For more information, see Specifying Character Set and Collation for Clusters.

utf8_general_ci
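Following the ordering requirement described above, a minimal sketch:

```sql
-- Change the collation globally first...
SET GLOBAL collation_connection = 'utf8_bin';
-- ...then match it in the current session to avoid the mismatch error.
SET SESSION collation_connection = 'utf8_bin';
```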

collation_database

Sets the collation that is used on the node. When you set this variable, collation_connection and collation_server are automatically set to the same value. Although the collation variables are session variables, you must set the variable globally to avoid undesired behavior. An error is generated if any of these variables is set in a session to a value that does not match the value set globally. Change the value globally first, then change it in the session to match. This variable can sync to all nodes, including aggregator and leaf nodes.

For more information, see Specifying Character Set and Collation for Clusters.

utf8_general_ci

collation_server

Sets the collation that is used on the node. When you set this variable, collation_connection and collation_database are automatically set to the same value. Although the collation variables are session variables, you must set the variable globally to avoid undesired behavior. An error is generated if any of these variables is set in a session to a value that does not match the value set globally. Change the value globally first, then change it in the session to match. This variable can sync to all nodes, including aggregator and leaf nodes.

For more information, see Specifying Character Set and Collation for Clusters.

utf8_general_ci

collect_average_size

Sends AVG(LENGTH()) queries to collect average serialization data for string columns. This variable can sync to all aggregators.

FALSE

columnstore_disk_insert_threshold

At this threshold (fraction of columnstore_flush_bytes), INSERT, LOAD DATA, and UPDATE queries to the columnstore will write straight to disk. For more information, see Advanced Columnstore Configuration Options. This variable can sync to all nodes, including aggregator and leaf nodes.

0.5

columnstore_flush_bytes

Controls the rowstore-backed segment size for columnstore tables. For more information, see Advanced Columnstore Configuration Options. This variable can sync to all nodes, including aggregator and leaf nodes.

33554432

columnstore_ingest_management_queue_timeout

Timeout (in seconds) to queue a columnstore ingest query before returning an error. This variable can sync to all nodes, including aggregator and leaf nodes.

3600

columnstore_row_value_table_lock_threshold

Sets the threshold at which multiple inserts to a columnstore table with unique keys will switch from row value lock to table lock. The value's metric is the number of rows for each partition in a database. This variable can sync to all nodes, including aggregator and leaf nodes.

1000000 (rows per partition)

columnstore_segment_rows

Controls the maximum row count for a columnstore segment. For more information, see Advanced Columnstore Configuration Options. This variable can sync to all nodes, including aggregator and leaf nodes. Maximum value is 10000000.

1024000

columnstore_small_blob_combination_threshold

Maximum file size (in bytes) that two or more column blobs can be combined into. This variable can sync to all nodes, including aggregator and leaf nodes.

5242880

columnstore_validate_blob_before_merge

Verifies the checksum of a blob before merging it. For more information, see CHECK BLOB CHECKSUM and Columnstore. This variable can sync to all nodes, including aggregator and leaf nodes.

OFF

compiled_images_eviction_memory_limit_mb

Sets a limit (in megabytes) for the memory amount used for query object code images. Refer to Managing Plancache Memory and Disk Usage for more information. This variable can sync to all nodes, including aggregator and leaf nodes.

0

compiled_images_eviction_memory_limit_percent

Sets the percentage of maximum_memory that is available for caching query object code images. Refer to Managing Plancache Memory and Disk Usage for more information. This variable can sync to all nodes, including aggregator and leaf nodes.

0.0

compile_only

If ON, SingleStore will compile, but not run, each query it receives. This is a session variable that can also be set globally. This variable can sync to all nodes, including aggregator and leaf nodes.

OFF

consensus_enabled

Determines whether consensus can be configured or not. This variable can sync to all nodes, including aggregator and leaf nodes.

FALSE

core_on_bad_errnos_in_file_io_apis

Generates a core if a file IO API encounters EFAULT, EBADF , or EINVAL. This variable can sync to all nodes, including aggregator and leaf nodes.

0

core_on_stack_overflow

Generates a core instead of throwing an error if a stack overflow occurs during query execution. This variable can sync to all nodes, including aggregator and leaf nodes.

0

critical_diagnostics_max_payload_size

Limits the size of some critical diagnostics sent to SingleStore. This variable can sync to all nodes, including aggregator and leaf nodes.

65536

data_conversion_compatibility_level

Specifies the level of data conversion behavior to use. Higher levels throw errors for integer underflow/overflow, illegal conversion of strings to integers, and string truncation issues. Values can be: '8.1', '8.0', '7.5', '7.0', '6.5', or '6.0'. It is recommended to set this to the highest available level for new application development. For more information, see Data Type Conversion.

This variable can sync to all nodes, including aggregator and leaf nodes.

'8.0'
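A brief sketch of raising the level; the table and column below are hypothetical:

```sql
SET GLOBAL data_conversion_compatibility_level = '8.0';

-- At higher levels, lossy operations such as inserting an out-of-range
-- integer raise an error instead of converting silently:
-- INSERT INTO t (tiny_col) VALUES (100000);
```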

deadlock_avoidance_strategy

See Row Locking. This variable can sync to all nodes, including aggregator and leaf nodes.

wait_die

default_autostats_columnstore_cardinality_mode

Sets the default type of cardinality autostats for newly created columnstore tables. Values can be OFF, INCREMENTAL , or PERIODIC. This variable can sync to all aggregators.

INCREMENTAL

default_autostats_columnstore_sampling

Sets the default state for autostats sampling for newly created columnstore tables. This variable can sync to all aggregators.

ON

default_autostats_enabled

Whether autostats is enabled by default on newly created tables. This variable can sync to all aggregators.

ON

default_autostats_histogram_mode

Sets the default type of automatic histograms on newly created tables. Values can be either OFF, CREATE, or UPDATE. This variable can sync to all aggregators.

CREATE

default_autostats_rowstore_cardinality_mode

Sets the default type of cardinality autostats for newly created rowstore tables. Values can be OFF, INCREMENTAL or PERIODIC. This variable can sync to all aggregators.

PERIODIC

default_columnstore_table_lock_threshold

Sets a threshold for the number of rows that are locked before a table lock is acquired when updating or deleting rows in a columnstore table. When set to 0, the threshold of 5000 rows is used. It is a session variable that can be set globally. This variable can sync to all nodes, including aggregator and leaf nodes.

0

default_distributed_ddl_timeout

The time (in milliseconds) to wait for a distributed DDL transaction to commit. This value sets the timeout for both ALTER TABLE and BACKUP commands. If the timeout is reached, the transaction is rolled back. This variable can sync to all nodes, including aggregator and leaf nodes.

180000

default_partitions_per_leaf

The default number of partitions for a newly created database. When you set this variable, its value is propagated to other aggregators only.

8

default_storage_engine

A global variable used for MySQL compatibility. This variable can sync to all nodes, including aggregator and leaf nodes.

default_table_type

The default table type to use when creating a new table. The value can be set to columnstore or rowstore. When the value is columnstore you can create a columnstore table using the standard CREATE TABLE syntax. This variable can sync to all aggregators.

columnstore for newly created clusters.

When you upgrade a cluster, the setting of the variable does not change.

default_user_require_ssl

Controls the default value for REQUIRE SSL in the CREATE USER DDL statement.

If it is enabled and REQUIRE is not specified in the CREATE USER statement, SSL will be required by default.

Once it is turned on, it impacts only newly created users; it does not enforce SSL for existing users.

This variable can sync to all aggregators.

OFF

disable_reference_table_leftjoin_where_pushdown

Controls pushdown logic of pushing down reference table clauses to the leaves. When set to OFF, the logic is enabled. When set to ON, the logic is disabled. This is a session variable that can also be set globally. This variable can sync to all nodes, including aggregator and leaf nodes.

ON

disable_remove_redundant_gby_rewrite

Prevents the removal of GROUP BY columns when used in an ORDER BY clause. The default of AUTO is equal to OFF. To disable this behavior the variable must be set to ON. This variable can sync to all nodes, including aggregator and leaf nodes.

AUTO

disable_subquery_merge_with_straight_joins

Determines whether merging subqueries is disabled if they contain STRAIGHT_JOIN. Here, merging subqueries refers to removing unnecessary nested layers from joins that include subqueries. For example, the query select * from (select * from table_a straight_join table_b) table_c straight_join table_d would be rewritten as select * from table_a straight_join table_b straight_join table_d if disable_subquery_merge_with_straight_joins is set to OFF. If the variable is set to ON, the query is not rewritten. The default setting AUTO is equivalent to ON in 7.3.

This is a session variable that can also be set globally. This variable can sync to all nodes, including aggregator and leaf nodes.

AUTO

disable_update_delete_distributed_transactions

OFF

disk_plan_expiration_minutes

The interval in which a query plan must be read from disk (plancache directory) before it is removed. This variable can sync to all nodes, including aggregator and leaf nodes.

20160 (14 days)

dist_txn_outcomes_keepalive_secs

Controls how long (in seconds) an aggregator caches a distributed transaction outcome. This variable can sync to all nodes, including aggregator and leaf nodes.

1800

distributed_commit_lock_timeout

Deprecated in MemSQL 6.5. Replaced by default_distributed_ddl_timeout.

distributed_heartbeat_timeout

As of MemSQL 6.0, this is the timeout (in seconds) used when gathering workload management statistics from leaves. Before MemSQL 6.0 this was the timeout of the heartbeat query used to trigger failovers. This variable can sync to all aggregators.

10

distributed_optimizer_broadcast_mult

At a high level, this variable makes the optimizer more (lower values) or less (higher values) likely to consider a broadcast when planning distributed joins. SingleStore chooses a broadcast if it moves less data than repartitioning would. As the default is the number of database partitions, setting the value to a high number (like 100,000) is an effective way to tell the optimizer not to broadcast data at all.

This is a session variable that can also be set globally. This variable can sync to all nodes, including aggregator and leaf nodes.

0

distributed_optimizer_max_join_size

This variable is deprecated but still exists for backward compatibility purposes. As of SingleStore version 7.3, it is replaced by two new variables that provide significantly faster optimization at large join sizes: distributed_optimizer_unrestricted_search_threshold and singlebox_optimizer_cost_based_threshold. The distributed optimizer works to minimize data movement across the cluster, while the singlebox optimizer makes decisions about how to physically execute a join, such as choosing an index. Setting this deprecated variable results in both new variables being set to the target value.

This variable can sync to all nodes, including aggregator and leaf nodes.

22

distributed_optimizer_min_join_size_run_initial_heuristics

The minimum number of tables that must be present in a join for initial heuristics to be run before join optimization. This means a small cost is incurred for the initial use of these heuristics in exchange for faster join optimization.

This is a session variable that can also be set globally. This variable can sync to all nodes, including aggregator and leaf nodes.

16

distributed_optimizer_run_legacy_heuristic

Enabling this variable results in a return to pre-7.3 behavior, using heuristics to complete a faster, restricted search for a join plan once the number of tables in a join exceeds the threshold set by distributed_optimizer_unrestricted_search_threshold. This search is faster, but as the number of tables in a join increases, so does the chance of a less optimal query plan.

This is a session variable that can also be set globally. This variable can sync to all nodes, including aggregator and leaf nodes.

FALSE

distributed_optimizer_unrestricted_search_threshold

The maximum number of tables in a join for which an unrestricted search is performed to find the optimal query plan. A query with a greater number of tables uses heuristics to complete a restricted search for a query plan. This restricted search is faster than in previous versions of SingleStore, but may come at the cost of a less optimal query plan compared to an unrestricted search. distributed_optimizer_run_legacy_heuristic must be set to FALSE for this variable to take effect.

This is a session variable that can also be set globally. This variable can sync to all nodes, including aggregator and leaf nodes.

22

dr_min_connection_timeout_ms

Allows the user to set the minimum amount of time (in milliseconds) to wait for Disaster Recovery (DR) replication. The valid range is 8 to 2000000 ms.

This variable can sync to all nodes, including aggregator and leaf nodes.

8

enable_alias_space_trim

Trims leading spaces for column name aliases. This engine variable is only used for backward compatibility. This variable can sync to all nodes, including aggregator and leaf nodes.

FALSE

enable_auto_profile

If set to ON, eligible queries are profiled. Execution and network times are collected in the FULL auto-profiling mode. Execution times, network times, memory usage, and network traffic are not collected in the LITE auto-profiling mode. These modes are set by the auto_profile_type engine variable. If a profiling error occurs, the query is still executed.

If set to OFF, profile results are not collected automatically. The PROFILE command must be run to review usage metrics. For both ON and OFF settings, the only profile statistics that are saved are from the most recent execution of the query plan.

Please note, a profiled query will have a different query plan than a non-profiled query. When a query is first profiled, or when enable_auto_profile is set to ON, compilations of the query are triggered to generate new query plans.

This variable can sync to all nodes, including aggregator and leaf nodes.

OFF
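A minimal sketch of enabling auto profiling, per the ordering requirement noted under auto_profile_type:

```sql
-- Auto profiling must be enabled before selecting a mode.
SET GLOBAL enable_auto_profile = ON;
SET GLOBAL auto_profile_type = FULL;
```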

enable_background_plan_invalidation

Controls whether the background statistics thread can invalidate plans automatically. This can improve plans if your data changes over time at the cost of spending time recompiling plans. This variable can sync to all aggregators.

OFF

enable_background_statistics_collection

If set to ON, background statistics are collected for all tables, unless disabled on the table level. If set to OFF, background statistics are not collected for any tables, and background statistics cannot be enabled for individual tables. This variable can sync to all aggregators.

ON

enable_binary_protocol

When this variable is set to ON, server-side prepared statements are enabled. This is a session variable that can also be set globally. This variable can sync to all nodes, including aggregator and leaf nodes.

ON

enable_broadcast_left_join

When this variable is set to TRUE, the query optimizer can choose the broadcast left join optimization. This is a session variable that can also be set globally. This variable can sync to all nodes, including aggregator and leaf nodes.

TRUE

enable_columnstore_ingest_management

Enables queuing on columnstore ingest queries when the background flusher is behind. This variable can sync to all nodes, including aggregator and leaf nodes.

TRUE

enable_compiled_images_eviction

Determines whether to use LRU (least recently used) eviction of query object code images. This variable can sync to all nodes, including aggregator and leaf nodes.

ON

enable_disk_plan_expiration

Enable removal of stale on-disk plans from the plancache directory based on the value of disk_plan_expiration_minutes. This variable can sync to all nodes, including aggregator and leaf nodes.

TRUE

enable_disk_plan_explain

Retains human-readable query plan summaries for compiled query plans. Retained information is available through the SHOW PLAN command. This variable can sync to all aggregators.

OFF

enable_dml_query_forwarding

When set to ON, forwards the Data Manipulation Language (DML) queries from a child aggregator to the master aggregator. Currently, this setting is used for write requests to reference tables, which cannot be executed on child aggregators. This variable can sync to all aggregators.

ON

enable_external_functions

See the Enabling External Functions section in CREATE [OR REPLACE] EXTERNAL FUNCTION. This variable can sync to all nodes, including aggregator and leaf nodes.

OFF

enable_histogram_to_unwrap_literals

Enables the ability for histograms to estimate predicates using stored procedure parameters or implicit/explicit typecasted literals. This variable needs to be manually enabled by setting it to ON. This variable can sync to all nodes, including aggregator and leaf nodes.

OFF

enable_ir_cache

Allows the on-disk IR cache to be enabled or disabled. This variable can sync to all nodes, including aggregator and leaf nodes.

ON

enable_multipartition_queries

When this variable is set to ON, queries are run per leaf instead of per partition on the leaf; the number of threads decreases and performance increases. This is a session variable that can also be set globally. This variable can sync to all nodes, including aggregator and leaf nodes.

ON

enable_query_forwarding

Enables the forwarding of DDL queries from a child aggregator to the master aggregator. sync_permissions must also be enabled for this variable to take effect; otherwise, an error is returned for DDL queries that are run on a child aggregator. For further information, see Cluster Management Commands. This variable can sync to all aggregators.

ON

enable_spilling

Enables spilling onto disk for HashGroupBy, hash join, order by, and window function operations during query processing. When enable_spilling is set to ON, the output of SHOW PROFILE JSON includes the following metrics:

  • spill_outputted_rows: the number of rows spilled to disk

  • spill_disk_usage: the number of bytes spilled to disk

The sync variables spilling_node_memory_threshold_ratio and spilling_query_operator_memory_threshold also define the threshold limit to start spilling onto disk.

This variable can sync to all nodes, including aggregator and leaf nodes.

ON
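A hedged sketch of inspecting the spilling metrics; the profiled query and table are hypothetical:

```sql
SET GLOBAL enable_spilling = ON;
PROFILE SELECT a, COUNT(*) FROM t GROUP BY a;
SHOW PROFILE JSON;  -- includes spill_outputted_rows and spill_disk_usage
```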

enable_subprocess_tracing

Enables tracing in backup subprocesses. This variable can sync to all nodes, including aggregator and leaf nodes.

OFF

enable_varbuffer_dictionary_compression

Enables compressing identical strings on VARCHAR, VARBINARY, LONGTEXT, LONGBLOB, MEDIUMBLOB, BLOB, TINYBLOB, MEDIUMTEXT, TEXT, and TINYTEXT columns. This variable can sync to all nodes, including aggregator and leaf nodes.

FALSE

estimate_zero_rows_when_sampling_data_is_missing

Uses a heuristic or histogram to estimate missing sampling data instead of returning zero rows. Setting it to FALSE enables the new estimation behavior. This variable can sync to all nodes, including aggregator and leaf nodes.

FALSE

exclude_scalar_subselects_from_filters

Specifies how SingleStore estimates the equality predicate for uncorrelated scalar subselects. When set to TRUE, the engine estimates scalar subselects as non-selective. When set to FALSE, the engine estimates the equality predicate for uncorrelated scalar subselects as 1/cardinality, and uses heuristics for all other scalar subselect predicates.

This variable can sync to all nodes, including aggregator and leaf nodes.

FALSE

expected_leaf_core_count

Number of cpu cores expected on each leaf. This variable can sync to all nodes, including aggregator and leaf nodes.

8

explain_expression_limit

Specifies the maximum number of characters to be used by expressions when outputting the EXPLAIN for a query. This variable can sync to all nodes, including aggregator and leaf nodes.

500

explicit_defaults_for_timestamp

Specifies whether the server disables certain nonstandard behaviors for default values and NULL-value handling in TIMESTAMP columns. See Timestamp Behavior for details on this variable. This variable can sync to all nodes, including aggregator and leaf nodes.

ON

exporter_port

Port number to use for the Prometheus exporter HTTP endpoint. This variable can sync to all aggregators.

0

exporter_ssl_ca

Path to the user CA file to use for the Prometheus exporter connection. This variable can sync to all aggregators.

exporter_ssl_capath

Path to the user directory with CA files to use for the Prometheus exporter connection. This variable can sync to all aggregators.

exporter_ssl_cert

Provides the path to the SSL certificate. This variable can sync to all aggregators.

exporter_ssl_key

Used by the monitoring process when a user selects what entity to trust for the custom private key. This variable can sync to all aggregators.

exporter_ssl_key_passphrase

Used when the exporter_ssl_key is encrypted so the monitoring process can decrypt it. This variable can sync to all aggregators.

exporter_use_https

Adds the ability to use HTTPS. The default is FALSE, set to TRUE to enable. This variable is used in conjunction with engine variables exporter_ssl_cert and exporter_ssl_key. This variable can sync to all aggregators.

FALSE

external_functions_allowlist

See the Enabling External Functions section in CREATE [OR REPLACE] EXTERNAL FUNCTION. This variable can sync to all nodes, including aggregator and leaf nodes.

external_functions_batch_size

Assists with controlling the batch size while making external function calls. This variable can sync to all nodes, including aggregator and leaf nodes.

512 (rows)

failover_initial_grace_interval_seconds

If a node goes offline a second time shortly after failing and coming back online, failover is not triggered again if the second failure occurs within this interval. This avoids a cycle of a problem node failing and coming back online. This variable can sync to all aggregators.

300

failover_on_low_disk

Controls whether the cluster fails over if available disk space falls below minimal_disk_space. This variable can sync to all nodes, including aggregator and leaf nodes.

TRUE

force_bushy_join_table_limit

The maximum number of tables in a join for which SingleStore still costs bushy join plans. Above this threshold, all possible bushy joins are chosen regardless of cost. This variable can sync to all nodes, including aggregator and leaf nodes.

18

geo_sphere_radius

The radius, in meters, of the sphere used for distance calculations. (Defaults to the average Earth radius.) This variable can sync to all nodes, including aggregator and leaf nodes.

6367444.657120

group_concat_max_len

The maximum length (in bytes) of the string that GROUP_CONCAT() can return. It is a session variable that can also be set globally, and can be set to any value smaller than or equal to max_allowed_packet. For more information on this variable, see max_allowed_packet in the previous section and the GROUP_CONCAT SQL reference. This variable should be updated on all nodes and requires a restart to pick up the new value.

This variable can sync to all nodes, including aggregator and leaf nodes.

16777216
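As a sketch, the limit can be raised with a SET statement, keeping it at or below max_allowed_packet; the 32 MB value here is an arbitrary illustrative choice:

```sql
-- Raise the GROUP_CONCAT() result limit to 32 MB for the current session;
-- the value must be <= max_allowed_packet:
SET SESSION group_concat_max_len = 33554432;

-- Or set it globally on the master aggregator:
SET GLOBAL group_concat_max_len = 33554432;
```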

highlight_fragment_size

The size of a block of characters used to logically divide up string columns when using the full-text search highlight functionality. The minimum value is 10 and the maximum value is 65535. This variable can sync to all nodes, including aggregator and leaf nodes.

100

highlight_max_number_fragments

The maximum number of fragments to return from the highlight function. This variable can sync to all nodes, including aggregator and leaf nodes.

0

histogram_column_correlation

This variable controls the assumed correlation between columns when using histograms for selectivity estimation. This variable can sync to all nodes, including aggregator and leaf nodes.

0.5

ignore_foreign_keys

Allows the most commonly used FOREIGN KEY syntax to be accepted, but not enforced. This variable can sync to all nodes, including aggregator and leaf nodes. See Specifying Unenforced Unique Constraints for more information.

OFF

ignore_insert_into_computed_column

When this variable is set to ON, an INSERT into a computed column succeeds and the INSERT ignores the computed column. When this variable is set to OFF, an INSERT into a computed column fails with an error. This is a session variable that can also be set globally. This variable can sync to all nodes, including aggregator and leaf nodes.

OFF

inlist_precision_limit

Specifies the maximum number of list values to consider during histogram estimation for a query with an IN list; a smaller limit will result in a faster compilation time but may also give a less accurate estimate. NOTE: If you adjust this variable, existing queries will need to be recompiled.

This is a session variable that can also be set globally. This variable can sync to all nodes, including aggregator and leaf nodes.

10000

internal_columnstore_idle_flush_wait_seconds

Amount of time (in seconds) the background thread waits before trying to flush on a table with data less than columnstore_flush_bytes. Must be a value between 30 and 86400, and evenly divisible by 5. This variable can sync to all nodes, including aggregator and leaf nodes.

125

internal_max_cte_depth

Specifies the maximum number of nested common table expressions (CTEs). For more information, refer to the WITH topic. This is a session variable that can also be set globally. This variable can sync to all nodes, including aggregator and leaf nodes.

128

jwks_endpoint

The endpoint from which to get JWKS updates. If empty, JWKS updates from the endpoint do not occur. This variable can sync to all nodes, including aggregator and leaf nodes.

jwks_require_audience

When set, specifies the value that must be present in the JWT aud field. If the aud field is an array, the value must equal one of the array's items. This variable can sync to all nodes, including aggregator and leaf nodes.

jwks_ssl_ca_certificate

The CA certificate file to be used for SSL JWKS endpoint host verification when HTTPS protocol is used. This variable can sync to all nodes, including aggregator and leaf nodes.

jwks_update_interval

The JWKS update interval in seconds. If set to 0, JWKS refresh does not happen. This variable can sync to all nodes, including aggregator and leaf nodes.

3600

jwks_username_field

When set, specifies which JWT field contains the username, overriding other rules. This variable can sync to all nodes, including aggregator and leaf nodes.

leaf_failure_detection

Specifies whether the master aggregator should detect leaf failures and trigger failovers. WARNING: Turning this variable OFF disables failover. This variable can sync to all aggregators.

ON

leaf_failover_fanout

Specifies the placement of replica partitions in a cluster. It can be set to the following modes: paired and load_balanced. For more information, see Enabling High Availability. This variable can sync to all nodes, including aggregator and leaf nodes.

paired

license_visibility

Hides license-related variables from SHOW commands. This variable can sync to all nodes, including aggregator and leaf nodes.

TRUE

load_data_errors_retention_minutes

The amount of time, in minutes, that a LOAD DATA error is stored on disk. This variable must be set to at least 0. These errors are surfaced in information_schema.LOAD_DATA_ERRORS as long as they are stored on disk. This variable can sync to all nodes, including aggregator and leaf nodes.

1440

load_data_internal_compression

If this variable is set to ON and a network bottleneck is detected, LOAD DATA compresses data on the aggregator before forwarding it to the leaves. This variable can sync to all nodes, including aggregator and leaf nodes.

ON

load_data_max_buffer_size

Maximum number of unparsed bytes read by LOAD DATA before throwing an error. This variable can sync to all nodes, including aggregator and leaf nodes.

1073741823

load_data_read_size

Number of bytes read at a time by LOAD DATA. This variable can sync to all nodes, including aggregator and leaf nodes.

8192

load_data_write_size

Number of bytes written at a time by LOAD DATA. This variable can sync to all nodes, including aggregator and leaf nodes.

8192

local_file_system_access_restricted

Restricts access to the local file system. This variable can sync to all nodes, including aggregator and leaf nodes.

FALSE

lock_wait_timeout

Time (in seconds) to wait for a row lock before returning an error. This is a session variable that can also be set globally. This variable can sync to all nodes, including aggregator and leaf nodes.

30

lockfree_backup

Determines whether to use the lockfree backup implementation that is introduced in version 7.5. This variable can sync to all nodes, including aggregator and leaf nodes.

TRUE

log_file_size_partitions

Specifies the log file size (in bytes) for partitions. Can be set to a minimum of 1 MB to a maximum of 1 TB (expressed in bytes), in multiples of 1 MB; a MB contains 1048576 bytes. Each partition is pre-allocated two log files, each of size log_file_size_partitions. By default, log_file_size_partitions is 256 MB. Changes to the log file size do not affect the partitions used by existing databases. Use caution when adjusting this variable, as it can result in large log files and thus consume more disk space. This variable can sync to all aggregators.

268435456

log_file_size_ref_dbs

Specifies the log file size (in bytes) for reference databases. Can be set to a minimum of 1 MB to a maximum of 1 TB (expressed in bytes), in multiples of 1 MB; a MB contains 1048576 bytes. Each reference database is pre-allocated two log files, each of size log_file_size_ref_dbs. By default, log_file_size_ref_dbs is 64 MB. Changes to the log file size do not affect existing databases. Use caution when adjusting this variable, as it can result in large log files and thus consume more disk space. This variable can sync to all aggregators.

67108864

master_promote_kill_timeout_seconds

Specifies the amount of time online failover will wait to drain open transactions on a master partition (whether running or idle) before killing them. This variable can sync to all nodes, including aggregator and leaf nodes.

120

max_allowed_packet

Maximum allowed protocol packet size (in bytes). This is a session variable that is readable at the session level, but which must be set globally. The maximum value is 1 GB and the lowest allowable value is 16 KB, though setting this to a value that low is not recommended. This variable can sync to all nodes, including aggregator and leaf nodes.

104857600

max_async_compilation_concurrency

The number of compiles that can run asynchronously at a given time on each node, when interpreter_mode is set to interpret_first. Its effective value is capped at the number of cores on the host machine. If set to 0, it takes the default value.

This variable can sync to all nodes, including aggregator and leaf nodes.

Half the number of cores on the host machine.

max_compilation_memory_mb

The maximum amount of memory (in megabytes) used to compile a query. An error is returned if the query reaches the specified memory limit. This variable can sync to all nodes, including aggregator and leaf nodes.

4096

max_compilation_time_s

The maximum time allowed (in seconds) to compile a query. An error is returned if the query reaches the specified compilation time limit, and any subsequent attempts to compile the query fail. This variable can sync to all nodes, including aggregator and leaf nodes.

600

max_connect_errors

If the number of interrupted connections from a host exceeds the value of max_connect_errors, that host is blocked from further connections. This variable can sync to all nodes, including aggregator and leaf nodes.

10

max_expression_item_limit

The maximum number of expressions within a query item. The value can be set between 100 and the maximum unsigned INT value. The maximum unsigned INT value disables this feature. This is a session variable that can also be set globally. This engine variable can sync to all aggregators.

Maximum unsigned INT value

max_expression_query_limit

The maximum number of expressions within an entire query. The value can be set between 100 and the maximum unsigned INT value. The maximum unsigned INT value disables this feature. This is a session variable that can also be set globally. This engine variable can sync to all aggregators.

Maximum unsigned INT value

max_pooled_connections

The maximum number of stashed connections per leaf. For more information on this variable, see In-Depth Variable Definitions. This variable can sync to all nodes, including aggregator and leaf nodes.

1024

max_prefetch_threads

Sets the maximum number of threads to use for pre-fetching per scan. This variable can sync to all nodes, including aggregator and leaf nodes.

1

max_prepared_stmt_count

Sets the maximum number of simultaneous prepared statements. This variable can sync to all nodes, including aggregator and leaf nodes.

16382

max_table_memory_room_mb

Sets the maximum amount of memory (in megabytes) required when creating or attaching a database. This variable can sync to all nodes, including aggregator and leaf nodes.

1024

maximum_blob_cache_size_percent

Sets the maximum amount of space that can be used by the blob cache, expressed as a fraction of disk space. Its value must be between 0 and 1.

For example: SET GLOBAL maximum_blob_cache_size_percent = 0.90

Unlike maximum_blob_cache_size_mb, it is set on the master aggregator.

This variable can sync to all nodes, including aggregator and leaf nodes.

0

memsql_original_server_version

Displays the version of SingleStore initially installed. This variable can sync to all nodes, including aggregator and leaf nodes.

memsql_snapshot_trigger_size

Indicates the size (in bytes) of the SingleStore database log at which a new snapshot is kicked off. The minimum value for this engine variable is 65536. This variable can sync to all nodes, including aggregator and leaf nodes.

134217728

monitoring_retention_period

The period in minutes to keep historical monitoring data. Stored procedures reference this variable to purge values. This variable can sync to all aggregators.

10080

multi_insert_tuple_count

Preferred number of tuples in multi-inserts that aggregators send to leaves. This variable can sync to all nodes, including aggregator and leaf nodes.

20000

multi_statement_xact_idle_timeout

Time (in seconds) that is allowed for a multi-statement transaction to remain idle while holding locks. This is a session variable that can also be set globally. This variable can sync to all aggregators.

300

node_degree_of_parallelism

Controls the number of threads per leaf node for parallel columnstore scans. This variable can sync to all nodes, including aggregator and leaf nodes.

0

null_timestamps_for_testing

Variable to write default TIMESTAMP columns during testing. This variable can sync to all nodes, including aggregator and leaf nodes.

FALSE

num_background_merger_threads

Controls the number of background merger threads to start for each node. The default value of 2 means two threads run on each node and the work is bound to 2 CPUs. On large nodes with many CPUs, it is easy for ingest to outpace two merger threads: under a high-throughput ingest workload the merger cannot keep up, and query processing slows because the data is not well merged. Use the mv_columnstore_merge_status view to determine whether the merger process is falling behind. If it is, try increasing the value of this variable. For example, with 32-core nodes, if you find the merger is not keeping up, increase the value from 2 to 8 and observe whether the merger then keeps up with the ingest workload. Only 2 merger threads per partition per table are allowed at a time; hence, more partitions per node can use more threads. For a single partition on a single table, increasing the value of this variable makes no difference.

The allowed values range between 2 and 64.

This variable can sync to all nodes, including aggregator and leaf nodes.

2
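The tuning loop described above can be sketched as follows; inspect the merger status first, and raise the thread count only if the merger is falling behind:

```sql
-- Check whether background mergers are keeping up with the ingest workload:
SELECT * FROM information_schema.mv_columnstore_merge_status;

-- On large (e.g., 32-core) nodes where the merger is falling behind,
-- raise the per-node merger thread count (allowed range: 2-64):
SET GLOBAL num_background_merger_threads = 8;
```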

optimize_stmt_threshold

A statement count threshold for a procedure or function. When the threshold is exceeded, the procedure or function compiles faster, but the highest level of code optimizations is not applied; however, SQL query optimizations are still in effect. This is a session variable that can also be set globally. This variable can sync to all nodes, including aggregator and leaf nodes.

50

optimizer_beam_width

Controls the beam search width. The default value is 10. Setting this to 1 reverts to the previous behavior (prior to v7.5). Higher values (maximum is 1024) increase the one-time query optimization time. This variable can sync to all nodes, including aggregator and leaf nodes.

10

optimizer_cross_join_cost

Sets the cost of a cross (cartesian) join. The default value was increased to reduce the chance of cartesian joins being chosen when estimates are incorrect. This variable can sync to all nodes, including aggregator and leaf nodes.

1.0

optimizer_disable_subselect_to_join

Setting this to ON disables the optimizer rewrite that automatically converts subselects to joins. Disabling this rewrite may cause queries to fail if they require the rewrite to run. This variable can sync to all nodes, including aggregator and leaf nodes.

OFF

optimizer_disable_transitive_predicates

Disables predicate transitivity in query rewrites when set to TRUE. This is a session variable that can also be set globally. This variable can sync to all nodes, including aggregator and leaf nodes.

FALSE

optimizer_disable_right_join

Setting this variable to ON disables the right join in a query. This allows the left join to be used by the query plan, thereby allowing disk spilling to happen.

DEFAULT

optimizer_empty_tables_limit

Sets a threshold for the number of empty tables that must be present in a query before the optimizer falls back to being rule-based, instead of cost-based. This is a session variable that can also be set globally. This variable can sync to all nodes, including aggregator and leaf nodes.

0

optimizer_enable_json_text_matching

Applies to computed columns when matching a JSON expression (json_text) against a persisted text computed column (json_col). When the variable is enabled, a json_text computed column can use the following three text types: TEXT, MEDIUM_TEXT, and LONG_TEXT. Disabling this rewrite may cause queries to fail if they require the rewrite to run. This variable can sync to all nodes, including aggregator and leaf nodes.

FALSE

optimizer_enable_orderby_limit_self_join

Allows the optimizer to rewrite leaf ORDER BY LIMIT queries using a self-join. Defaults to TRUE (enabled). Disabling this rewrite may cause queries to fail if they require the rewrite to run. This variable can sync to all nodes, including aggregator and leaf nodes.

TRUE

optimizer_min_reference_rows_for_gathered_join

The minimum estimated total number of rows in joined reference tables for gathering non-reference tables to be considered. This variable can sync to all nodes, including aggregator and leaf nodes.

1000000

optimizer_min_reference_tables_for_gathered_join

The minimum number of reference tables joined in a query that will be considered before gathering non-reference tables. 0 disables this feature. This variable can sync to all nodes, including aggregator and leaf nodes.

0

parametrizer_query_max_params

The maximum number of scalar constants a single query can contain. Any query with more than parametrizer_query_max_params constants fails with an error. This variable can sync to all aggregators.

1048576

plan_expiration_minutes

The interval (in minutes) within which a query plan must be reused at least once, or it is unloaded from the query plan cache. This variable can sync to all nodes, including aggregator and leaf nodes.

720

preserve_original_colstore_json

A global variable that preserves NULL values and empty arrays in a JSON object that is written to a columnstore table. Set this variable to OFF to disable this setting. For more information on this variable, see the JSON Guide.

This variable can sync to all nodes, including aggregator and leaf nodes.

AUTO (same as ON)

privilege_caches_update_mode

This variable can be used to temporarily disable updating privilege caches while performing security operations such as: granting/revoking permissions to/from users, granting/revoking permissions to/from roles, adding/deleting roles to/from groups, adding/deleting users to/from groups, role deletion, group deletion. This can result in a performance gain.

Important: privilege cache updates must be re-enabled (set back to always) once the administration work is finished.

This variable can sync to all aggregators.

To use this variable:

  1. Set the value to never.

  2. Perform the operations.

  3. Set the value back to always.

always
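The three steps above can be sketched as follows; the GRANT statement and the db/analyst names are hypothetical placeholders:

```sql
-- 1. Temporarily disable privilege cache updates:
SET GLOBAL privilege_caches_update_mode = 'never';

-- 2. Perform the bulk security operations, for example:
GRANT SELECT ON db.* TO 'analyst'@'%';

-- 3. Re-enable privilege cache updates when finished:
SET GLOBAL privilege_caches_update_mode = 'always';
```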

privilege_transfer_mode

A global variable that controls whether privilege transferability is determined by the GRANT OPTION or TRANSFERABLE keyword in the GRANT security management command. The allowed values for this variable are grant_option (default) or per_privilege. Note: This global variable can be set on a SingleStore cluster.

This variable can sync to all nodes, including aggregator and leaf nodes.

grant_option

processlist_rpc_json_max_size

The maximum number of bytes available to the rpc_info column of information_schema.mv_processlist. This variable can sync to all nodes, including aggregator and leaf nodes.

2048

promote_aggregator_timeout_ms

Default timeout (in milliseconds) to wait for a PROMOTE AGGREGATOR operation to finish before giving up. This variable can sync to all aggregators.

180000

query_parallelism

This variable is deprecated. Use query_parallelism_per_leaf_core instead.

Maximum number of simultaneous running queries. This is a session variable that can also be set globally. This variable can sync to all nodes, including aggregator and leaf nodes.

0

query_parallelism_per_leaf_core

The fraction of available cores that will be used on each leaf for a single query. The available values are any decimal from 0 to 1. If this value is set to 0, flexible parallelism is turned off. If this value is set to 1, all of the available cores' processing power will work on a single query. This is a session variable that can be set on aggregators only.

1.0
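For instance, to use half of each leaf's cores for queries in the current session (a sketch; 0.5 is an arbitrary example value):

```sql
-- Use half of the available cores on each leaf for a single query:
SET SESSION query_parallelism_per_leaf_core = 0.5;

-- Setting the value to 0 turns flexible parallelism off.
```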

query_shape_serialization_enabled

Serializes query shape metadata to disk. Available values are AUTO, ON, and OFF. This variable can sync to all nodes, including aggregator and leaf nodes.

AUTO

query_shape_serialization_file_size_cap

Maximum size (in bytes) for each query shape in a serialization file. This variable can sync to all nodes, including aggregator and leaf nodes.

1048576

query_shape_serialization_show_parameters

Stores actual parameter values to on-disk query shape metadata serialization. This variable can sync to all nodes, including aggregator and leaf nodes.

TRUE

read_advanced_counters

If ON, this variable enables the collection of advanced statistics. For more information, see the Advanced Statistics section of the Management View Statistics Reference topic. This variable can sync to all nodes, including aggregator and leaf nodes.

OFF

redundancy_level

If set to 1, there is no redundancy across leaves. If set to 2, turns on SingleStore’s High Availability mode. When you set this variable, its value is propagated to other aggregators only. For more information on this variable, see the Managing High Availability topic. This variable can sync to all nodes, including aggregator and leaf nodes.

1

regexp_compile_mem_mb

Specifies the heap allocation limit of the regexp compiler (in megabytes). Can be manually adjusted by the user. This variable can sync to all nodes, including aggregator and leaf nodes.

2

regexp_format

Specifies the regular expression format to be used by regular expression functions that you call. Possible values are 'extended' and 'advanced'. When you set this variable, its value is propagated to all nodes. For more information on this variable, see the Regular Expression Functions topic. This variable can sync to all nodes, including aggregator and leaf nodes.

'extended'
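As a sketch, the format can be switched on the master aggregator before calling regular expression functions; whether a given class such as \d is recognized depends on the format, so confirm the details in the Regular Expression Functions topic:

```sql
-- Switch regular expression functions to the 'advanced' format:
SET GLOBAL regexp_format = 'advanced';
```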

regexp_output_validation_mode

This variable controls the output behavior of REGEXP_REPLACE and REGEXP_SUBSTR expressions if a returned string is invalid under its collation setting. Currently, regular expression built-ins could produce non-utf8 strings since they don't have full support for multi-byte characters.

Possible values are skip, return_null, and throw_error. skip means no extra validation checks are performed and invalid strings will be allowed. return_null means "SQL-NULL" will be returned in such cases. throw_error means an error will be reported.

This variable can sync to all nodes, including aggregator and leaf nodes.

throw_error

resource_governor_cpu_limit_mode

Determines whether the mode of control over the CPU for resource pools is HARD or SOFT. For more information on this variable, see CREATE RESOURCE POOL. This variable can sync to all nodes, including aggregator and leaf nodes.

SOFT

resource_pool_statement_selector_function

The function that is executed at runtime that selects the resource pool to use when the user runs a query. The resource_pool variable must be set to system_auto for the resource_pool_statement_selector_function to be used. This variable can sync to all nodes, including aggregator and leaf nodes.

For more information on this variable, see the Setting Resource Limits topic.

result_table_error_lifetime

Sets the lifetime (in seconds) of a result table error before garbage collection. This variable can sync to all nodes, including aggregator and leaf nodes.

1000

sampling_estimate_approach

Specifies the method used for sampling. Possible values include: auto, aggregation, lightweight, and filtered. See Statistics and Sampling Concepts for more information.

This variable can sync to all nodes, including aggregator and leaf nodes.

auto

service_edition

Displays the service edition: STANDARD or ENTERPRISE. This variable can sync to all nodes, including aggregator and leaf nodes.

STANDARD

set_global_variables_allowed_list

A JSON array of strings, where each string is the name of an engine variable. This list provides a means of access to engine variables that are not normally settable by SingleStore Helios users. The contents of the list can be set only by SingleStore customer support engineers. In consultation with SingleStore support engineers, this can allow you to set certain variables yourself -- provided you have the SYSTEM_VARIABLES_ADMIN privilege -- that would not otherwise be available for you to control. This variable can sync to all nodes, including aggregator and leaf nodes.

show_query_parameters

If ON, query parameters will be visible in the output of SHOW PROCESSLIST and in the output of SELECT from INFORMATION_SCHEMA.PROCESSLIST. If it is OFF, parameters will be hidden. This variable can sync to all nodes, including aggregator and leaf nodes.

ON

show_with_portability_comments

Adds conditional compatibility comments to outputs like SHOW CREATE TABLE to help with portability. This variable can sync to all nodes, including aggregator and leaf nodes.

FALSE

singlebox_optimizer_cost_based_threshold

Maximum number of tables present in a join which will utilize the cost-based singlebox join optimizer. For joins with more than this many tables, singlebox join optimization will use an alternate, rule-based algorithm which is faster but less effective. The singlebox join optimizer is responsible for making decisions about the local aspects of how to efficiently execute a join, such as choosing an index. This variable must be set between 0 and 30 tables.

This is a session variable that can also be set globally. This variable can sync to all nodes, including aggregator and leaf nodes.

18

skip_segelim_with_inlist_threshold

Controls the threshold above which segment elimination will not use an IN clause, because the list is too large. This variable can sync to all nodes, including aggregator and leaf nodes.

1000 (elements)

snapshot_trigger_size

The aggregate size of transaction logs (in bytes) which, when reached, will trigger a new snapshot. This variable can sync to all nodes, including aggregator and leaf nodes.

2147483648

snapshot_wait_for_blob_gc_seconds

Specifies how much time (in seconds) to wait between snapshots before taking a snapshot for blob garbage collection from previous snapshots. This variable can sync to all nodes, including aggregator and leaf nodes.

180

snapshots_to_keep

Number of snapshot and log files to keep for backup and replication. This variable may be set live. This variable can sync to all nodes, including aggregator and leaf nodes.

2

sp_query_dynamic_param

Specifies whether different query plans are generated for CALL queries depending on whether arguments to the stored procedure are NULL or non-NULL. If enabled, queries that use variables inside stored procedures will use the same query plan. For more details, see the In-Depth Variable Definitions section below.

This variable can sync to all aggregators.

AUTO

spilling_maximum_disk_percent

Specifies the maximum percentage of disk space to be used by spilling during query execution. The value must be between 0 and 1, or -1 (the default). The default is recommended in general.

Under the default setting, spilling first uses available free disk space. Spilling does not use space from the blob cache until the free usable disk space hits a low threshold, which is 10% of (total disk space - minimal_disk_space). After this threshold is reached, spilling is allowed to use up to 30% of the blob cache space set by the engine variable maximum_blob_cache_size_mb.

The maximum amount of blob cache space that spilling can negotiate to use is 30% of maximum_blob_cache_size_mb. Refer to Disk Spilling for more details on spilling.

This variable can sync to all nodes, including aggregator and leaf nodes.

-1
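For example, to cap spilling at a fixed fraction of disk space instead of the adaptive default (0.2 is an arbitrary illustrative value):

```sql
-- Limit disk spilling to at most 20% of disk space:
SET GLOBAL spilling_maximum_disk_percent = 0.2;

-- Restore the adaptive default behavior:
SET GLOBAL spilling_maximum_disk_percent = -1;
```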

spilling_node_memory_threshold_ratio

Memory threshold (ratio of total node memory, 0 - 1) to start spilling. This variable can sync to all nodes, including aggregator and leaf nodes.

0.75

spilling_query_operator_memory_threshold

Memory threshold (bytes) for a query operator to be considered for spilling. This variable can sync to all nodes, including aggregator and leaf nodes.

104857600

sql_mode

Specifies the SQL mode or modes that affect the SQL syntax SingleStore supports and the query validation checks it performs. See the sql_mode section below for more information. This is a session variable that can also be set globally. This variable can sync to all aggregators.

STRICT_ALL_TABLES
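A sketch of inspecting and changing the SQL mode for a session; the ANSI_QUOTES mode named here is an assumption, so confirm the supported modes in the sql_mode section below:

```sql
-- Show the current SQL mode:
SELECT @@sql_mode;

-- Enable an additional mode alongside the default for this session:
SET SESSION sql_mode = 'STRICT_ALL_TABLES,ANSI_QUOTES';
```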

sql_quote_show_create

If ON, identifiers are quoted by the server for SHOW CREATE commands. This is a session variable that can also be set globally. This variable can sync to all nodes, including aggregator and leaf nodes.

ON

sql_select_limit

The maximum number of rows returned by a SELECT query. If the LIMIT clause is specified in a SELECT query, the value in the LIMIT clause overrides sql_select_limit. This is a session variable that can also be set globally. This variable can sync to all nodes, including aggregator and leaf nodes.

2^64-1

sub_to_physical_partition_ratio

The number of sub-partitions per physical partition that a new database will be created with. It must be set to 0 or a power of 2 relative to the physical partition count; the available power-of-2 values are 2, 4, 8, 16, 32, and 64. This variable can sync to all nodes, including aggregator and leaf nodes.

4

subproc_abridged_errors

Makes subprocess errors easier to read. This variable can sync to all nodes, including aggregator and leaf nodes.

TRUE

subprocess_azure_retries

This engine variable is deprecated. Maximum retries for Azure operations. This variable can sync to all nodes, including aggregator and leaf nodes.

10

subprocess_backup_retries

The maximum number of retries for BACKUP upload and download subprocesses. This variable can sync to all nodes, including aggregator and leaf nodes.

3

subprocess_ec2_metadata_timeout_ms

The maximum amount of time (in milliseconds) the engine waits for, or retries, a metadata request before timing out. This metadata is used to verify that the cluster is running on EC2, from which implicit credentials can be obtained. This variable can sync to all nodes, including aggregator and leaf nodes.

60000

subprocess_io_idle_timeout_ms

The maximum amount of time (in milliseconds) the engine waits for or retries a request before timing out and failing the backup when connecting to cloud providers. When you set this variable, its value is propagated to all nodes. This variable can sync to all nodes, including aggregator and leaf nodes.

600000

subprocess_max_retries

This variable sets the MaxRetries parameter for S3 and GCS. Retryable connectivity issues automatically retry the connection this number of times. This variable can sync to all nodes, including aggregator and leaf nodes.

10

subquery_merge_with_outer_joins

This session variable fine-tunes the behavior of the rewrite SubQueryMerge.

If the variable is set to ON, the sub-select is always merged with outer join(s). If set to HEURISTIC, the engine checks whether merging the sub-select with outer join(s) would hurt overall query performance. If set to OFF, sub-selects are never merged with outer join(s). If set to AUTO (the default), the behavior matches ON in v7.5 and earlier versions, and HEURISTIC in versions after v7.5.

This variable can sync to all nodes, including aggregator and leaf nodes.

AUTO
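
Since this is a session variable, it can be adjusted per connection; a minimal sketch (the quoted value form is an assumption):

```sql
-- Force the heuristic merge check for the current session only:
SET SESSION subquery_merge_with_outer_joins = 'HEURISTIC';

-- Or set it cluster-wide from the master aggregator:
SET GLOBAL subquery_merge_with_outer_joins = 'AUTO';
```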

sync_permissions

When ON, this aggregator synchronizes its permissions with other aggregators; this variable also enables DDL forwarding. See Synchronizing Permissions Across Your Cluster for more information about utilizing this variable. This variable can sync to all aggregators.

ON

sync_slave_timeout

This engine variable is deprecated. Maximum amount of time (in milliseconds) for the primary to wait for acknowledgment from the synchronous replica. This variable can sync to all nodes, including aggregator and leaf nodes.

10000

table_name_case_sensitivity

Sets the case-sensitivity for database object names. If set to OFF, tables, views, table aliases, and user-defined table-valued functions (TVFs) are not case-sensitive. For details on case-sensitivity, refer to Database Object Case-Sensitivity. This variable can sync to all nodes, including aggregator and leaf nodes.

ON

tracelog_retention_period

Indicates the retention period (in days) for trace log files.

0

tracelog_retention_size

Controls the retention, by total size in megabytes, of the rotated files of each component log separately. SingleStore will create a separate "memsql_<COMPONENT>.log" file for each component. As trace events happen they are added to the corresponding component log.

For example, the repl log component of the engine writes traces to the memsql_REPL log. If the total retention is 500MB and the combined size of all REPL* files exceeds that limit, the oldest files are removed until the total size is below the limit. This is done for every component and also for the main "memsql.log" log file.

0

transaction_isolation

Transaction isolation level. This is a session variable that can also be set globally. This variable can sync to all aggregators.

This engine variable is configurable on SingleStore Helios.

READ-COMMITTED
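
Because this is a session variable that can also be set globally, both forms apply; a minimal sketch:

```sql
-- Per-session isolation level:
SET SESSION transaction_isolation = 'READ-COMMITTED';

-- Cluster-wide, run on the master aggregator:
SET GLOBAL transaction_isolation = 'READ-COMMITTED';
```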

two_phase_commit

Turns the Two-Phase Transaction Commit feature ON or OFF. This variable can sync to all nodes, including aggregator and leaf nodes.

OFF

tx_isolation

Transaction isolation level. This is a session variable that can also be set globally. This variable can sync to all aggregators.

This engine variable has been deprecated and is retained for backward compatibility. See the transaction_isolation engine variable instead.

READ-COMMITTED

trim_malloc

Determines whether malloc_active_memory usage is trimmed periodically. See SHOW STATUS EXTENDED for more information. This variable can sync to all nodes, including aggregator and leaf nodes.

TRUE

use_seekable_json

Used to enable seekable encoding for JSON. This variable can sync to all nodes, including aggregator and leaf nodes.

See Columnstore Seekability using JSON for more information about utilizing this variable.

ON

use_vectorized_join

This variable has been deprecated and is no longer operational in SingleStore version 5.0 or newer. This variable can sync to all nodes, including aggregator and leaf nodes.

ON

varchar_column_string_optimization_length

For a column defined as type CHAR of length len, stores the column as a VARCHAR of length len if len is greater than or equal to the value of this variable. If the value of this variable is 0, the column is not stored as a VARCHAR.

This variable can sync to all nodes, including aggregator and leaf nodes.

0
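
To illustrate the len comparison, a hypothetical sketch (the table and column names are illustrative):

```sql
-- With the variable set to 100, any CHAR(len) column where
-- len >= 100 is stored internally as VARCHAR(len):
SET GLOBAL varchar_column_string_optimization_length = 100;

CREATE TABLE t (
  code CHAR(50),    -- 50 < 100: stored as CHAR
  notes CHAR(200)   -- 200 >= 100: stored as VARCHAR(200)
);
```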

verbose_time_heartbeat_thread

Enables logging in the heartbeat thread to better diagnose process stalls. This logging is not intended for customer consumption and should be requested by and provided to SingleStore for investigation. The default value is OFF; set it to ON to enable the logging.

This variable can sync to all nodes, including aggregator and leaf nodes.

OFF

verify_fields_in_transitivity

Limits the application of join predicate transitivity to predicates that may be estimated. To use, set this variable to ON. This variable can sync to all nodes, including aggregator and leaf nodes.

OFF

wait_die_retry_on_die_sleep_factor

Controls how much time (in milliseconds) a transaction waits on a rowlock before giving up when the decision is "die". The value of the variable is multiplied by the number of locks that the transaction already holds, so a transaction that already holds many locks waits longer. The purpose of this mechanism is to avoid unnecessary rollbacks: sometimes there is no real deadlock, so instead of giving up right away, the transaction waits briefly for the lock to be released before rolling back all the work that was done.

This variable can sync to all nodes, including aggregator and leaf nodes.

250

cluster_name

Specifies the cluster name used by the backup. This variable can sync to all nodes, including aggregator and leaf nodes.

JSON Sync Variables

Name

Description

Default

json_compatibility_level

Controls whether the JSON_SET_<type> function creates a keypath if it does not exist. The default is 7.8, which means missing keypaths are not created. This is a session variable that can also be set globally. This variable can sync to all nodes, including aggregator and leaf nodes.

7.8

json_document_max_children

The maximum number of children allowed during schema inference. Must be between 128 and 16384. This variable can sync to all nodes, including aggregator and leaf nodes.

10000

json_document_max_leaves

Limits the number of JSON key paths inferred within a segment. Must be between 1 and 16384. This variable can sync to all nodes, including aggregator and leaf nodes.

10000

json_document_sparse_children_check_ratio

The minimum average frequency to infer the schema for a json sub-object. Must be between 10 and 100000. This variable can sync to all nodes, including aggregator and leaf nodes.

100

json_document_sparse_children_check_threshold

The number of children after which the sparse condition is checked. Must be between 128 and 16384. This variable can sync to all nodes, including aggregator and leaf nodes.

250

json_document_absolute_sparse_key_check_ratio

The minimum absolute frequency to infer the schema for a json sub-object. Must be between 10 and 100000. This variable can sync to all nodes, including aggregator and leaf nodes.

1000

json_document_page_data_soft_limit

The soft limit to how much data fits in a single internal page while compressing a json column. Must be between 1 and 1000000000000. This variable can sync to all nodes, including aggregator and leaf nodes.

1024*1024*1024

json_extract_string_collation

Controls the collation setting for the JSON_EXTRACT_STRING and JSON_MATCH_ANY functions. For versions before 7.8.21, the default is AUTO; in version 7.8.21 and later, SERVER_V2 is the default. For more details, see the In-Depth Variable Definitions section below.

This is a session variable that can also be set globally. This variable can sync to all nodes, including aggregator and leaf nodes.

auto/SERVER_V2

parametrize_json_keys

Used for parametrizing the JSON key path arguments for JSON_EXTRACT_xxx functions and the :: syntax so that query plans can be reused.

This variable can sync to all nodes, including aggregator and leaf nodes.

FALSE

Password Complexity Sync Variables

For information about how to configure a password complexity policy, see Configuring a Password Policy.

Name

Description

Default

password_max_consec_repeat_chars

The maximum number of consecutive repeated characters allowed (0 to 100). For example, if set to 3, passwords with 4 or more consecutive repeated characters (e.g., aaaa or 1111) are disallowed. This variable can sync to all nodes, including aggregator and leaf nodes.

0

password_max_consec_sequential_chars

The maximum number of consecutive characters allowed (0 to 100). For example, if set to 3, passwords with a 4-letter sequence or longer (e.g. 1234 or abcd) are disallowed. This variable can sync to all nodes, including aggregator and leaf nodes.

0

password_min_length

The minimum number of characters required (0 to 100). This variable can sync to all nodes, including aggregator and leaf nodes.

0

password_min_lowercase_chars

The minimum number of lowercase characters required (0 to 100). This variable can sync to all nodes, including aggregator and leaf nodes.

0

password_min_numeric_chars

The minimum number of numeric digit characters required (0 to 100). This variable can sync to all nodes, including aggregator and leaf nodes.

0

password_min_special_chars

The minimum number of special (non-alphanumeric) characters required (0 to 100). This variable can sync to all nodes, including aggregator and leaf nodes.

0

password_min_uppercase_chars

The minimum number of uppercase characters required (0 to 100). This variable can sync to all nodes, including aggregator and leaf nodes.

0
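
A hypothetical policy combining several of the variables above (the specific values are illustrative only):

```sql
SET GLOBAL password_min_length = 12;
SET GLOBAL password_min_uppercase_chars = 1;
SET GLOBAL password_min_numeric_chars = 1;
SET GLOBAL password_min_special_chars = 1;
SET GLOBAL password_max_consec_repeat_chars = 3;

-- A password such as 'Xaaaa1234567!' would now be rejected:
-- it contains four consecutive repeated characters ('aaaa').
```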

Pipelines Sync Variables

You cannot set a variable for a specific pipeline – each variable setting applies to all pipelines in the cluster.

Name

Description

Default

advanced_hdfs_pipelines

Specifies whether to enable security features for HDFS pipelines. See Enabling Wire Encryption and Kerberos on HDFS Pipelines for more information. This variable can sync to all nodes, including aggregator and leaf nodes.

OFF

custom_java_pipelines

Allows custom Java pipelines. This variable can sync to all nodes, including aggregator and leaf nodes.

OFF

ingest_errors_max_disk_space_mb

The maximum amount of disk space (in megabytes) that is used to log errors for pipelines. This variable can sync to all nodes, including aggregator and leaf nodes.

102400

java_pipelines_extractor_class

Implements an extractor interface used for experimental Java pipelines. This variable can sync to all nodes, including aggregator and leaf nodes.

java_pipelines_heap_size

Heap size (in megabytes) for HDFS pipelines. This variable can sync to all nodes, including aggregator and leaf nodes.

8

pipelines_batches_metadata_to_keep

The number of batch metadata entries to persist per partition, per pipeline before they are overwritten by incoming batches. As data is extracted from a source, it’s written in batches to a destination table on a leaf node. Metadata about these batches is temporarily persisted in the master aggregator’s information_schema.PIPELINES_BATCHES table. As new batches are loaded into the database, the oldest batch metadata entries will be removed from the information_schema.PIPELINES_BATCHES table. See the PIPELINES_BATCHES section for more information about this metadata.

This variable can sync to all nodes, including aggregator and leaf nodes.

1000

pipelines_cdc_row_emit_delay_us

Specifies a forced delay in row emission while migrating/replicating your tables (or collections) to your SingleStore databases. It can have a maximum value of 1000000.

Refer to Replicate Data from MongoDB® for more information.

This variable can sync to all nodes, including aggregator and leaf nodes.

1

pipelines_cdc_java_heap_size

Specifies the JVM heap size limit (in megabytes) for CDC-in pipelines.

Refer to Replicate Data from MongoDB® for more information.

This variable can sync to all nodes, including aggregator and leaf nodes.

128

pipelines_deskew_batch_partitions_threshold

For keyless sharded destination tables, if less than this fraction of batch partitions are active, reshuffle to avoid skew. This variable can sync to all nodes, including aggregator and leaf nodes.

0.75

pipelines_errors_retention_minutes

The amount of time in minutes that a pipeline error is stored on disk. These errors are surfaced in PIPELINES_ERRORS as long as they are stored on disk. This variable must be set to at least 0.

This variable can sync to all nodes, including aggregator and leaf nodes.

1440

pipelines_extractor_core_on_timeout

Generates core dumps on pipeline extractor operations that timeout. This variable can sync to all nodes, including aggregator and leaf nodes.

FALSE

pipelines_extractor_debug_logging

Specifies whether to enable extractor debugging for Kafka or HDFS pipelines. This variable currently does not apply to S3 pipelines. This variable can sync to all nodes, including aggregator and leaf nodes.

OFF

pipelines_extractor_get_offsets_timeout_ms

The maximum time (in milliseconds) to wait for offset data to be returned from the data source before returning an error. Increase this value if you experience timeout errors, such as ERROR 1970 (HY000): Subprocess timed out. Use the value 0 to indicate no timeout.

This variable can sync to all aggregators.

35000

pipelines_extractor_idle_timeout_ms

The maximum time (in milliseconds) that a pipeline will wait for more data from an external source. Use the value 0 to indicate no timeout. This variable can sync to all nodes, including aggregator and leaf nodes.

120000

pipelines_extractor_max_memory

Maximum memory of a pipeline extractor (in megabytes). Minimum value for this engine variable is 1. This variable can sync to all nodes, including aggregator and leaf nodes.

500

pipelines_gc_max_files

The maximum number of files that are garbage collected in one iteration. The minimum value for this engine variable is 1. This variable can sync to all aggregators.

10000

pipelines_kafka_version

The Kafka version used for the Kafka extractor. This variable can sync to all nodes, including aggregator and leaf nodes.

This variable is not applicable to current (>= 0.10.0.10) Kafka versions as SingleStore requests a list of supported Kafka broker features using the ApiVersionRequest.

For legacy Kafka, pipelines_kafka_version or kafka_version from a pipeline config is used to determine the list of supported features.

0.10.0.10

pipelines_max_concurrent

The maximum number of pipelines running concurrently. See Load Data with Pipelines page for more information. This variable can sync to all nodes, including aggregator and leaf nodes.

50

pipelines_max_concurrent_batch_partitions

The maximum number of pipeline batch partitions running concurrently. The number of partitions when using the default setting (0) is 2, or the number of leaves on the cluster times the default_partitions_per_leaf setting, whichever is higher.

This variable can sync to all nodes, including aggregator and leaf nodes.

0

pipelines_max_errors_per_partition

Deprecated in MemSQL 6.7. The maximum number of error event rows per leaf node partition to persist before they are deleted. Once the specified number of rows in the information_schema.PIPELINES_ERRORS table is reached, the database will eventually remove the oldest rows from the table. The removal mechanism for older error data is based on heuristics. Old errors are guaranteed to exist up to the specified number, but they may not immediately be removed.

This variable can sync to all nodes, including aggregator and leaf nodes.

1000

pipelines_max_offsets_per_batch_partition

The maximum number of data source partition offsets to extract in a single batch transaction. If the data source’s partition contains fewer than the specified number of offsets, all of the partition’s offsets will be batched into the destination table.

This variable applies only to Kafka pipelines.

This variable can sync to all aggregators.

1000000

pipelines_max_pooled_extractors

The maximum number of pooled extractors for Kafka pipelines. The value must be set to at least 1. This variable can sync to all nodes, including aggregator and leaf nodes.

256

pipelines_max_retries_per_batch_partition

The number of retry attempts for writing batch partition data to the destination table. If pipelines_stop_on_error is set to OFF and the specified retry number is reached without success, the batch partition will be skipped and will not appear in the destination table. If a batch partition is skipped, data loss can occur. If pipelines_stop_on_error is set to ON and the specified retry number is reached without success, the pipeline will stop. No batch partition data will be skipped. This configuration variable applies to the entire batch transaction, which includes extraction from a data source, optional transformation, and loading of the data into the destination table. If the batch transaction fails at any point during extraction, transformation, or loading, it will be retried up to the specified number.

This variable can sync to all aggregators.

4

pipelines_offsets_gc_skew_time_minutes

Amount of time in minutes allowed for pipeline metadata timestamp skew. The minimum value for this engine variable is 1. This variable can sync to all aggregators.

1440

pipelines_parse_errors_threshold

If the number of errors per batch partition exceeds this value, the pipeline either stops or skips the current batch and moves to the next one, depending on whether the pipelines_stop_on_error variable is set to ON or OFF. This variable can sync to all aggregators.

Set the variable to an integer value; 0 disables the threshold.

0

pipelines_pooled_extractor_batches

The maximum number of batches for a pooled extractor for Kafka pipelines. The value must be between 1 and 2048. This variable can sync to all nodes, including aggregator and leaf nodes.

256

pipelines_stderr_bufsize

The buffer size for standard error output (in bytes). Error messages that exceed this size will be truncated when written to the information_schema.PIPELINES_ERRORS table. However, the complete standard error text can be viewed by using the BATCH_ID and querying the information_schema.PIPELINES_BATCHES table.

This variable can sync to all nodes, including aggregator and leaf nodes.

65535

pipelines_stop_on_error

Specifies whether or not each pipeline in the cluster should stop when an error occurs. If set to OFF, batches will be retried up to the number specified in the pipelines_max_retries_per_batch_partition variable. After all retries have failed, the batch will be skipped. When a batch is skipped, data loss can occur. If set to ON, the batch transaction that caused the error will be retried up to the number specified in the pipelines_max_retries_per_batch_partition variable. After all retries have failed, the pipeline will enter a Stopped state and must be manually started.

This variable can sync to all aggregators.

ON
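
This variable interacts with pipelines_max_retries_per_batch_partition; a sketch of trading pipeline availability for possible data loss:

```sql
-- Skip a failing batch after 8 retries instead of stopping the
-- pipeline (skipped batches can mean data loss):
SET GLOBAL pipelines_max_retries_per_batch_partition = 8;
SET GLOBAL pipelines_stop_on_error = OFF;
```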

pipelines_stored_proc_exactly_once

If set to ON, stored procedures run from pipelines execute in a transaction. If this variable is OFF and a stored procedure is attached to a pipeline, it may only partially succeed: if the procedure fails, the data that was already written remains, instead of being rolled back on failure.

This variable can sync to all aggregators.

ON

Unlimited Storage Database (Bottomless) and Point-in-Time Restore Variables

Name

Description

Default

blob_cache_threadpool_max_concurrent

Controls the maximum number of threads available for downloading blobs from unlimited storage. This variable can sync to all nodes, including aggregator and leaf nodes.

32

bottomless_api_request_timeout_ms

Sets the timeout, in milliseconds, for CURL HTTP requests made by the unlimited storage cloud API (AWS/Azure). This variable can sync to all nodes, including aggregator and leaf nodes.

600000

bottomless_cloud_debug_tracing

Turns on full debug tracing in an unlimited storage cloud API (AWS/Azure). This variable can sync to all nodes, including aggregator and leaf nodes.

OFF

bottomless_experimental_blobstore_mode

When this is set to TRUE, additional verification is performed to ensure the correctness of the blob store. This variable can sync to all nodes, including aggregator and leaf nodes.

FALSE

bottomless_gc_retention_period_minutes

Specifies the amount of time, in minutes, a restore point for a remote storage database is retained after it is created. This determines how far back in time you can do a PITR. For example, to configure retention for 90 days, you should set the value to 90*24*60 = 129600 minutes.

This variable can sync to all nodes, including aggregator and leaf nodes.

1440
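
Following the example in the description, a 90-day retention window works out to 129600 minutes:

```sql
-- 90 days * 24 hours * 60 minutes = 129600 minutes:
SET GLOBAL bottomless_gc_retention_period_minutes = 129600;
```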

bottomless_idle_upload_ms

The number of milliseconds that elapse between unlimited storage uploads, after which unlimited storage uploads whatever is queued. This variable can sync to all nodes, including aggregator and leaf nodes.

60000

bottomless_incremental_download_max_buffer_size

Maximum size (in bytes) an unlimited storage download's buffer can grow to. This variable can sync to all nodes, including aggregator and leaf nodes.

1048576

bottomless_skip_create_database_api_checks

Skips API checks when creating a new bottomless database. This variable can sync to all nodes, including aggregator and leaf nodes.

FALSE

bottomless_snapshot_trigger_log_chunks

Determines how often snapshot files are uploaded to the object store. A higher value will cause snapshot files to be uploaded less frequently. Snapshot files are used during recovery. For example, copying bucket files in an object store to a new location and then running ATTACH at the new location will trigger recovery.

This variable can sync to all nodes, including aggregator and leaf nodes.

300

bottomless_upload_max_concurrent

A higher value can increase the concurrency of uploading data updates made on the cluster to the object store. A higher value will increase the CPU and memory usage, and potentially the network usage on the cluster.

This variable can sync to all nodes, including aggregator and leaf nodes.

8

bottomless_upload_throttle_hard_limit_secs

Amount of lag (in seconds) after which all columnstore ingest is throttled, subject to bottomless_upload_throttle_min_disk_limit_mb.

This variable can sync to all nodes, including aggregator and leaf nodes.

1800

bottomless_upload_throttle_min_disk_limit_mb

The minimum disk space (in megabytes) at which throttling of unlimited storage columnstore ingest kicks in. A value of 0 means there is no minimum and ingest is throttled subject to the hard and soft time limits.

This variable can sync to all nodes, including aggregator and leaf nodes.

0

bottomless_upload_throttle_soft_limit_secs

Amount of lag (in seconds) after which some columnstore ingest may be throttled, subject to bottomless_upload_throttle_min_disk_limit_mb. This engine variable has no effect if the value is higher than the value set for bottomless_upload_throttle_hard_limit_secs.

This variable can sync to all nodes, including aggregator and leaf nodes.

600

Wasm Sync Variables

Name

Description

Default

enable_wasm

Specifies if a user can create or call Wasm UDFs/TVFs.

Note

enable_wasm is a non-sync variable. It is listed with the Wasm sync variables for convenience.

ON

wasm_alloc_max_flex_size

Specifies the maximum memory (in bytes) that can be allocated to a Wasm module directly from the operating system in case of emergencies. This variable can sync to all nodes, including aggregator and leaf nodes.

2097152 (2MB)

wasm_alloc_retry_count

Specifies the maximum number of times SingleStore retries memory allocation if an out-of-memory error occurs in the Wasm runtime. This variable can sync to all nodes, including aggregator and leaf nodes.

3

wasm_alloc_retry_interval

Specifies the time (in milliseconds) before SingleStore retries memory allocation if an out-of-memory error occurs in the Wasm runtime. This variable can sync to all nodes, including aggregator and leaf nodes.

1000

wasm_create_from_url

Enables creation of Wasm functions from a remote URL. Refer to Wasm UDF Sources for more information. This variable can sync to all aggregators.

ON

wasm_max_compiled_module_size

Specifies the maximum size (in bytes) that a compiled Wasm module can use. This variable can sync to all aggregators.

26214400 (25MB)

wasm_max_image_cache_size

Specifies the maximum size (in bytes) that is available for caching Wasm query object code images. This variable can sync to all nodes, including aggregator and leaf nodes.

26214400 (25MB)

wasm_max_linear_memory_size

Specifies the maximum linear memory (in bytes) that an individual Wasm module can use. This will further constrain UDF/TVF-specific GROW TO values. This variable can sync to all nodes, including aggregator and leaf nodes.

16777216 (16MB or 256 Wasm pages)

wasm_max_raw_module_size

Specifies the maximum size (in bytes) of Wasm modules that may be loaded. The size is defined as the size of raw, uncompiled data passed in the CREATE FUNCTION statement. This variable can sync to all aggregators.

26214400 (25MB)

Workload Management Sync Variables

Name

Description

Default

resource_usage_model

Whether to train or enable the resource usage model for workload management. When set to ON, workload management estimates memory usage and queues queries that are estimated to use too much. When set to TRAINONLY, PROFILE can be run on queries to compare their estimated memory usage with actual use. This is recommended when upgrading from any version prior to 6.7, to allow the workload manager to do the machine learning necessary for accurate estimates. Once estimated memory use in PROFILE begins matching actual use, consider setting this variable to ON.

This variable can sync to all aggregators.

OFF

workload_management

Specifies whether to enable workload management for the cluster. If this variable is set to ON, the other workload management engine variables will affect the way a query is executed. If set to OFF, the feature is disabled and no queueing or system resource optimization will occur.

This variable can sync to all aggregators.

ON

workload_management_available_leaf_memory_ratio

Sets the ratio of the available leaf memory for workload management to use when queuing queries. The valid range is 0.0 to 1.0. The variable sets the ratio of the total amount of memory workload management uses when deciding to queue queries. For example, if it is set to 0.9, SingleStore assumes only 90% of the total leaf memory is available, and will run fewer queries before hitting the limit and needing to queue. This variable can sync to all aggregators.

1.0

workload_management_dynamic_resource_allocation

Enables/disables the dynamic reuse of WM queues feature. This variable can sync to all aggregators.

OFF

workload_management_enable_large_memory_classification

When enabled (set to ON), either memory usage or connections/threads usage can cause a query to be classified as large.

When set to OFF, only connections/threads usage can classify a query as large.

This variable can sync to all aggregators.

TRUE

workload_management_enable_static_partitioning

Enables static resource partitioning for deterministic behavior by workload management. This variable can sync to all aggregators.

FALSE

workload_management_expected_aggregators

The expected number of aggregators that will be used to run a high volume of client queries which require fully distributed execution. The default value is 0, which is equivalent to setting it to the total number of aggregators in the cluster. For version 7.0 and up, this variable should only be set to its default of 0, unless workload_management_enable_static_partitioning is set to TRUE.

This variable can sync to all aggregators.

0

workload_management_max_connections_per_leaf

The maximum number of connections to use per leaf node in the cluster. This variable can sync to all aggregators.

10000

workload_management_max_queue_depth

The maximum depth of the query queue, which is the maximum number of queries that can be queued. If this number is reached, additional queries will not execute, and an ER_TOO_MANY_QUEUED_QUERIES error will appear.

This variable can sync to all aggregators.

100

workload_management_max_threads_per_leaf

The maximum number of threads to use per leaf. This number correlates with the max_connection_threads engine variable, and they should generally be set to the same value. This variable can sync to all aggregators.

8192

workload_management_memory_queue_threshold

Percentage of memory a query can use before it will get queued. If an individual query is projected to use more than workload_management_memory_queue_threshold * (leaf_maximum_memory - leaf_current_table_memory) / workload_management_expected_aggregators, then it will be queued.

This variable can sync to all aggregators.

0.01
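
To make the queuing formula concrete, a hypothetical worked example (all figures illustrative):

```sql
-- Suppose:
--   leaf_maximum_memory        = 100 GB
--   leaf_current_table_memory  = 40 GB
--   expected aggregators       = 2
--   threshold (default)        = 0.01
-- A query is queued if it is projected to use more than
--   0.01 * (100 GB - 40 GB) / 2 = 0.3 GB per leaf.
SET GLOBAL workload_management_memory_queue_threshold = 0.01;
```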

workload_management_memory_queuing

Whether to turn on queuing based on memory usage of queries. This variable can sync to all aggregators.

ON

workload_management_queue_size_allow_upgrade

Specifies how many large queries can be waiting for execution in the LARGE queue when a MEDIUM query is to be moved from the MEDIUM queue to the LARGE queue. This variable can sync to all aggregators.

1

workload_management_queue_time_warning_ratio

Specifies when a warning will appear based on the ratio of time spent by a query in the queue versus the actual execution time of the query. For example, if a query waits in the queue for one second and it takes four seconds to execute, the ratio is 1:4 or 0.25. Once the specified ratio is reached for a query, an ER_QUERY_QUEUED_WARNING warning will appear.

This variable can sync to all aggregators.

0.5

workload_management_queue_timeout

The time duration (in seconds) after which a query times out and is removed from the queue without being executed.

This variable can sync to all aggregators.

3600

Non-Sync Variables List

See the engine variables overview to learn how non-sync variables work.

Note

All of the non-sync variables listed in the following table are global variables, except when noted as a session variable that can also be set globally in the Description column.

The variables listed in the following tables can be set to take effect at node startup, or while the node is running. Exceptions are noted in the fourth column.

Basic Variables

Name

Description

Default

Exceptions to When Variable can be Set

allow_ipv6

Whether to enable IPv6 support.

When set to false, the SingleStore engine can only use an IPv4 network.

When set to true, SingleStore can use either an IPv4 or IPv6 network.

The actual network interface to use is determined by the bind_address engine variable (below).

FALSE

You can only set this variable while the node is offline. The changes to this variable will take effect on the next start of the node.

Note: If allow_ipv6 is false and bind_address is in IPv6 format, memsqld will produce an error message and refuse to start.

bind_address

Defines the IP address to use to listen for incoming connections.

Used in conjunction with the allow_ipv6 engine variable (above).

When allow_ipv6 is set to true, this variable can be set to an IPv6 address (for example, ::), and will switch SingleStore to an IPv6 mode.

When allow_ipv6 is set to false, this variable can only be set to an IPv4 address (for example, 0.0.0.0).

If the address is 0.0.0.0 (IPv4) or :: (IPv6), SingleStore accepts connections on all network interfaces; otherwise, it only accepts connections for the given IP address.

0.0.0.0

You can only set this variable while the node is offline. The changes to this variable will take effect on the next start of the node.

datadir

Directory path for the data directory. This directory contains snapshots, logs, and columnstore segments.

You can only set this variable while the node is offline. The changes to this variable will take effect on the next start of the node.

div_precision_increment

The number of digits by which to increase the scale of division results performed with the division (/) operator.

4

You can set this variable, but it is currently not supported in SingleStore: SingleStore always returns division results with a scale of four.
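
Because division results always use a scale of four, output looks like the following regardless of this variable (a sketch):

```sql
SELECT 1 / 3;   -- returns 0.3333
SELECT 10 / 4;  -- returns 2.5000
```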

flush_before_replicate

If ON, data is written and flushed to disk locally before being replicated. This will increase replication latency but will guarantee that replicas will not flush data to disk before the master does.

OFF

You can only set this variable while the node is offline. The changes to this variable will take effect on the next start of the node.

master_aggregator

Appending master_aggregator (with no arguments) to the end of a node’s memsql.cnf specifies that node as the cluster’s master aggregator. For more information on this variable, see Master Aggregator.

You can only set this variable while the node is offline. The changes to this variable will take effect on the next start of the node.
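A hypothetical tail of a master aggregator's memsql.cnf (assuming the file's plain line-per-setting syntax) might end like:

```ini
; Illustrative memsql.cnf tail: the bare master_aggregator line
; (no arguments) designates this node as the cluster's master aggregator.
port = 3306
master_aggregator
```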

max_execution_time

This variable is unused and setting this variable has no effect. It exists so certain MySQL-compatible apps that have this variable set will not fail when connected to SingleStore.

0

maximum_blob_cache_size_mb

Specifies the maximum blob cache size per leaf node, in megabytes. The cache stores recently accessed columnstore data objects and resides in datadir on the cluster's local storage; its size is limited by the space available in datadir.

The maximum_blob_cache_size_percent setting overrides this variable.

Version <= 7.9

For disk range 0-40GB, cache value is 100% of disk.

For disk range 40GB+, cache value is 80% of disk.

Version >= 8.0

For disk range 0-40GB, cache value is 100% of disk.

For disk range 40-140GB, cache value is 40GB.

For disk range 140-666GB, cache value is 75% of disk.

For disk range 667GB+, cache value is 90% of disk minus 100GB.
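The version-dependent defaults above can be sketched as a small Python helper. The function name and the inclusive range boundaries are assumptions for illustration; the ranges themselves come from the table above:

```python
def default_blob_cache_gb(disk_gb: float, version_8_or_later: bool = True) -> float:
    """Illustrative sketch of the documented defaults for
    maximum_blob_cache_size_mb, expressed in GB (not a SingleStore API)."""
    if not version_8_or_later:          # version <= 7.9
        return disk_gb if disk_gb <= 40 else 0.80 * disk_gb
    if disk_gb <= 40:                   # 0-40GB: 100% of disk
        return disk_gb
    if disk_gb <= 140:                  # 40-140GB: flat 40GB
        return 40.0
    if disk_gb <= 666:                  # 140-666GB: 75% of disk
        return 0.75 * disk_gb
    return 0.90 * disk_gb - 100         # 667GB+: 90% of disk minus 100GB

print(default_blob_cache_gb(1000))      # 800.0
```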

maximum_memory

Maximum memory SingleStore will use, in MB. WARNING: Do not set it to more than the default.

90% of System RAM or System RAM minus 10 GB, whichever is greater

A node restart is required only when the value of this variable is being reduced.

maximum_table_memory

Maximum memory SingleStore will use for table storage, in MB. WARNING: Do not set it to more than the default.

If maximum_memory >= 10 GB, then 90% of maximum_memory; otherwise, if maximum_memory >= 1.25 GB, then maximum_memory - 1 GB.

A node restart is required only when the value of this variable is being reduced.
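The documented defaults for maximum_memory and maximum_table_memory above can be expressed as short Python helpers. The function names are assumptions for illustration; the formulas come from the Default entries above:

```python
def default_maximum_memory_gb(system_ram_gb: float) -> float:
    # Documented default: 90% of system RAM, or RAM minus 10 GB,
    # whichever is greater.
    return max(0.90 * system_ram_gb, system_ram_gb - 10)

def default_maximum_table_memory_gb(maximum_memory_gb: float):
    # Documented default: 90% of maximum_memory when it is at least
    # 10 GB; otherwise maximum_memory minus 1 GB when at least 1.25 GB.
    if maximum_memory_gb >= 10:
        return 0.90 * maximum_memory_gb
    if maximum_memory_gb >= 1.25:
        return maximum_memory_gb - 1
    return None  # the table does not define a default below 1.25 GB

print(default_maximum_memory_gb(200))  # 190
```

On large machines RAM minus 10 GB exceeds 90% of RAM, so the "whichever is greater" rule leaves SingleStore a flat 10 GB of headroom rather than a growing 10%.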

port

SingleStore port number.

3306

You can only set this variable while the node is offline. The changes to this variable will take effect on the next start of the node.

reported_hostname

The hostname or IP address of the machine in use.

plancachedir

Directory path for the plancache directory. This directory contains compiled plans used for codegen.

./plancache

You can only set this variable while the node is offline. The changes to this variable will take effect on the next start of the node.

tracelogsdir

Directory path for the tracelogs directory.