Use the Workload Manager

SingleStore Helios’s Workload Manager (WM) automatically manages workspace workloads by limiting the execution of queries that require fully distributed execution, ensuring they are matched with available system resources. This improves overall query execution efficiency and prevents workload surges from overwhelming the system. When the system is low on connections, threads, or memory, queries are queued and run later, once adequate resources are available.

WM divides leaf resources among multiple aggregators. For example, assume a system with one leaf node and two aggregators, and that the workload_management_max_threads_per_leaf variable is set to 1000, limiting the leaf node to 1000 threads at any point in time. Queries running on both aggregators compete for the threads on the leaf, so WM splits this limit of 1000 threads between the two aggregators based on the requirements of the queries on each. If Aggregator1 is heavily loaded, WM may give a quota of 900 threads to Aggregator1 and 100 threads to Aggregator2.

To see these quotas, which change over time, run the following command on the specific node:

SHOW WORKLOAD MANAGEMENT STATUS;

WM is concerned with:

  • the number of threads on each leaf, specified by the workload_management_max_threads_per_leaf variable,

  • the number of connections on each leaf, specified by the workload_management_max_connections_per_leaf variable, and

  • leaf memory.

In addition to splitting leaf resources among aggregators, WM also throttles the number of queries that can run on any aggregator, ensuring that the currently running queries do not exceed that aggregator's quota. WM does this throttling by queuing queries; queries are queued after Resource Governance is applied.

Related to workload management is the concept of resource governance. Unlike the Workload Manager, which is built in and enabled by default, resource governance must be enabled explicitly. For more information, see Set Resource Limits.

Configure Workload Management

The workload manager can be configured using engine variables, and the default settings are sufficient for most users regardless of workspace size. However, these engine variables can be tuned to fully utilize system resources for high-concurrency workloads. These variables are discussed in the sections that follow.

Before changing any of these values, ensure that you understand each of these variables and have evaluated their impact on your workload.

All of the workload management engine variables are sync variables. To configure these variables, first read about Sync Variables to learn how to set them. Then, refer to Workload Management Sync Variables for information on specific variables.
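
For example, to review the current settings before tuning, you can list all workload management variables on an aggregator using standard pattern matching:

SHOW GLOBAL VARIABLES LIKE 'workload_management%';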

Workload Management Queue Enhancements

Incoming queries are filtered into queues based on their size (large, medium, or small). If a large number of queries are all filtered into the same queue, that queue can become oversaturated, which slows processing. SingleStore Helios has two engine variables, workload_management_queue_size_allow_upgrade and workload_management_dynamic_resource_allocation, for dynamic query handling. These variables work together to dynamically move a query from a saturated queue to another queue when that queue has available resources. For example, if the medium queue is saturated and the large queue is not, a medium query is moved into the large queue. To use dynamic query handling, set the workload_management_dynamic_resource_allocation variable to ON, as shown below.
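
A minimal sketch (this is a sync variable, so it propagates cluster-wide; see Sync Variables for details):

-- Enable dynamic movement of queries between saturated and idle queues.
SET GLOBAL workload_management_dynamic_resource_allocation = ON;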

Detailed information about these engine variables is available in the Workload Management Sync Variables section of the engine variable list.

Query Classification

The workload manager classifies queries by their size: small, medium, or large. Queries are classified to determine their Queueing Policy.

Small Queries

Small queries do not exceed the workload_management_memory_queue_threshold, which is a small amount of memory relative to the total amount of leaf memory available to queries.

Small queries also do not use distributed joins. For this reason, they do not require threads or connections on the leaves.
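
As an illustration, a single-shard lookup is typically small, while a join across tables sharded on different keys requires distributed execution. The tables below are hypothetical; actual classification depends on the estimated resource usage of each query:

-- Likely small: a single-table lookup with no distributed join.
SELECT * FROM orders WHERE order_id = 42;

-- Not small if orders and customers are sharded on different keys:
-- the join runs across leaves and consumes leaf threads and connections.
SELECT c.name, o.total
FROM orders o
JOIN customers c ON o.customer_id = c.id;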

Medium Queries

Medium queries fit within the following lower and upper thresholds for three leaf resources: threads used per leaf, connections used per leaf, and memory used per leaf.

Lower Thresholds

  • Threads used per leaf: 1

  • Connections used per leaf: 1

  • Memory used per leaf: Total leaf memory available for query processing * workload_management_memory_queue_threshold

Upper Thresholds

  • Threads used per leaf: 50% * workload_management_max_threads_per_leaf / workload_management_expected_aggregators

  • Connections used per leaf: 50% * workload_management_max_connections_per_leaf / workload_management_expected_aggregators

  • Memory used per leaf: Total leaf memory available for query processing / workload_management_expected_aggregators
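
As a worked example, assume the following values, which are an assumption chosen to match the sample SHOW WORKLOAD MANAGEMENT STATUS output later in this article: workload_management_max_threads_per_leaf = 8192, workload_management_max_connections_per_leaf = 10000, workload_management_expected_aggregators = 2, workload_management_memory_queue_threshold = 0.01, and 6673 MB of leaf memory available for query processing. The thresholds then work out to:

  • Upper threads threshold: 50% * 8192 / 2 = 2048 threads per leaf

  • Upper connections threshold: 50% * 10000 / 2 = 2500 connections per leaf

  • Upper memory threshold: 6673 MB / 2 ≈ 3337 MB per leaf

  • Lower memory threshold: 6673 MB * 0.01 ≈ 66 MB per leaf

These correspond to the Threads Threshold to Queue Globally, Connections Threshold to Queue Globally, Memory Threshold (MB) to Queue Globally, and Memory Threshold (MB) to Queue Locally values in the sample output.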

Large Queries

Queries that exceed the upper threshold for medium queries are classified as large queries.

Queueing Policy

Queries are queued based on their size, as determined by the thresholds in the previous section, Query Classification.

Small Queries

Small queries are never queued. They use a negligible amount of leaf resources, which are not tracked.

Medium Queries

At regular intervals, the master aggregator collects statistics from the child aggregators. These statistics include the maximum resources used (threads per leaf, connections per leaf, and memory per leaf) during the interval and a rolling average of the maximum resources required during the previous intervals.

At the end of each interval, the master aggregator uses the statistics gathered from the child aggregators to reallocate resources to each aggregator, if necessary.

If the resource requirements for a new medium query exceed the total amount of resources allocated to all aggregators for all medium queries, the new query is queued in a local queue that is maintained by an individual aggregator. Each aggregator decides when to dequeue the queries in its local queue.

Fifty percent of the total leaf resources are reserved for medium queries.

Large Queries

Up to four large queries in the workspace will run concurrently. Any additional queries are queued in the global queue.

Fifty percent of the total leaf resources are reserved for large queries.

Dynamic Reuse of WM Queues

The Workload Manager creates two queues: one for medium queries and one for large queries. WM dynamically moves queries from the medium queue to the large queue when the medium queue is saturated and the large queue is not.

Two global variables control this behavior: workload_management_dynamic_resource_allocation and workload_management_queue_size_allow_upgrade.

Errors and Warnings

Queries return configurable errors if too many queries are queued or if a query has been queued for too long. Queries also return a configurable warning when they have been queued for a significant time relative to their actual execution time. These signals help you identify when your workspace resources are insufficient to meet the load of incoming queries.

These errors and warnings are:

ER_TOO_MANY_QUEUED_QUERIES: When the workload_management_max_queue_depth value is reached, SingleStore Helios will return this error instead of adding a new query to the queue. If you encounter this error, the maximum queue depth may need to be increased to accommodate the load. This error may also indicate that your workspace’s resources need to be increased or the load of incoming queries needs to be reduced.

ER_QUERY_QUEUE_TIMEOUT: When a query has been queued longer than the workload_management_queue_timeout value, it will return this error and will be removed from the queue. This error indicates that your workspace’s resources or the load of incoming queries need to be adjusted to successfully process incoming queries.

ER_QUERY_QUEUED_WARNING: When the ratio of time spent by a query in the queue versus the actual execution time of the query exceeds the workload_management_queue_time_warning_ratio value, the query will return this warning. The warning indicates the amount of time spent queued and the amount of time spent executing, to help you understand the sources of query latency. Encountering this warning is normal in many workloads, but if query latency is too high, your workspace’s resources may not be sufficient to process incoming queries in a timely manner.
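
If you hit these errors or warnings regularly, the relevant limits can be adjusted. The following is a sketch with hypothetical values; verify the units and defaults in the Workload Management Sync Variables reference before applying:

-- Allow more queries to wait in the queue before erroring.
SET GLOBAL workload_management_max_queue_depth = 200;
-- Let queued queries wait longer before timing out (assumed to be in seconds).
SET GLOBAL workload_management_queue_timeout = 7200;
-- Warn when queue time exceeds half of execution time.
SET GLOBAL workload_management_queue_time_warning_ratio = 0.5;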

Observe the Status of Workload Management

To see the current state of the workload management system, including the number of running and queued queries, and resource usage and thresholds, you can run the SHOW WORKLOAD MANAGEMENT STATUS command. You can also run the query SELECT * FROM information_schema.MV_WORKLOAD_MANAGEMENT_STATUS;, which will return the same information as the previous command, but per aggregator.
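
For example, to focus on queue depth across aggregators, you can filter the view. This is a sketch: the column names are assumed to mirror the SHOW output as per-node STAT/VALUE pairs, so verify the schema with DESCRIBE information_schema.MV_WORKLOAD_MANAGEMENT_STATUS first:

SELECT NODE_ID, STAT, VALUE
FROM information_schema.MV_WORKLOAD_MANAGEMENT_STATUS
WHERE STAT LIKE 'Queued Queries%';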

The following example shows sample output from a small, lightly loaded system. The memory thresholds are reported in megabytes.

SHOW WORKLOAD MANAGEMENT STATUS;
+-------------------------------------------------------+-------+
| Stat                                                  | Value |
+-------------------------------------------------------+-------+
| Queued Queries (from global queue)                    |     0 |
| Queued Queries (from local queue)                     |     0 |
| Total Queued Queries Since Startup                    |     0 |
| Running Queries (from global queue)                   |     0 |
| Running Queries (from local queue)                    |     0 |
| Running Memory (MB) On Leaves (from global queue)     |     0 |
| Running Memory (MB) On Leaves (from local queue)      |     0 |
| Allocated Memory (MB) On Leaves (for local queue)     |  6673 |
| Required Memory (MB) On Leaves (from local queue)     |     0 |
| Avg Required Memory (MB) On Leaves (from local queue) |     0 |
| Running Threads Per Leaf (from local queue)           |     0 |
| Allocated Threads Per Leaf (for local queue)          |  2048 |
| Required Threads Per Leaf (from local queue)          |     0 |
| Avg Required Threads Per Leaf (from local queue)      |     0 |
| Running Connections Per Leaf (from local queue)       |     0 |
| Allocated Connections Per Leaf (for local queue)      |  2500 |
| Required Connections Per Leaf (from local queue)      |     0 |
| Avg Required Connections Per Leaf (from local queue)  |     0 |
| Memory Threshold (MB) to Queue Locally                |    66 |
| Memory Threshold (MB) to Queue Globally               |  3337 |
| Connections Threshold to Queue Globally               |  2500 |
| Threads Threshold to Queue Globally                   |  2048 |
+-------------------------------------------------------+-------+

The following list provides a general description of some of the status fields. For information on the local queue and the global queue, see the Queueing Policy section.

  • Queued Queries: Total number of queries queued due to workload management on this aggregator.

  • Running Queries: Total number of queries running after being released from a queue, based on the number and resource intensity of other queries on this aggregator.

  • Running Memory On Leaves: Total estimated memory used per leaf by queries run through this aggregator; tracked separately for queries above the Memory Threshold to Queue Locally (local queue) and above the Memory Threshold to Queue Globally (global queue).

  • Running Threads Per Leaf: Total number of threads used per leaf by queries run through this aggregator.

  • Running Connections Per Leaf: Total number of connections established per leaf by queries run through this aggregator.

  • Memory Threshold to Queue Locally: Estimated memory use above which a query is queued locally.

  • Memory Threshold to Queue Globally: Estimated memory use above which a query is queued globally and must wait for multi-aggregator coordination to run.

  • Connections Threshold to Queue Globally: Number of connections above which a query is queued globally and must wait for multi-aggregator coordination to run.

  • Threads Threshold to Queue Globally: Number of threads above which a query is queued globally and must wait for multi-aggregator coordination to run.

In addition, the SHOW PROCESSLIST command can be executed on any SingleStore node to obtain details about currently running queries. Also, information_schema.processlist or information_schema.mv_processlist can be queried to provide the same information but can be filtered for more focused results.

The processlist provides information such as the user, host, database, command type, execution time, state, and additional information about each process. The information provided can help identify long-running or hanging queries, monitor query execution, and analyze query behavior.
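
For example, to surface long-running statements for review, you can filter on execution time (a sketch using the standard processlist columns):

SELECT ID, USER, HOST, DB, TIME, STATE, INFO
FROM information_schema.PROCESSLIST
WHERE COMMAND = 'Query' AND TIME > 60
ORDER BY TIME DESC;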

If you haven't received any of the errors listed above and you still have available memory or CPU, see the Troubleshooting Poorly Performing Queries page for potential steps to resolve the issue.

Last modified: April 17, 2023
