Use the Workload Manager
SingleStore Helios’s Workload Manager (WM) automatically manages workspace workloads by limiting the execution of fully distributed queries, ensuring that they are matched with available system resources.
WM divides leaf resources among multiple aggregators. For example, if the workload_management_max_threads_per_leaf variable is set to 1000, the total number of threads on the leaf node at any point in time is limited to 1000.
To see these quotas, which change over time, run the following on the specific node:
SHOW WORKLOAD MANAGEMENT STATUS;
WM is concerned with:

- the number of threads on leaves, specified by the workload_management_max_threads_per_leaf variable,

- the number of connections on each leaf, specified by the workload_management_max_connections_per_leaf variable, and

- leaf memory.
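To list the current settings of these variables on a node, you can run SHOW VARIABLES with a pattern match (the LIKE pattern here simply matches all of the workload management variables):

SHOW VARIABLES LIKE 'workload_management%';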
In addition to splitting leaf resources among aggregators, WM also throttles the number of queries that can run on any aggregator.
Related to workload management is the concept of resource governance.
Configure Workload Management
The workload manager can be configured using engine variables, and the default settings are sufficient for most users regardless of workspace size.
Before changing any of these values, ensure that you understand each of these variables and have evaluated their impact on your workload.
All of the workload management engine variables are sync variables.
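Because these are sync variables, a change made with SET GLOBAL on the master aggregator propagates across the workspace. A minimal sketch; the value shown is purely illustrative, not a tuning recommendation:

SET GLOBAL workload_management_max_threads_per_leaf = 10000;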
Workload Management Queue Enhancements
Incoming queries are filtered into queues based on their size (large, medium, or small). Dynamic query handling is controlled through dedicated workload management engine variables; to enable it, set the corresponding variable to ON.
Detailed information about these engine variables may be obtained in the Workload Management Sync Variables section of the engine variable list.
Query Classification
The workload manager classifies queries by their size: small, medium, or large.
Small Queries
Small queries do not exceed the workload_management_memory_queue_threshold, which is a small amount of memory relative to the total amount of leaf memory available to queries.
Small queries also do not use distributed joins.
Medium Queries
Medium queries fit within the following lower and upper resource thresholds.
Lower Thresholds
- Threads used per leaf: 1

- Connections used per leaf: 1

- Memory used per leaf: Total leaf memory available for query processing * workload_management_memory_queue_threshold
Upper Thresholds
- Threads used per leaf: 50% * workload_management_max_threads_per_leaf / workload_management_expected_aggregators

- Connections used per leaf: 50% * workload_management_max_connections_per_leaf / workload_management_expected_aggregators

- Memory used per leaf: Total leaf memory available for query processing / workload_management_expected_aggregators
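For illustration, assume workload_management_max_threads_per_leaf is 8192, workload_management_max_connections_per_leaf is 10000, and workload_management_expected_aggregators is 2 (assumed values for this example, not stated defaults). The upper thresholds would then be 50% * 8192 / 2 = 2048 threads per leaf and 50% * 10000 / 2 = 2500 connections per leaf, which matches the Allocated Threads Per Leaf and Allocated Connections Per Leaf values in the sample output later on this page.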
Large Queries
Queries that exceed the upper threshold for medium queries are classified as large queries.
Queueing Policy
Queries are queued based on their size, as determined by the thresholds in the previous section, Query Classification.
Small Queries
Small queries are never queued.
Medium Queries
At regular intervals, the master aggregator collects statistics from the child aggregators.
At the end of each interval, the master aggregator uses the statistics gathered from the child aggregators to reallocate resources to each aggregator, if necessary.
If the resource requirements for a new medium query exceed the total amount of resources allocated to all aggregators for all medium queries, the new query is queued in a local queue that is maintained by an individual aggregator.
Fifty percent of the total leaf resources are reserved for medium queries.
Large Queries
Up to four large queries in the workspace will run concurrently.
Fifty percent of the total leaf resources are reserved for large queries.
Dynamic Reuse of WM Queues
The Workload Manager creates two queues: one for medium queries and one for large queries. Two global engine variables control this behavior; they are described in the Workload Management Sync Variables section of the engine variable list.
Errors and Warnings
Queries return configurable errors if too many queries are queued or a query has been queued for too long.
These errors and warnings are:
- ER_…: When the workload_management_max_queue_depth value is reached, SingleStore Helios returns this error instead of adding a new query to the queue.

- ER_…: When a query has been queued for longer than the workload_management_queue_timeout value, the query returns this error and is removed from the queue.

- ER_…: When the ratio of the time a query spends in the queue to its actual execution time exceeds the workload_management_queue_time_warning_ratio value, the query returns this warning.
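As a worked example of the warning ratio: if workload_management_queue_time_warning_ratio were set to 1.0 (an illustrative value), a query that waited 30 seconds in the queue and then executed in 10 seconds would have a ratio of 30 / 10 = 3.0 and would therefore return the warning.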
Observe the Status of Workload Management
To see the current state of the workload management system, including the number of running and queued queries, resource usage, and thresholds, run the SHOW WORKLOAD MANAGEMENT STATUS command. Alternatively, run SELECT * FROM information_schema.MV_WORKLOAD_MANAGEMENT_STATUS, which returns the same information as the previous command, but per aggregator.
The following example shows sample output from a small, lightly-loaded system.
SHOW WORKLOAD MANAGEMENT STATUS;
+-------------------------------------------------------+-------+
| Stat | Value |
+-------------------------------------------------------+-------+
| Queued Queries (from global queue) | 0 |
| Queued Queries (from local queue) | 0 |
| Total Queued Queries Since Startup | 0 |
| Running Queries (from global queue) | 0 |
| Running Queries (from local queue) | 0 |
| Running Memory (MB) On Leaves (from global queue) | 0 |
| Running Memory (MB) On Leaves (from local queue) | 0 |
| Allocated Memory (MB) On Leaves (for local queue) | 6673 |
| Required Memory (MB) On Leaves (from local queue) | 0 |
| Avg Required Memory (MB) On Leaves (from local queue) | 0 |
| Running Threads Per Leaf (from local queue) | 0 |
| Allocated Threads Per Leaf (for local queue) | 2048 |
| Required Threads Per Leaf (from local queue) | 0 |
| Avg Required Threads Per Leaf (from local queue) | 0 |
| Running Connections Per Leaf (from local queue) | 0 |
| Allocated Connections Per Leaf (for local queue) | 2500 |
| Required Connections Per Leaf (from local queue) | 0 |
| Avg Required Connections Per Leaf (from local queue) | 0 |
| Memory Threshold (MB) to Queue Locally | 66 |
| Memory Threshold (MB) to Queue Globally | 3337 |
| Connections Threshold to Queue Globally | 2500 |
| Threads Threshold to Queue Globally | 2048 |
+-------------------------------------------------------+-------+
The following table provides a general description of some of the status fields.
| Status | Description |
| --- | --- |
| Queued Queries | Total number of queries queued due to workload management on this aggregator |
| Running Queries | Total number of queries admitted from a queue that are currently running through this aggregator |
| Running Memory On Leaves | Total estimated memory used per leaf by running queries, reported separately for the global and local queues |
| Running Threads Per Leaf | Total number of threads used per leaf by queries run through this aggregator |
| Running Connections Per Leaf | Total number of connections established per leaf by queries run through this aggregator |
| Memory Threshold to Queue Locally | Estimated memory that a query needs to use before it is queued |
| Memory Threshold to Queue Globally | Estimated memory that a query needs to use before it is queued to wait for multi-aggregator coordination to run |
| Connections Threshold to Queue Globally | Total number of connections that a query takes before it is queued to wait for multi-aggregator coordination to run |
| Threads Threshold to Queue Globally | Total number of threads that a query takes before it is queued to wait for multi-aggregator coordination to run |
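To narrow these statistics to a subset of fields, the per-aggregator view mentioned above can be filtered. A minimal sketch, assuming the view exposes the same Stat/Value pairs as the SHOW output along with a node identifier (the column names are assumptions, not confirmed here):

SELECT * FROM information_schema.MV_WORKLOAD_MANAGEMENT_STATUS
WHERE Stat LIKE 'Queued%';  -- column name assumed from the SHOW output above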
In addition, the SHOW PROCESSLIST command can be executed on any SingleStore node to obtain details about currently running queries. The information_schema.PROCESSLIST or information_schema.MV_PROCESSLIST views can be queried to provide the same information, and can be filtered for more focused results.
The processlist provides information such as the user, host, database, command type, execution time, state, and additional information about each process.
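For instance, to focus on long-running statements, the cluster-wide processlist view can be filtered and sorted. A sketch assuming the standard processlist columns (ID, USER, DB, COMMAND, TIME, STATE, INFO); the 60-second threshold is arbitrary:

SELECT ID, USER, DB, TIME, STATE, INFO
FROM information_schema.MV_PROCESSLIST
WHERE COMMAND = 'Query' AND TIME > 60
ORDER BY TIME DESC;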
If you haven't received any of the errors listed above and you still have available memory or CPU, see the Troubleshooting Poorly Performing Queries page for potential steps to take to resolve the issue.
Last modified: April 17, 2023