Query History

Overview

SingleStore supports high-frequency event tracing, which provides a robust infrastructure for tracing events that are expected to occur frequently.

Query event tracing is designed to help identify the queries that take the longest to run or that fail, so they can be isolated and optimized. This feature is at the core of Query History, which displays these query trace events over time.

For query execution reporting, the Query_completion event type captures query execution details that can be viewed in near real-time.

Enhanced query tracing allows users to trace the high-level execution metrics collected by MV_ACTIVITIES and MV_FINISHED_TASKS along with Query_completion traces. Enhanced query tracing is a preview feature and is provided as a configuration option for Query_completion traces.

Query event traces are configurable. For example, a query execution may be traced only if it takes longer than 1,000 milliseconds (1 second) to run, and the query text may be either included in or omitted from the trace.

Event Type

Query_completion

Engine Variable

trace_events_queue_size (set the value of this engine variable to 0 to turn off query event tracing)

Parameters and Values

  • Capture query text?

    Query_text = on
    Query_text = off

  • Record queries that take at least x milliseconds to complete

    Duration_threshold_ms = x

    While Duration_threshold_ms can be set to any millisecond value, setting its value to 0 will negatively impact the usefulness of this feature and impair the workspace's performance.

  • Trace all queries that have failed

    Trace_all_failed_executions = on
    Trace_all_failed_executions = off

Prerequisites

  • A workspace running SingleStore version 8.5 or later

  • Access to Query History in the Cloud portal

The MV_TRACE_EVENTS Information Schema

The MV_TRACE_EVENTS information schema is a ring buffer with a size defined by the value of the trace_events_queue_size engine variable. When this limit is reached, the oldest events are overwritten as the buffer wraps around.

While MV_TRACE_EVENTS can be used to view the most recent events that meet specified criteria, the information displayed is only temporary. As such, the Query History feature in the Cloud portal is recommended for viewing query trace events over time.
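The wrap-around behavior of the buffer can be illustrated with a short sketch. This is Python for illustration only: the actual buffer is managed by the engine and is sized in bytes via trace_events_queue_size, whereas this sketch caps the buffer by event count.

```python
from collections import deque

# Illustrative only: the engine sizes its buffer in bytes via
# trace_events_queue_size; here we cap by event count instead.
class TraceEventBuffer:
    def __init__(self, capacity):
        self.events = deque(maxlen=capacity)  # oldest entries drop automatically

    def record(self, event):
        self.events.append(event)  # overwrites the oldest event once full

    def snapshot(self):
        return list(self.events)

buf = TraceEventBuffer(capacity=3)
for event_id in range(1, 6):
    buf.record({"event_id": event_id, "event_type": "Query_completion"})

# Events 1 and 2 have been overwritten; only the 3 newest remain.
print([e["event_id"] for e in buf.snapshot()])  # [3, 4, 5]
```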

View MV_TRACE_EVENTS

The following examples record queries that take longer than 1,000 milliseconds (one second) to run, building a collection of all of the expensive events that occurred over a specified period of time. The results can provide insight into which workloads are the most demanding on a cluster, and can be used for workload and application tuning and debugging.

Successful Queries

Run the following to display those SQL statements that have been executed successfully.

Note that the following steps are provided to demonstrate how the query events are traced, and only provide a glimpse into which queries are captured. To see the list of successful queries over time, you may filter for “successful queries” in Query History.

  1. Note that the trace_events_queue_size engine variable is set to 16,000,000 bytes by default and can be updated with a new value via the following SQL statement.

    SET GLOBAL trace_events_queue_size = 16000000;
  2. Enable the Query_completion event type.

    CREATE EVENT TRACE Query_completion WITH (Query_text = on, Duration_threshold_ms = 1000);
  3. Display the query trace events.

    SELECT * FROM information_schema.MV_TRACE_EVENTS;
    NODE_ID,NODE_IP,NODE_TYPE,NODE_START_EPOCH_S,EVENT_ID,EVENT_TYPE,TIME,DETAILS
    1,127.0.0.1,MA,1698096978,1,Query_completion,2023-10-25 16:39:51.747556,"{""activity_name"":""Select_MV_PROCESS_PERMISSIONS__et_al_453a6b42e2c4cc91"",""connection_id"":44,""context_database"":"""",""duration_ms"":76,""plan_id"":55,""query_text"":""select * from information_schema.mv_trace_events"",""resource_pool_name"":""default_pool"",""row_count"":86,""start_time"":""2023-10-25 16:39:51.671264"",""success"":1,""user_name"":""root""}"
    1,127.0.0.1,MA,1698096978,2,Query_completion,2023-10-25 16:40:17.988318,"{""activity_name"":""Select_MV_PROCESS_PERMISSIONS__et_al_453a6b42e2c4cc91"",""connection_id"":44,""context_database"":"""",""duration_ms"":2,""plan_id"":55,""query_text"":""select * from information_schema.mv_trace_events"",""resource_pool_name"":""default_pool"",""row_count"":89,""start_time"":""2023-10-25 16:40:17.986134"",""success"":1,""user_name"":""root""}"

    The payload that describes an event is stored in the DETAILS JSON field that contains the following fields:

    {
    "activity_name": string,
    "connection_id": integer,
    "context_database": string,
    "duration_ms": bigint,
    "plan_id": integer,
    "query_text": string, (on by default; can be sizeable as comments are included)
    "resource_pool_name": string, (empty if no resource pool)
    "row_count": integer,
    "start_time": string (datetime(6)),
    "success": bool,
    "user_name": string (empty string if the user has been dropped)
    [, "error_code": string, ] (field only present if success = false)
    [, "error_message": string ] (field only present if success = false)
    }

    Note: Currently, when a multi-insert query is traced, a plan_id of -1 is reported without an associated activity_name.

    Note: For read queries, row_count refers to the total number of rows sent back to the client. For write queries, row_count refers to the total number of rows affected across the cluster.

    The following is an example of the DETAILS column JSON output. The query_text field contains literals (sanitized via the SECRET function) and includes comments.

    {
    "activity_name":"Select_MV_PROCESS_PERMISSIONS__et_al_453a6b42e2c4cc91",
    "connection_id":44,
    "context_database":"",
    "duration_ms":76,
    "plan_id":55,
    "query_text":"select * from information_schema.mv_trace_events",
    "resource_pool_name":"default_pool",
    "row_count": 86,
    "start_time":"2023-10-25 16:39:51.671264",
    "success":1,
    "user_name":"root"
    }
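Because DETAILS is ordinary JSON, it can be parsed and filtered programmatically. The sketch below (Python, for illustration) parses the example payload above and flags slow or failed executions; the 500 ms threshold is an arbitrary value chosen for the example.

```python
import json

# Example DETAILS payload from a Query_completion trace event (as above).
details_json = '''{
  "activity_name": "Select_MV_PROCESS_PERMISSIONS__et_al_453a6b42e2c4cc91",
  "connection_id": 44,
  "context_database": "",
  "duration_ms": 76,
  "plan_id": 55,
  "query_text": "select * from information_schema.mv_trace_events",
  "resource_pool_name": "default_pool",
  "row_count": 86,
  "start_time": "2023-10-25 16:39:51.671264",
  "success": 1,
  "user_name": "root"
}'''

def summarize(details_str, slow_threshold_ms=500):
    details = json.loads(details_str)
    return {
        "activity": details["activity_name"],
        "duration_ms": details["duration_ms"],
        "slow": details["duration_ms"] >= slow_threshold_ms,
        "failed": not details["success"],
        # error_code / error_message are only present when success is false
        "error": details.get("error_message"),
    }

print(summarize(details_json))
```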

Failed Queries

Run the following to display those SQL statements that have failed to execute and the associated reason.

Note that the following steps are provided to demonstrate how the query events are traced, and they only provide a glimpse into which queries are captured. To see the list of failed queries over time, you may filter for “failed queries” in Query History.

  1. Note that the trace_events_queue_size engine variable is set to 16,000,000 bytes by default and can be updated with a new value via the following SQL statement.

    SET GLOBAL trace_events_queue_size = 16000000;
  2. Enable the Query_completion event type to capture failed queries.

    CREATE EVENT TRACE Query_completion WITH (Query_text = on, Duration_threshold_ms = 1000, Trace_all_failed_executions = on);
  3. Run the following to display the query events.

    SELECT * FROM information_schema.MV_TRACE_EVENTS;
    NODE_ID,NODE_IP,NODE_TYPE,NODE_START_EPOCH_S,EVENT_ID,EVENT_TYPE,TIME,DETAILS
    1,127.0.0.1,MA,1698096978,1,Query_completion,2023-10-25 16:39:51.747556,"{""activity_name"":""Select_MV_PROCESS_PERMISSIONS__et_al_453a6b42e2c4cc91"",""connection_id"":44,""context_database"":"""",""duration_ms"":76,""plan_id"":55,""query_text"":""select * from information_schema.mv_trace_events"",""resource_pool_name"":""default_pool"",""row_count"":86,""start_time"":""2023-10-25 16:39:51.671264"",""success"":1,""user_name"":""root""}"
    1,127.0.0.1,MA,1698096978,2,Query_completion,2023-10-25 16:40:17.988318,"{""activity_name"":""Select_MV_PROCESS_PERMISSIONS__et_al_453a6b42e2c4cc91"",""connection_id"":44,""context_database"":"""",""duration_ms"":2,""plan_id"":55,""query_text"":""select * from information_schema.mv_trace_events"",""resource_pool_name"":""default_pool"",""row_count"":86,""start_time"":""2023-10-25 16:40:17.986134"",""success"":1,""user_name"":""root""}"

Time-Bound Queries

Run the following to display those SQL statements that are time-bound.

Note that the following steps are provided to demonstrate how the query events are traced, and only provide a glimpse into which queries are captured. To see the list of time-bound queries over time, you may filter for “time-bound queries” in Query History.

  1. Note that the trace_events_queue_size engine variable is set to 16,000,000 bytes by default and can be updated with a new value via the following SQL statement.

    SET GLOBAL trace_events_queue_size = 16000000;
  2. Set the reference time @t. Only those query events created after this time will be displayed in the next step.

    SET @t = NOW(6); /* Or use a desired time instead of the current time */
  3. Run the following to display the query events.

    SELECT * FROM information_schema.MV_TRACE_EVENTS WHERE TIME > @t;

Enhanced Query Tracing

Note

This is a Preview feature.

With enhanced query tracing, users can trace high-level execution metrics collected by MV_ACTIVITIES and MV_FINISHED_TASKS.

Enhanced query tracing is controlled via a configuration option for Query_completion called Enhanced_stats_collection, which accepts three values: lite, full, and off. The feature is off by default.

For the remainder of this document, "enhanced query tracing" refers to Query_completion tracing with Enhanced_stats_collection set to lite or full.

Usage

If an existing Query_completion event is present on the cluster, drop that event before enabling enhanced query tracing.

DROP EVENT TRACE Query_completion;

Create an event trace with Enhanced_stats_collection set to off, lite, or full.

CREATE EVENT TRACE Query_completion WITH
(Enhanced_stats_collection = {off | lite | full}, ...);

The new configuration will update asynchronously across all nodes in the cluster.

To view the configuration, query MV_TRACE_EVENTS_STATUS and examine the JSON string in the CONFIGURATION column.

Explanation of Modes

Off

High-level query execution stats from MV_ACTIVITIES or MV_FINISHED_TASKS are not collected.

Lite

Collects distributed high-level execution metrics for the query and stores the aggregated metrics together with the Query_completion event.

{
"activity_name": "Update_t__et_al_290900f5eb4d75d2",
"connection_id": 8,
"context_database": "db",
"duration_ms": 33,
"enhanced_stats": {
"aggregated_stats": {
"avgMemoryBytes": 263055,
"blobCacheMissBytes": 0,
"blobCacheWaitTimeMs": 0,
"cpuTimeMs": 54,
"cpuWaitTimeMs": 0,
"diskLogicalReadBytes": 40530,
"diskLogicalWriteBytes": 28784,
"diskPhysicalReadBytes": 0,
"diskPhysicalWriteBytes": 212992,
"diskSpillingBytes": 0,
"diskTimeMs": 0,
"extractorTimeMs": 0,
"lockTimeMs": 0,
"logBufferLargeTxTimeMs": 0,
"logBufferWaitMs": 0,
"logBufferWriteBytes": 77009,
"logFlushWaitMs": 0,
"memoryMBSecs": 0.032894,
"memoryMajorFaults": 0,
"networkRecvBytes": 2961,
"networkSendBytes": 4448,
"networkTimeMs": 40,
"rgQueuedTimeMs": 0,
"rowLockTimeMs": 0,
"transformTimeMs": 0,
"wmQueuedTimeMs": 0
}
},
"plan_id": 27,
"query_text": "update t set b = b+1",
"resource_pool_name": "default_pool",
"row_count": 4,
"start_time": "2025-03-05 16:39:01.411371",
"success": 1,
"user_name": "root"
}

Full

Collects the same stats as lite. In addition to the cluster-wide aggregated stats, full also stores the stats aggregated per node.

{
"activity_name": "Select_t1__et_al_54e9e5098b9ef23f",
"connection_id": 8,
"context_database": "db",
"duration_ms": 244,
"enhanced_stats": {
"aggregated_stats": {
"avgMemoryBytes": 18025,
"blobCacheMissBytes": 0,
"blobCacheWaitTimeMs": 0,
"cpuTimeMs": 169,
"cpuWaitTimeMs": null,
"diskLogicalReadBytes": 748154,
"diskLogicalWriteBytes": 818,
"diskPhysicalReadBytes": null,
"diskPhysicalWriteBytes": null,
"diskSpillingBytes": 0,
"diskTimeMs": null,
"extractorTimeMs": 0,
"lockTimeMs": 2,
"logBufferLargeTxTimeMs": 0,
"logBufferWaitMs": 0,
"logBufferWriteBytes": 0,
"logFlushWaitMs": 0,
"memoryMBSecs": 0.01931,
"memoryMajorFaults": null,
"networkRecvBytes": 229227,
"networkSendBytes": 6752,
"networkTimeMs": 162,
"rgQueuedTimeMs": 0,
"rowLockTimeMs": 0,
"transformTimeMs": 0,
"wmQueuedTimeMs": 0
},
"distributed_stats": [
{
"info": {
"node_id": 1
},
"stats": {
"avgMemoryBytes": 14246,
"blobCacheMissBytes": 0,
"blobCacheWaitTimeMs": 0,
"cpuTimeMs": 101,
"cpuWaitTimeMs": null,
"diskLogicalReadBytes": 55624,
"diskLogicalWriteBytes": 142,
"diskPhysicalReadBytes": null,
"diskPhysicalWriteBytes": null,
"diskSpillingBytes": 0,
"diskTimeMs": null,
"extractorTimeMs": 0,
"lockTimeMs": 0,
"logBufferLargeTxTimeMs": 0,
"logBufferWaitMs": 0,
"logBufferWriteBytes": 0,
"logFlushWaitMs": 0,
"memoryMBSecs": 0.003326,
"memoryMajorFaults": null,
"networkRecvBytes": 226569,
"networkSendBytes": 3629,
"networkTimeMs": 142,
"rgQueuedTimeMs": 0,
"rowLockTimeMs": 0,
"transformTimeMs": 0,
"wmQueuedTimeMs": 0
}
},
{
"info": {
"node_id": 2
},
"stats": {
"avgMemoryBytes": 19078,
"blobCacheMissBytes": 0,
"blobCacheWaitTimeMs": 0,
"cpuTimeMs": 68,
"cpuWaitTimeMs": null,
"diskLogicalReadBytes": 692530,
"diskLogicalWriteBytes": 676,
"diskPhysicalReadBytes": null,
"diskPhysicalWriteBytes": null,
"diskSpillingBytes": 0,
"diskTimeMs": null,
"extractorTimeMs": 0,
"lockTimeMs": 2,
"logBufferLargeTxTimeMs": 0,
"logBufferWaitMs": 0,
"logBufferWriteBytes": 0,
"logFlushWaitMs": 0,
"memoryMBSecs": 0.015985,
"memoryMajorFaults": null,
"networkRecvBytes": 2658,
"networkSendBytes": 3123,
"networkTimeMs": 20,
"rgQueuedTimeMs": 0,
"rowLockTimeMs": 0,
"transformTimeMs": 0,
"wmQueuedTimeMs": 0
}
}
]
},
"plan_id": 2,
"query_text": "select a, b, data from t1 join t2 on a = id",
"resource_pool_name": "default_pool",
"row_count": 3,
"start_time": "2025-01-29 14:45:32.790434",
"success": 1,
"user_name": "root"
}
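For additive counters (times and byte counts), the per-node entries in distributed_stats sum to the corresponding values in aggregated_stats; non-additive metrics such as avgMemoryBytes do not. A quick check against the example payload above (Python, for illustration):

```python
# Per-node values for three additive counters, taken from the
# "full" example payload above (node_id 1 and node_id 2).
node_stats = [
    {"cpuTimeMs": 101, "diskLogicalReadBytes": 55624, "networkRecvBytes": 226569},
    {"cpuTimeMs": 68, "diskLogicalReadBytes": 692530, "networkRecvBytes": 2658},
]
aggregated = {"cpuTimeMs": 169, "diskLogicalReadBytes": 748154, "networkRecvBytes": 229227}

for metric, expected in aggregated.items():
    total = sum(node[metric] for node in node_stats)
    assert total == expected, f"{metric}: {total} != {expected}"
    print(f"{metric}: sum of per-node values = {total} (matches aggregated)")
```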

Remarks

  • Some of the statistics collected are considered "advanced statistics" and are only collected when read_advanced_counters is set to ON. Refer to Management Statistics Reference for details. When read_advanced_counters is set to OFF, the values for these statistics appear as null.

  • In rare cases, when errors are encountered in query execution, or when certain activities are killed as part of normal query processing, some execution stats are not traced.

  • The execution metrics collected by enhanced query tracing may differ noticeably from the same metrics collected by Historical Monitoring.

    • Runtime metrics are collected at a slightly different point in time in query execution for enhanced query tracing than they are for workload monitoring / profiling. In the majority of cases, the difference should not be noticeable.

    • For write queries with two-phase commit (2PC) enabled, enhanced query tracing includes stats collected by any internal PREPARE queries. In contrast, workload monitoring / profiling does not necessarily associate the internal PREPARE query with the user query.

    • Queries that create remote result tables on leaf nodes may have differences in metrics due to the difference in timing of the collection of metrics from the result tables between enhanced query tracing and workload monitoring / profiling.

    • Queries that are retried during execution (e.g. during online failover) will have all of their attempts' stats collected and aggregated for enhanced query tracing as a single query execution (i.e. in one Query_completion event). In contrast, for workload monitoring / profiling, every failed attempt increments the failure_count, and each retry is considered a separate execution.
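Since advanced counters appear as null when read_advanced_counters is OFF, code that aggregates enhanced stats should treat null as missing rather than as zero. A defensive sketch (Python, for illustration):

```python
def sum_metric(node_stats, metric):
    """Sum a counter across nodes, returning None if no node reported it."""
    values = [s[metric] for s in node_stats if s.get(metric) is not None]
    return sum(values) if values else None

nodes = [
    {"cpuTimeMs": 101, "cpuWaitTimeMs": None},  # advanced counter traced as null
    {"cpuTimeMs": 68, "cpuWaitTimeMs": None},
]
print(sum_metric(nodes, "cpuTimeMs"))      # 169
print(sum_metric(nodes, "cpuWaitTimeMs"))  # None (counter not collected)
```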

Last modified: March 12, 2025
