EXPLAIN
Returns detailed information on how the query is executed.
EXPLAIN displays query execution operations as a tree, where the operator at the top is the last one executed before the result is returned to the client, and the operators below it are executed before their parents. EXPLAIN plans can also be visualized via a graphical interface using SingleStoreDB Studio Visual Explain.

See the examples section below for descriptions of the most common types of information in EXPLAIN output.
Syntax
EXPLAIN [EXTENDED | JSON | JSON INTO OUTFILE file_name | DUPLICATE | MBC | MPL | REPRO | UI] { select_statement | insert_select_statement | update_statement | delete_statement }
Remarks
EXPLAIN can be run with an arbitrary SELECT, INSERT…SELECT, UPDATE, or DELETE statement.

EXPLAIN displays the plan SingleStore would generate if the query were reoptimized and recompiled now. If the query has already been run and a plan has already been saved in the plancache, that plan will be reused, even though it may differ from the plan displayed by EXPLAIN. You may run DROP … FROM PLANCACHE to force the query to be reoptimized and recompiled. Note that running ANALYZE will automatically drop and recompile query plans if data statistics have changed significantly.
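For example, the following sketch (assuming a table named t and the required permissions; DROP ALL FROM PLANCACHE is the coarse-grained form, and a specific plan ID can be dropped instead) forces recompilation so that the plan EXPLAIN displays matches the plan that will actually be cached:

-- Drop cached plans so the next compilation matches what EXPLAIN reports.
DROP ALL FROM PLANCACHE;

-- Refresh statistics; plans are dropped and recompiled automatically if
-- the statistics have changed significantly.
ANALYZE TABLE t;

EXPLAIN SELECT * FROM t;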
In most cases when the optimizer makes a cost-based decision, EXPLAIN output displays estimated statistics and costs.

If the enable_disk_plan_explain global variable is enabled, the explain tree information is also stored in the persisted plancache, in human-readable format. When enable_disk_plan_explain is enabled, the SHOW PLAN [JSON] plan_id command can also be used to display the explain information. Refer to the SHOW PLAN topic for details.
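For example, a minimal sketch, assuming enable_disk_plan_explain can be set like other global engine variables:

-- Persist explain trees to the on-disk plancache so that SHOW PLAN
-- can display them later.
SET GLOBAL enable_disk_plan_explain = ON;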
For example, for a plan with ID 3:

-- To display the explain information in text format
SHOW PLAN 3;

-- To display the explain information in JSON format
SHOW PLAN JSON 3;

The INTO OUTFILE file_name clause writes the explain information in JSON or text format to a file. It creates a new file at the specified location and writes the query plan details to the file. Enter the absolute path to the file as the file_name parameter. The output file can be a JSON or a text file.
See the Permission Matrix for the required permission.
Visual Explain via SHOW EXPLAIN UI
Using SHOW EXPLAIN UI will generate a URL that loads a visual representation of the query explain. When you open the link, the Visual Explain site is loaded and the query explain is displayed.
Explain Modes
EXPLAIN outputs the explain information in a human-readable format.

EXPLAIN EXTENDED outputs more detailed information for the query plan, in addition to the information displayed by EXPLAIN.

EXPLAIN JSON outputs the explain information in JSON format, which may be more machine-readable.
EXPLAIN DUPLICATE returns the query text in an explicit manner. Commands like SELECT * are expanded to list all the actual columns, e.g. SELECT t.a, t.b. Additionally, some syntax is rewritten. For example, correlated subselects may be rewritten as joins so that they are more performant. This mode is useful for query tuning because it displays the query text before the execution plan is generated. The text generated by EXPLAIN DUPLICATE is fully functional: you can copy and paste it into a SQL editor and run it.
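As an illustration only (the customers and orders tables are hypothetical, and the exact rewritten text depends on the schema and the optimizer version), running EXPLAIN DUPLICATE on a query that contains a correlated subselect returns the expanded, possibly join-rewritten query text:

-- Hypothetical schema: customers(id, name) and orders(id, customer_id).
EXPLAIN DUPLICATE
SELECT name
FROM customers
WHERE EXISTS (SELECT 1 FROM orders WHERE orders.customer_id = customers.id);

-- The single row returned contains the rewritten query text: the SELECT list
-- is spelled out column by column and, depending on the optimizer, the
-- correlated subselect may appear as a join. That text can be copied into a
-- SQL editor and run directly.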
EXPLAIN REPRO outputs the explain information in JSON format and provides important debugging information. This mode works for SELECT queries only. The debugging information consists of:

DDL for all database objects relevant to the query
Global variable settings
Session variable settings
Table statistics metadata
Sampling cache for filter selectivities
Sampling cache for row counts
Autostats results cache
EXPLAIN UI generates a URL that, when opened, loads a visual representation of the query plan on the Visual Explain website.
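For example (the table name t is illustrative):

-- Returns a row containing a generated URL; opening it in a browser
-- loads the plan in the Visual Explain interface.
EXPLAIN UI SELECT * FROM t;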
Additionally, the following EXPLAIN modes are provided for informational and debugging purposes only and may be changed at any time:

EXPLAIN MBC outputs the SingleStore Byte Code generated for the query.

EXPLAIN MPL outputs the SingleStore Programming Language code generated for the query.
EXPLAIN
EXPLAIN SELECT * FROM t;
****
+-----------------------------------+
| EXPLAIN                           |
+-----------------------------------+
| Project [t.id, t.a, t.b]          |
| Gather partitions:all             |
| Project [t.id, t.a, t.b]          |
| TableScan db1.t, PRIMARY KEY (id) |
+-----------------------------------+
EXPLAIN EXTENDED
EXPLAIN EXTENDED SELECT * FROM t;
****
+----------------------------------------------------------------------------------------------------------------+
| EXPLAIN                                                                                                          |
+----------------------------------------------------------------------------------------------------------------+
| Project [t.id, t.a, t.b]                                                                                         |
| Gather partitions:all query:[SELECT `t`.`id` AS `id`, `t`.`a` AS `a`, `t`.`b` AS `b` FROM `db1_0`.`t` as `t` ]  |
| Project [t.id, t.a, t.b]                                                                                         |
| TableScan db1.t, PRIMARY KEY (id)                                                                                |
+----------------------------------------------------------------------------------------------------------------+
EXPLAIN JSON
EXPLAIN JSON SELECT * FROM t;
****
----------------------------------------------
{
   "executor":"Project",
   "out":{
      "":"t.id",
      "":"t.a",
      "":"t.b"
   },
   "inputs":[
      {
         "executor":"Gather",
         "partitions":"all",
         "query":"SELECT `t`.`id` AS `id`, `t`.`a` AS `a`, `t`.`b` AS `b` FROM `db1_0`.`t` as `t` ",
         "inputs":[
            {
               "executor":"Project",
               "out":{
                  "":"t.id",
                  "":"t.a",
                  "":"t.b"
               },
               "subselects":[],
               "inputs":[
                  {
                     "executor":"TableScan",
                     "db":"db1",
                     "table":"t",
                     "alias":"t",
                     "index":"PRIMARY KEY (id)",
                     "storage":"lf_skiplist",
                     "dir":"forward",
                     "inputs":[]
                  }
               ]
            }
         ]
      }
   ]
}
EXPLAIN JSON INTO OUTFILE
EXPLAIN JSON INTO OUTFILE '/tmp/t1.json' SELECT * FROM t1;
To view the contents of the t1.json file, run the following command at the shell prompt:

cat /tmp/t1.json
Output file:
{ "plan_warnings": { }, "explain":[ { "executor":"Gather", "partitions":"all", "query":"SELECT `t1`.`a` AS `a`, `t1`.`b` AS `b` FROM `db_0`.`t1` as `t1` \/*!90623 OPTION(NO_QUERY_REWRITE=1, INTERPRETER_MODE=INTERPRET_FIRST)*\/", "alias":"remote_0", "inputs":[ { "executor":"Project", "out":[ { "alias":"", "projection":"t1.a" }, { "alias":"", "projection":"t1.b" } ], "subselects":[], "inputs":[ { "executor":"TableScan", "db":"db", "table":"t1", "alias":"t1", "index":"PRIMARY KEY ()", "storage":"lf_skiplist", "dir":"forward", "table_type":"sharded_rowstore", "inputs":[] } ] } ] } ], "version":"4", "info":{ "memsql_version":"7.1.2", "memsql_version_hash":"5ae405fc1fe7cf3b9fc8cab1115e8080e67f2476", "num_online_leaves":"1", "num_online_aggs":"1", "context_database":"db" }, "query_info":{ "query_text":"EXPLAIN JSON INTO OUTFILE '\/tmp\/t1.json' SELECT * FROM t1", "text_explain":"Gather partitions:all alias:remote_0\nProject [t1.a, t1.b]\nTableScan db.t1 table_type:sharded_rowstore\n" }
EXPLAIN DUPLICATE
EXPLAIN DUPLICATE SELECT * FROM t;
****
+-------------------------------------------------------------------------------+
| query                                                                         |
+-------------------------------------------------------------------------------+
| SELECT `t`.`id` AS `id`,`t`.`a` AS `a`, `t`.`b` AS `b` FROM `test`.`t` as `t` |
+-------------------------------------------------------------------------------+
1 row in set (0.00 sec)
EXPLAIN MBC
EXPLAIN MBC SELECT * FROM t;
****
----------------------------------------------
EXPLAIN
# explain mbc select * from t
Data:
  static: offset=0 size=125 align=1 data="USING `\?`::`\?` AGGREGATOR_ID = \?, CONNECTION_ID = \? SELECT `t`.`id` AS `id`, `t`.`a` AS `a`, `t`.`b` AS `b` FROM `t` as `t` \x00"""
Function 0 <100000:CreateProjectState>:
  Locals:
    local rvToReturn: offset=0 size=8 align=8 type=int64_t
    local rvToReceive: offset=16 size=16 align=16 type=uint128_t
    param hiddenRvParam: offset=32 size=8 align=8 type={ state1: { RemoteElement: RemoteElement, a: { value: int32_t, opt: Opt, pack(4) }, b: { value: int32_t, opt: Opt, pack(4) }, : int64_t, id: int32_t } }*
    local state1Addr_1: offset=40 size=8 align=8 type={ RemoteElement: RemoteElement, a: { value: int32_t, opt: Opt, pack(4) }, b: { value: int32_t, opt: Opt, pack(4) }, : int64_t, id: int32_t }*
  0x0000 Junk local=hiddenRvParam i32=32
  0x000c Lea local=&state1Addr_1 local=hiddenRvParam i32=0
  0x001c Call func=10 <row_0tmp1_internal_ctor> local=state1Addr_1
  0x002c Junk local=&state1Addr_1 i32=8
  0x0038 Literal8 local=&rvToReturn i64=1
  0x0048 Return
Function 1 <100000:CreateMatchedRowContext>:
........
EXPLAIN MPL
EXPLAIN MPL SELECT * from t;
****
----------------------------------------------
# 82519a8824563b8dce2c4ea02571a1efddf408ee
# EXPLAIN MPL SELECT * from t
type context_t1 < { state1: struct2, state2: struct3 }
type struct3 < { buffer: ProtocolRowBuffer }
type exprs_t1 < { state1: row_0tmp1 }
type struct2 < { xact: Transaction* }
type row_0tmp1 < { RemoteElement: RemoteElement, a: struct1, b: struct1, : int64_t }
type struct1 < { value: int32_t, opt: Opt pack(4) }
type CachePerQuery1 < { }
declare 100000:networkCtor0: fn(`state0`: struct3*) void < { { ProtocolRowBufferInitLocal(&((*(`state0`)).buffer)) return } }
declare 100000:CreateProjectState: ctor fn() exprs_t1 < { { } }
declare 100000:CreateMatchedRowContext: ctor fn() context_t1 < { { BZero(cast<void*>(&(((*(this)).state1).xact)), 8) call networkCtor0(&((*(this)).state2)) } }
declare 100000:UncorrelatedSubselectParameterArrayPopulation: fn(`context0`: context_t1*, `parameterArray1`: ParameterArray*, `allocator2`: TempIncrementalAllocator*, `itemOrdinal4`: int32_t*, `synthInListArgCount5`: int32_t*, `synthInListEndOffset6`: int32_t*, `synthInListBeginOffset7`: int32_t*) void < { { *(`itemOrdinal4`) < 1 return } }
declare 100000:RowToRowElement: fn(`row0`: MemSqlClientValue*, `rowElementPtr1`: void*) void < { Assert(0) CheckErrorButNotKilledFlag() return }
declare 100000:remoteRowElementCtor: ctor fn() exprs_t1 < { { } }
declare 100000:remoteRowElementInit: fn(`state0`: exprs_t1*) void < { { } }
declare 100000:remoteRowElementCtorAndInit: fn(`elem0`: void*) void < { { *(cast<exprs_t1*>(`elem0`)) < call remoteRowElementCtor() call remoteRowElementInit(cast<exprs_t1*>(`elem0`)) return } }
declare 100000:remoteRowElementDtorInternal: noexcept dtor fn(`elem0`: exprs_t1*) void < { { } }
declare 100000:remoteRowElementDtor: noexcept fn(`elem0`: void*) void < { { call remoteRowElementDtorInternal(cast<exprs_t1*>(`elem0`)) return } }
declare 100000:ScanTable0: fn(`exprs1`: exprs_t1*, `context2`: context_t1*, `xact3`: Transaction*, `foundRowParam4`: bool*) bool < { { declare `affectedRows5`: uint64_t < 0 declare `networkTicks6`: uint64_t < 0 declare `scannedRows7`: uint64_t < 0 declare `allocator8`: TempIncrementalAllocator < CreateTempIncrementalAllocator(41) { declare `itemOrdinal11`: int32_t < 1 declare `outputAction12`: int32_t < 1 declare `rowOffset9`: uint64_t < 0 declare `rowLimit10`: uint64_t < 9223372036854775807 declare `synthInListEndOffset13`: int32_t < 0 declare `synthInListArgCount14`: int32_t < 0 declare `synthInListBeginOffset15`: int32_t < 1 declare `trc18`: TableRuntimeContext* < GetRuntimeContext(0) declare `synthesizedParams19`: ParameterArray < ParameterArrayInit() call UncorrelatedSubselectParameterArrayPopulation(&(`synthesizedParams19`), cast<TempIncrementalAllocator*>(&(`allocator8`)), &(`itemOrdinal11`), &(`synthInListArgCount14`), &(`synthInListEndOffset13`), &(`synthInListBeginOffset15`)) declare `inListPartitions20`: uint64_t* < [] declare `inListPartitionsRanges21`: uint64_t* < [] declare `inListPartitionsSize22`: uint64_t < 0 declare `paramAlloc23`: TempIncrementalAllocator < CreateTempIncrementalAllocator(41) declare `dp16`: DeParametrizer < DeParametrizerFillIn(" /*!90621 OBJECT()*/ SELECT WITH(binary_serialization_internal=1) `t`.`a` AS `a`, `t`.`b` AS `b` FROM `t` as `t` /*!90623 OPTION(NO_QUERY_REWRITE=1, INTERPRETER_MODE=INTERPRET_FIRST)*/", &(`paramAlloc23`), 185, 9, &(`synthInListEndOffset13`), &(`synthInListArgCount14`), &(`synthInListBeginOffset15`)) DeParametrizerParamInit(&(`dp16`), 0, 1, 1, 2, 1) DeParametrizerParamInit(&(`dp16`), 1, 1, 1, 4, 1) DeParametrizerParamInit(&(`dp16`), 2, 1, 1, 6, 1) DeParametrizerParamInit(&(`dp16`), 3, 1, 1, 8, 1) DeParametrizerParamInit(&(`dp16`), 4, 1, 1, 12, 1) DeParametrizerParamInit(&(`dp16`), 5, 1, 1, 11, 1) DeParametrizerParamInit(&(`dp16`), 6, 1, 1, 13, 1) DeParametrizerParamInit(&(`dp16`), 7, 1, 1, 7, 1) DeParametrizerParamInit(&(`dp16`), 8, 1, 1, 10, 1) declare `queryParallelism24`: int32_t < SetupQueryParallelism(0, &(`trc18`)) declare `npartitions27`: int32_t < 0 declare `part28`: RuntimePartitionBoundary** < 0 RuntimePartitionBoundarySetUp(cast<TempIncrementalAllocator*>(&(`allocator8`)), &(`trc18`), `inListPartitions20`, `inListPartitionsSize22`, &(`npartitions27`), &(`part28`)) DeParametrizerInListUpdate(cast<TempIncrementalAllocator*>(&(`allocator8`)), &(`dp16`), `inListPartitions20`, `inListPartitionsRanges21`, `inListPartitionsSize22`, 1) { declare `rowBuffer29`: ProtocolRowBuffer < ProtocolRowBufferInit() ProtocolRowBufferInitLocal(&(`rowBuffer29`)) declare `simpleStreamingIterator31`: MbcSimpleStreamingIterator < SimpleStreamingIteratorInit(cast<TempIncrementalAllocator*>(&(`allocator8`)), &(`networkTicks6`), &(`scannedRows7`), &(`dp16`), &(`trc18`), &(`synthesizedParams19`), &(`queryParallelism24`), 24, remoteRowElementCtorAndInit, RowToRowElement, remoteRowElementDtor, 1, 1, 1, &(`npartitions27`), &(`part28`), 0) QueryAllPartitionsWithoutRowElement(&(`simpleStreamingIterator31`), &(`outputAction12`), &(`rowOffset9`), &(`rowLimit10`), 0, &(`rowBuffer29`)) ProtocolRowBufferFlush(&(`rowBuffer29`)) } } FinalizeDistributedQueryStats(&(`networkTicks6`), &(`scannedRows7`)) return 0 } }
declare 100000:plan: fn(`xact0`: Transaction*) bool < { { defer CheckError() { declare `epoch1`: AutoReadEpochNoInline < AutoReadEpochNoInlineInit(GetThreadId()) declare `context2`: context_t1 < call CreateMatchedRowContext() declare `exprs3`: exprs_t1 < call CreateProjectState() ((`context2`).state1).xact < `xact0` CheckClientConnection() SendMetadata() { declare `distributedXact4`: AutoShardedWriteQueryXact < AutoShardedWriteQueryXactInit(0, 0, 0, 1, 0) AutoShardedWriteQueryXactPrepareToEat(&(`distributedXact4`), 0) AutoShardedWriteQueryXactEatPartitions(&(`distributedXact4`)) { declare `rowFoundPtrParam5`: bool < 0 if (call ScanTable0(`xact0`, &(`rowFoundPtrParam5`))) { } } } FinalizeQuery() { CheckError() } return 1 } } }
declare 100000:Estimate: fn() uint64_t < { declare `context0`: context_t1 < call CreateMatchedRowContext() declare `exprs1`: exprs_t1 < call CreateProjectState() return 0 }
declare SetupCachePerQuery: noexcept fn(`cachePerQuery0`: CachePerQuery1*, `retryOnOOM1`: bool) bool < try { return 1 } catch return 0
declare main: noexcept fn() int64_t < try { { declare `cachePerQuery1`: CachePerQuery1 < [] CheckFault(call SetupCachePerQuery(&(`cachePerQuery1`), 0)) SetCachePerQuery(cast<void*>(&(`cachePerQuery1`))) defer SetCachePerQuery(0) declare `allocator2`: TempIncrementalAllocator < CreateTempIncrementalAllocator(20) StoreTempAllocForGISInInterpVars(&(`allocator2`)) declare `xact0`: AutoQueryXact < InitAutoReadQueryXact() declare `xact4`: Transaction* < GetQueryTransaction() { try { declare `ret3`: bool < call 100000:plan(`xact4`) CheckError() AutoQueryXactCommit(&(`xact0`)) } fault { AutoQueryXactRollback(&(`xact0`)) } } return 3 } } catch return 5
EXPLAIN REPRO
The debugging information starts at the key debug_info in the output.
EXPLAIN REPRO SELECT * FROM t;
****
----------------------------------------------
The first section is typical EXPLAIN JSON output:
| EXPLAIN
••••
Then the debugging information is in the next section of the output:
debug_info":
{
"optimizer_caches":{
},
"ddl":[
"CREATE DATABASE `test1` PARTITIONS 8",
"USING `test1` CREATE TABLE `t` (\n `id` int(11) DEFAULT NULL,\n `a` int(11) DEFAULT NULL,\n `b` int(11) DEFAULT NULL,\n KEY `__UNORDERED` () USING CLUSTERED COLUMNSTORE\n , SHARD KEY () \n) AUTOSTATS_CARDINALITY_MODE=INCREMENTAL AUTOSTATS_HISTOGRAM_MODE=CREATE AUTOSTATS_SAMPLING=ON SQL_MODE='STRICT_ALL_TABLES'"
],
"variables":{"as_aggregator": 1, "as_leaf": 0, "batch_external_functions": 0, "binary_serialization": 1, "client_found_rows": 0, "collation_server": 2, "datetime_precision_mode": 0, "default_columnstore_table_lock_threshold": 0, "disable_histogram_estimation": 0, "disable_sampling_estimation": 0, "disable_sampling_estimation_with_histograms": 2, "disable_subquery_merge_with_straight_joins": 2, "display_full_estimation_stats": 0, "distributed_optimizer_broadcast_mult": 0, "distributed_optimizer_estimated_restricted_search_cost_bound": 125, "distributed_optimizer_max_join_size": 22, "distributed_optimizer_min_join_size_run_initial_heuristics": 16, "distributed_optimizer_nodes": 0, "distributed_optimizer_old_selectivity_table_threshold": 22, "distributed_optimizer_run_legacy_heuristic": 0, "distributed_optimizer_selectivity_fallback_threshold": 50000000, "distributed_optimizer_unrestricted_search_threshold": 22, "enable_broadcast_left_join": 1, "enable_local_shuffle_group_by": 1, "enable_multipartition_queries": 1, "enable_skiplist_sampling_for_selectivity": 1, "force_bloom_filters": 0, "force_bushy_join_table_limit": 18, "force_bushy_joins": 0, "force_heuristic_rewrites": 0, "force_table_pushdown": 0, "hash_groupby_segment_distinct_values_threshold": 10000, "ignore_insert_into_computed_column": 0, "inlist_precision_limit": 10000, "interpreter_mode": 4, "leaf_pushdown_default": 0, "leaf_pushdown_enable_rowcount": 120000, "materialize_ctes": 0, "max_broadcast_tree_rowcount": 120000, "max_subselect_aggregator_rowcount": 120000, "node_degree_of_parallelism": 0, "old_local_join_optimizer": 0, "optimize_constants": 1, "optimize_stmt_threshold": 50, "optimizer_beam_width": 10, "optimizer_cross_join_cost": 1.000000, "optimizer_disable_right_join": 0, "optimizer_disable_subselect_to_join": 0, "optimizer_empty_tables_limit": 0, "optimizer_hash_join_cost": 1.000000, "optimizer_merge_join_cost": 1.000000, "optimizer_nested_join_cost": 1.000000, "optimizer_num_partitions": 0, "quadratic_rewrite_size_limit": 200, "query_rewrite_loop_iterations": 1, "reshuffle_group_by_base_cost": 0, "sampling_estimates_for_complex_filters": 1, "singlebox_optimizer_cost_based_threshold": 18, "sql_mode": 4194304, "sql_select_limit": 1, "subquery_merge_with_outer_joins": 3, "cardinality_estimation_level": 3, "data_conversion_compatibility_level": 0, "debug_mode": 0, "default_partitions_per_leaf": 8, "disable_update_delete_distributed_transactions": 0, "enable_alias_space_trim": 0, "enable_spilling": 0, "explicit_defaults_for_timestamp": 1, "json_extract_string_collation": 1, "resource_pool_statement_selector_function": 4294967300, "use_avx2": 1, "use_joincolumnstore": 1, "resource_pool_is_auto": 0},
"optimizer_stats":[
{
"version": 2,
"databaseName": "test1",
"tables": [
{
"tableName": "t",
"rowCount": "4096",
"columns": [
{
"columnName": "id",
"nullCount": "0",
"minValue": "",
"maxValue": "",
"cardinality": "11",
"density": "0x0p+0",
"sampleSize": "0",
"lastUpdated": "1646247938"
},
{
"columnName": "a",
"nullCount": "0",
"minValue": "",
"maxValue": "",
"cardinality": "11",
"density": "0x0p+0",
"sampleSize": "0",
"lastUpdated": "1646247938"
},
{
"columnName": "b",
"nullCount": "0",
"minValue": "",
"maxValue": "",
"cardinality": "11",
"density": "0x0p+0",
"sampleSize": "0",
"lastUpdated": "1646247938"
}
]
}
],
"stats": [
{
"tableName": "t",
"columns": [
{
"columnName": "id"
},
{
"columnName": "a"
},
{
"columnName": "b"
}
]
}
]
}
]
}
}