EXPLAIN
Returns detailed information on how a query is executed.
EXPLAIN displays query execution operations as a tree, where the operator at the top is the last one executed before the result is returned to the client, and the operators below it are executed before their parents. EXPLAIN plans can also be visualized via a graphical interface using SingleStore Visual Explain.
See the examples section below for descriptions of the most common types of information in EXPLAIN output.
Syntax
EXPLAIN [EXTENDED | JSON | JSON INTO OUTFILE file_name | DUPLICATE | MBC | MPL]
  { select_statement | insert_select_statement | update_statement | delete_statement }
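As the syntax shows, EXPLAIN accepts DML statements as well as SELECT. A brief sketch, using an illustrative table t(id, a, b):

```sql
-- Inspect the plan for an UPDATE and a DELETE (table t is hypothetical)
EXPLAIN UPDATE t SET a = a + 1 WHERE id < 10;
EXPLAIN DELETE FROM t WHERE b IS NULL;
```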
Remarks
-
EXPLAIN can be run with an arbitrary SELECT, INSERT…SELECT, UPDATE, or DELETE statement.
-
EXPLAIN displays the plan SingleStore would generate if the query were reoptimized and recompiled now. If the query has already been run and a plan has already been saved in the plancache, that plan will be reused, even though it may differ from the plan displayed by EXPLAIN. You may run DROP … FROM PLANCACHE to force the query to be reoptimized and recompiled. Note that running ANALYZE will automatically drop and recompile query plans if data statistics have changed significantly.
-
In most cases when the optimizer makes a cost-based decision, EXPLAIN output displays estimated statistics and costs.
-
If the enable_disk_plan_explain global variable is enabled, the explain tree information is also stored in the persisted plancache, in human-readable format. When enable_disk_plan_explain is enabled, the SHOW PLAN [JSON] plan_id command can also be used to display the explain information. Refer to the SHOW PLAN topic for details.
-- To display the explain information in text format
SHOW PLAN 3;
-- To display the explain information in JSON format
SHOW PLAN JSON 3;
-
The explain_expression_limit engine variable specifies the maximum number of characters EXPLAIN will return. Increase the value to get more text in the EXPLAIN results. Alternatively, EXPLAIN JSON shows the entire result.
-
The INTO OUTFILE file_name clause writes the explain information in JSON or text format to a file. It creates a new file at the specified location and writes the query plan details to the file. Enter the absolute path to the file as the file_name parameter. The output file can be a JSON or a text file.
-
This command must be run on the master aggregator or a child aggregator node (see Node Requirements for SingleStore Commands). Note that when running this command on reference tables you must connect to the master aggregator.
-
Refer to the Permission Matrix for the required permission.
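The plancache behavior noted in the remarks can be sketched as follows (the query and table are illustrative; see the DROP … FROM PLANCACHE topic for the exact supported forms):

```sql
-- EXPLAIN shows the plan that would be generated now...
EXPLAIN SELECT * FROM t WHERE a = 1;
-- ...but if this query has already run, the previously compiled plan may
-- still be used at execution time. Dropping plans forces reoptimization
-- and recompilation on the next run:
DROP ALL FROM PLANCACHE;
```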
Explain Modes
-
EXPLAIN outputs the explain information in a human-readable format.
-
EXPLAIN EXTENDED outputs more detailed information for the query plan, in addition to the information displayed by EXPLAIN.
-
EXPLAIN JSON outputs the explain information in JSON format, which may be more machine-readable.
-
EXPLAIN DUPLICATE returns the query text in an explicit manner. Commands like SELECT * are expanded to list all the actual columns, e.g. SELECT t.a, t.b. Additionally, some syntax is rewritten. For example, correlated subselects may be rewritten as joins so that they are more performant. This mode is useful for query tuning because it displays the query text before the execution plan is generated. The text generated by EXPLAIN DUPLICATE is fully functional: you can copy and paste it into a SQL editor and run it.
Additionally, the following EXPLAIN modes are provided for informational and debugging purposes only and may be changed at any time:
-
EXPLAIN MBC outputs the SingleStore Byte Code generated for the query.
-
EXPLAIN MPL outputs the SingleStore Programming Language code generated for the query.
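Because EXPLAIN DUPLICATE output is runnable SQL, it can be fed back through EXPLAIN to inspect the plan of the expanded text (table t and database test are illustrative, matching the example below):

```sql
EXPLAIN DUPLICATE SELECT * FROM t;
-- Paste the expanded text it returns back into EXPLAIN:
EXPLAIN SELECT `t`.`id` AS `id`, `t`.`a` AS `a`, `t`.`b` AS `b` FROM `test`.`t` as `t`;
```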
EXPLAIN
EXPLAIN SELECT * FROM t;
+-----------------------------------+
| EXPLAIN |
+-----------------------------------+
| Project [t.id, t.a, t.b] |
| Gather partitions:all |
| Project [t.id, t.a, t.b] |
| TableScan db1.t, PRIMARY KEY (id) |
+-----------------------------------+
EXPLAIN EXTENDED
EXPLAIN EXTENDED SELECT * FROM t;
+----------------------------------------------------------------------------------------------------------------+
| EXPLAIN |
+----------------------------------------------------------------------------------------------------------------+
| Project [t.id, t.a, t.b] |
| Gather partitions:all query:[SELECT `t`.`id` AS `id`, `t`.`a` AS `a`, `t`.`b` AS `b` FROM `db1_0`.`t` as `t` ] |
| Project [t.id, t.a, t.b] |
| TableScan db1.t, PRIMARY KEY (id) |
+----------------------------------------------------------------------------------------------------------------+
EXPLAIN JSON
EXPLAIN JSON SELECT * FROM t;
----------------------------------------------
{
"executor":"Project",
"out":{
"":"t.id",
"":"t.a",
"":"t.b"
},
"inputs":[
{
"executor":"Gather",
"partitions":"all",
"query":"SELECT `t`.`id` AS `id`, `t`.`a` AS `a`, `t`.`b` AS `b` FROM `db1_0`.`t` as `t` ",
"inputs":[
{
"executor":"Project",
"out":{
"":"t.id",
"":"t.a",
"":"t.b"
},
"subselects":[],
"inputs":[
{
"executor":"TableScan",
"db":"db1",
"table":"t",
"alias":"t",
"index":"PRIMARY KEY (id)",
"storage":"lf_skiplist",
"dir":"forward",
"inputs":[]
}
]
}
]
}
]
}
EXPLAIN JSON INTO OUTFILE
EXPLAIN JSON INTO OUTFILE '/tmp/t1.json' SELECT * FROM t1;
To view the contents of the t1.json file, run the following command at the shell prompt:
cat /tmp/t1.json
Output file:
{
"plan_warnings": {
},
"explain":[
{
"executor":"Gather",
"partitions":"all",
"query":"SELECT `t1`.`a` AS `a`, `t1`.`b` AS `b` FROM `db_0`.`t1` as `t1` \/*!90623 OPTION(NO_QUERY_REWRITE=1, INTERPRETER_MODE=INTERPRET_FIRST)*\/",
"alias":"remote_0",
"inputs":[
{
"executor":"Project",
"out":[
{
"alias":"",
"projection":"t1.a"
},
{
"alias":"",
"projection":"t1.b"
}
],
"subselects":[],
"inputs":[
{
"executor":"TableScan",
"db":"db",
"table":"t1",
"alias":"t1",
"index":"PRIMARY KEY ()",
"storage":"lf_skiplist",
"dir":"forward",
"table_type":"sharded_rowstore",
"inputs":[]
}
]
}
]
}
],
"version":"4",
"info":{
"memsql_version":"7.1.2",
"memsql_version_hash":"5ae405fc1fe7cf3b9fc8cab1115e8080e67f2476",
"num_online_leaves":"1",
"num_online_aggs":"1",
"context_database":"db"
},
"query_info":{
"query_text":"EXPLAIN JSON INTO OUTFILE '\/tmp\/t1.json' SELECT * FROM t1",
"text_explain":"Gather partitions:all alias:remote_0\nProject [t1.a, t1.b]\nTableScan db.t1 table_type:sharded_rowstore\n"
}
EXPLAIN DUPLICATE
EXPLAIN DUPLICATE SELECT * FROM t;
+-------------------------------------------------------------------------------+
| query |
+-------------------------------------------------------------------------------+
| SELECT `t`.`id` AS `id`,`t`.`a` AS `a`, `t`.`b` AS `b` FROM `test`.`t` as `t` |
+-------------------------------------------------------------------------------+
1 row in set (0.00 sec)
EXPLAIN MBC
EXPLAIN MBC SELECT * FROM t;
----------------------------------------------
EXPLAIN
# explain mbc select * from t
Data:
static: offset=0 size=125 align=1 data="USING `\?`::`\?` AGGREGATOR_ID = \?, CONNECTION_ID = \? SELECT `t`.`id` AS `id`, `t`.`a` AS `a`, `t`.`b` AS `b` FROM `t` as `t` \x00"
Function 0 <100000:CreateProjectState>:
Locals:
local rvToReturn: offset=0 size=8 align=8 type=int64_t
local rvToReceive: offset=16 size=16 align=16 type=uint128_t
param hiddenRvParam: offset=32 size=8 align=8 type={ state1: { RemoteElement: RemoteElement, a: { value: int32_t, opt: Opt, pack(4) }, b: { value: int32_t, opt: Opt, pack(4) }, : int64_t, id: int32_t } }*
| local state1Addr_1: offset=40 size=8 align=8 type={ RemoteElement: RemoteElement, a: { value: int32_t, opt: Opt, pack(4) }, b: { value: int32_t, opt: Opt, pack(4) }, : int64_t, id: int32_t }*
0x0000 Junk local=hiddenRvParam i32=32
0x000c Lea local=&state1Addr_1 local=hiddenRvParam i32=0
0x001c Call func=10 <row_0tmp1_internal_ctor> local=state1Addr_1
0x002c Junk local=&state1Addr_1 i32=8
0x0038 Literal8 local=&rvToReturn i64=1
0x0048 Return
Function 1 <100000:CreateMatchedRowContext>:
........
EXPLAIN MPL
EXPLAIN MPL SELECT * from t;
----------------------------------------------
# 82519a8824563b8dce2c4ea02571a1efddf408ee
# EXPLAIN MPL SELECT * from t
type context_t1 <
{
state1: struct2,
state2: struct3
}
type struct3 <
{
buffer: ProtocolRowBuffer
}
type exprs_t1 <
{ state1: row_0tmp1
}
type struct2 <
{
xact: Transaction*
}
type row_0tmp1 <
{
RemoteElement: RemoteElement,
a: struct1,
b: struct1,
: int64_t
}
type struct1 <
{
value: int32_t,
opt: Opt
pack(4)
}
type CachePerQuery1 <
{
}
declare 100000:networkCtor0: fn(`state0`: struct3*) void <
{
{
ProtocolRowBufferInitLocal(&((*(`state0`)).buffer))
return
}
}
declare 100000:CreateProjectState: ctor fn() exprs_t1 <
{
{ }
}
declare 100000:CreateMatchedRowContext: ctor fn() context_t1 <
{
{
BZero(cast<void*>(&(((*(this)).state1).xact)), 8)
call networkCtor0(&((*(this)).state2))
}
}
declare 100000:UncorrelatedSubselectParameterArrayPopulation: fn(`context0`: context_t1*, `parameterArray1`: ParameterArray*, `allocator2`: TempIncrementalAllocator*, `itemOrdinal4`: int32_t*, `synthInListArgCount5`: int32_t*, `synthInListEndOffset6`: int32_t*, `synthInListBeginOffset7`: int32_t*) void <
{
{
*(`itemOrdinal4`) < 1
return
}
}
declare 100000:RowToRowElement: fn(`row0`: MemSqlClientValue*, `rowElementPtr1`: void*) void <
{
Assert(0)
CheckErrorButNotKilledFlag()
return
}
declare 100000:remoteRowElementCtor: ctor fn() exprs_t1 <
{
{ }
}
declare 100000:remoteRowElementInit: fn(`state0`: exprs_t1*) void <
{
{ }
}
declare 100000:remoteRowElementCtorAndInit: fn(`elem0`: void*) void <
{
{
*(cast<exprs_t1*>(`elem0`)) < call remoteRowElementCtor()
call remoteRowElementInit(cast<exprs_t1*>(`elem0`))
return
}
}
declare 100000:remoteRowElementDtorInternal: noexcept dtor fn(`elem0`: exprs_t1*) void <
{
{ }
}
declare 100000:remoteRowElementDtor: noexcept fn(`elem0`: void*) void <
{
{
call remoteRowElementDtorInternal(cast<exprs_t1*>(`elem0`))
return
}
}
declare 100000:ScanTable0: fn(`exprs1`: exprs_t1*, `context2`: context_t1*, `xact3`: Transaction*, `foundRowParam4`: bool*) bool <
{
{
declare `affectedRows5`: uint64_t < 0
declare `networkTicks6`: uint64_t < 0
declare `scannedRows7`: uint64_t < 0
declare `allocator8`: TempIncrementalAllocator < CreateTempIncrementalAllocator(41)
{
declare `itemOrdinal11`: int32_t < 1
declare `outputAction12`: int32_t < 1
declare `rowOffset9`: uint64_t < 0
declare `rowLimit10`: uint64_t < 9223372036854775807
declare `synthInListEndOffset13`: int32_t < 0
declare `synthInListArgCount14`: int32_t < 0
declare `synthInListBeginOffset15`: int32_t < 1
declare `trc18`: TableRuntimeContext* < GetRuntimeContext(0)
declare `synthesizedParams19`: ParameterArray < ParameterArrayInit()
call UncorrelatedSubselectParameterArrayPopulation(&(`synthesizedParams19`), cast<TempIncrementalAllocator*>(&(`allocator8`)), &(`itemOrdinal11`), &(`synthInListArgCount14`), &(`synthInListEndOffset13`), &(`synthInListBeginOffset15`))
declare `inListPartitions20`: uint64_t* < []
declare `inListPartitionsRanges21`: uint64_t* < []
declare `inListPartitionsSize22`: uint64_t < 0
declare `paramAlloc23`: TempIncrementalAllocator < CreateTempIncrementalAllocator(41)
declare `dp16`: DeParametrizer < DeParametrizerFillIn(" /*!90621 OBJECT()*/ SELECT WITH(binary_serialization_internal=1) `t`.`a` AS `a`, `t`.`b` AS `b` FROM `t` as `t` /*!90623 OPTION(NO_QUERY_REWRITE=1, INTERPRETER_MODE=INTERPRET_FIRST)*/", &(`paramAlloc23`), 185, 9, &(`synthInListEndOffset13`), &(`synthInListArgCount14`), &(`synthInListBeginOffset15`))
DeParametrizerParamInit(&(`dp16`), 0, 1, 1, 2, 1)
DeParametrizerParamInit(&(`dp16`), 1, 1, 1, 4, 1)
DeParametrizerParamInit(&(`dp16`), 2, 1, 1, 6, 1)
DeParametrizerParamInit(&(`dp16`), 3, 1, 1, 8, 1)
DeParametrizerParamInit(&(`dp16`), 4, 1, 1, 12, 1)
DeParametrizerParamInit(&(`dp16`), 5, 1, 1, 11, 1)
DeParametrizerParamInit(&(`dp16`), 6, 1, 1, 13, 1)
DeParametrizerParamInit(&(`dp16`), 7, 1, 1, 7, 1)
DeParametrizerParamInit(&(`dp16`), 8, 1, 1, 10, 1)
declare `queryParallelism24`: int32_t < SetupQueryParallelism(0, &(`trc18`))
declare `npartitions27`: int32_t < 0
declare `part28`: RuntimePartitionBoundary** < 0
RuntimePartitionBoundarySetUp(cast<TempIncrementalAllocator*>(&(`allocator8`)), &(`trc18`), `inListPartitions20`, `inListPartitionsSize22`, &(`npartitions27`), &(`part28`))
DeParametrizerInListUpdate(cast<TempIncrementalAllocator*>(&(`allocator8`)), &(`dp16`), `inListPartitions20`, `inListPartitionsRanges21`, `inListPartitionsSize22`, 1)
{
declare `rowBuffer29`: ProtocolRowBuffer < ProtocolRowBufferInit()
ProtocolRowBufferInitLocal(&(`rowBuffer29`))
declare `simpleStreamingIterator31`: MbcSimpleStreamingIterator < SimpleStreamingIteratorInit(cast<TempIncrementalAllocator*>(&(`allocator8`)), &(`networkTicks6`), &(`scannedRows7`), &(`dp16`), &(`trc18`), &(`synthesizedParams19`), &(`queryParallelism24`), 24, remoteRowElementCtorAndInit, RowToRowElement, remoteRowElementDtor, 1, 1, 1, &(`npartitions27`), &(`part28`), 0)
QueryAllPartitionsWithoutRowElement(&(`simpleStreamingIterator31`), &(`outputAction12`), &(`rowOffset9`), &(`rowLimit10`), 0, &(`rowBuffer29`))
ProtocolRowBufferFlush(&(`rowBuffer29`))
}
}
FinalizeDistributedQueryStats(&(`networkTicks6`), &(`scannedRows7`))
return 0
}
}
declare 100000:plan: fn(`xact0`: Transaction*) bool <
{
{
defer CheckError()
{
declare `epoch1`: AutoReadEpochNoInline < AutoReadEpochNoInlineInit(GetThreadId())
declare `context2`: context_t1 < call CreateMatchedRowContext()
declare `exprs3`: exprs_t1 < call CreateProjectState()
((`context2`).state1).xact < `xact0`
CheckClientConnection()
SendMetadata()
{
declare `distributedXact4`: AutoShardedWriteQueryXact < AutoShardedWriteQueryXactInit(0, 0, 0, 1, 0)
AutoShardedWriteQueryXactPrepareToEat(&(`distributedXact4`), 0)
AutoShardedWriteQueryXactEatPartitions(&(`distributedXact4`))
{
declare `rowFoundPtrParam5`: bool < 0
if (call ScanTable0(`xact0`, &(`rowFoundPtrParam5`)))
{
}
}
}
FinalizeQuery()
{
CheckError()
}
return 1
}
}
}
declare 100000:Estimate: fn() uint64_t <
{
declare `context0`: context_t1 < call CreateMatchedRowContext()
declare `exprs1`: exprs_t1 < call CreateProjectState()
return 0
}
declare SetupCachePerQuery: noexcept fn(`cachePerQuery0`: CachePerQuery1*, `retryOnOOM1`: bool) bool <
try
{
return 1
}
catch return 0
declare main: noexcept fn() int64_t <
try
{
{
declare `cachePerQuery1`: CachePerQuery1 < []
CheckFault(call SetupCachePerQuery(&(`cachePerQuery1`), 0))
SetCachePerQuery(cast<void*>(&(`cachePerQuery1`)))
defer SetCachePerQuery(0)
declare `allocator2`: TempIncrementalAllocator < CreateTempIncrementalAllocator(20)
StoreTempAllocForGISInInterpVars(&(`allocator2`))
declare `xact0`: AutoQueryXact < InitAutoReadQueryXact()
declare `xact4`: Transaction* < GetQueryTransaction()
{
try
{
declare `ret3`: bool < call 100000:plan(`xact4`)
CheckError()
AutoQueryXactCommit(&(`xact0`))
}
fault
{
AutoQueryXactRollback(&(`xact0`))
}
}
return 3
}
}
catch return 5
Related Topics
-
PROFILE displays detailed execution statistics in addition to execution operations.
-
Query Plan Operations describes the components of a query plan.
Last modified: July 30, 2024