Code Generation

A large contributor to SingleStore's query execution performance is its system of code generation. In contrast to the traditional interpreter-based execution model used by many databases, SingleStore embeds an industrial compiler to produce highly efficient machine code, enabling low-level optimizations that are not possible when queries are executed via interpretation alone. By default, queries are interpreted first and then compiled asynchronously in the background for use in later executions. This keeps first-run latency low for long and complex queries, while still providing efficient compiled plans for subsequent runs.

As stated above, the first time SingleStore encounters a given query shape, it optimizes and compiles the query asynchronously for future invocations. This incurs a small overhead that depends on the complexity of the query rather than on the amount of data to be processed. The process of code generation involves extracting parameters from the query, then transforming the normalized query into a SingleStore-specific intermediate representation tailored to the system. Subsequent requests with the same shape can reuse this plan to run both quickly and consistently.

Code generation applies to all Data Manipulation Language (DML) queries. In addition, SingleStore generates code during CREATE TABLE and ALTER TABLE statements. These Data Definition Language (DDL) statements generate code in order to reduce the compilation time of subsequent DML queries against the table.
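For example, creating a table compiles table-specific code up front, so that the first DML query against the table compiles faster. The table and column names below are illustrative:

CREATE TABLE t (col INT, KEY (col)); -- DDL: generates table-specific code
SELECT * FROM t WHERE col = 1;       -- DML: compiles this query shape on first run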

The Interpreter Mode’s Effects on Code Generation

The interpreter_mode setting controls whether SingleStore interprets query shapes, compiles them, or does both.

This section explains how this mode operates when SingleStore encounters a new query shape.

interpreter_mode is set to compile or llvm

In this mode, SingleStore compiles a query's shape the first time it is encountered. As shown in the following example, the query SELECT * FROM t WHERE col = 1; takes longer to complete on the first run; this is the result of compile overhead on that first execution. Also note that while the WHERE clauses in the second and third queries use different constants, both queries have the same shape, so SingleStore reuses the cached plan to run the third query.

SELECT * FROM t WHERE col = 1;
Empty set (0.13 sec)
SELECT * FROM t WHERE col = 1;
Empty set (0.00 sec)
SELECT * FROM t WHERE col = 100000;
Empty set (0.00 sec)


Run the PROFILE command followed by the SHOW PROFILE JSON command to view detailed statistics on compilation time (displayed under compile_time_stats in the output).
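For example, assuming a table t as in the examples above, the compilation statistics for a query can be inspected as follows:

PROFILE SELECT * FROM t WHERE col = 1;
SHOW PROFILE JSON;

The JSON output includes a compile_time_stats section that breaks down where compilation time was spent.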

interpreter_mode is set to compile_lite

In this mode, a query's shape is compiled, without any optimizations, the first time the query shape is encountered. This results in faster compilation than when interpreter_mode is set to compile. However, because compile_lite applies no optimizations, queries execute more slowly than they do under compile.

interpreter_mode is set to compile_lite_first

In this mode, two parallel compilation threads are run: A foreground thread that compiles the query shape in compile_lite mode and a background thread that compiles the query shape in compile mode. When the background thread completes, the foreground thread switches to the background thread's compiled query plan. Generally, SingleStore recommends using compile_lite_first mode instead of compile_lite mode. An exception is when using compile_lite_first would consume too many CPU resources (as the same query shape is compiled twice).

interpreter_mode is set to interpret or mbc

In this mode, queries are interpreted and not compiled.

interpreter_mode is set to interpret_first

In this mode, SingleStore interprets and compiles a query shape in parallel, upon encountering the query shape for the first time. The query runs in interpreted mode until compilation of the query shape completes. In the following example, the first SELECT * FROM t WHERE col = 1; query runs faster than the first query in the previous example.

SELECT * FROM t WHERE col = 1;
Empty set (0.02 sec)
SELECT * FROM t WHERE col = 1;
Empty set (0.00 sec)
SELECT * FROM t WHERE col = 100000;
Empty set (0.00 sec)

As of MemSQL 6.8, interpreter_mode is set to interpret_first by default.

Setting the Interpreter Mode

interpreter_mode can be set:

Using an Engine Variable

The following example sets the interpreter_mode engine variable to compile for the current connection to a SingleStore node:

SET SESSION interpreter_mode = compile;
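To verify the current setting for the session, the engine variable can be read back with the standard @@ syntax:

SELECT @@interpreter_mode;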

Using a Query Option

To use the query option, add the OPTION (interpreter_mode = <interpreter_mode_setting>) clause to the end of a query. For example:

SELECT * FROM t WHERE col = 1 OPTION (interpreter_mode = compile);

The interpreter_mode option is non-operational when a query containing the option is run on a leaf node.

Understanding the Plancache

When a query is first run, it triggers code generation: the query is optimized and translated into lower-level machine language. On this first run of the query, the code generation for it is recorded in the logs. This lower-level version of the query, with the wildcard character @ standing in for parameters, is stored on disk in the plancache. You can see the plancache files in /var/lib/memsql/<node ID>/plancache/. Note the directories in this location, named for their IDs. In these ID directories, you will find the .mo, .mpl, and .mu machine-language files.

If the data has undergone significant changes, you might want the plancache to be updated with new statistics by running ANALYZE TABLE table_name;. This collects new statistics and invalidates previously cached plans on tables that have experienced significant change.

After code generation, the compiled query plans are saved for later use in a plancache. Each SingleStoreDB node (aggregator and leaf) has its own plans and plancache. Each user query (on the aggregator) is associated with a plan on that aggregator, and some number of plans (zero, one, or more) on the leaves.

There are two layers of this plancache: the in-memory plancache and the on-disk plancache (located in the @@global.plancachedir directory). Plans are cached in the in-memory plancache until they expire or until that SingleStoreDB node restarts. A plan expires after the interval of time configured by the plan_expiration_minutes global variable has elapsed (the default value of this variable is 720, or 12 hours).
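For example, to lengthen the in-memory expiration interval to 24 hours (the value shown is illustrative):

SET GLOBAL plan_expiration_minutes = 1440;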

When a plan expires from the in-memory plancache, it remains in the on-disk plancache and is loaded back into memory the next time the query is run. If the global variables enable_disk_plan_expiration and disk_plan_expiration_minutes are set, then a plan that has not been read from disk during the specified disk_plan_expiration_minutes limit will expire from the on-disk plancache and be removed from disk.
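For example, to expire on-disk plans that have not been read for a week (the values shown are illustrative):

SET GLOBAL enable_disk_plan_expiration = ON;
SET GLOBAL disk_plan_expiration_minutes = 10080;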

To purge all generated plans, both in-memory and on-disk, run the DROP ALL FROM PLANCACHE command. This command invalidates all previously compiled plans, and generates fresh query plans for any new queries.

To see DML query plans that have been cached in the in-memory plancache, run the SHOW PLANCACHE command or query against the information_schema.PLANCACHE table.
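For example, to inspect a few cached plans directly (the exact columns returned depend on your SingleStore version):

SELECT * FROM information_schema.PLANCACHE LIMIT 10;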

For more information on plan expiration and managing disk usage, see Managing Plancache Memory and Disk Usage.

Overriding Parameterization of Queries

Parameterization converts the constants in a query into parameters, so that queries with the same structure can reuse the same query plan. Some query shapes defeat the purpose of parameterization and can thereby cause large plancaches.

To override parameterization, use the NOPARAM built-in function.
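For example, wrapping a constant in NOPARAM keeps it as a literal in the compiled plan rather than converting it into a parameter:

SELECT * FROM t WHERE col = NOPARAM(1);

This compiles a plan specific to the constant 1; the same query with a different constant would compile its own plan.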

Codegen Architecture

Code generation targets a specialized programming language, the SingleStore (MemSQL) Plan Language (MPL). MPL is simple, tightly managed, and designed specifically for SingleStore rather than for general-purpose applications. SQL operator trees are converted directly to MPL abstract syntax trees, eliminating the need for expensive parsing and semantic resolution at query compilation time.

In addition to the simplicity of the language, MPL benefits from exposing as primitives many relational operations such as index seek and scalar operations such as BIGINT addition. These operators are compiled statically along with the server and can easily be combined to form a query plan. In this way, much of the work of compilation can be performed ahead of time.

SingleStore Bytecode

When compiling a query, SingleStore first flattens MPL into a compact format known as SingleStore (MemSQL) Bytecode (MBC). Plans in this form can easily be serialized, interpreted or transformed to LLVM bitcode for compilation to x86_64 machine code. Interpreting the bytecode directly nearly eliminates the first-time cost of running a query, but can hamper execution performance:

SELECT * FROM t WHERE j > 0; -- Interpreted
Empty set (0.05 sec)
SELECT * FROM t WHERE j > 0; -- Interpreted
Empty set (0.05 sec)

In this example, there is no additional latency on the first request because the query is interpreted.

SELECT * FROM t WHERE j > 0; -- With code generation
Empty set (0.08 sec)
SELECT * FROM t WHERE j > 0; -- Using the cached plan
Empty set (0.02 sec)

Here, the additional latency on the first request is the result of code generation. However, the subsequent request is more than twice as fast as the interpreted execution above, which illustrates the performance benefit of SingleStore code generation.

Last modified: August 14, 2023
