Warning

SingleStore 9.0 gives you the opportunity to preview, evaluate, and provide feedback on new and upcoming features prior to their general availability. In the interim, SingleStore 8.9 is recommended for production workloads; these deployments can later be upgraded to SingleStore 9.0.

Distributed Plancache

The Distributed Plancache (DPC) is a third layer of the plancache that supplements the in-memory plancache and the on-disk persistent plancache (PPC). The DPC allows cluster nodes to share compiled plans: a node can skip query optimization, code generation, and LLVM compilation if the plan has already been compiled on a different node in the cluster. This reduces CPU load and improves the first-time performance of queries that have been compiled on other nodes.

The DPC improves performance in the following scenarios:

  • Fast scaling: The DPC triggers plan synchronization during a node's reprovisioning phase. During reprovisioning, recently used query plans are downloaded to nodes’ PPCs.

  • Clusters with multiple aggregator nodes: Aggregators periodically sync the most recently used plans from other aggregators.

Overview

When the DPC is enabled, plans are synchronized automatically between nodes. That is, child aggregators and leaf nodes automatically download plans from the DPC into their local PPC. Once a plan is downloaded, the node can use that plan and avoid query optimization, code generation, and plan compilation in many cases.

Similar to how the PPC functions, a plan downloaded from the DPC is usable except when:

  • A variable affecting the plan has changed on the node.

  • A table in the query has changed significantly (for example, the number of rows in the table has changed by a factor of two or more) since the plan was generated.

In addition to downloading plans from the DPC, nodes also upload plans to the DPC to make those plans available to other nodes. Similarly, plans can be deleted from the DPC to indicate to other nodes that those plans can be deleted.

DPC operations do not interfere with regular cluster operations. While the DPC may consume CPU, memory, and network resources, all operations happen in the background and do not interfere with regular query execution.

The DPC operates on a best-effort basis and does not guarantee that plan compilation will not occur on a new node.

Remarks

  • The DPC requires that the cluster have Unlimited Storage enabled; the DPC uses unlimited storage to store plancache files.

  • The DPC only stores plans generated in the MBC, LLVM, and INTERPRET_FIRST interpreter modes.

  • Plan synchronization is not supported on the Master Aggregator.

Enable and Manage the Distributed Plancache

Enable the Distributed Plancache

The enable_distributed_plancache engine variable controls the DPC.

Set the enable_distributed_plancache global engine variable to ON to enable the DPC.

SET GLOBAL enable_distributed_plancache = ON;

Use the following command to verify the DPC is enabled.

SELECT @@enable_distributed_plancache;
 
+--------------------------------+
| @@enable_distributed_plancache |
+--------------------------------+
|                              1 |
+--------------------------------+

Manage the Distributed Plancache

When the DPC is enabled, nodes automatically download, upload, and delete plans from the DPC.

Nodes download plans from the DPC to their local PPC when the cluster is scaled or rebalanced, or when a new node is added to the cluster. Nodes delete plans from the DPC when those plans are explicitly deleted from their local PPC with DROP … FROM PLANCACHE.

The download, upload, delete, and synchronization operations occur in the background and are managed by the DPC task queue.

Aggregator Synchronization

Child aggregators can be configured to automatically synchronize their local PPCs with the DPC. This process is called aggregator synchronization and is useful for clusters with multiple aggregators.

To enable automatic aggregator synchronization, set enable_periodic_distributed_plancache_agg_sync to ON.

SET GLOBAL enable_periodic_distributed_plancache_agg_sync = ON;

When enable_periodic_distributed_plancache_agg_sync is set to ON, at the interval specified by distributed_plancache_agg_sync_s, each aggregator performs an aggregator synchronization operation and downloads the most recently used distributed_plancache_max_download_plans query plans from the DPC.
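For illustration, these settings might be combined as follows; the interval and plan-count values below are illustrative, not recommendations:

SET GLOBAL enable_periodic_distributed_plancache_agg_sync = ON;
-- Run an aggregator synchronization every 120 seconds (illustrative value).
SET GLOBAL distributed_plancache_agg_sync_s = 120;
-- Download up to 500 of the most recently used plans per synchronization (illustrative value).
SET GLOBAL distributed_plancache_max_download_plans = 500;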

Synchronous DPC Lookup

Aggregator nodes can be configured with enable_synchronous_dpc_lookup to look for plans in the DPC when a plan is not found in their local PPC. With this setting, query execution checks the DPC before performing local optimization, code generation, and compilation. When a plan exists in the DPC, this lookup is typically faster than compiling the plan locally.

To enable synchronous DPC lookup, set enable_synchronous_dpc_lookup to ON.

SET SESSION enable_synchronous_dpc_lookup = ON;

Manual Synchronization

Synchronizations of a node's local PPC with the DPC can be manually initiated for a specific database with the _SYNC_PLANS command.

Engine Variables

The following engine variables are used to manage the DPC:

  • distributed_plancache_worker_threads: Specifies the number of threads used to process tasks in the DPC task queue.

  • distributed_plancache_max_download_plans: Specifies the maximum number of plans downloaded in a DPC synchronization task.

  • distributed_plancache_agg_sync_s: Specifies the interval, in seconds, between aggregators' periodic DPC synchronizations.

  • enable_periodic_distributed_plancache_agg_sync: Specifies whether aggregators periodically synchronize their local PPC with the DPC.

  • enable_synchronous_dpc_lookup: A session variable that specifies whether nodes look for a plan in the DPC when the plan is not found in the node's local PPC.

Refer to List of Engine Variables for default values.
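To inspect the current values of these variables on a node, you can select them directly, following the same pattern used above to verify enable_distributed_plancache:

SELECT @@distributed_plancache_worker_threads,
       @@distributed_plancache_max_download_plans,
       @@distributed_plancache_agg_sync_s;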

Observe Distributed Plancache Statistics

Connect to a node and use the SHOW DISTRIBUTED_PLANCACHE STATUS command to observe statistics about the DPC on a specific node, as follows:

SHOW DISTRIBUTED_PLANCACHE STATUS;
+-------------------------------------------------------+-------+
| Stat                                                  | Value |
+-------------------------------------------------------+-------+
| Successful Downloads Since Startup                    | 128   |
| Skipped Downloads Since Startup                       | 110   |
| Failed Downloads Since Startup                        | 0     |
| Plans Uploaded Since Startup                          | 2     |
| Plans Deleted Since Startup                           | 0     |
| DB Synchronization Since Startup                      | 1     |
| Successful Downloads From Periodic Sync Since Startup | 12    |
| Skipped Downloads From Periodic Sync Since Startup    | 107   |
| Failed Downloads From Periodic Sync Since Startup     | 0     |
| Periodic Query Plan Syncs Since Startup               | 1     |
| Distributed Plancache Plans Used Since Startup        | 90    |
| Currently Queued Populate Download Tasks              | 0     |
| Currently Queued Download Tasks                       | 0     |
| Currently Queued Upload Tasks                         | 0     |
| Currently Queued Delete Tasks                         | 0     |
| Avg Plan Download Latency (ms)                        | 13    |
| Avg Plan Upload Latency (ms)                          | 38    |
| Avg Plan Delete Latency (ms)                          | 0     |
| Avg Duration For DB Plan Synchronization (ms)         | 1710  |
+-------------------------------------------------------+-------+

Use the following commands to view the number of tasks in the DPC task queue.

SHOW STATUS LIKE 'Queued_DPC_Uploads';
SHOW STATUS LIKE 'Queued_DPC_Downloads';
SHOW STATUS LIKE 'Queued_DPC_PopulateDownloads';
SHOW STATUS LIKE 'Queued_DPC_Deletes';

If the values of these metrics increase significantly and the number of compilations rises due to slow plan synchronization, SingleStore recommends increasing the size of the DPC worker thread pool (distributed_plancache_worker_threads).
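For example, to enlarge the worker thread pool (the value 8 is illustrative; choose a size appropriate for the node's available CPU):

SET GLOBAL distributed_plancache_worker_threads = 8;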

Clear the DPC Queue

To remove all pending tasks from the DPC task queue, run the following command.

CLEAR DISTRIBUTED_PLANCACHE QUEUE;

Synchronize the DPC Manually with _SYNC_PLANS

To synchronize a node's local PPC with the DPC for a specific database, run the following command.

_SYNC_PLANS <dbName>;

This command downloads up to distributed_plancache_max_download_plans plans from the DPC to the node's local PPC for the specified database.

The _SYNC_PLANS command first downloads one plan per table in the database. For databases with large numbers of tables, SingleStore recommends increasing the value of distributed_plancache_max_download_plans to a value greater than the number of tables in the database before running the command.

When this command is run, a synchronization task will be queued regardless of whether enable_distributed_plancache is set to ON or OFF. If the request is enqueued while enable_distributed_plancache is set to OFF, the task will run when enable_distributed_plancache is set to ON.

Examples

Example 1: Improve Performance after Scaling and Cold-Starts

The DPC can improve performance for scaling and cold starts, which is useful when scaling or adding aggregator nodes occurs regularly as a part of normal operations.

Run the following command to enable the DPC.

SET GLOBAL enable_distributed_plancache = ON;

With enable_distributed_plancache set to ON, when a database is scaled or aggregator nodes are added, plans for all databases relevant to the new node(s) are automatically synchronized.

This synchronization process downloads up to distributed_plancache_max_download_plans plans per database. For databases with a large number of tables, SingleStore recommends setting distributed_plancache_max_download_plans to a value greater than the number of tables in the database before scaling.
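For example, if the database contains roughly 300 tables (an illustrative figure), a value comfortably above that count could be set before scaling:

SET GLOBAL distributed_plancache_max_download_plans = 1000;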

Run the following queries to track the status of synchronization.

SHOW DISTRIBUTED_PLANCACHE STATUS;
SHOW STATUS LIKE 'Queued_DPC_PopulateDownloads';
SHOW STATUS LIKE 'Queued_DPC_Downloads';

When the values of both Queued_DPC_PopulateDownloads and Queued_DPC_Downloads are zero, all download tasks from the DPC have finished.

Example 2: Many Child Aggregators

The DPC can improve first-time query performance when there are many child aggregators and the query shapes are heterogeneous. To avoid aggregator nodes repeatedly compiling the same plan, run the following on all aggregator nodes:

SET GLOBAL enable_periodic_distributed_plancache_agg_sync = ON;
SET GLOBAL distributed_plancache_agg_sync_s = <sync_seconds>;

The value <sync_seconds> must be at least 30 seconds. For clusters with many aggregators, SingleStore recommends setting this value between 60 seconds and 10 minutes.

These settings ensure that aggregator nodes periodically download plans from the DPC that they do not have in their local PPC.

Another option for improving performance in clusters with many child aggregators is to turn on the synchronous DPC lookup using the following command.

SET SESSION enable_synchronous_dpc_lookup = ON;

When the enable_synchronous_dpc_lookup engine variable is set to ON and a node detects that a query plan is not in its local PPC, the node looks for a plan in the DPC before compiling the query plan.

Consider a query being run on many aggregators. The first aggregator to receive the query looks (and misses) in its local PPC and the DPC; the node then compiles the plan and uploads it to the DPC. When the query is run on other nodes, those nodes miss in their local PPC, but find the plan in the DPC and use that plan to avoid compilation.

The enable_synchronous_dpc_lookup engine variable is a session variable and enables this lookup only for the current session.

Example 3: Database with a Large Number of Query Plans

The DPC can improve performance when a specific database has a disproportionately large number of query plans compared to the other databases.

To download plans from the DPC to the PPC for a specific node, run the following commands on that node.

SET GLOBAL distributed_plancache_max_download_plans = <max_plans_to_download>;
_SYNC_PLANS <dbname>;

To download all query plans for the database, set the value <max_plans_to_download> to the number of tables in the database plus the number of plans in the database.

Last modified: June 25, 2025
