Distributed Plancache

Note

This feature is an opt-in preview.

Opt-in previews allow you to evaluate and provide feedback on new and upcoming features prior to their general availability.

The Distributed Plancache (DPC) is a third layer of the plancache that supplements the in-memory plancache and the on-disk persistent plancache (PPC). The DPC allows cluster nodes to share compiled plans. Nodes can skip query optimization, code generation, and the LLVM compilation process if the plan has already been compiled on a different node in the cluster. This reduces CPU load and improves the first-time performance of queries that were compiled on other nodes.

The DPC improves performance in the following scenarios:

  • Fast scaling: The DPC triggers plan synchronization during a node's reprovisioning phase. During reprovisioning, recently used query plans are downloaded to nodes’ PPCs.

  • Clusters with multiple aggregator nodes: Aggregators periodically sync the most recently used plans from other aggregators.

Overview

When the DPC is enabled, plans are synchronized automatically between nodes. That is, child aggregators and leaf nodes automatically download plans from the DPC into their local PPC. Once a plan is downloaded, the node can use that plan and avoid query optimization, code generation, and plan compilation in many cases.

Similar to how the PPC functions, a plan downloaded from the DPC is usable except when:

  • A variable affecting the plan has changed on the node.

  • A table in the query has changed significantly (for example, the number of rows in the table has changed by a factor of two or more) since the plan was generated.

In addition to downloading plans from the DPC, nodes also upload plans to the DPC to make those plans available to other nodes. Similarly, nodes delete plans from the DPC to indicate to other nodes that those plans can be deleted.

DPC operations do not interfere with regular cluster operations. Although the DPC consumes some CPU, memory, and network resources, all of its operations run in the background and do not affect regular query execution.

The DPC operates on a best-effort basis and does not guarantee that plan compilation will not occur on a new node.

Remarks

  • The DPC requires that the cluster have Unlimited Storage enabled; it uses unlimited storage to store plancache files.

  • The DPC only stores plans generated in the MBC, LLVM, and INTERPRET_FIRST interpreter modes (see the example after this list).

  • Plan synchronization is not supported on the Master Aggregator.
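
For example, to confirm that plans generated on a node will be eligible for DPC storage, you can check the node's current interpreter mode. A minimal sketch, assuming the standard interpreter_mode engine variable:

SELECT @@interpreter_mode;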

Enable and Manage the Distributed Plancache

Enable the Distributed Plancache

The enable_distributed_plancache engine variable controls the DPC.
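
To enable the DPC, set the variable to ON. A minimal sketch, assuming enable_distributed_plancache can be set at runtime with SET GLOBAL:

SET GLOBAL enable_distributed_plancache = ON;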

Use the following command to verify the DPC is enabled.

SELECT @@enable_distributed_plancache;
 
+--------------------------------+
| @@enable_distributed_plancache |
+--------------------------------+
|                              1 |
+--------------------------------+

Manage the Distributed Plancache

When the DPC is enabled, nodes automatically download, upload, and delete plans from the DPC.

Nodes download plans from the DPC to their local PPC when the cluster scales, is rebalanced, or a new node is added to the cluster. Nodes delete plans from the DPC when those plans are explicitly deleted from their local PPC with DROP … FROM PLANCACHE.
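
For example, the following sketch drops a single plan from a node's local PPC, which also queues a delete of that plan from the DPC. It assumes plan IDs can be looked up in the information_schema.PLANCACHE view; the plan ID shown is an illustrative placeholder:

-- Find the ID of the plan to delete (illustrative lookup).
SELECT plan_id, query_text FROM information_schema.PLANCACHE;

-- Drop the plan locally; the DPC delete is queued in the background.
DROP 12345 FROM PLANCACHE;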

The download, upload, delete, and synchronization operations occur in the background and are managed by the DPC task queue.

Aggregator Synchronization

Child aggregators can be configured to automatically synchronize their local PPCs with the DPC. This process is called aggregator synchronization and is useful for clusters with multiple aggregators.

When enable_periodic_distributed_plancache_agg_sync is set to ON, each aggregator performs an aggregator synchronization operation at the interval specified by distributed_plancache_agg_sync_s, downloading the most recently used distributed_plancache_max_download_plans query plans from the DPC.
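
A minimal configuration sketch; the interval and plan-count values are illustrative assumptions, not recommended settings:

SET GLOBAL enable_periodic_distributed_plancache_agg_sync = ON;
-- Synchronize every 10 minutes (illustrative value).
SET GLOBAL distributed_plancache_agg_sync_s = 600;
-- Download at most 100 plans per synchronization (illustrative value).
SET GLOBAL distributed_plancache_max_download_plans = 100;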

Synchronous DPC Lookup

Aggregator nodes can be configured with enable_synchronous_dpc_lookup to look for plans in the DPC when a plan is not found in their local PPC. With this setting, query execution consults the DPC before performing local query optimization, code generation, and compilation. When a plan exists in the DPC, this lookup is typically faster than compiling the plan locally.
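
Because enable_synchronous_dpc_lookup is a session variable (see Engine Variables below), it can be enabled for the current session. A minimal sketch:

SET SESSION enable_synchronous_dpc_lookup = ON;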

Engine Variables

The following engine variables are used to manage the DPC:

  • distributed_plancache_worker_threads: Specifies the number of threads used to process tasks in the DPC task queue.

  • distributed_plancache_max_download_plans: Specifies the maximum number of plans downloaded in a DPC synchronization task.

  • distributed_plancache_agg_sync_s: Specifies the interval, in seconds, between aggregators' periodic DPC synchronizations.

  • enable_periodic_distributed_plancache_agg_sync: Specifies whether aggregators periodically synchronize their local PPC with the DPC.

  • enable_synchronous_dpc_lookup: A session variable that specifies whether a node looks for a plan in the DPC when the plan is not found in the node's local PPC.

Refer to List of Engine Variables for default values.
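
To inspect the current values of these variables on a node, you can use a standard pattern match (a sketch; enable_synchronous_dpc_lookup does not match the pattern and is queried directly):

SHOW VARIABLES LIKE '%distributed_plancache%';
SELECT @@enable_synchronous_dpc_lookup;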

Observe Distributed Plancache Statistics

Connect to a node and use the SHOW DISTRIBUTED_PLANCACHE STATUS command to observe DPC statistics on that node, as follows:

SHOW DISTRIBUTED_PLANCACHE STATUS;
+-------------------------------------------------------+-------+
| Stat                                                  | Value |
+-------------------------------------------------------+-------+
| Successful Downloads Since Startup                    | 128   |
| Skipped Downloads Since Startup                       | 110   |
| Failed Downloads Since Startup                        | 0     |
| Plans Uploaded Since Startup                          | 2     |
| Plans Deleted Since Startup                           | 0     |
| DB Synchronization Since Startup                      | 1     |
| Successful Downloads From Periodic Sync Since Startup | 12    |
| Skipped Downloads From Periodic Sync Since Startup    | 107   |
| Failed Downloads From Periodic Sync Since Startup     | 0     |
| Periodic Query Plan Syncs Since Startup               | 1     |
| Distributed Plancache Plans Used Since Startup        | 90    |
| Currently Queued Populate Download Tasks              | 0     |
| Currently Queued Download Tasks                       | 0     |
| Currently Queued Upload Tasks                         | 0     |
| Currently Queued Delete Tasks                         | 0     |
| Avg Plan Download Latency (ms)                        | 13    |
| Avg Plan Upload Latency (ms)                          | 38    |
| Avg Plan Delete Latency (ms)                          | 0     |
| Avg Duration For DB Plan Synchronization (ms)         | 1710  |
+-------------------------------------------------------+-------+

Use the following commands to view the number of tasks in the DPC task queue.

SHOW STATUS LIKE 'Queued_DPC_Uploads';
SHOW STATUS LIKE 'Queued_DPC_Downloads';
SHOW STATUS LIKE 'Queued_DPC_PopulateDownloads';
SHOW STATUS LIKE 'Queued_DPC_Deletes';

If the values of these metrics increase significantly and the number of compilations rises due to slow plan synchronization, SingleStore recommends increasing the size of the DPC worker thread pool (distributed_plancache_worker_threads).
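
For example, assuming distributed_plancache_worker_threads can be updated at runtime (the value 8 is illustrative, not a recommendation):

SET GLOBAL distributed_plancache_worker_threads = 8;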

