Detect and Address Slow Performance and High Memory Usage of Pipelines
Concepts
This topic requires an understanding of pipeline batches, which are explained in The Lifecycle of a Pipeline.
Detect Slow Performance and High Memory Usage
Run PROFILE PIPELINE
Run PROFILE PIPELINE to gather resource consumption metrics, such as starting and ending times, for operations that a batch processes.
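For example, assuming a pipeline named my_pipeline (an illustrative name, not one defined in this topic), a profiling run can look like the following; the command profiles a batch and reports the collected metrics:

PROFILE PIPELINE my_pipeline;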
Query the information_schema.PIPELINES_BATCHES_SUMMARY table

The following query returns diagnostics per database, per pipeline, and per batch.
SELECT DATABASE_NAME, PIPELINE_NAME, BATCH_ID, BATCH_STATE, BATCH_TIME
FROM information_schema.PIPELINES_BATCHES_SUMMARY
ORDER BY BATCH_ID;
- BATCH_STATE, if set to Queued, indicates that the batch cannot extract, shape, or load data until cluster resources are freed.
- A high BATCH_TIME value indicates that the batch took a long time to extract, shape, and load the data (combined).
For a description of all of the fields in the information_schema.PIPELINES_BATCHES_SUMMARY table, see the information_schema.PIPELINES_BATCHES_SUMMARY reference topic.
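One quick check, using the same table and columns as the query above, is to look for batches that are stuck in the Queued state; the query below is a minimal sketch of that check:

SELECT DATABASE_NAME, PIPELINE_NAME, BATCH_ID, BATCH_TIME
FROM information_schema.PIPELINES_BATCHES_SUMMARY
WHERE BATCH_STATE = 'Queued'
ORDER BY BATCH_TIME DESC;

A persistently non-empty result indicates batches that are waiting for cluster resources to be freed before they can extract, shape, or load data.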
Query the information_schema.MV_ACTIVITIES and information_schema.MV_QUERIES tables
As an alternative to querying the information_schema.PIPELINES_BATCHES_SUMMARY table, as detailed in the previous section, you can run the following query, which provides diagnostic information per pipeline, such as avg_cpu_time_ms, avg_elapsed_time_ms, avg_network_time_ms, and avg_memory_mb.
SELECT substr(replace(replace(MVAC.activity_name,'\n',''),'  ',' '),1,30) AS ACTIVITY_n,
  substr(replace(replace(query_text,'\n',''),'  ',' '),1,30) AS query_text,
  database_name AS databaseName,
  last_finished_timestamp AS last_run,
  (cpu_time_ms/(run_count+success_count+failure_count)) AS avg_cpu_time_ms,
  (cpu_wait_time_ms/(run_count+success_count+failure_count)) AS avg_cpu_wait_time_ms,
  (elapsed_time_ms/(run_count+success_count+failure_count)) AS avg_elapsed_time_ms,
  (lock_time_ms/(run_count+success_count+failure_count)) AS avg_lock_time_ms,
  (network_time_ms/(run_count+success_count+failure_count)) AS avg_network_time_ms,
  (disk_time_ms/(run_count+success_count+failure_count)) AS avg_disk_time_ms,
  (round((disk_b/1024/1024),2)/(run_count+success_count+failure_count)) AS avg_io_mb,
  (round((network_b/1024/1024),2)/(run_count+success_count+failure_count)) AS avg_network_mb,
  round((1000*(memory_bs/1024/1024)/(elapsed_time_ms)),2) AS avg_memory_mb,
  (memory_major_faults/(run_count+success_count+failure_count)) AS avg_major_faults,
  (run_count+success_count+failure_count) AS total_executions
FROM information_schema.mv_activities_cumulative MVAC
JOIN information_schema.mv_queries MVQ ON MVQ.activity_name = MVAC.activity_name
WHERE MVAC.activity_name LIKE '%RunPipeline%'
ORDER BY avg_elapsed_time_ms DESC LIMIT 10;
Address Slow Performance and High Memory Usage
The following items may help to increase pipeline performance and reduce memory usage.
- You can decrease the maximum number of partitions that a batch can utilize using ALTER PIPELINE SET MAX_PARTITIONS_PER_BATCH. Decreasing the maximum number of partitions will limit parallelism for large data sources. For a syntax sketch, see the examples after this list.
- You can increase the batch interval of an existing pipeline using ALTER PIPELINE SET BATCH_INTERVAL. If you increase the batch interval, data will be loaded from the data source less frequently. For a syntax sketch, see the examples after this list.
- You can break large files into smaller chunks to increase the ingestion speed. Large files get parsed on a single leaf, which increases the ingestion time.
- You can add more leaves to your cluster, and then rebalance the cluster, which will result in fewer partitions per leaf.
- Are you using a stored procedure with your pipeline? If so, how the stored procedure is implemented can have significant consequences on the memory usage and performance of your pipeline. For details, see Writing Efficient Stored Procedures for Pipelines.
- For low parallelism pipelines, such as single file S3 loads and single partition Kafka topics, you can use an aggregator pipeline by running the CREATE AGGREGATOR PIPELINE command (an aggregator pipeline is not required for single-partition Kafka topics). If you are using an existing non-aggregator pipeline, you cannot convert it directly to an aggregator pipeline. Instead, drop your existing pipeline and recreate it as an aggregator pipeline. A syntax sketch appears in the examples after this list.

  Caution: If a pipeline is dropped and recreated, the pipeline will start reading from the earliest offset in the data source once the pipeline is started. This may cause duplicate records to be inserted into the destination table(s).
- Have queries or stored procedures that you normally run changed recently? If so, check these queries or stored procedures for changes that could be contributing to slow performance and high memory usage.
- Do you have any uncommitted transactions? These are transactions for which you have run START TRANSACTION, but have not run COMMIT or ROLLBACK afterwards. For more information, see the Uncommitted Transactions section in ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction.
- If you are using the IGNORE or SKIP ALL ERRORS options in your CREATE PIPELINE ... LOAD DATA ... statement, that could be causing the pipeline to run slowly, as these options may write large numbers of warnings to the logs. For IGNORE and SKIP ALL ERRORS alternatives, see LOAD DATA.
- If possible, avoid using the ENCLOSED BY and OPTIONALLY ENCLOSED BY options in your CREATE PIPELINE ... LOAD DATA ... statement.
- If your pipeline uses a transform, and implements the transform using an interpreted language such as Python, consider using a compiled language to speed up performance.
- If your pipeline uses a transform, and it is slow, you may be able to use a CREATE PIPELINE ... SET ... WHERE statement instead of using a transform.
- Check if removing or moving files that were already processed by the pipeline could improve pipeline performance. To perform this check, follow these steps:
  1. Run:

     SELECT database_name, pipeline_name, batch_start_unix_timestamp, batch_time - max(batch_partition_time) AS batch_overhead
     FROM information_schema.PIPELINES_BATCHES
     GROUP BY database_name, pipeline_name, batch_id
     ORDER BY database_name, pipeline_name, batch_id;

  2. If batch_overhead is increasing as batch_start_unix_timestamp increases, try the following:

     a. Run SELECT * FROM information_schema.PIPELINES_FILES;

     b. For rows in the output that have the value LOADED in the FILE_STATE column, move or delete the corresponding files listed in the FILE_NAME column.
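The following statements sketch the syntax for the options referenced in the list above. The pipeline names, the partition limit, the interval value, the S3 bucket, and the credentials are illustrative placeholders rather than values taken from this topic; verify the exact syntax in the ALTER PIPELINE and CREATE PIPELINE reference topics before running them.

-- Cap the number of partitions a single batch can use, which limits parallelism.
ALTER PIPELINE my_pipeline SET MAX_PARTITIONS_PER_BATCH 8;

-- Load from the data source less frequently by raising the batch interval (in milliseconds).
ALTER PIPELINE my_pipeline SET BATCH_INTERVAL 10000;

-- Recreate a low-parallelism pipeline as an aggregator pipeline (note the Caution above about offsets).
CREATE AGGREGATOR PIPELINE my_agg_pipeline AS
  LOAD DATA S3 'my-bucket/my-prefix/'
  CONFIG '{"region": "us-east-1"}'
  CREDENTIALS '{"aws_access_key_id": "<access_key_id>", "aws_secret_access_key": "<secret_access_key>"}'
  INTO TABLE my_table;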
Clearing the information_schema.pipelines_files Table

You can clear the data in the information_schema.pipelines_files table.
- To drop a single file and its file record from the information_schema.pipelines_files table, use the ALTER PIPELINE ... DROP FILE command. For example:

  ALTER PIPELINE <your_pipeline> DROP FILE '<file_name>';
- To drop all of the files associated with a pipeline, you can drop and recreate the pipeline. This will clear all the file records associated with the pipeline. However, the metadata related to the pipeline will also be deleted. If you do need the metadata, copy it to a separate table before recreating the pipeline. To recreate the pipeline:

  a. Make a note of the pipeline settings to use when recreating the pipeline.

     SHOW CREATE PIPELINE <your_pipeline> EXTENDED;

  b. Stop the pipeline.

     STOP PIPELINE <your_pipeline>;

  c. Drop the pipeline.

     DROP PIPELINE <your_pipeline>;

  d. Recreate the pipeline using the settings obtained in step "a".

  e. Start the pipeline.

     START PIPELINE <your_pipeline>;
High Memory Usage for S3 Pipeline
When using an S3 pipeline, an increase in memory usage in the md_extractors_offsets table may occur over time. To enable garbage collection of this offsets metadata, add the ENABLE OFFSETS METADATA GC clause to the CREATE PIPELINE statement. For an existing pipeline, use the ALTER PIPELINE statement with the ENABLE OFFSETS METADATA GC clause.
See the S3 Pipeline Using Metadata Garbage Collection (GC) section in the CREATE PIPELINE or the ALTER PIPELINE topics.
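As an illustration, for a hypothetical existing pipeline named my_s3_pipeline, enabling metadata garbage collection might look like the statement below; confirm the exact clause placement in the ALTER PIPELINE topic:

-- Enable garbage collection of offsets metadata for an existing S3 pipeline.
ALTER PIPELINE my_s3_pipeline ENABLE OFFSETS METADATA GC;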
To check the memory usage, use the query below:

SELECT * FROM information_schema.INTERNAL_TABLE_STATISTICS
WHERE table_name LIKE 'md_extractors_offsets'
ORDER BY memory_use DESC;
+---------------+-----------------------+---------+-----------+------+------------+----------------+------+------------+-------------------+----------------+
| DATABASE_NAME | TABLE_NAME | ORDINAL | HOST | PORT | NODE_TYPE | PARTITION_TYPE | ROWS | MEMORY_USE | STORAGE_TYPE | ROWS_IN_MEMORY |
+---------------+-----------------------+---------+-----------+------+------------+----------------+------+------------+-------------------+----------------+
| ticket_test | md_extractors_offsets | NULL | 127.0.0.1 | 3306 | Aggregator | Reference | 2 | 524544 | INTERNAL_METADATA | 2 |
| ticket_test | md_extractors_offsets | NULL | 127.0.0.1 | 3307 | Leaf | Reference | 2 | 524544 | INTERNAL_METADATA | 2 |
| ticket_test | md_extractors_offsets | 1 | 127.0.0.1 | 3307 | Leaf | Master | 0 | 0 | INTERNAL_METADATA | 0 |
| ticket_test | md_extractors_offsets | 7 | 127.0.0.1 | 3307 | Leaf | Master | 0 | 0 | INTERNAL_METADATA | 0 |
| ticket_test | md_extractors_offsets | 6 | 127.0.0.1 | 3307 | Leaf | Master | 0 | 0 | INTERNAL_METADATA | 0 |
| ticket_test | md_extractors_offsets | 5 | 127.0.0.1 | 3307 | Leaf | Master | 0 | 0 | INTERNAL_METADATA | 0 |
| ticket_test | md_extractors_offsets | 4 | 127.0.0.1 | 3307 | Leaf | Master | 0 | 0 | INTERNAL_METADATA | 0 |
| ticket_test | md_extractors_offsets | 3 | 127.0.0.1 | 3307 | Leaf | Master | 0 | 0 | INTERNAL_METADATA | 0 |
| ticket_test | md_extractors_offsets | 2 | 127.0.0.1 | 3307 | Leaf | Master | 0 | 0 | INTERNAL_METADATA | 0 |
| ticket_test | md_extractors_offsets | 0 | 127.0.0.1 | 3307 | Leaf | Master | 0 | 0 | INTERNAL_METADATA | 0 |
+---------------+-----------------------+---------+-----------+------+------------+----------------+------+------------+-------------------+----------------+
Last modified: August 5, 2024