Troubleshoot Pipelines

Concepts

This topic requires an understanding of pipeline batches, which are explained in The Lifecycle of a Pipeline.

View pipeline errors

The Pipelines information schema tables, such as PIPELINES_ERRORS and PIPELINES_FILES, provide information about pipeline errors that have occurred. Some useful queries against these tables are provided in this section.

Query the information_schema.PIPELINES_ERRORS table

You can run the following query to show all errors that have occurred, per database, per pipeline, per batch, and per partition.

SELECT DATABASE_NAME, PIPELINE_NAME, BATCH_ID, PARTITION, BATCH_SOURCE_PARTITION_ID,
ERROR_KIND, ERROR_CODE, ERROR_MESSAGE, LOAD_DATA_LINE_NUMBER, LOAD_DATA_LINE
FROM information_schema.PIPELINES_ERRORS;

Query files that were skipped

The query in the previous section does not show files that were skipped because they had errors. To return the skipped files, per database and per pipeline (but not per batch or per partition), run the following query.

SELECT * FROM information_schema.PIPELINES_FILES WHERE FILE_STATE = 'Skipped';

If you need additional information, such as the database, the partition, the error that was generated, and the line of the file or object that caused the issue, run the following query.

SELECT pe.DATABASE_NAME, pe.PIPELINE_NAME, pe.BATCH_ID, pe.PARTITION,
pe.BATCH_SOURCE_PARTITION_ID, pe.ERROR_TYPE, pe.ERROR_KIND, pe.ERROR_CODE, pe.ERROR_MESSAGE,
pe.LOAD_DATA_LINE_NUMBER, pe.LOAD_DATA_LINE
FROM information_schema.PIPELINES_ERRORS pe
JOIN information_schema.PIPELINES_FILES pf
ON pe.BATCH_SOURCE_PARTITION_ID = pf.FILE_NAME
WHERE pf.FILE_STATE = 'Skipped';

Address specific errors

The following list describes errors that can occur when running a pipeline statement, such as CREATE PIPELINE, and errors that can occur while a pipeline is extracting, shaping, and loading data. Each entry pairs an error with its resolution.

Error: You get a syntax error when running CREATE PIPELINE.

Resolution: Both CREATE PIPELINE and LOAD DATA (which is part of the CREATE PIPELINE syntax) have many options. Verify that the options you include are specified in the correct order, as shown in the sketch below.
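The following is a minimal sketch of a CREATE PIPELINE ... LOAD DATA statement with commonly used clauses in a typical order. The pipeline, bucket, region, credentials, and table names are placeholders; see the CREATE PIPELINE reference for the complete option order.

CREATE PIPELINE my_pipeline AS
LOAD DATA S3 'my-bucket/path/'
CONFIG '{"region": "us-east-1"}'
CREDENTIALS '{"aws_access_key_id": "<access key>", "aws_secret_access_key": "<secret key>"}'
INTO TABLE my_table
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';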

Error: You receive error 1970: Subprocess timed out.

Resolution: The master aggregator likely cannot connect to the pipeline's data source. Check the connection parameters, such as CONFIG and CREDENTIALS, that specify how to connect to the data source. Also verify that the data source is reachable from the master aggregator.

Error: CREATE PIPELINE ... S3 returns an error that the bucket cannot be located.

Resolution: The bucket name is case-sensitive. Verify that the case of the bucket name specified in your CREATE PIPELINE ... S3 statement matches the case of the bucket name in S3.

Error: Error 1953: exited with failure result (8 : Exec format error) or No such file or directory.

Resolution: This error can occur when a pipeline attempts to run a transform. Check the following:

1. Verify that the first line of your transform contains a shebang, which specifies the interpreter (such as Python) to use to execute the script.

2. Verify that the interpreter (such as Python) is installed on all leaves.

3. If the transform was written on a Windows machine, verify that the newlines use \n rather than \r\n; Windows-style line endings prevent the shebang line from being interpreted correctly.

Error: CREATE PIPELINE ... WITH TRANSFORM fails with a libcurl error.

Resolution: An incorrect path to the transform was likely specified. To verify the path, run curl against it; if the path is correct, the curl command succeeds.

Error: java.lang.OutOfMemoryError: Java heap space.

Resolution: This error may occur when the default value (8 MB) of the java_pipeline_heap_size engine variable is exceeded. Increase the value of this variable to potentially correct this error.
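A sketch of the increase, assuming the variable accepts a value in megabytes (per the 8 MB default mentioned above) and can be changed at runtime with SET GLOBAL; the value 64 is only an example.

-- Raise the Java heap available to pipelines (value assumed to be in MB).
SET GLOBAL java_pipeline_heap_size = 64;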

Error: A parsing error occurs in your transform.

Resolution: To debug your transform, run EXTRACT PIPELINE ... INTO OUTFILE. This command saves a sample of the data extracted from the data source to a file. For debugging purposes, you can modify the file as needed and then send it to the transform. For more information, see EXTRACT PIPELINE ... INTO OUTFILE.
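For example, a sketch of the debugging flow, assuming a pipeline named my_pipeline; consult the EXTRACT PIPELINE ... INTO OUTFILE reference for the exact syntax and options.

-- Save a sample of the data extracted by the pipeline to a file for inspection.
EXTRACT PIPELINE my_pipeline INTO OUTFILE 'sample.txt';

You can then run the transform manually against the saved sample (or an edited copy of it) to reproduce and isolate the parsing error.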

Rename a table referenced by a pipeline

When you try to rename a table that is referenced by a pipeline, the following error results:

ERROR 1945 ER_CANNOT_DROP_REFERENCED_BY_PIPELINE: Cannot rename table because it is referenced by pipeline <pipeline_name>

The following sequence demonstrates how to rename a pipeline-referenced table:

  1. Save your pipeline settings:

    SHOW CREATE PIPELINE <pipeline_name> EXTENDED;
  2. Stop the pipeline:

    STOP PIPELINE <pipeline_name>;
  3. Drop the pipeline:

    DROP PIPELINE <pipeline_name>;
  4. Change the name of the table:

    ALTER TABLE <old_table_name> RENAME <new_table_name>;
  5. Recreate the pipeline with the settings obtained in step 1, changing the table name to the new table name.

  6. Start the pipeline:

    START PIPELINE <pipeline_name>;
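
For example, the following sketch applies this sequence to a hypothetical pipeline named orders_pipeline that loads into orders_old, renamed here to orders. The CREATE PIPELINE statement is a placeholder for the definition returned by SHOW CREATE PIPELINE ... EXTENDED in step 1.

SHOW CREATE PIPELINE orders_pipeline EXTENDED;   -- save this output

STOP PIPELINE orders_pipeline;

DROP PIPELINE orders_pipeline;

ALTER TABLE orders_old RENAME orders;

-- Recreate the pipeline from the saved definition, pointing INTO TABLE at the new table name.
-- The LOAD DATA clause below is a placeholder for your actual data source.
CREATE PIPELINE orders_pipeline AS
LOAD DATA FS '/data/orders/*.csv'
INTO TABLE orders
FIELDS TERMINATED BY ',';

START PIPELINE orders_pipeline;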

Pipeline errors that are handled automatically

Typical error handling scenario

In most situations, an error that occurs while a pipeline is running is handled in this way:

If an error occurs while a batch b is running, then b fails and b's transaction rolls back. Then b is retried at most pipelines_max_retries_per_batch_partition times. If all of the retries are unsuccessful and pipelines_stop_on_error is set to ON, the pipeline stops. Otherwise, the pipeline continues and processes a new batch nb, which processes the same files or objects that b attempted to process, excluding any files or objects that may have caused the error.
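
A sketch of how you might inspect and adjust these settings; the variable names are taken from the description above, and the SET GLOBAL statement assumes the variables are settable at runtime.

SHOW VARIABLES LIKE 'pipelines_max_retries_per_batch_partition';
SHOW VARIABLES LIKE 'pipelines_stop_on_error';

-- Example: keep the pipeline running past batches that exhaust their retries.
SET GLOBAL pipelines_stop_on_error = OFF;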

The following list describes events, which may or may not cause errors, and how each event is handled.

Event: The pipeline cannot access a file or object.

How the event is handled: The typical error handling scenario (mentioned earlier in this topic) applies. nb skips the file or object.

Event: The pipeline cannot read a file or object because it is corrupted.

How the event is handled: The typical error handling scenario (mentioned earlier in this topic) applies. nb skips the file or object. After fixing the issue with the corrupted file or object, you can have the pipeline reprocess it by running ALTER PIPELINE ... DROP FILE <filename>;. The pipeline will process the file or object during the next batch.
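For example, a sketch using hypothetical names (a pipeline my_pipeline and a skipped file orders_2024_01_01.csv):

-- Remove the file from the pipeline's metadata so it is reprocessed on the next batch.
ALTER PIPELINE my_pipeline DROP FILE 'orders_2024_01_01.csv';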

Event: A file or object is removed from the filesystem after the batch has started processing the file or object.

How the event is handled: The batch does not fail; the file or object is processed.

Event: A file is removed from the filesystem (or an object is removed from an object store) after the pipeline registers the file or object in information_schema.PIPELINES_FILES, but before the file or object is processed.

How the event is handled: The typical error handling scenario (mentioned earlier in this topic) applies. nb skips the file or object.

Event: The cluster restarts while the batch is being processed.

How the event is handled: The typical error handling scenario (mentioned earlier in this topic) applies. Once the cluster is online, b is retried.

Event: A leaf node is unavailable before the pipeline starts.

How the event is handled: This does not cause the pipeline to fail. The pipeline will not ingest any data to the unavailable leaf node.

Event: A leaf node fails while the pipeline is running.

How the event is handled: The batch fails. The batch is retried as described in the typical error handling scenario; that batch and all future batches no longer attempt to load data to the unavailable leaf node.

Event: An aggregator fails while the pipeline is running.

How the event is handled: The batch fails. When the aggregator is available, the batch is retried as described in the typical error handling scenario.

Event: The pipeline reaches the allocated storage space for errors.

How the event is handled: The pipeline pauses. To address the issue, either increase the value of the ingest_errors_max_disk_space_mb engine variable, or run CLEAR PIPELINE ERRORS; to free up the storage space used for errors. (Running this command removes all existing pipeline errors that are shown when running SHOW ERRORS;.)
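
A sketch of both remedies; the 2048 value is an arbitrary example, and the SET GLOBAL statement assumes the variable can be changed at runtime.

-- Allow more disk space (in MB) for recording pipeline errors.
SET GLOBAL ingest_errors_max_disk_space_mb = 2048;

-- Or discard the stored pipeline errors to free the space.
CLEAR PIPELINE ERRORS;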
