Pipeline Built-in Functions
Warning
SingleStore 9.0 lets you preview, evaluate, and provide feedback on new and upcoming features before their general availability. In the interim, SingleStore 8.9 is recommended for production workloads; it can later be upgraded to SingleStore 9.0.
SingleStore provides two built-in functions, pipeline_source_file() and pipeline_batch_id(), to help load data with pipelines. Use these functions in a pipeline's SET clause.
pipeline_source_file()
Pipelines persist the name of the data source file with the pipeline_source_file() built-in function. Use this function in the SET clause to set a table column to the name of the pipeline data source file.
For example, given the table definition CREATE TABLE b(isbn NUMERIC(13), title VARCHAR(50));, use the following statement to set the title column to the source file name while ingesting data from AWS S3.
CREATE PIPELINE books AS
LOAD DATA S3 's3://<bucket_name>/Books/'
CONFIG '{"region":"us-west-2"}'
CREDENTIALS '{"aws_access_key_id": "<access_key_id>",
"aws_secret_access_key": "<secret_access_key>"}'
SKIP DUPLICATE KEY ERRORS
INTO TABLE b(isbn)
SET title = pipeline_source_file();
For more information on using the pipeline_source_file() function to load data from AWS S3, refer to Load Data from Amazon Web Services (AWS) S3.
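Once loaded, the persisted file names can be used to audit ingestion. As an illustrative sketch (reusing table b and its title column from the example above, and assuming the pipeline has already ingested data):

```sql
-- Count how many rows were ingested from each source file.
SELECT title AS source_file, COUNT(*) AS row_count
FROM b
GROUP BY title
ORDER BY row_count DESC;
```

A query like this makes it easy to spot files that produced unexpectedly few or many rows.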
pipeline_batch_id()
Pipelines persist the ID of the batch used to load data with the pipeline_batch_id() built-in function. Use this function in the SET clause to set a table column to the ID of the batch used to load the data.
For example, given a table t with a b_id column, use this statement to load the batch ID into the b_id column:
CREATE PIPELINE p AS
LOAD DATA ...
INTO TABLE t(@b_id, column_2) ...
SET b_id = pipeline_batch_id();
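After loading, the stored batch ID can be joined against pipeline metadata to inspect the batch that produced each row. A hedged sketch, reusing table t and pipeline p from the example above and assuming the information_schema.PIPELINES_BATCHES view (exact column names may vary by SingleStore version):

```sql
-- Join each row's persisted batch ID to pipeline batch metadata.
-- Assumes information_schema.PIPELINES_BATCHES is available in this version.
SELECT t.b_id, pb.BATCH_STATE
FROM t
JOIN information_schema.PIPELINES_BATCHES pb
  ON pb.BATCH_ID = t.b_id
WHERE pb.PIPELINE_NAME = 'p';
```

This pattern helps trace rows back to the specific pipeline batch that loaded them, for example when investigating a partially failed load.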
Last modified: November 27, 2024