Load Data from Amazon Web Services (AWS) S3
SingleStore Pipelines can extract objects from Amazon S3 buckets, optionally transform them, and insert them into a destination table.
Prerequisites
To complete this Quickstart, your environment must meet the following prerequisites:
-
AWS Account: This Quickstart uses Amazon S3 and requires an AWS account’s access key id and secret access key.
-
SingleStore installation –or– a SingleStore Helios workspace: You will connect to the database or workspace and create a pipeline to pull data from your Amazon S3 bucket.
Part 1: Creating an Amazon S3 Bucket and Adding a File
-
On your local machine, create a text file with the following CSV contents and name it books.txt:
The Catcher in the Rye, J.D. Salinger, 1945
Pride and Prejudice, Jane Austen, 1813
Of Mice and Men, John Steinbeck, 1937
Frankenstein, Mary Shelley, 1818
-
In S3, create a bucket and upload books.txt to the bucket.
For information on working with S3, refer to the Amazon S3 documentation. Note that the aws_access_key_id that your SingleStore pipeline will use (specified in the next section in CREATE PIPELINE library ... CREDENTIALS ...) must have read access to both the bucket and the file.
Once the books.txt file has been uploaded, you can proceed to the next part of the Quickstart.
Part 2: Generating AWS Credentials
To be able to use an S3 bucket within the pipeline syntax, the following minimum permissions are required:
-
s3:GetObject
-
s3:ListBucket
These permissions provide read-only access to an S3 bucket, which is the minimum required to ingest data into a pipeline.
There are two ways to create an IAM Policy: with the Visual editor or JSON.
Create an IAM Policy Using the Visual Editor
-
Log into the AWS Management Console.
-
Obtain the Amazon Resource Name (ARN) and region for the bucket.
The ARN and region are located in the Properties tab of the bucket. -
Select IAM from the list of services.
-
Select Policies under Access Management and select the Create policy button.
-
Using the Visual editor:
-
Select the Service link and select S3 from the list or manually enter S3 into the search block.
-
Select the S3 link from the available selections.
-
In the Action section, select the List and Read checkboxes.
-
Under Resources, select the bucket link and select the Add ARN link.
Enter the ARN and bucket name and select the Add button. -
Under Resources, select the object link and select the Add ARN link.
Enter the ARN and object name and select the Add button. If no objects are added under resources, the created policy has access to all objects in the bucket’s root path. -
Request conditions are optional.
Create an IAM Policy Using JSON
-
To use JSON for policy creation, copy the following code block into the AWS JSON tab.
Make sure to change the bucket name.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<bucket_name>",
                "arn:aws:s3:::<bucket_name>/*"
            ]
        }
    ]
} -
Select the Add tag button if needed and select Next: Review.
-
Enter a policy name; this is a required field.
The description field is optional. Select Create policy to finish.
Assign the IAM Policy to a New User
-
In the IAM services, select Users and select the Add users button.
-
Enter a name for the new user and select Next.
-
Select the Attach policies directly radio button.
Use the search box to find the policy or scroll through the list of available policies. -
Select the checkbox next to the policy to be applied to the user and select Next.
-
Select the Create user button to finish.
Create Access Keys for Pipeline Syntax
Access keys must be generated for the new user.
-
In the IAM services, select Users and select the user name.
-
Select the Security credentials tab.
-
In the access keys section, select the Create access key button.
-
Select the Third-party service radio button and select Next.
-
Although setting a description tag is optional, SingleStore recommends doing so, especially when multiple keys are needed.
Select the Create key button to continue. -
Either download a .csv file containing the access and secret key information or copy the credentials directly.
Select Done when finished. -
Following is the basic syntax for using an access key and a secret access key in a pipeline:
CREATE PIPELINE <pipeline_name> AS
LOAD DATA S3 's3://bucket_name/<file_name>'
CONFIG '{"region":"us-west-2"}'
CREDENTIALS '{"aws_access_key_id": "<access_key_id>", "aws_secret_access_key": "<access_secret_key>"}'
INTO TABLE <destination_table>
FIELDS TERMINATED BY ',';
If creating or starting S3 pipelines takes approximately 60 seconds or fails with a subprocess timeout when running outside AWS, or in environments where IMDS is blocked, reduce the value of the corresponding subprocess timeout engine variable (for example, to 1000 milliseconds) or explicitly provide CREDENTIALS to avoid delays.
Warning
If the key information is not downloaded or copied to a secure location before selecting Done, the secret key cannot be retrieved, and will need to be recreated.
Part 3: Creating a SingleStore Database and S3 Pipeline
Now that you have an S3 bucket that contains an object (file), you can use SingleStore or SingleStore Helios to create a new pipeline and ingest the data.
Create a new database and a table that adheres to the schema contained in the books.txt file.
CREATE DATABASE books;
CREATE TABLE classic_books (
    title VARCHAR(255),
    author VARCHAR(255),
    date VARCHAR(255)
);
These statements create a new database named books and a new table named classic_books, which has three columns: title, author, and date.
Now that the destination database and table have been created, you can create an S3 pipeline. To ingest the books.txt file you uploaded to your bucket, you will need the following information:
-
The name of the bucket, such as:
<bucket-name> -
The name of the bucket’s region, such as:
us-west-1 -
Your AWS account’s access keys, such as:
-
Access Key ID:
<aws_access_key_id> -
Secret Access Key:
<aws_secret_access_key>
-
Your AWS account's session token, such as:
-
Session Token:
your_session_token -
Note that the aws_session_token is required only if your credentials in the CREDENTIALS clause are temporary.
-
Using these identifiers and keys, execute the following statement, replacing the placeholder values with your own.
CREATE PIPELINE library
AS LOAD DATA S3 'my-bucket-name'
CONFIG '{"region": "us-west-1"}'
CREDENTIALS '{"aws_access_key_id": "your_access_key_id", "aws_secret_access_key": "your_secret_access_key", "aws_session_token": "your_session_token"}'
INTO TABLE `classic_books`
FIELDS TERMINATED BY ',';
You can see what files the pipeline wants to load by running the following:
SELECT * FROM information_schema.PIPELINES_FILES;
If everything is properly configured, you should see one row in the Unloaded state, corresponding to books.txt. The CREATE PIPELINE statement creates a new pipeline named library, but the pipeline has not yet been started, and no data has been loaded.
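Optionally, before starting the pipeline, you can preview the rows it would extract without writing them to the table. The following is a minimal sketch using the TEST PIPELINE statement; adjust the LIMIT as needed:
TEST PIPELINE library LIMIT 1;
To load the data, start the pipeline in the foreground: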
START PIPELINE library FOREGROUND;
When this command returns successfully, all files from your bucket will be loaded. If you query information_schema.PIPELINES_FILES again, you should see all files in the Loaded state. Query the classic_books table to make sure the data has actually loaded.
SELECT * FROM classic_books;
+------------------------+-----------------+-------+
| title | author | date |
+------------------------+-----------------+-------+
| The Catcher in the Rye | J.D. Salinger | 1945 |
| Pride and Prejudice | Jane Austen | 1813 |
| Of Mice and Men | John Steinbeck | 1937 |
| Frankenstein | Mary Shelley | 1818 |
+------------------------+-----------------+-------+
You can also have SingleStore run your pipeline in the background. First, reset the state of the pipeline and the table:
DELETE FROM classic_books;
ALTER PIPELINE library SET OFFSETS EARLIEST;
The first command deletes all rows from the target table. The second resets the pipeline's offsets so that it forgets it already loaded books.txt, allowing you to load the file again.
To start a pipeline in the background, run
START PIPELINE library;
This statement starts the pipeline in the background. To verify that the pipeline is running, run SHOW PIPELINES.
SHOW PIPELINES;
+----------------------+---------+
| Pipelines_in_books | State |
+----------------------+---------+
| library | Running |
+----------------------+---------+
At this point, the pipeline is running and the contents of the books.txt file should once again be present in the classic_books table.
Note
Foreground pipelines and background pipelines have different intended uses and behave differently.
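While a pipeline runs in the background, you can monitor its progress from SQL. The following is a minimal sketch that queries SingleStore's pipeline monitoring views; the exact columns available may vary by version:
SELECT * FROM information_schema.PIPELINES_BATCHES_SUMMARY;
SELECT * FROM information_schema.PIPELINES_ERRORS;
The first view summarizes recent batches for each pipeline, and the second lists any errors encountered while extracting or loading files.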
Use Cloud Workload Identity with S3 Pipelines
You can use Cloud Workload Identity instead of static credentials to load data via S3 pipelines.
Perform the following tasks to create an S3 pipeline that authenticates using the cloud workload identity:
-
Create an IAM role in your AWS account with the necessary privileges.
You can also use an existing IAM role. -
Update the IAM role's trust policy to allow the workspace's cloud workload identity to assume the role.
Specify the cloud workload identity ARN of the SingleStore workspace.
Alternatively, create a CloudFormation stack to configure the IAM roles.
-
Create an S3 pipeline.
In the pipeline configuration: -
Set creds_mode to eks_irsa in the CONFIG clause. -
Specify the IAM role to assume using role_arn in the CREDENTIALS clause. The specified role ARN must match the configured delegated entities for the SingleStore workspace. For example:
CREATE PIPELINE s3_pipeline AS
LOAD DATA S3 's3://bucket-name/path/'
CONFIG '{"region": "us-east-1", "creds_mode": "eks_irsa"}'
CREDENTIALS '{"role_arn": "arn:aws:iam::xxxxxxxx:role/singlestore-s3-pipeline"}'
INTO TABLE table_name
FIELDS TERMINATED BY ',';
-
Start the pipeline to ingest data.
START PIPELINE s3_pipeline;
If delegated entities are not configured, SingleStore pipelines that attempt to use IRSA with a role ARN not present in the delegated entities list fail at runtime.
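If the pipeline fails for this reason, the error is recorded in the pipeline error view. The following is a minimal sketch for inspecting it, assuming the s3_pipeline name from the example above:
SELECT PIPELINE_NAME, ERROR_MESSAGE
FROM information_schema.PIPELINES_ERRORS
WHERE PIPELINE_NAME = 's3_pipeline';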
Use CloudFormation Stack to Configure the IAM Role
Use this CloudFormation Stack template to define the IAM role.
The template includes the following parameters:
-
Name of the IAM role that the SingleStore workspace assumes to access the S3 buckets.
-
Comma-separated list of the SingleStore workspace's cloud workload identity ARNs that can assume this IAM role. Do not include a trailing comma at the end of the list.
-
Comma-separated list of S3 buckets that can be accessed by assuming this role.
Refer to Getting started with CloudFormation for more information.
Next Steps
See About SingleStore Pipelines to learn more about how pipelines work.
Last modified: January 30, 2026