Iceberg Ingest
Note
This is a Preview feature.
Apache Iceberg is an open-source table format that helps simplify analytical data processing for large datasets in data lakes.
Remarks
The following are supported:
- Iceberg Version 1 tables and Iceberg Version 2 tables with data files in Parquet format.
- Iceberg tables stored in Amazon S3 with catalogs: GLUE, Snowflake, REST, JDBC, Hive, Hadoop, Polaris, and Unity.
Important
The global engine variable enable_iceberg_ingest must be set to ON to use Iceberg Ingest. This variable is OFF by default.
SET GLOBAL enable_iceberg_ingest = ON;
Syntax
CREATE [OR REPLACE] PIPELINE <pipeline_name> AS
LOAD DATA S3 '[<table_identifier>]'
CONFIG '{"catalog_type": "GLUE|SNOWFLAKE|REST|JDBC|HIVE|HADOOP",
<configuration_json>
[, "endpoint_url": "<s3_compatible_storage_endpoint>"]
[, "ingest_mode": "append|upsert|one_time"]
[, "catalog_name": "<your_catalog_name>" ]
[, "catalog.<property>" : "property_value" [, ...]]
[, "schema.change.stop": true|false ]
[, "iceberg_vended_credentials": true|false]}'
CREDENTIALS '<credentials_json>'
[REPLACE | MERGE] INTO TABLE <table_name>
<iceberg_subvalue_mapping>
FORMAT ICEBERG;

<iceberg_subvalue_mapping>:
({<singlestore_col_name> | @<variable_name>} <- <iceberg_subvalue_path> [, ... ])

<iceberg_subvalue_path>: {ident [::ident ...]}
All the data shaping options for Parquet pipelines are supported for Iceberg pipelines.
Schema inference for Iceberg pipelines is supported.
Table Identifier
- The <table_identifier> identifies the Iceberg table. The <table_identifier> is catalog-specific but is typically in the form: database_name.table_name.
CONFIG
Catalog Specification
- The catalog_type is required for the catalog specification.
- The catalog_name is a name to associate with the catalog when reading table metadata and is used internally in SingleStore for logging and metrics purposes. The catalog_name is required for the JDBC catalog and is optional for other catalogs.
- The catalog.<property> is a list of key-value pairs for configuring the catalog connection. The property and value are passed directly to the Iceberg SDK to establish the catalog connection.
Refer to Iceberg Catalog and Security Configuration for more information.
S3 Specification
- The <configuration_json> is a JSON string for S3 configuration parameters such as region, endpoint_url, and compatibility_mode. Refer to CREATE PIPELINE and CREATE LINK for more information.
S3 Compatible Backend Storage
SingleStore supports Amazon S3 compatible backend storage for all catalog types used with Iceberg tables. The endpoint_url is the endpoint of an Amazon S3 compatible storage backend, such as MinIO or IBM Storage Ceph. The endpoint_url is defined in the S3 Specification.
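The following is a minimal sketch of a pipeline that reads from an S3-compatible endpoint through a Hadoop catalog; the endpoint URL, warehouse path, table identifier, and column mapping are placeholder assumptions, not values from the examples in this topic.
CREATE PIPELINE minio_books_pipe AS
LOAD DATA S3 'db.books'
CONFIG '{"region": "us-east-1",
"catalog_type": "HADOOP",
"catalog.warehouse": "s3://warehouse-bucket/path",
"endpoint_url": "https://minio.example.com:9000"}'
CREDENTIALS '{"aws_access_key_id": "<access_key_id>",
"aws_secret_access_key": "<secret_access_key>"}'
INTO TABLE books(
Id <- id,
Name <- name)
FORMAT ICEBERG;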
Ingest Modes
SingleStore supports multiple mechanisms for ingesting Iceberg data. SingleStore recommends MERGE pipelines as the preferred method for ingesting Iceberg data. When MERGE pipelines do not meet performance or functional requirements, three alternative ingest modes are available using ingest_mode: one_time, append, and upsert.
Important
MERGE pipelines cannot be used with ingest_mode options. Do not use MERGE and an ingest_mode option in the same pipeline.
Schema Change Stop
Specify the "schema.change.stop":true option in the pipeline definition to enable schema change detection for Iceberg tables.
With "schema.change.stop":true, when the engine detects a schema change, it pauses the ingestion and throws an error. The error appears in the ERROR_MESSAGE column of the information_schema.PIPELINES_ERRORS view and includes a diff in JSON format that contains information about how the schema has evolved.
Refer to Schema Change Detection for an example of using "schema.change.stop" and resuming ingestion after a schema change was detected.
When the "schema.change.stop":false option is selected (the default), ingestion continues as long as the source data contains all columns included in the pipeline declaration and the values are convertible to destination columns according to the setting of the data_conversion_compatibility_level engine variable.
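The following sketch shows where "schema.change.stop" fits in a pipeline definition; the pipeline name, table identifier, and column mapping are placeholders for illustration.
-- "schema.change.stop": true pauses ingestion and raises an error when the source schema changes.
CREATE PIPELINE books_schema_stop_pipe AS
LOAD DATA S3 'db.books'
CONFIG '{"region": "us-west-2",
"catalog_type": "GLUE",
"schema.change.stop": true}'
CREDENTIALS '{"aws_access_key_id": "<access_key_id>",
"aws_secret_access_key": "<secret_access_key>"}'
INTO TABLE books(
Id <- id,
Name <- name)
FORMAT ICEBERG;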
Vended Credentials (iceberg_vended_credentials)
SingleStore supports vended credentials for Iceberg REST catalogs with S3 storage.
AWS EKS IRSA (eks_irsa)
SingleStore supports AWS Elastic Kubernetes Service (EKS) IAM vended credentials for Iceberg REST catalogs with S3 storage.
CREDENTIALS (<credentials_json>)
- The <credentials_json> specifies S3 credentials in JSON format. With this specification, credentials are directly included in the pipeline. While including credentials in the pipeline is simpler, it is less secure than using vended credentials or EKS IRSA. For information about the supported credential options, refer to CREATE PIPELINE.
- When using Vended Credentials or EKS IRSA, the CREDENTIALS clause is optional (see the sketch below); however, the clause may be included to add additional authentication parameters.
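For example, a REST-catalog pipeline that relies on vended credentials can pass an empty CREDENTIALS object; this is a sketch with placeholder catalog values, and it assumes the catalog is configured to vend S3 credentials.
-- The REST catalog vends temporary S3 credentials, so CREDENTIALS can be an empty JSON object.
CREATE PIPELINE books_vended_pipe AS
LOAD DATA S3 'db.books'
CONFIG '{"region": "us-west-2",
"catalog_type": "REST",
"catalog.uri": "https://<catalog_host>/api/catalog",
"iceberg_vended_credentials": true}'
CREDENTIALS '{}'
INTO TABLE books(
Id <- id,
Name <- name)
FORMAT ICEBERG;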
REPLACE INTO
REPLACE INTO is required for pipelines that use the upsert ingest_mode. SingleStore recommends that REPLACE INTO not be used with other types of pipelines as doing so causes unnecessary overhead.
When you execute a REPLACE INTO statement on a table with a PRIMARY KEY or UNIQUE index, the engine checks for matching values in these indexes. If the table has no PRIMARY KEY or UNIQUE index, REPLACE INTO operates like INSERT.
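The difference can be seen with a small standalone example; the inventory table and its values are hypothetical and are not part of the Iceberg examples in this topic.
CREATE TABLE inventory(sku INT PRIMARY KEY, qty INT);
INSERT INTO inventory VALUES (1, 10);
-- sku 1 already exists: the matching row is replaced with the new values.
REPLACE INTO inventory VALUES (1, 25);
-- sku 2 does not exist: REPLACE INTO behaves like INSERT.
REPLACE INTO inventory VALUES (2, 5);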
MERGE INTO
The MERGE INTO clause creates a pipeline, called a MERGE pipeline, that continuously ingests data from an Iceberg table into a SingleStore table. A MERGE pipeline continuously detects inserts, updates, and deletes to and from the Iceberg table and applies those changes to the SingleStore table.
SingleStore recommends MERGE pipelines as the preferred method for ingesting Iceberg data. Refer to Compare Types of Continuous Ingest for more information on MERGE and a comparison of MERGE and ingest_mode.
Subvalue Mappings
The iceberg_subvalue_mapping assigns fields from the Iceberg table to columns in the SingleStore table or to temporary variables. A ::-separated list of field names is used in the iceberg_subvalue_path to look up fields in nested schemas, in the same way that a ::-separated list of field names is used to look up fields in nested Parquet schemas.
- The last field in an iceberg_subvalue_path must be a primitive type.
- All iceberg_subvalue_path components containing whitespace or punctuation must be surrounded by backticks (`), as illustrated in the sketch after this list.
- The iceberg_subvalue_path may not contain Parquet nested types (list or map types). Refer to Parquet - Nested Types for more information.
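The following is a minimal sketch of a subvalue mapping that reaches into a nested struct; the table, schema, and field names (including one containing a space, which requires backticks) are assumptions for illustration only.
-- Assumes an Iceberg schema of the form: id int, address struct<street string, `zip code` string>
CREATE PIPELINE customers_pipe AS
LOAD DATA S3 'db.customers'
CONFIG '{"region": "us-west-2",
"catalog_type": "GLUE"}'
CREDENTIALS '{"aws_access_key_id": "<access_key_id>",
"aws_secret_access_key": "<secret_access_key>"}'
INTO TABLE customers(
Id <- id,
Street <- address::street,
ZipCode <- address::`zip code`)  -- backticks are required because the path component contains whitespace
FORMAT ICEBERG;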
Iceberg Catalog and Security Configuration
Catalog Types
SingleStore supports the following Iceberg Catalogs: Glue, Snowflake, REST, JDBC, Hive, and Hadoop.
Vended Credentials
SingleStore supports "vended credentials" for Iceberg REST catalogs with S3 storage.
Catalogs have catalog-specific settings for vended credentials. Set the catalog-specific settings with "catalog.<property>" entries in the CONFIG section. Enable vended credentials by setting "iceberg_vended_credentials":true in the CONFIG.
AWS Elastic Kubernetes Service (EKS) IAM Roles for Service Accounts (IRSA)
AWS EKS IRSA can be used to provide credentials for an S3 pipeline.
- The SingleStore server must be deployed on an AWS EKS cluster using Helios BYOC.
- EKS IRSA must be enabled by setting the enable_eks_irsa engine variable to ON:
SET GLOBAL enable_eks_irsa = ON;
- Include the "creds_mode": "eks_irsa" option in the pipeline CONFIG.
SingleStore supports two scenarios for access with IAM roles and EKS IRSA:
- The role assigned to a pod has direct access to S3 resources. Include the "creds_mode": "eks_irsa" option in the pipeline CONFIG.
CREATE PIPELINE <pipeline_name> AS
LOAD DATA S3 's3://bucket_name/directory'
CONFIG '{"region":"us-west-2", "creds_mode": "eks_irsa"}'
CREDENTIALS '{}'
INTO TABLE <destination_table>
<iceberg_subvalue_mapping>
FORMAT ICEBERG;
- The role assigned to a pod does not have direct access to the S3 resources, but it has permissions to assume a role that has the desired access. Provide the role to be assumed with role_arn in the pipeline CREDENTIALS.
CREATE PIPELINE <pipeline_name> AS
LOAD DATA S3 's3://bucket_name/directory'
CONFIG '{"region":"us-west-2", "creds_mode": "eks_irsa"}'
CREDENTIALS '{"role_arn": "<role_arn_to_assume>"}'
INTO TABLE <destination_table>
<iceberg_subvalue_mapping>
FORMAT ICEBERG;
Refer to the Minimum Required S3 Pipeline Syntax and AWS Elastic Kubernetes Service (EKS) IAM Roles for Service Accounts (IRSA) Authentication examples in CREATE PIPELINE.
AWS Secrets Manager Secrets
Secrets in the AWS Secrets Manager can be used in the pipeline CONFIG or CREDENTIALS.
To place a secret part of a pipeline CONFIG or CREDENTIALS into the AWS Secrets Manager:
- Identify the part of the configuration that you want to hide. Use the secrets feature only for credentials; do not store the entire configuration in the Secrets Manager, as this may cause problems with the pipeline.
- Add the secret part as a secret in AWS Secrets Manager. Because AWS Secrets Manager stores secrets as JSON, a secret may contain multiple key-value pairs. For example, you might create a secret that contains values for "catalog.credential" and "catalog.another_credential".
- Ensure the pipeline's AWS credentials allow accessing the AWS Secrets Manager. Those credentials could be either static credentials, or a service role provided to the pod by EKS IRSA.
- Remove the secret part or parts from the pipeline CONFIG and replace them with "ref": "arn_of_your_secret".
{"catalog_type":"REST",
"region":"us-east-1",
"catalog.uri":"https://whatever.snowflakecomputing.com/polaris/api/catalog",
"ref": "arn:aws:secretsmanager:us-east-1:188440000000:secret:test-secret-VkYPHc",
...}
Every "ref" property in the pipeline CONFIG or CREDENTIALS is replaced by the values contained in the secret mentioned in the property's value. The values from the secret are inserted in place of the "ref" they replace.
The AWS Secrets Manager can be accessed with a role that must be assumed before accessing the secret. Specify that role either in the CREDENTIALS using a secretsmanager_-prefixed credential, or in ref by using a pair of role ARN and secret ARN separated by a pipe (|); for example: "ref": "<role_arn>|<secret_arn>".
When AWS credentials are used only to access the Secrets Manager, and not to access data on S3 (for example, when vended credentials are used to access S3), those credentials must be prefixed with the string "secretsmanager_".
Types of Ingest
Compare Types of Continuous Ingest
SingleStore supports several types of continuous ingest: MERGE pipelines, Append-Only, and Upsert.
These types of ingest have different characteristics; SingleStore recommends MERGE pipelines as the preferred method for ingesting Iceberg data.
- MERGE can process Append, Overwrite, Replace, and Delete (positional and equality) Iceberg snapshots; Append-Only and Upsert using ingest_mode have limitations on the types of Iceberg snapshots processed.
- The MERGE pipeline has the broadest coverage of what can be ingested as incremental updates; merge pipelines are therefore inherently more complex.
- MERGE requires adding three columns ($_file, $_row, $_delete) in the destination (SingleStore) table. These three extra columns have the following impacts:
  - MERGE is slightly slower on initial ingest compared to Append-Only mode.
  - MERGE takes slightly more space in storage.
  - The MERGE pipeline statement is more complex, but that can be mitigated with INFER.
- When using MERGE, users may not directly update the destination (SingleStore) table. If the destination (SingleStore) table is updated, updates made to the source Iceberg table may not propagate properly.
- When using MERGE, the Replace type of snapshot is not a no-op. Data files will be re-ingested if the source table went through compaction.
- MERGE implements merge-on-read (MOR).
- MERGE requires an extra condition for SELECT statements to manage MOR-style deletes:
  - An extra entity (VIEW) is created by INFER by default.
  - This impacts scan performance.
- ingest_mode updates the SingleStore table directly and does not have a read penalty.
Merge Pipelines (MERGE)
The MERGE clause creates a merge pipeline that continuously ingests data from an Iceberg table into a SingleStore table.
For a merge pipeline, the SingleStore (destination) table:
- Must have the following three columns:
  - `$_file` VARCHAR(2048) COMMENT 'ICEBERG_FILE_PATH'
  - `$_row` BIGINT COMMENT 'ICEBERG_FILE_POS'
  - `$_delete` JSON DEFAULT '{}' COMMENT 'ICEBERG_DELETED_BY'
  You may use different column names for the `$_file`, `$_row`, and `$_delete` columns as long as the columns are marked with the COMMENT clauses shown above.
- Must have a shard key defined as: SHARD KEY(`$_file`, `$_row`)
- Must not be updated or modified. If the destination table is modified, updates may be applied incorrectly.
The pipeline declaration must set the `$_file` and `$_row` columns as follows:
SET `$_file` = pipeline_source_file(), `$_row` = pipeline_source_file_row()
To ingest deletes, a merge pipeline must be created using a VIEW over the destination table. The view must filter deleted rows with the following WHERE clause:
WHERE JSON_LENGTH(`$_delete`) = 0
Merge pipelines perform merge-on-read on-demand during table scan when a view is used.
MERGE Pipeline provides a merge pipeline example.
Append-Only (ingest_mode)
Append-only mode is used for scenarios in which new rows are added to the Iceberg table, but no rows are deleted or modified.
In append-only mode ("ingest_mode": "append"), the pipeline will process APPEND Iceberg snapshots. If a DELETE or OVERWRITE snapshot is encountered, an error is raised. Users may override these errors by specifying "ignore_unsupported_modifications":true in the pipeline configuration. SingleStore does not recommend setting "ignore_unsupported_modifications":true as doing so may lead to data inconsistency.
Upsert (ingest_mode)
Upsert mode is used for scenarios where updates modify non-key columns based on a specified key column(s).
In upsert mode ("ingest_mode": "upsert"), the pipeline will process updates to the Iceberg table as upserts to the SingleStore table. The pipeline processes APPEND and OVERWRITE snapshots of the Iceberg table as upserts to the SingleStore table.
Requirements:
- The SingleStore table must have a primary key or a unique index. In the <iceberg_subvalue_mapping>, a column(s) in the Iceberg table must be mapped to the column(s) in SingleStore on which there is a key or unique index. Typically, the column(s) in the Iceberg table will also be declared as a key.
- The pipeline must be created using REPLACE. Refer to Additional CREATE PIPELINE Examples for more information.
Limitations:
- Pipelines will fail on delete snapshots. Users may override these errors by specifying "ignore_unsupported_modifications":true in the pipeline configuration. SingleStore does not recommend setting "ignore_unsupported_modifications":true as doing so may lead to data inconsistency.
- Limitations are expected to be addressed in future releases.
One-Time (ingest_mode)
A one-time ingest ("ingest_mode": "one_time") is supported.
CREATE OR REPLACE
When a pipeline with a specified pipeline_name already exists, the CREATE OR REPLACE command functions similarly to CREATE PIPELINE, with the added benefit of preserving existing pipeline metadata, such as loaded offsets and data files. Running CREATE OR REPLACE on an existing pipeline initiates the Iceberg pipeline to retrieve a new snapshot, schema, and data files, and inserts data from these new files into the destination table in SingleStore.
Executing CREATE OR REPLACE on an existing Iceberg pipeline may cause some data files to be ingested twice. Therefore, SingleStore recommends using CREATE OR REPLACE only with REPLACE statements or in an upsert configuration.
CREATE PIPELINE books_create_pipe AS
LOAD DATA S3 'db.books'
CONFIG '{"region":"us-west-2",
"catalog_type": "GLUE",
"catalog_name": "s3_catalog",
"ingest_mode": "one_time"}'
CREDENTIALS '{"aws_access_key_id": "<your_access_key_id>",
"aws_secret_access_key": "<your_secret_access_key>"}'
REPLACE INTO TABLE books(
Id <- id,
Name <- name,
NumPages <- numPages,
Rating <- rating)
FORMAT ICEBERG;
Refer to CREATE PIPELINE for syntax for CREATE OR REPLACE PIPELINE, REPLACE INTO TABLE, and ON DUPLICATE KEY UPDATE.
Iceberg Pipeline Configuration
Data Type Translation
The following table lists Iceberg Types, the SingleStore data type that can be used to store those types, and the recommended conversion to be applied with a SET clause.
| Iceberg Type | Recommended SingleStore Data Type | Recommended Conversion |
|---|---|---|
| boolean | | |
| int | | |
| long | | |
| float | | |
| double | | |
| decimal(P,S) | | |
| date | | DATE_ADD(...) |
| time | | DATE_ADD(...) |
| timestamp | | DATE_ADD(...) |
| timestamptz | | DATE_ADD(...) |
| string | | |
| uuid | | |
| fixed (L) | | |
| binary | | |
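As a sketch of how a recommended conversion is applied, the following mapping loads an Iceberg timestamp (stored as microseconds since the epoch) through a temporary variable and converts it to a DATETIME(6) column with DATE_ADD, mirroring the Debug Datetime Conversions example later in this topic; the table and column names are illustrative.
CREATE PIPELINE events_pipe AS
LOAD DATA S3 'db.events'
CONFIG '{"region": "us-west-2",
"catalog_type": "GLUE"}'
CREDENTIALS '{"aws_access_key_id": "<access_key_id>",
"aws_secret_access_key": "<secret_access_key>"}'
INTO TABLE events(
Id <- id,
@ts <- created_at)
FORMAT ICEBERG
-- Convert the raw microsecond value in @ts into the DATETIME(6) column CreatedAt.
SET CreatedAt = DATE_ADD('1970-01-01', INTERVAL @ts MICROSECOND);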
Data File Filtering
When ingesting from Iceberg tables, SingleStore applies file-level filtering to reduce the number of data files downloaded.
File-level filtering is determined by the WHERE clause in the pipeline definition.
- Only a subset of expressions can be used for file filtering.
- Expressions that are not eligible for file-level filtering are applied later as row-level filtering after the file is downloaded.
To enable file filtering before download, write the WHERE clause using the following rules:
- Use any combination of the boolean operators AND and OR.
- Use the binary operators =, <, <=, >, >= with a column name on one side and a literal value on the other. For example, WHERE column_name_1 < 9999.
- Function calls are not supported in file filtering.
Pipelines always evaluate the full WHERE clause during ingestion, after file-level filtering.
- Row-level filtering may include more complex expressions, such as function calls, that are not eligible for file-level filtering.
- When the WHERE clause uses an AND operator, unsupported expressions can be discarded for file-level filtering while still being applied during row-level filtering. For example, WHERE col1 > 1000 AND col2 = my_udf(col3) is split as follows:
  - Per-file filtering: col1 > 1000
  - Per-row filtering: col1 > 1000 AND col2 = my_udf(col3)
If none of the expressions in the WHERE clause can be applied at the file level, the CREATE PIPELINE statement issues the following warning:
Pipeline's where clause is not suitable for file-level filtering
File-level filtering relies on column-level metadata that Iceberg generates for each data file.
Monitor Ingest
The per-file status of the Iceberg table ingestion can be queried through the information_schema.PIPELINES_FILES view.
Enabling OFFSETS_METADATA_GC on the Iceberg pipeline reduces the number of entries by tracking only files referenced by the pipeline's current Iceberg table snapshot. OFFSETS_METADATA_GC can be set when you CREATE the pipeline or modify it using ALTER PIPELINE.
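For example, a query along the following lines (assuming a pipeline named books_pipe) shows per-file progress; the column list is a subset of what the view exposes.
SELECT FILE_NAME, FILE_STATE
FROM information_schema.PIPELINES_FILES
WHERE PIPELINE_NAME = 'books_pipe';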
Configure Iceberg Ingest
The global engine variable enable_iceberg_ingest must be set to ON to use Iceberg ingest. This variable is OFF by default.
SET GLOBAL enable_iceberg_ingest = ON;
Engine variables can be used to control parameters including memory usage and timeouts for Iceberg pipelines.
Pipeline parameters including memory usage and timeouts can be controlled using global engine variables with the pipelines_ prefix as specified in List of Engine Variables.
Memory usage can also be controlled using engine variables with the java_ and pipelines_ prefixes.
Pipeline timeouts can also be controlled using pipelines_-prefixed engine variables.
Refer to List of Engine Variables for more information.
Examples
Glue Catalog on Amazon S3
An Iceberg table with data files in Parquet format that is stored in an AWS S3 bucket using AWS Glue can be loaded into a SingleStore table using a pipeline (CREATE PIPELINE).
In this example, a table named books is created and data from an Iceberg table that meets this schema is loaded into the books table.
Create the table.
CREATE TABLE books(
Id INT,
Name TEXT,
NumPages INT,
Rating DOUBLE,
PRIMARY KEY(Id));
The following data is used for this example.
(1, 'Happy Place', 400, 4.9)
(2, 'Legends & Lattes', 304, 4.9)
(3, 'The Vanishing Half', 352, 4.9)
(4, 'The Wind Knows My Name', 304, 4.9)
The PIPELINE statement below will load data from an Iceberg table containing the data above into the books table. The column names on the left side of the <- are the column names from the SingleStore table into which the data will be loaded. The column names on the right side of the <- are the column names from the Iceberg table which is to be loaded into SingleStore.
CREATE PIPELINE books_pipe AS
LOAD DATA S3 'db.books'
CONFIG '{"region":"us-west-2",
"catalog_type": "GLUE"}'
CREDENTIALS '{"aws_access_key_id": "<your_access_key_id>",
"aws_secret_access_key": "<your_secret_access_key>"}'
INTO TABLE books(
Id <- id,
Name <- name,
NumPages <- numPages,
Rating <- rating)
FORMAT ICEBERG;
Test the pipeline.
TEST PIPELINE books_pipe;
+------+------------------------+----------+--------+
| Id | Name | NumPages | Rating |
+------+------------------------+----------+--------+
| 4 | The Wind Knows My Name | 304 | 4.9 |
| 1 | Happy Place | 400 | 4.9 |
| 2 | Legends & Lattes | 304 | 4.9 |
| 3 | The Vanishing Half | 352 | 4.9 |
+------+------------------------+----------+--------+
Refer to START PIPELINE for more information on starting pipelines.
Use Subvalue Mappings
This example shows the use of subvalue mappings to load nested elements from an Iceberg schema into a SingleStore table.
Create a table.
CREATE TABLE addresses(
Id INT,
Name TEXT,
Street TEXT,
City TEXT,
Country TEXT,
PRIMARY KEY(Id));
The following data is used for this example.
(1, 'Mylo', struct('123 Main St', 'New York', 'USA'))
(2, 'Naya', struct('456 Elm St', 'San Francisco', 'USA'))
The PIPELINE statement below will load data from an Iceberg table containing the data above into the addresses table. The column names on the left side of the <- are the column names from the SingleStore table into which the data will be loaded. The column names on the right side of the <- are the column names from the Iceberg table which is to be loaded into SingleStore.
CREATE PIPELINE addresses_pipe AS
LOAD DATA S3 'db2.addresses'
CONFIG '{"region":"us-west-2",
"catalog_type": "GLUE",
"catalog_name": "s3_catalog"}'
CREDENTIALS '{"aws_access_key_id": "<your_access_key_id>",
"aws_secret_access_key": "<your_secret_access_key>"}'
INTO TABLE addresses(
Id <- id,
Name <- name,
Street <- address::street,
City <- address::city,
Country <- address::country)
FORMAT ICEBERG;
Test the pipeline.
TEST PIPELINE addresses_pipe;
+------+------+-------------+---------------+---------+
| Id | Name | Street | City | Country |
+------+------+-------------+---------------+---------+
| 1 | Mylo | 123 Main St | New York | USA |
| 2 | Naya | 456 Elm St | San Francisco | USA |
+------+------+-------------+---------------+---------+
Refer to START PIPELINE for more information on starting pipelines.
Snowflake Catalog on Amazon S3
Ingest an Iceberg table stored in Amazon S3 with a Snowflake catalog.
Iceberg tables to be ingested in SingleStore must be created on an external volume.
CREATE PIPELINE addresses_pipe AS
LOAD DATA S3 'db_name.schema_name.table_name'
CONFIG '{"region": "us-west-2",
"catalog_type": "SNOWFLAKE",
"catalog.uri": "jdbc:snowflake://tpq12345.snowflakecomputing.com",
"catalog.jdbc.user":"<user_name>",
"catalog.jdbc.password":"<password>",
"catalog.jdbc.role":"<user role>"}'
CREDENTIALS '{"aws_access_key_id": "<your_access_key_id>",
"aws_secret_access_key": "<your_secret_access_key>"}'
INTO TABLE addresses(
Id <- id,
Name <- name,
Street <- address::street,
City <- address::city,
Country <- address::country)
FORMAT ICEBERG;
For the Snowflake catalog, the <table_identifier> must consist of three parts - the database name, the schema name, and the table name; db_name.schema_name.table_name in the example above.
The catalog.uri can be obtained by running SELECT SYSTEM$ALLOWLIST(); in the Snowflake system.
In addition, the catalog.uri, catalog.jdbc.user, catalog.jdbc.password, and catalog.jdbc.role are required when using the Snowflake catalog.
REST Catalog on Amazon S3
Ingest an Iceberg table stored in Amazon S3 with REST catalog.
CREATE PIPELINE addresses_pipe AS
LOAD DATA S3 'db_name.table_name'
CONFIG '{"region": "us-west-2",
"catalog_type": "REST",
"catalog.uri": "http://host.address:8181"}'
CREDENTIALS '{"aws_access_key_id": "<your_access_key_id>",
"aws_secret_access_key": "<your_secret_access_key>"}'
INTO TABLE addresses(
Id <- id,
Name <- name,
Street <- address::street,
City <- address::city,
Country <- address::country)
FORMAT ICEBERG;
File Filtering with REST Catalog on Amazon S3
Ingest an Iceberg table stored in Amazon S3 with REST catalog using file filtering.
Assume there exists an Iceberg table that stores weather sensor data and is partitioned with a day transform on the timestamp as follows.
CREATE TABLE sensors.weather (
sensorid int,
temperature double,
humidity double,
pressure double,
ts timestamp)
PARTITIONED BY (day(ts));
This data is to be loaded into a SingleStore table defined as follows:
CREATE TABLE weather_data(
SensorId INT,
Temperature DOUBLE,
Humidity DOUBLE,
Pressure DOUBLE,
TS DATETIME,
PRIMARY KEY(SensorId));
The following pipeline will load all weather data for January 1, 2026 and later into the weather_data table.
CREATE PIPELINE weather_pipe AS
LOAD DATA S3 'sensors.weather'
CONFIG '{"region": "us-west-2",
"catalog_type": "REST",
"catalog.uri": "http://host.address:8181"}'
CREDENTIALS '{"aws_access_key_id": "<your_access_key_id>",
"aws_secret_access_key": "<your_secret_access_key>"}'
INTO TABLE weather_data(
SensorId <- sensorid,
Temperature <- temperature,
Humidity <- humidity,
Pressure <- pressure,
TS <- ts)
FORMAT ICEBERG
WHERE timestamp >= '2026-01-01';
JDBC Catalog on Amazon S3
Ingest an Iceberg table stored in Amazon S3 with JDBC catalog.
SingleStore supports Postgres, MySQL, and SQLite JDBC drivers by default.
The following example uses JDBC with SQLite.
CREATE PIPELINE addresses_pipe AS
LOAD DATA S3 'db_name.table_name'
CONFIG '{"region": "us-west-2",
"catalog_type": "JDBC",
"catalog_name": "catalog_name",
"catalog.warehouse": "s3://path_to_warehouse",
"catalog.uri": "jdbc:sqlite:file:/path_jdbc"}'
CREDENTIALS '{"aws_access_key_id": "<your_access_key_id>",
"aws_secret_access_key": "<your_secret_access_key>"}'
INTO TABLE addresses(
Id <- id,
Name <- name,
Street <- address::street,
City <- address::city,
Country <- address::country)
FORMAT ICEBERG;
The following example uses JDBC with MySQL.
CREATE PIPELINE addresses_pipe AS
LOAD DATA S3 'db_name.table_name'
CONFIG '{"region": "us-west-2",
"catalog_type": "JDBC",
"catalog_name": "catalog_name",
"catalog.warehouse": "s3://path_to_warehouse",
"catalog.uri": "jdbc:mysql://host.address:3306/default",
"catalog.jdbc.user": "<user_name>",
"catalog.jdbc.password": "<password>"}'
CREDENTIALS '{"aws_access_key_id": "<your_access_key_id>",
"aws_secret_access_key": "<your_secret_access_key>"}'
INTO TABLE addresses(
Id <- id,
Name <- name,
Country <- address::country)
FORMAT ICEBERG;
The catalog.uri and catalog_name are required for JDBC catalogs.
Hive Catalog on Amazon S3
Ingest an Iceberg table stored in Amazon S3 using Hive Catalog.
CREATE PIPELINE addresses_pipe AS
LOAD DATA S3 'db_name.table_name'
CONFIG '{"catalog_type": "HIVE",
"catalog.uri": "thrift://<service_endpoint>:46590",
"region": "us-east-1",
"catalog.hive.metastore.client.auth.mode": "PLAIN",
"catalog.hive.metastore.client.plain.username": "<username>",
"catalog.hive.metastore.client.plain.password": "<password>",
"catalog.metastore.use.SSL": "true",
"catalog.hive.metastore.truststore.type": "PKCS12",
"catalog.hive.metastore.truststore.path": "/path/to/your/project/hive/truststore.12",
"catalog.hive.metastore.truststore.password": "<truststore_password>"}'
CREDENTIALS '{"aws_access_key_id": "<your_access_key_id>",
"aws_secret_access_key": "<your_secret_access_key>"}'
INTO TABLE addresses(
Id <- id,
Name <- name,
Street <- address::street,
City <- address::city,
Country <- address::country)
FORMAT ICEBERG;
The catalog.uri is the base URL for accessing the Hive catalog's API or service endpoint.
The catalog.hive.metastore.client.auth.mode is the authentication mode for connecting to the Hive Metastore.
The catalog.hive.metastore.client.plain.username is the username used to authenticate with the Hive Metastore.
The catalog.hive.metastore.client.plain.password is the password for the authenticated user.
The catalog.metastore.use.SSL is a boolean flag that secures communication with the Hive Metastore.
The catalog.hive.metastore.truststore.type is the truststore format used to validate the SSL certificate.
The catalog.hive.metastore.truststore.path is the file path that contains the SSL certificate.
The catalog.hive.metastore.truststore.password is the password needed to access the truststore.
Refer to GitHub for additional Hive configurations.
Hadoop Catalog on Amazon S3
Ingest an Iceberg table stored in Amazon S3 with Hadoop catalog.
CREATE PIPELINE addresses_pipe AS
LOAD DATA S3 'db_name.table_name'
CONFIG '{"catalog_type": "HADOOP",
"catalog.warehouse": "s3://path_to_warehouse",
"endpoint_url": "<s3_compatible_storage_endpoint>",
"region": "us-east-1"}'
CREDENTIALS '{"aws_access_key_id": "<your_access_key_id>",
"aws_secret_access_key":"<your_secret_access_key>"}'
INTO TABLE addresses(
Id <- id,
Name <- name,
Street <- address::street,
City <- address::city,
Country <- address::country)
FORMAT ICEBERG;
Polaris Catalog on Amazon S3
Ingest an Iceberg table stored in Amazon S3 using Polaris Catalog.
CREATE PIPELINE addresses_pipe AS
LOAD DATA S3 'db_name.table_name'
CONFIG '{"catalog_type": "REST",
"catalog.warehouse": "<polaris_catalog_name>",
"table_id": "db_name.table_name",
"region":"us-east-1",
"catalog.uri":"https://tpq12345.snowflakecomputing.com/polaris/api/catalog",
"catalog.credential":"<ClientID>:<Secret>",
"catalog.scope": "PRINCIPAL_ROLE:ALL"}'
CREDENTIALS '{"aws_access_key_id": "<your_access_key_id>",
"aws_secret_access_key": "<your_secret_access_key>"}'
INTO TABLE addresses(
Id <- id,
Name <- name,
Street <- address::street,
City <- address::city,
Country <- address::country)
FORMAT ICEBERG;
The catalog.uri is the base URL for accessing the Polaris catalog's API or service endpoint.
The catalog.scope defines the access permissions for the Polaris catalog.
The catalog.credential is a secret key from the Polaris catalog connection, formatted as <ClientID>:<Secret>.
Upsert (ingest_mode)
The example below shows a pipeline using an ingest_mode of upsert. Because the ingest_mode is upsert, the books table has a primary key, Id, in this example.
When started, this pipeline will ingest data from the latest snapshot of the Iceberg table into the SingleStore books table.
CREATE PIPELINE books_upsert_pipe AS
LOAD DATA S3 'db.books'
CONFIG '{"region":"us-west-2",
"catalog_type": "GLUE",
"catalog_name": "s3_catalog",
"ingest_mode": "upsert"}'
CREDENTIALS '{"aws_access_key_id": "<your_access_key_id>",
"aws_secret_access_key": "<your_secret_access_key>"}'
REPLACE INTO TABLE books(
Id <- id,
Name <- name,
NumPages <- numPages,
Rating <- rating)
FORMAT ICEBERG;
Append (ingest_mode)
The example below shows a pipeline using an ingest_mode of append.
When started, this pipeline will ingest data from the latest snapshot of the Iceberg table into the SingleStore books table.
CREATE PIPELINE books_append_pipe AS
LOAD DATA S3 'db.books'
CONFIG '{"region":"us-west-2",
"catalog_type": "GLUE",
"catalog_name": "s3_catalog",
"ingest_mode": "append"}'
CREDENTIALS '{"aws_access_key_id": "<your_access_key_id>",
"aws_secret_access_key": "<your_secret_access_key>"}'
INTO TABLE books(
Id <- id,
Name <- name,
NumPages <- numPages,
Rating <- rating)
FORMAT ICEBERG;
MERGE Pipeline
The example below shows how to create a merge pipeline.
In this example, a destination table in SingleStore, named _books, is created with the necessary $_file, $_row, and $_delete columns. A view, named books, is created on top of the _books table.
When started, this pipeline will ingest data from the latest snapshot of the source Iceberg table into the SingleStore _books table. Subsequent changes to the source Iceberg table are continuously applied to the _books table.
Create the destination _books table.
CREATE TABLE _books(
Id INT,
Name TEXT,
NumPages INT,
Rating DOUBLE,
`$_file` VARCHAR(2048),
`$_row` BIGINT,
`$_delete` JSON DEFAULT '{}',
KEY(Id),
PRIMARY KEY(`$_file`,`$_row`));
Create the books view.
CREATE VIEW books AS
SELECT Id, Name, NumPages, Rating FROM _books
WHERE JSON_LENGTH(`$_delete`) = 0;
Define the pipeline.
CREATE PIPELINE books_pipe AS
LOAD DATA S3 'db.books'
CONFIG '{"region":"us-west-2",
"catalog_type": "GLUE",
"catalog_name": "s3_catalog"}'
CREDENTIALS '{"aws_access_key_id": "<your_access_key_id>",
"aws_secret_access_key": "<your_secret_access_key>"}'
MERGE INTO TABLE _books(
Id <- id,
Name <- name,
NumPages <- numPages,
Rating <- rating)
FORMAT ICEBERG
SET `$_file` = pipeline_source_file(),
`$_row` = pipeline_source_file_row();
Schema Change Detection
Consider an Iceberg table stored external to SingleStore that is to be ingested into SingleStore with a pipeline.
Let the source Iceberg table have the following schema:
(id int,
bigid long,
fl float,
doub double,
occurrence_da date,
data string,
st struct<a int, b int, c struct<d string, e int>>)
Assume a pipeline has been created to ingest the source Iceberg table into a destination SingleStore table.
Assume the following alter table queries are performed on the source Iceberg table.
ALTER TABLE <table_name> ALTER COLUMN id TYPE bigint
ALTER TABLE <table_name> RENAME COLUMN id to identity
ALTER TABLE <table_name> ADD COLUMN name string
ALTER TABLE <table_name> DROP COLUMN occurrence_da
If "schema.change.stop":true is set, when the engine detects these changes to the source Iceberg table, the pipeline is stopped, and the following error message is generated.
Iceberg Table Schema at Source has changed. Schema Diff :
{"schema_id": {"before": 0, "after": 8},
"diff": [
{"op": "column_type_change", "column": "id", "before": "int", "after": "long", "column_id": 1},
{"op": "column_rename", "before": "id", "after": "identity", "column_id": 1},
{"op": "column_add", "before": null, "after": "name", "column_id": 13},
{"op": "column_delete", "before": "occurrence_da", "after": null, "column_id": 5}]}
In this error message, the Iceberg schema version is included in schema_id, and the diff includes one entry for each modification to the table schema.
Resume Ingestion
Follow these steps to resume ingestion after a pipeline has been paused due to a schema change; a sketch of the full sequence appears after the list.
1. Run SHOW CREATE PIPELINE <pipeline_name> EXTENDED and capture the results of this command. The result of this query will contain ALTER PIPELINE statements that move the "file offsets" back to the position before the pipeline was paused.
2. Drop the pipeline.
3. Apply the schema changes manually to the SingleStore table.
4. Recreate the pipeline.
5. Run the ALTER PIPELINE statements captured in step 1.
6. Start the new pipeline.
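The following sketch shows the overall sequence, reusing the books pipeline from the examples in this topic; the added PublisherName column is an illustrative schema change, and the captured ALTER PIPELINE statements are represented by a comment because they depend on the output of step 1.
-- Step 1: capture the ALTER PIPELINE statements from the output of this command.
SHOW CREATE PIPELINE books_pipe EXTENDED;
-- Step 2: drop the pipeline.
DROP PIPELINE books_pipe;
-- Step 3: apply the schema change manually to the SingleStore table (illustrative change).
ALTER TABLE books ADD COLUMN PublisherName TEXT;
-- Step 4: recreate the pipeline, including the new column in the mapping.
CREATE PIPELINE books_pipe AS
LOAD DATA S3 'db.books'
CONFIG '{"region":"us-west-2",
"catalog_type": "GLUE",
"schema.change.stop": true}'
CREDENTIALS '{"aws_access_key_id": "<your_access_key_id>",
"aws_secret_access_key": "<your_secret_access_key>"}'
INTO TABLE books(
Id <- id,
Name <- name,
NumPages <- numPages,
Rating <- rating,
PublisherName <- publisherName)
FORMAT ICEBERG;
-- Step 5: run the ALTER PIPELINE statements captured in step 1 to restore the file offsets.
-- Step 6: start the new pipeline.
START PIPELINE books_pipe;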
Debug Datetime Conversions
Datetime value conversion can be debugged by loading the raw datetime values from an Iceberg file into a BLOB column. The example below creates a books_debug table with a PublishTimestamp attribute of type DATETIME(6) and an extra attribute, RawTimestamp, into which the raw timestamp data will be loaded. A TEXT attribute can be used in addition to, or instead of, using a BLOB attribute.
CREATE TABLE books_debug(
Id INT,
Name TEXT,
NumPages INT,
Rating DOUBLE,
PublishTimestamp DATETIME(6),
RawTimestamp BLOB);
The PIPELINE statement below loads the publishTimestamp from the Iceberg file into both the PublishTimestamp column and the RawTimestamp column. A SET statement is used to convert the publishTimestamp from Iceberg to the PublishTimestamp in SingleStore.
The column names on the left side of the <- are the column names from the SingleStore table into which the data will be loaded. The column names on the right side of the <- are the column names from the Iceberg file which is to be loaded into SingleStore.
CREATE PIPELINE books_debug_pipe AS
LOAD DATA S3 ''
CONFIG '{"region":"us-west-2",
"catalog_type": "GLUE",
"catalog_name": "s3_catalog",
"table_id": "dbtest.books"}'
CREDENTIALS '{"aws_access_key_id": "",
"aws_secret_access_key": "",
"aws_session_token": ""}'
INTO TABLE books_debug(
Id <- IdIce,
Name <- Name,
NumPages <- NumPages,
Rating <- Rating,
@ts <- PublishTimestamp,
RawTimestamp <- PublishTimestamp)
FORMAT ICEBERG
SET PublishTimestamp = DATE_ADD('1970-01-01', INTERVAL @ts MICROSECOND);
Use TEST PIPELINE to compare the values in the two columns. The PublishTimestamp column shows the converted timestamp; the RawTimestamp column contains the raw timestamp from the Iceberg file, in microseconds in this example. Comparing the two columns verifies the conversion applied by the SET statement.
TEST PIPELINE books_debug_pipe;
+------+--------------------+----------+--------+------------------+----------------------------+
| Id   | Name               | NumPages | Rating | RawTimestamp     | PublishTimestamp           |
+------+--------------------+----------+--------+------------------+----------------------------+
|    1 | Happy Place        |      400 |    4.9 | 1680721200000000 | 2023-04-05 12:00:00.000000 |
|    2 | Legends & Lattes   |      304 |    4.9 | 1669665600000000 | 2022-11-28 12:00:00.000000 |
|    3 | The Vanishing Half |      352 |    4.9 | 1591124400000000 | 2020-06-02 12:00:00.000000 |
+------+--------------------+----------+--------+------------------+----------------------------+
Troubleshooting
The following table lists errors that can occur when creating an Iceberg Ingest pipeline.
| Catalog | Error | Cause and Resolution |
|---|---|---|
| All | protocol error … Process died unexpectedly or didn't start. | An incorrect value of an engine variable may cause this error. |
| Snowflake | Certificate for <...> | An incorrect URI may cause this error. Verify that the catalog.uri is correct. |
| Snowflake | SEVERE: WARNING!!! Using fail-open to connect. | This issue needs to be resolved on the Snowflake side. |
| Snowflake | Parquet parsing errors such as "Dictionary encoding not implemented". | Set the Snowflake table property STORAGE_SERIALIZATION_POLICY to COMPATIBLE. |