Load Data from Estuary Flow

You can stream data from Estuary Flow into SingleStore through pipelines, using Estuary's Kafka-compatible Dekaf API.

To load data into SingleStore:

  1. Log in to the Estuary Dashboard.

  2. Generate a Refresh Token.

    1. On the left navigation pane, select Admin.

    2. On the CLI-API tab, under the Refresh Token section, select Generate Token.

    3. On the Generate Refresh Token dialog, enter a description for the token, and then select GENERATE TOKEN.

  3. Copy and securely store the token. This token is used in the CREDENTIALS clause of the CREATE PIPELINE statement to authenticate the connection.

    Note: The token is displayed only once.

  4. Create a table in SingleStore to store the data to be ingested.

  5. Create a pipeline to stream data from Estuary Flow.

  6. Start the pipeline.

Here's an example:

-- Create a table to store the ingested data
CREATE TABLE estExample (id INT, title VARCHAR(255));

-- Create a pipeline that streams data from Estuary Flow
-- over the Kafka-compatible Dekaf API
CREATE PIPELINE estPipeline AS
LOAD DATA KAFKA "<endpoint/topic>"
CONFIG '{
  "security.protocol": "SASL_SSL",
  "sasl.mechanism": "PLAIN",
  "sasl.username": "{}",
  "broker.address.family": "v4",
  "schema.registry.username": "{}",
  "fetch.wait.max.ms": "2000"
}'
CREDENTIALS '{
  "sasl.password": "<ESTUARY_ACCESS_TOKEN>",
  "schema.registry.password": "<ESTUARY_ACCESS_TOKEN>"
}'
INTO TABLE estExample
FORMAT AVRO SCHEMA REGISTRY '<schema_registry>'
( id <- id, title <- title );

-- Start the pipeline
START PIPELINE estPipeline;
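
In this example, the ( id <- id, title <- title ) clause maps fields from the incoming Avro records to the corresponding table columns, with record schemas resolved through Estuary's schema registry. Before starting the pipeline, you can preview the records it would ingest, and after starting it you can monitor its progress. The following is a minimal sketch using standard SingleStore pipeline commands, reusing the pipeline and table names from the example above:

-- Preview a few records without writing them to the table
TEST PIPELINE estPipeline LIMIT 5;

-- Check the pipeline's state after starting it
SHOW PIPELINES;

-- Inspect any ingestion errors
SELECT * FROM information_schema.PIPELINES_ERRORS
WHERE PIPELINE_NAME = 'estPipeline';

-- Confirm rows are arriving in the destination table
SELECT * FROM estExample LIMIT 10;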
