How the SingleStore Kafka Sink Connector Works
The SingleStore Kafka Sink connector ("the connector") provides a reliable and high-performance way to stream data from Kafka topics directly into SingleStore tables.
By default, each Kafka topic is mapped to a SingleStore table with the same name.
Note
The connector only performs insert operations; each Kafka record is inserted as a new row. To map a topic to a table with a different name, use the singlestore.tableName.<topicName> connector configuration property.
The connector also supports exactly-once delivery to ensure that each record is inserted only once even on retries or failures.
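As a sketch of how the connector is typically deployed, the following Kafka Connect worker properties show a minimal sink configuration; the topic, database, host, and credential values are illustrative placeholders, not taken from this document:

```properties
# Minimal SingleStore sink connector configuration (illustrative values)
name=singlestore-sink
connector.class=com.singlestore.kafka.SingleStoreSinkConnector
topics=orders
connection.ddlEndpoint=singlestore-host:3306
connection.database=example_db
connection.user=app_user
connection.password=app_password
```

With this configuration, records from the orders topic are inserted into the orders table in example_db, following the default topic-to-table mapping described above.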
Automated Table Creation
If the target table does not exist in SingleStore, the connector can automatically create tables, provided the schema of the Kafka record value is available.
Table Keys
You can configure the connector to automatically add keys (indexes) to the new tables using the tableKey.<index_type>[.<name>] property, where:

- <index_type>: Specifies the type of index to add. The connector supports the following values: PRIMARY, UNIQUE, SHARD, COLUMNSTORE, and KEY. Refer to Understanding Keys and Indexes in SingleStore for information on the keys and indexes supported by SingleStore.
- <name>: (Optional) Specifies a name for the key. For example, in tableKey.PRIMARY.key_primary_orders, key_primary_orders is the name of the key.
Note
These keys (indexes) are only added to the tables when the connector automatically creates a table.
The value of the tableKey.<index_type> property can be specified as a comma-separated list of column names, for example:
tableKey.PRIMARY=id
tableKey.COLUMNSTORE=data,created_at
tableKey.UNIQUE.unique_email=email
In this example:

- A primary key is created on the id column.
- A columnstore key is created on the data and created_at columns.
- A unique key named unique_email is created on the email column.
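For a topic whose records contain id, data, created_at, and email fields, the table created by the connector under the key settings above would look roughly like the following. This is an illustrative sketch only, not the connector's exact generated DDL; the column types are assumptions, and whether a given key combination is valid depends on the table type and SingleStore version:

```sql
CREATE TABLE example_topic (
    id BIGINT NOT NULL,
    data TEXT,
    created_at DATETIME,
    email VARCHAR(255),
    PRIMARY KEY (id),
    KEY (data, created_at) USING CLUSTERED COLUMNSTORE,
    UNIQUE KEY unique_email (email)
);
```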
Exactly-Once Delivery
The SingleStore Kafka Sink Connector supports exactly-once delivery to prevent ingesting duplicate data in the database, even in cases of retries or failures.
To enable exactly-once delivery, set the following property:
singlestore.metadata.allow=true
When exactly-once delivery is enabled, the connector creates a table named kafka_connect_transaction_metadata (default name unless specified), which tracks metadata for every transaction to ensure idempotency. Use the singlestore.metadata.table property to change the name of the metadata table.
singlestore.metadata.table=my_custom_metadata_table
Each record in the kafka_connect_transaction_metadata table includes:
- A unique identifier consisting of the Kafka topic, partition, and offset. This identifier guarantees the uniqueness of ingested data.
- The number of records written in the transaction.
- The timestamp of when the transaction occurred.
The data is written to both the target SingleStore table and metadata table within a single transaction.
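Conceptually, each delivery can be pictured as a single SQL transaction of the following shape. This is a simplified sketch of the idea, not the connector's actual statements; the metadata column names and values are assumptions, not taken from this document:

```sql
BEGIN;
-- 1. Record the batch identifier; the topic-partition-offset string makes a
--    replay of the same batch detectable (column names here are assumptions).
INSERT INTO kafka_connect_transaction_metadata (id, record_count, time_stamp)
VALUES ('orders-0-1042', 2, NOW());
-- 2. Write the batch of Kafka records to the target table.
INSERT INTO orders (id, data) VALUES (1041, 'a'), (1042, 'b');
-- 3. Commit both writes atomically; if either fails, neither becomes visible,
--    so a retried batch is either fully applied or fully absent.
COMMIT;
```

Because the metadata insert and the data insert commit together, a retry after a failure either finds no trace of the batch (and safely re-applies it) or finds the recorded identifier (and skips it), which is what makes the delivery exactly-once.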
Last modified: August 5, 2025