SingleStore Kafka Sink Connector Properties

The SingleStore Kafka Sink Connector ("the connector") supports the following configuration properties:

Each property below is listed with its default value (where one exists) and a description.

connection.ddlEndpoint

The hostname or IP address of the Master Aggregator in the SingleStore deployment in the host[:port] format, where port is an optional parameter.

For example: master-agg.abc.internal:3308 or master-agg.abc.internal.

connection.dmlEndpoints

Default: the value of connection.ddlEndpoint

The hostname or IP address of the aggregator nodes in the SingleStore deployment on which the queries are run. The endpoint is specified in the host[:port],host[:port],... format, where port is optional and multiple hosts are separated by commas.

For example: child-agg:3308,child-agg2.

connection.database

Specifies the default database to connect to. If set, all connections use this database by default.

connection.user

Default: root

The username of the SingleStore database user to connect with.

connection.password

Default: an empty string

The password for the SingleStore database user.
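Taken together, the connection properties above might look like the following in a connector configuration. This is a minimal sketch; the hostnames, database name, and credentials are placeholders:

```properties
# Hypothetical connection settings for the SingleStore Kafka Sink Connector
connection.ddlEndpoint=master-agg.abc.internal:3308
connection.dmlEndpoints=child-agg1.abc.internal:3308,child-agg2.abc.internal:3308
connection.database=example_db
connection.user=root
connection.password=secret
```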

params.<name>

Specifies a JDBC connection parameter named <name>. The parameter is injected into the connection URI.
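For example, a driver option can be passed through as follows. The connectTimeout parameter is shown only as an illustration; any parameter supported by the underlying JDBC driver can be used:

```properties
# Injects connectTimeout=10000 into the JDBC connection URI
params.connectTimeout=10000
```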

max.retries

Default: 10

Specifies the maximum number of times to retry on errors before failing a task.

fields.whitelist

Default: all fields are inserted

Specifies a comma-separated list of fields to be inserted in the table. Refer to Data Mapping for more information.

retry.backoff.ms

Default: 3000

Specifies the time (in milliseconds) to wait following an error before making a retry attempt.

tableKey.<index_type>[.name]

Specifies additional keys to add to tables created by the connector, as a comma-separated list of columns, where:

  • index_type specifies the type of key (index) to add. The connector supports PRIMARY, COLUMNSTORE, UNIQUE, SHARD, and KEY.

  • name optionally specifies a name for the key (index).

Refer to Table Keys for more information.
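For instance, assuming a table with id and created_at columns, keys could be declared as follows. The column and key names are illustrative:

```properties
# Add a PRIMARY key on the id column
tableKey.primary=id
# Add a named KEY on the created_at column
tableKey.key.created_index=created_at
```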

singlestore.loadDataCompression

Default: GZip

Compresses data on load. It can have the following values: GZip, LZ4, or Skip.

singlestore.metadata.allow

Default: true

Enables the use of an additional metadata table to save Kafka transaction metadata. Refer to Exactly-Once Delivery for more information.

singlestore.metadata.table

Default: kafka_connect_transaction_metadata

Specifies the name of the table where the Kafka transaction metadata is saved. Refer to Exactly-Once Delivery for more information.

singlestore.tableName.<topicName>

Specifies an explicit table name to use for the specified topic. Refer to Data Mapping for more information.

singlestore.filter

Specifies a SQL expression to filter incoming data. This parameter is inserted directly into the query's WHERE clause. SingleStore does not recommend using this property because it is vulnerable to SQL-injection attacks.

singlestore.columnToField.<tableName>.<columnName>

Specifies a mapping between a SingleStore table column and a Kafka record field. Nested fields are specified as a sequence of field names separated by a period (.).

For example, for a record with the value {"a": {"b": 1}, "c": "d"}, the mapping for the field b is a.b.

Refer to Data Mapping for more information.
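Continuing the example above, a column mapping might look like this. The table and column names are placeholders:

```properties
# Write the nested Kafka record field a.b into column b_value of table example_table
singlestore.columnToField.example_table.b_value=a.b
```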

singlestore.recordToTable.mappingField

Specifies the Kafka record field that defines the SingleStore table where the Kafka records are written. This property is used with singlestore.recordToTable.mapping.<value>.

Refer to Data Mapping for more information.

singlestore.recordToTable.mapping.<value>

Specifies a mapping between the Kafka record and the SingleStore table name. This property is used with singlestore.recordToTable.mappingField.

Refer to Data Mapping for more information.
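Used together, these two properties might route records as follows. The field name and table names are illustrative:

```properties
# Route each record based on its "type" field
singlestore.recordToTable.mappingField=type
# Records with type=login are written to the user_logins table
singlestore.recordToTable.mapping.login=user_logins
# Records with type=purchase are written to the orders table
singlestore.recordToTable.mapping.purchase=orders
```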

singlestore.upsert

Default: false

If enabled, a row that would cause a duplicate-key error is updated instead.
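Putting it together, a minimal standalone worker configuration for the connector might look like the following sketch. The connector class name, topic, and endpoints are assumptions and placeholders; check them against your installed connector plugin:

```properties
name=singlestore-sink
# Connector class name is an assumption; verify it for your plugin version
connector.class=com.singlestore.kafka.SingleStoreSinkConnector
topics=example-topic
connection.ddlEndpoint=master-agg.abc.internal:3308
connection.database=example_db
connection.user=root
connection.password=secret
max.retries=10
retry.backoff.ms=3000
singlestore.loadDataCompression=GZip
```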

Last modified: August 5, 2025
