SingleStore Kafka Sink Connector Properties
The SingleStore Kafka Sink Connector ("the connector") supports the following configuration properties:
| Property | Default Value | Description |
|---|---|---|
|  |  | The hostname or IP address of the Master Aggregator in the SingleStore deployment. |
|  | The value of the Master Aggregator endpoint property. | The hostname or IP address of the aggregator nodes in the SingleStore deployment on which the queries are run. |
|  |  | Specifies the default database to connect to. |
|  |  | The username of the SingleStore database user to connect as. |
|  | An empty string. | The password for the SingleStore database user. |
|  |  | Specifies a JDBC connection parameter. |
|  |  | Specifies the maximum number of times to retry on errors before failing a task. |
|  | All fields are inserted. | Specifies a comma-separated list of fields to insert into the table. |
|  |  | Specifies the time (in milliseconds) to wait after an error before retrying. |
|  |  | Specifies additional keys to add to tables created by the connector, as a comma-separated list. Refer to Table Keys for more information. |
|  |  | Compresses data on load. |
|  |  | Enables the use of an additional metadata table to save Kafka transaction metadata. |
|  |  | Specifies the name of the table in which the Kafka transaction metadata is saved. |
|  |  | Specifies an explicit table name to use for the specified topic. |
|  |  | Specifies a SQL expression used to filter incoming data. |
|  |  | Specifies a mapping between SingleStore table column names and Kafka record fields. Refer to Data Mapping for more information. |
|  |  | Specifies the Kafka record field that determines the SingleStore table to which the Kafka records are written. Refer to Data Mapping for more information. |
|  |  | Specifies a mapping between the Kafka record and the SingleStore table name. Refer to Data Mapping for more information. |
|  |  | If enabled, updates the existing row when a duplicate key is encountered. |
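As an illustration, a sink-connector definition combining several of these properties might look like the following sketch, in the JSON format accepted by the Kafka Connect REST API. The property names and the connector class shown here (`connection.ddlEndpoint`, `connection.database`, `com.singlestore.kafka.SingleStoreSinkConnector`, and so on) are assumptions based on the open-source singlestore-kafka-connector and should be verified against the release in use; the hostnames, credentials, and topic name are placeholders.

```json
{
  "name": "singlestore-sink",
  "config": {
    "connector.class": "com.singlestore.kafka.SingleStoreSinkConnector",
    "topics": "orders",
    "connection.ddlEndpoint": "master-agg.example.com:3306",
    "connection.database": "example_db",
    "connection.user": "kafka_user",
    "connection.password": "example_password",
    "max.retries": "10",
    "retry.backoff.ms": "3000",
    "singlestore.metadata.allow": "true"
  }
}
```

Because JSON does not allow comments, every key above should be cross-checked against the property table before deploying; a typo in a property name is silently ignored by Kafka Connect rather than rejected.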
Last modified: August 5, 2025