Load Data from the Confluent Kafka Connector

The SingleStore Confluent Kafka Connector is a Kafka Connect connector that allows you to easily ingest AVRO, JSON, and CSV messages from Kafka topics into SingleStoreDB. More specifically, the Confluent Kafka Connector is a Sink (target) connector designed to read data from Kafka topics and write that data to SingleStoreDB tables.

Learn more about the SingleStoreDB Confluent Kafka Connector, including how to install and configure it, in Working with the Kafka Connector.

Working with the Kafka Connector

To understand Kafka’s core concepts and how it works, please read the Kafka documentation. This guide assumes that you understand Kafka’s basic concepts and terminology, and that you have a working Kafka environment up and running.

The Confluent Kafka Connector is available via the Confluent Hub and as a download from SingleStoreDB.

Note: After you have installed the version you want to use, you will need to configure the connector properties.

The rest of this page describes how the connector works.

Note: You can also load data from Kafka with a pipeline. See Load Data from Kafka Using a Pipeline.

Connector Behavior

See the SingleStore Kafka Connector for information about the connector.

Auto-creation of tables

While loading data, if the table does not exist in SingleStoreDB, it will be created using the information from the first record.

The table name is the name of the topic. The table schema is taken from the record’s valueSchema. If valueSchema is not a struct, then a single column with name data will be created with the schema of the record. Table keys are taken from the tableKey property.
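
For example, here is a sketch with an illustrative topic name: if a topic named temperature carries records whose valueSchema is a plain INT64 rather than a struct, the connector creates a table named temperature with a single data column, roughly equivalent to:

CREATE TABLE IF NOT EXISTS `temperature` (
  `data` BIGINT NOT NULL
)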

If the table already exists, all records will be loaded directly into it. Automatic schema changes are not supported, so all records should have the same schema.

Exactly-once delivery

To achieve exactly-once delivery, set singlestore.metadata.allow to true. The kafka_connect_transaction_metadata table will then be created.

This table contains an identifier, a record count, and a timestamp for each transaction. The identifier consists of the kafka-topic, kafka-partition, and kafka-offset; together these uniquely identify a transaction and prevent data from being duplicated in the SingleStoreDB database. Kafka saves offsets and advances them only if the kafka-connect job succeeds; if the job fails, Kafka restarts the job with the same offset. As a result, if data was written to the database but the job failed afterward, the retry runs with the same offset and metadata identifier, so the connector does not duplicate the existing data and simply completes the job successfully.

Data is written to the table and to the kafka_connect_transaction_metadata table in one transaction. Because of this, if an error occurs, no data is added to the database.

To override the name of this table, use the singlestore.metadata.table property.
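
For example, a sketch of the relevant properties, using the same JSON configuration style as the other examples on this page (the custom table name is illustrative):

{
  ...
  "singlestore.metadata.allow" : "true",
  "singlestore.metadata.table" : "my_kafka_transaction_metadata",
  ...
}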

Data Types

The connector converts Kafka data types to SingleStoreDB data types:

Kafka Type     SingleStoreDB Type
----------     ------------------
STRUCT         JSON
MAP            JSON
ARRAY          JSON
INT8           TINYINT
INT16          SMALLINT
INT32          INT
INT64          BIGINT
FLOAT32        FLOAT
FLOAT64        DOUBLE
BOOLEAN        TINYINT
BYTES          TEXT
STRING         VARBINARY(1024)
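
For example (a sketch with illustrative field names), a record whose valueSchema is a struct with an INT32 field id, a FLOAT64 field score, and an ARRAY field tags would produce a table with columns roughly like:

CREATE TABLE IF NOT EXISTS `topic-name` (
  `id` INT NOT NULL,
  `score` DOUBLE NOT NULL,
  `tags` JSON NOT NULL
)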

Table Keys

To add a column as a key in SingleStoreDB, use the tableKey property.

Suppose you have an entity:

{
  "id" : 123,
  "name" : "Alice"
}

If you want to add the id column as a PRIMARY KEY to your SingleStoreDB table, add "tableKey.primary": "id" to your properties configuration.

Doing so will generate the following query during table creation:

CREATE TABLE IF NOT EXISTS `table` (
  `id` INT NOT NULL,
  `name` TEXT NOT NULL,
  PRIMARY KEY (`id`)
)

You can also specify the name of a key by providing it like this: "tableKey.primary.someName" : "id".

This will create a key with a name:

CREATE TABLE IF NOT EXISTS `table` (
  `id` INT NOT NULL,
  `name` TEXT NOT NULL,
  PRIMARY KEY `someName`(`id`)
)
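
The same mechanism works for the other key types (see tableKey.<index_type>[.name] in SingleStore Kafka Connector Properties). For example, a sketch with illustrative column names that adds a shard key and a plain key:

{
  ...
  "tableKey.shard" : "id",
  "tableKey.key" : "name",
  ...
}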

Table Names

By default, the Kafka Connector maps data from topics into SingleStoreDB tables by matching the topic name to the table name. For example, if the Kafka topic is called kafka-example-topic then the connector will load it into the SingleStoreDB table called kafka-example-topic. The table will be created if it does not already exist.

To specify a custom table name, you can use the singlestore.tableName.<topicName> property.

{
  ...
  "singlestore.tableName.foo" : "bar",
  ...
}

In this example, data from the Kafka topic foo will be written to the SingleStoreDB table called bar.

You can use this method to specify custom table names for multiple topics:

{
  ...
  "singlestore.tableName.kafka-example-topic-1" : "singlestore-table-name-1",
  "singlestore.tableName.kafka-example-topic-2" : "singlestore-table-name-2",
  ...
}

Installing the SingleStore Kafka Connector via Confluent Hub

This guide shows you how to install and configure the SingleStore Kafka Connector from Confluent Hub, via the following process:

  1. Make sure you satisfy the prerequisites.

  2. Install the connector.

  3. Configure the connector.

Prerequisites

Make sure you have met the following prerequisites before installing the connector.

  • MemSQL version 6.8 or newer/SingleStore version 7.1 or newer installed and running.

Install the Connector and Add a Connection in Confluent

Install the SingleStore Kafka Connector via the Confluent Hub.

Run the Confluent Hub CLI installation command as described on the connector's Confluent Hub page.
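
A minimal sketch of that command is shown below; the exact component coordinates and version are listed on the Confluent Hub page, so treat the values here as illustrative:

confluent-hub install singlestore/singlestore-kafka-connector:latest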

Accept all of the default configuration options while installing.

Now that you have the connector installed, you can create a connection.

  1. Browse to the Confluent Control Center.

  2. Click Connect in the left side menu.

  3. Click Add Connector.

  4. Select SingleStore Sink Connector.

  5. Select the topics from which you want to get data.

  6. Configure the connector properties.

For an explanation of the various configuration properties, see SingleStore Kafka Connector Properties.

Installing the SingleStore Kafka Connector via Download

This guide shows you how to get and install the Java-based SingleStore Kafka Connector for connecting with open source Apache Kafka. The process looks like this:

  1. Make sure you satisfy the prerequisites.

  2. Download the connector JAR file.

  3. Configure the connector properties.

Prerequisites

Make sure you have met the following prerequisites before installing the connector.

  • MemSQL version 6.8 or newer/SingleStore version 7.1 or newer installed and running

  • Java Development Kit (JDK) installed

  • Apache Kafka installed and running

  • Kafka Schema Registry configured

  • Kafka Connect

  • For SingleStore Kafka Connector versions prior to 1.1.1, install the MariaDB JDBC driver

    For SingleStore Kafka Connector versions 1.1.1 and newer, install and configure the latest version of the SingleStore JDBC driver

Download the SingleStore Kafka Connector

Get the SingleStore Kafka Connector JAR file here.

You will need to have the JDK installed and configured, with JAVA_HOME pointing to where the JDK is installed.

See the README and the Quickstart files on the GitHub page for more information.
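
As a sketch (assuming a standard Apache Kafka Connect setup), make the downloaded JAR visible to Kafka Connect by copying it into a directory listed in the worker's plugin.path setting, for example in connect-distributed.properties (the directory shown is illustrative), and then restart the Connect worker:

plugin.path=/usr/local/share/kafka/plugins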

Configure the SingleStoreDB Connector Properties

The connector is configurable via a properties file or the Kafka Connect REST API. The properties should be specified before starting the kafka-connect job.

The connector properties include the standard Kafka properties as well as some SingleStoreDB-specific properties.
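
For example, a minimal sketch of a properties file for the sink connector; the connector class name, endpoint, database, and topic are illustrative and should be checked against the connector README and your environment:

name=singlestore-sink-connector
connector.class=com.singlestore.kafka.SingleStoreSinkConnector
tasks.max=1
topics=kafka-example-topic
connection.ddlEndpoint=master-agg.foo.internal:3308
connection.database=example_db
connection.user=root
connection.password=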

For an explanation of the various SingleStoreDB-specific configuration properties, see SingleStore Kafka Connector Properties.

SingleStore Kafka Connector Properties

Configuration for the connector is controlled via the SingleStore Kafka Connector Sink configuration properties.

Confluent users will configure these properties via the Confluent UI.

The properties listed below are the SingleStoreDB-specific properties. For a complete list of properties refer to the Apache Kafka documentation.

SingleStore Kafka Connector Sink Configuration Properties

  • connection.ddlEndpoint (required): Hostname or IP address of the SingleStoreDB Master Aggregator in the format host[:port] (port is optional). Ex. master-agg.foo.internal:3308 or master-agg.foo.internal.

  • connection.dmlEndpoints: Hostname or IP address of SingleStoreDB Aggregator nodes to run queries against in the format host[:port],host[:port],… (port is optional; separate multiple hosts with commas). Ex. child-agg:3308,child-agg2. Default: ddlEndpoint.

  • connection.database (required): If set, all connections will default to using this database. Default: empty.

  • connection.user: SingleStoreDB username. Default: root.

  • connection.password: SingleStoreDB password. Default: no password.

  • params.<name>: Specify a specific MySQL or JDBC parameter to be injected into the connection URI. Default: empty.

  • max.retries: The maximum number of times to retry on errors before failing the task. Default: 10.

  • retry.backoff.ms: The time in milliseconds to wait following an error before a retry attempt is made. Default: 3000.

  • tableKey.<index_type>[.name]: Specify additional keys to add to tables created by the connector; the value is a comma-separated list of the columns to include in the key, and <index_type> is one of (PRIMARY, COLUMNSTORE, UNIQUE, SHARD, KEY).

  • singlestore.loadDataCompression: Compress data on load; one of (GZip, LZ4, Skip). Default: GZip.

  • singlestore.metadata.allow: Allows or denies the use of an additional meta-table to save the recording results. Default: true.

  • singlestore.metadata.table: Specify the name of the table to save Kafka transaction metadata. Default: kafka_connect_transaction_metadata.

  • singlestore.tableName.<topicName>=<tableName>: Specify an explicit table name to use for the specified topic.

Example Configuration

You can see an example configuration here.
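
For reference, here is a minimal sketch of such a configuration in the JSON form accepted by the Kafka Connect REST API; the connector class name, endpoint, database, topic, and key settings are illustrative and should be adapted to your environment:

{
  "name" : "singlestore-sink-connector",
  "config" : {
    "connector.class" : "com.singlestore.kafka.SingleStoreSinkConnector",
    "tasks.max" : "1",
    "topics" : "kafka-example-topic",
    "connection.ddlEndpoint" : "master-agg.foo.internal:3308",
    "connection.database" : "example_db",
    "connection.user" : "root",
    "connection.password" : "",
    "tableKey.primary" : "id",
    "singlestore.metadata.allow" : "true"
  }
}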

Last modified: February 3, 2023
