Connect to Kafka Pipelines using an Outbound Endpoint
Overview
Kafka pipelines in SingleStore can connect to externally hosted Kafka clusters using outbound private endpoints, even when the Kafka cluster is not exposed to the public internet.
To support this configuration, use the `spoof.dns` pipeline setting to map each broker hostname and port to the corresponding private endpoint.
Connect to Amazon MSK Kafka
There are two approaches to setting up outbound access to Amazon MSK:
- Single NLB and Private Endpoint for all Kafka brokers
- Separate PrivateLink per Kafka broker
Single PrivateLink and NLB
This configuration uses a single Network Load Balancer (NLB) with multiple listeners to forward traffic to each Kafka broker.
- Create one target group per broker. Each target group should:
  - Forward traffic to the broker's IP address.
  - Use the appropriate port (e.g., 6001, 6002, or 6003) based on the security protocol in use.
- Create a single Network Load Balancer (NLB). Only one NLB is required for this setup; it routes traffic to all Kafka brokers.
  - For each target group, create a listener on a unique port (e.g., 6001, 6002, or 6003) that forwards to that target group.
- Create an endpoint service that points to the load balancer created above.
- Add the AWS account shared by SingleStore to the allowed principals for the endpoint service.
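Under assumed placeholder IDs and a broker SASL/SCRAM listener on port 9096, the single-NLB setup above might be sketched with the AWS CLI as follows. This is illustrative only; all names, IDs, and ARNs in angle brackets are placeholders, not values from this guide:

```shell
# One target group per broker; target-type ip lets us register the broker's IP.
# The target group port should match the port the broker listens on for the
# chosen security protocol (9096 is an assumption for SASL/SCRAM).
aws elbv2 create-target-group \
  --name msk-broker-1 --protocol TCP --port 9096 \
  --vpc-id <vpc-id> --target-type ip
aws elbv2 register-targets \
  --target-group-arn <tg-1-arn> --targets Id=<broker-1-ip>

# A single internal NLB shared by all brokers.
aws elbv2 create-load-balancer \
  --name msk-nlb --type network --scheme internal \
  --subnets <subnet-1> <subnet-2> <subnet-3>

# One listener per broker, each on a unique port (repeat for 6002, 6003).
aws elbv2 create-listener \
  --load-balancer-arn <nlb-arn> --protocol TCP --port 6001 \
  --default-actions Type=forward,TargetGroupArn=<tg-1-arn>

# Endpoint service over the NLB, then allow SingleStore's AWS account.
aws ec2 create-vpc-endpoint-service-configuration \
  --network-load-balancer-arns <nlb-arn> --acceptance-required
aws ec2 modify-vpc-endpoint-service-permissions \
  --service-id <service-id> \
  --add-allowed-principals arn:aws:iam::<singlestore-account-id>:root
```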
Once the PrivateLink service has been created, use it to configure the outbound connection in SingleStore.
- Create an Outbound Private Endpoint
  - In the SingleStore Cloud Portal, navigate to the Firewall tab in your Helios workspace.
  - Create an Outbound Private Endpoint using the PrivateLink service created in the previous step.
- Create the Pipeline
  - After the endpoint is active, configure your Kafka pipeline to use it.
  - Use the appropriate port in the `LOAD DATA KAFKA` command based on the security protocol. For SASL/SCRAM, use port 9096.
  - In the `spoof.dns` section, map each broker's hostname and port to the outbound private endpoint and the corresponding NLB listener port (e.g., 6001, 6002, or 6003).

```sql
CREATE OR REPLACE PIPELINE kafka_munis_poc_pipeline AS
LOAD DATA KAFKA '<broker-1>:9096,<broker-2>:9096,<broker-3>:9096/<kafka-topic>'
CONFIG '{
  "spoof.dns": {
    "<broker-1>:9096": "<vpc-endpoint>:6001",
    "<broker-2>:9096": "<vpc-endpoint>:6002",
    "<broker-3>:9096": "<vpc-endpoint>:6003"
  },
  "sasl.username": "<REDACTED>",
  "sasl.mechanism": "SCRAM-SHA-512",
  "security.protocol": "SASL_SSL",
  "ssl.ca.location": "/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem"
}'
CREDENTIALS '{"sasl.password": "<REDACTED>"}'
DISABLE OUT_OF_ORDER OPTIMIZATION
SKIP DUPLICATE KEY ERRORS
INTO TABLE <table_name>;
```

The broker hostnames and ports in the KAFKA URI and the `spoof.dns` mapping must match exactly. You may use any port, as long as it is consistent across both settings.
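Because the KAFKA URI and the `spoof.dns` keys must match exactly, it can help to generate the mapping programmatically. The following is a hypothetical helper (the broker names, endpoint name, and ports are illustrative placeholders, not values from this guide):

```python
import json

def build_spoof_dns(brokers, endpoint, listener_ports, broker_port=9096):
    """Map each broker's host:port to the shared endpoint's listener port.

    The keys produced here are exactly the host:port strings that must
    appear in the LOAD DATA KAFKA URI, which keeps the two in sync.
    """
    if len(brokers) != len(listener_ports):
        raise ValueError("need one listener port per broker")
    return {
        f"{broker}:{broker_port}": f"{endpoint}:{port}"
        for broker, port in zip(brokers, listener_ports)
    }

# Placeholder broker hostnames and endpoint name for illustration only.
brokers = ["b-1.example.kafka.amazonaws.com", "b-2.example.kafka.amazonaws.com"]
mapping = build_spoof_dns(brokers, "vpce-0123-example", [6001, 6002])
print(json.dumps({"spoof.dns": mapping}, indent=2))
```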
Separate PrivateLink per Kafka Broker
This setup creates one NLB and one outbound endpoint per Kafka broker.
- Create one target group per broker. Each target group must:
  - Forward traffic to the broker's IP address.
  - Use the appropriate port (e.g., 6001, 6002, or 6003) based on the security protocol in use.
- For each Kafka broker, create an NLB associated with its corresponding target group.
  - Ensure that the NLB is deployed in the same availability zone (AZ) as the broker and its target group.
  - Configure a listener on the NLB using the appropriate port for your Kafka connection protocol (e.g., 6001, 6002, or 6003).
  - Each listener should forward traffic to the target group that routes to the specific broker's IP address.
- Create an endpoint service for each NLB, pointing to the load balancer created above.
- Add the AWS account shared by SingleStore to the allowed principals for each endpoint service.
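The per-broker variant repeats the same provisioning once for each broker. A hedged AWS CLI sketch follows; all angle-bracket values are placeholders, brokers are assumed to listen on 9096 (SASL/SCRAM), and each listener uses the same port that the `spoof.dns` values will reference (any port works if it is consistent):

```shell
# Illustrative only: one target group, NLB, and endpoint service per broker,
# each NLB placed in the broker's own subnet/AZ.
for i in 1 2 3; do
  tg_arn=$(aws elbv2 create-target-group \
    --name "msk-broker-$i" --protocol TCP --port 9096 \
    --vpc-id <vpc-id> --target-type ip \
    --query 'TargetGroups[0].TargetGroupArn' --output text)
  aws elbv2 register-targets \
    --target-group-arn "$tg_arn" --targets Id=<broker-ip>
  nlb_arn=$(aws elbv2 create-load-balancer \
    --name "msk-nlb-$i" --type network --scheme internal \
    --subnets <broker-subnet> \
    --query 'LoadBalancers[0].LoadBalancerArn' --output text)
  aws elbv2 create-listener \
    --load-balancer-arn "$nlb_arn" --protocol TCP --port 9096 \
    --default-actions "Type=forward,TargetGroupArn=$tg_arn"
  # One endpoint service per NLB; allow SingleStore's account afterwards
  # with modify-vpc-endpoint-service-permissions.
  aws ec2 create-vpc-endpoint-service-configuration \
    --network-load-balancer-arns "$nlb_arn" --acceptance-required
done
```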
- While outbound private endpoints can typically be created directly in the SingleStore Cloud Portal, the portal currently allows only one outbound endpoint per workspace. Because this configuration requires multiple endpoints, open a Support ticket to have SingleStore create the additional outbound endpoints. Provide the following:
  - Endpoint service ARNs
  - AWS availability zone IDs (e.g., use1-az1, use1-az2) of the subnets hosting the Kafka brokers
- Create the pipeline using the provisioned endpoints. SingleStore will provide the outbound private endpoint associated with each Kafka broker. Once these are available, configure the pipeline using the `spoof.dns` setting:

```sql
CREATE OR REPLACE PIPELINE kafka_munis_poc_pipeline AS
LOAD DATA KAFKA '<broker-1>:9096,<broker-2>:9096,<broker-3>:9096/<kafka-topic>'
CONFIG '{
  "spoof.dns": {
    "<broker-1>:9096": "<vpc-endpoint 1>:9096",
    "<broker-2>:9096": "<vpc-endpoint 2>:9096",
    "<broker-3>:9096": "<vpc-endpoint 3>:9096"
  },
  "sasl.username": "<REDACTED>",
  "sasl.mechanism": "SCRAM-SHA-512",
  "security.protocol": "SASL_SSL",
  "ssl.ca.location": "/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem"
}'
CREDENTIALS '{"sasl.password": "<REDACTED>"}'
DISABLE OUT_OF_ORDER OPTIMIZATION
SKIP DUPLICATE KEY ERRORS
INTO TABLE <table_name>;
```
Note
The connection should reference the actual Kafka broker endpoints. The `spoof.dns` configuration will then redirect those endpoints to the corresponding outbound private endpoints.
Connect to Confluent Kafka
Confluent Cloud does not expose individual broker hostnames, so the `spoof.dns` method is not applicable.
Prerequisites
Refer to the Confluent documentation for setup instructions.
Set Up Private DNS Access
Confluent Kafka can only be integrated by using a private DNS zone in SingleStore that resolves the Confluent Kafka domain.
To enable a custom domain, you must provision a dedicated Confluent cluster.
Open a Support ticket with SingleStore and provide the following:
- The custom domain name used in Confluent
- The PrivateLink service(s) created in Confluent
- The availability zones where the PrivateLink endpoints are hosted
SingleStore will create a Private DNS Zone.
Note
In the pipeline, you only need to reference the bootstrap server.
Last modified: July 21, 2025