Connect SingleStore Helios to AWS MSK using AWS PrivateLink
Overview
You can connect your AWS MSK (Managed Streaming for Apache Kafka) service to SingleStore Helios using AWS PrivateLink in either of the following ways:
- Create a separate AWS PrivateLink service for each Kafka broker.
- Use a single PrivateLink service with a single Network Load Balancer (NLB) that forwards traffic to each Kafka broker via custom ports.
Create a Separate AWS PrivateLink Service per Kafka Broker
Refer to Connect to Kafka Pipelines using an Outbound Endpoint for more information on setting up a separate AWS PrivateLink service per Kafka broker.
Use a Single PrivateLink Service with a Single NLB
To configure connectivity using a single PrivateLink service and NLB, perform the following steps:
- Set up a Kafka cluster in AWS MSK.
- Configure an endpoint service using a single NLB.
- Set up an outbound private connection in SingleStore Helios.
- Create a pipeline in SingleStore.
Set Up a Kafka Cluster in AWS MSK
To set up a Kafka cluster in AWS MSK, perform the following steps:
- Create a Kafka cluster in AWS MSK with the Provisioned cluster type.
- When configuring your AWS MSK cluster, select Availability Zones (AZs) that overlap with the AZs used by your SingleStore Helios deployment.
- Attach a security group that allows traffic from the internal IP range of the NLB that you will create later. Alternatively, you can allow traffic from all internal IP ranges.
- During cluster creation, enable SASL/SCRAM authentication for secure authentication.
- After the cluster is created (see the sketch after this list):
  - Add your SASL username and password as a secret in AWS Secrets Manager.
  - Encrypt the secret using a Customer Managed Key (CMK).
  - Attach the secret to the AWS MSK cluster to enable authentication for SingleStore.
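If you prefer to script this step, the following is a minimal sketch of storing the SASL/SCRAM credentials and attaching them to the cluster with boto3. The region, secret name, KMS key, cluster ARN, and credentials are placeholders rather than values from this guide; note that AWS MSK expects SCRAM secret names to begin with AmazonMSK_.

import boto3

# Placeholder region, names, ARNs, and credentials -- replace with your own values.
secrets = boto3.client("secretsmanager", region_name="us-east-1")
kafka = boto3.client("kafka", region_name="us-east-1")

# Store the SASL/SCRAM credentials, encrypted with your Customer Managed Key (CMK).
# AWS MSK expects SCRAM secret names to begin with "AmazonMSK_".
secret = secrets.create_secret(
    Name="AmazonMSK_singlestore_scram",
    KmsKeyId="<CMK-KEY-ARN>",
    SecretString='{"username": "<SASL-USERNAME>", "password": "<SASL-PASSWORD>"}',
)

# Attach the secret to the AWS MSK cluster so SASL/SCRAM clients (such as SingleStore) can authenticate.
kafka.batch_associate_scram_secret(
    ClusterArn="<MSK-CLUSTER-ARN>",
    SecretArnList=[secret["ARN"]],
)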
Configure an Endpoint Service using a Single NLB
To configure an endpoint service using a single NLB to manage connectivity to the Kafka brokers, perform the following steps:
- Create target groups for each Kafka broker.
  Note: Create one target group per broker.
  - Get the SASL/SCRAM endpoints from the Kafka cluster summary in View Client Information.
  - Retrieve the IP addresses of the Kafka brokers by selecting the gear icon in the AWS MSK console and enabling the Show IP addresses option.
- Create a Network Load Balancer.
  - Create a single NLB to handle traffic for all Kafka brokers using separate listener ports.
    Note: Only a single NLB is required for this private endpoint service setup, regardless of the number of brokers.
  - Create a separate listener for each target group. Assign a unique port to each listener. For example, use ports 6001, 6002, and 6003 for three Kafka brokers.
- Create an endpoint service pointing to the load balancer created in the previous step (see the sketch after this list).
- After creating the endpoint service, add the AWS account ID provided by SingleStore to the Allow principals list. This enables SingleStore to find and access the private endpoint service.
- Use the endpoint service created above to set up an outbound endpoint to SingleStore, which allows SingleStore to connect to your Kafka service. For more information, refer to Set up an Outbound Connection in SingleStore.
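For reference, the following is a minimal, illustrative boto3 sketch of the target groups, NLB, listeners, and endpoint service described above. The VPC, subnets, broker IP addresses, and the SingleStore account ID are placeholder assumptions; replace them with your own values.

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder values -- replace with your own VPC, subnets, and broker IPs.
VPC_ID = "<VPC-ID>"
SUBNET_IDS = ["<SUBNET-1>", "<SUBNET-2>", "<SUBNET-3>"]
BROKERS = {
    "<BROKER-1-IP>": 6001,  # broker 1 -> NLB listener port 6001
    "<BROKER-2-IP>": 6002,  # broker 2 -> NLB listener port 6002
    "<BROKER-3-IP>": 6003,  # broker 3 -> NLB listener port 6003
}

# A single internal NLB handles traffic for all brokers.
nlb = elbv2.create_load_balancer(
    Name="msk-privatelink-nlb",
    Type="network",
    Scheme="internal",
    Subnets=SUBNET_IDS,
)["LoadBalancers"][0]

for i, (broker_ip, listener_port) in enumerate(BROKERS.items(), start=1):
    # One target group per broker, targeting the broker's SASL/SCRAM port (9096).
    tg = elbv2.create_target_group(
        Name=f"msk-broker-{i}",
        Protocol="TCP",
        Port=9096,
        VpcId=VPC_ID,
        TargetType="ip",
    )["TargetGroups"][0]
    elbv2.register_targets(
        TargetGroupArn=tg["TargetGroupArn"],
        Targets=[{"Id": broker_ip, "Port": 9096}],
    )
    # One listener per target group, each on its own unique port.
    elbv2.create_listener(
        LoadBalancerArn=nlb["LoadBalancerArn"],
        Protocol="TCP",
        Port=listener_port,
        DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
    )

# Expose the NLB as a PrivateLink endpoint service and allow the AWS account provided by SingleStore.
svc = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[nlb["LoadBalancerArn"]],
    AcceptanceRequired=False,
)["ServiceConfiguration"]
ec2.modify_vpc_endpoint_service_permissions(
    ServiceId=svc["ServiceId"],
    AddAllowedPrincipals=["arn:aws:iam::<SINGLESTORE-ACCOUNT-ID>:root"],
)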
Set Up an Outbound Connection in SingleStore
Refer to Connect to SingleStore Helios using AWS PrivateLink for more information on configuring the outbound private connection in SingleStore Helios.
Create a Pipeline in SingleStore
Using the outbound private connection created earlier, you can create a pipeline in SingleStore.
CREATE OR REPLACE PIPELINE KAFKA_MUNIS_POC_PIPELINE AS
LOAD DATA KAFKA 'b-2.testvibhor3.c7wkh8.c10.kafka.us-east-1.amazonaws.com:9096,b-3.testvibhor3.c7wkh8.c10.kafka.us-east-1.amazonaws.com:9096,b-1.testvibhor3.c7wkh8.c10.kafka.us-east-1.amazonaws.com:9096/test'
CONFIG '{"spoof.dns": {
    "b-1.testvibhor3.c7wkh8.c10.kafka.us-east-1.amazonaws.com:9096": "<VPC ENDPOINT>:<NLB LISTENER PORT1>",
    "b-2.testvibhor3.c7wkh8.c10.kafka.us-east-1.amazonaws.com:9096": "<VPC ENDPOINT>:<NLB LISTENER PORT2>",
    "b-3.testvibhor3.c7wkh8.c10.kafka.us-east-1.amazonaws.com:9096": "<VPC ENDPOINT>:<NLB LISTENER PORT3>"
  },
  "sasl.username": "<REDACTED>",
  "sasl.mechanism": "SCRAM-SHA-512",
  "security.protocol": "SASL_SSL",
  "ssl.ca.location": "/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem"}'
CREDENTIALS '{"sasl.password": "<REDACTED>"}'
DISABLE OUT_OF_ORDER OPTIMIZATION
SKIP DUPLICATE KEY ERRORS
INTO TABLE kafkatest;
Ensure the following configuration details are applied:
- In the LOAD DATA section, use the appropriate Kafka port for your connection method, e.g., 9096 in this case for SASL/SCRAM.
- In the spoof.dns section, define mappings that connect each Kafka broker to the corresponding outbound PrivateLink endpoint (see the sketch after this list).
- Each broker's hostname is mapped to the correct NLB listener port (e.g., 6001, 6002, 6003).
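If the cluster has many brokers, the spoof.dns value can be generated instead of written by hand. The following is a small illustrative sketch; the VPC endpoint name and listener ports are placeholders, and each broker must be paired with the listener port whose target group actually forwards to that broker.

import json

# Broker host:port pairs as shown in View Client Information.
brokers = [
    "b-1.testvibhor3.c7wkh8.c10.kafka.us-east-1.amazonaws.com:9096",
    "b-2.testvibhor3.c7wkh8.c10.kafka.us-east-1.amazonaws.com:9096",
    "b-3.testvibhor3.c7wkh8.c10.kafka.us-east-1.amazonaws.com:9096",
]
vpc_endpoint = "<VPC ENDPOINT>"
# Pair each broker with the NLB listener port whose target group forwards to that broker.
listener_ports = [6001, 6002, 6003]

spoof_dns = {broker: f"{vpc_endpoint}:{port}" for broker, port in zip(brokers, listener_ports)}

# Paste the resulting object into the pipeline's CONFIG JSON under "spoof.dns".
print(json.dumps({"spoof.dns": spoof_dns}, indent=2))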
Example
The following example shows how a pipeline securely ingests data from Kafka brokers over AWS PrivateLink into a kafkatest table in SingleStore using SASL/SCRAM authentication, spoof.dns mapping, and SSL encryption.
CREATE OR REPLACE PIPELINE KAFKA_MUNIS_POC_PIPELINE AS
LOAD DATA KAFKA 'b-2.testvibhor3.c7wkh8.c10.kafka.us-east-1.amazonaws.com:9096,b-3.testvibhor3.c7wkh8.c10.kafka.us-east-1.amazonaws.com:9096,b-1.testvibhor3.c7wkh8.c10.kafka.us-east-1.amazonaws.com:9096/test'
CONFIG '{"spoof.dns": {
    "b-1.testvibhor3.c7wkh8.c10.kafka.us-east-1.amazonaws.com:9096": "vpce-0af45e49ab211df13-67n9i1aw-us-east-1c.vpce-svc-02e49c31898768e9c.us-east-1.vpce.amazonaws.com:6003",
    "b-2.testvibhor3.c7wkh8.c10.kafka.us-east-1.amazonaws.com:9096": "vpce-0af45e49ab211df13-67n9i1aw-us-east-1a.vpce-svc-02e49c31898768e9c.us-east-1.vpce.amazonaws.com:6001",
    "b-3.testvibhor3.c7wkh8.c10.kafka.us-east-1.amazonaws.com:9096": "vpce-0af45e49ab211df13-67n9i1aw-us-east-1b.vpce-svc-02e49c31898768e9c.us-east-1.vpce.amazonaws.com:6002"
  },
  "sasl.username": "<REDACTED>",
  "sasl.mechanism": "SCRAM-SHA-512",
  "security.protocol": "SASL_SSL",
  "ssl.ca.location": "/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem"}'
CREDENTIALS '{"sasl.password": "<REDACTED>"}'
DISABLE OUT_OF_ORDER OPTIMIZATION
SKIP DUPLICATE KEY ERRORS
INTO TABLE kafkatest;
Note
The broker hostnames and ports used in the Kafka URI must exactly match those specified in the spoof.dns mapping.
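Once the pipeline is created, start it and confirm that it is running. The following is a minimal sketch, assuming the singlestoredb Python client and placeholder connection details for your workspace.

import singlestoredb as s2

# Placeholder connection details -- replace with your workspace endpoint and credentials.
conn = s2.connect(
    host="<WORKSPACE-HOST>",
    port=3306,
    user="<USER>",
    password="<PASSWORD>",
    database="<DATABASE>",
)

cur = conn.cursor()

# Start the pipeline created above so it begins consuming from the Kafka brokers.
cur.execute("START PIPELINE KAFKA_MUNIS_POC_PIPELINE")

# Check the pipeline state reported by SingleStore.
cur.execute(
    "SELECT PIPELINE_NAME, STATE FROM information_schema.PIPELINES "
    "WHERE PIPELINE_NAME LIKE 'KAFKA_MUNIS_POC_PIPELINE'"
)
print(cur.fetchall())

conn.close()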
Last modified: July 24, 2025