Important

The SingleStore 9.1 release candidate (RC) gives you the opportunity to preview, evaluate, and provide feedback on new and upcoming features prior to their general availability. Until then, SingleStore 9.0 is recommended for production workloads; deployments can later be upgraded to SingleStore 9.1.

CREATE LINK

Creates a new connection link to S3, Azure, GCS, HDFS, HTTP, Kafka, MongoDB®, or MySQL for a permitted user.

Refer to ALTER LINK to modify an existing connection link.

Syntax

CREATE [OR REPLACE] LINK [IF NOT EXISTS] [db_name.]connection_name AS
{ S3 | AZURE | GCS | HDFS | HTTP | KAFKA | MONGODB | MYSQL }
CREDENTIALS 'credentials_json'
[ CONFIG 'configuration_json' ]
[ DESCRIPTION 'description' ]

Remarks

  • If the OR REPLACE clause is specified and a link named connection_name already exists, the CREATE query updates the link with the new definition (including its credentials) but preserves the state of any pipelines that use the link, including their cursor positions. The source type of an existing link cannot be modified using the CREATE OR REPLACE statement. For example, an existing S3 link pLink cannot be redefined as a Kafka link using CREATE OR REPLACE LINK pLink AS KAFKA...

  • The OR REPLACE and IF NOT EXISTS clauses are mutually exclusive; they cannot be used in the same CREATE LINK statement.

  • db_name is the name of the SingleStore database. It is an optional parameter. If not specified, the connection link is created in the current (context) database.

  • CONFIG specifies non-secret configuration settings, such as region and host. Sensitive secrets must not be stored in CONFIG. Store all passwords, tokens, and keys in CREDENTIALS instead. The contents of CONFIG may be displayed in metadata listings, such as SHOW LINKS and INFORMATION_SCHEMA.LINKS.

  • CONFIG and CREDENTIALS can be specified in either order (CONFIG followed by CREDENTIALS or CREDENTIALS followed by CONFIG).

  • The CONFIG and CREDENTIALS clauses expect JSON-formatted values. Refer to the Examples. For more configuration examples, refer to BACKUP DATABASE.

  • connection_name is a user-defined name for the connection link.

  • Only users with the CREATE LINK permission can create a connection link.

  • description specifies an optional description of the connection link.

  • Commands such as BACKUP, RESTORE, CREATE PIPELINE, and SELECT support connection links. Refer to any of these commands for details on the credentials_json and configuration_json clauses.

  • This command causes implicit commits. Refer to COMMIT for more information.

  • This command can be used to create regular and encrypted HTTP links. Refer to Using HTTP connection links for more information and examples.

  • For EKS IRSA support, refer to Enable EKS IRSA.

  • Refer to the Permission Matrix for the required permissions.
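
Because CONFIG and CREDENTIALS must be valid JSON embedded in a SQL string, it can help to build them programmatically and let a JSON serializer handle quoting and escaping. The following Python sketch is illustrative only (the key names follow the S3 example below; the link name and values are placeholders):

```python
import json

# Build the CREDENTIALS and CONFIG payloads as Python dicts, then
# serialize with json.dumps so the result is always well-formed JSON.
credentials = {
    "aws_access_key_id": "replace_with_your_access_key_id",
    "aws_secret_access_key": "replace_with_your_secret_access_key",
}
config = {"region": "us-east-1"}

# Assemble the CREATE LINK statement; the link name is a placeholder.
# Note: this simple f-string approach assumes the JSON contains no
# single quotes, which would need escaping inside a SQL string literal.
stmt = (
    "CREATE LINK Orderdb.product_S3 AS S3 "
    f"CREDENTIALS '{json.dumps(credentials)}' "
    f"CONFIG '{json.dumps(config)}'"
)
print(stmt)
```

The statement string can then be executed through any SingleStore client connection.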

Examples

AWS S3 Example

The configuration_json and credentials_json for AWS connections look like this:

CREDENTIALS '{
  "aws_access_key_id": "replace_with_your_access_key_id",
  "aws_secret_access_key": "replace_with_your_secret_access_key"
  [, "aws_session_token": "replace_with_your_temp_session_token"]
  [, "role_arn":"replace_with_your_role_arn"]
}'

CONFIG '{
  "region": "your_region",
  "endpoint_url": "http://other_endpoint"
  [,"x-amz-server-side-encryption":"<encryption_type>" [, "x-amz-server-side-encryption-aws-kms-key-id":"<optional_key>" ] |
  "x-amz-server-side-encryption-customer-algorithm":"<encryption_type>",
  "x-amz-server-side-encryption-customer-key":"<encrypted_or_unencrypted_key>",
  "x-amz-server-side-encryption-customer-key-MD5":"<key>"
  ]
}'

For other options for the CONFIG clause (disable_gunzip, request_payer, and others) see CREATE PIPELINE.

The following example demonstrates how to create an S3 connection link product_S3 to the database Orderdb.

CREATE LINK Orderdb.product_S3 AS S3
CREDENTIALS '{"aws_access_key_id":"your_access_key_id",
"aws_secret_access_key":"your_secret_access_key"}'
CONFIG '{"region":"us-east-1"}'
DESCRIPTION 'Products ordered in December';

Azure Example

The credentials_json for Azure looks like this:

CREDENTIALS '{
  "account_name": "your_account_name",
  "account_key": "encrypted_key"
}';

The following example shows how to create an Azure connection link.

CREATE LINK [db_name.]connection_name AS AZURE
CREDENTIALS '{
"account_name": "your_account",
"account_key": "encrypted_key"
}';

For more information about Azure credentials, see Azure Blob Pipeline Syntax.

GCS Example

The credentials_json for GCS looks like this:

CREDENTIALS '{
  "access_id": "your_google_access_key",
  "secret_key": "your_google_secret_key"
}'

The following example demonstrates how to create a GCS connection link called mylink.

CREATE LINK mylink AS GCS CREDENTIALS '{
"access_id": "your_google_access_key",
"secret_key": "your_google_secret_key"
}';

For more information about GCS credentials, see CREATE PIPELINE.

HDFS Example

Login to an HDFS cluster is performed via a keytab file, so credentials_json is typically left blank and configuration_json is used instead. The following example demonstrates how to create an HDFS link called mylink:

CREATE LINK mylink AS HDFS
CREDENTIALS ''
CONFIG '{
"hadoop.security.authentication": "kerberos",
"kerberos.user": "user@masked_host",
"kerberos.keytab": "/opt/software/adkeytabs/user.keytab",
"dfs.client.use.datanode.hostname": true,
"dfs.datanode.kerberos.principal": "hdfs/_HOST@masked_host",
"dfs.namenode.kerberos.principal": "hdfs/_HOST@masked_host"
}';

For more information about HDFS credentials, see Advanced HDFS Pipeline Mode.

HTTP Example

The following example demonstrates how to create an HTTP link.

CREATE LINK test AS HTTP CREDENTIALS '{"headers": {"Authorization": "Basic cm9vdDp0ZXN0aW5nCg=="}}';

Refer to Using HTTP connection links for more information and examples.
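
The Basic token in the Authorization header is the base64 encoding of user:password. The following Python sketch reproduces the token used in the example above (root:testing with a trailing newline, as produced by piping echo output through base64):

```python
import base64
import json

# Base64-encode "user:password" to produce an HTTP Basic auth token.
# The trailing newline matches tokens generated with `echo user:pass | base64`.
token = base64.b64encode(b"root:testing\n").decode("ascii")

# Embed the token in the CREDENTIALS JSON expected by an HTTP link.
credentials = json.dumps({"headers": {"Authorization": f"Basic {token}"}})
print(credentials)
```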

Kafka Example

The credentials_json and configuration_json for Kafka can be complex. See CREATE PIPELINE for more information.

The following example demonstrates how to create a Kafka connection link for Confluent Cloud.

CREATE LINK mylink AS KAFKA
CONFIG '{
"security.protocol": "sasl_ssl",
"sasl.mechanism": "PLAIN"
}'
CREDENTIALS '{
"sasl.username": "your_sasl_username",
"sasl.password": "your_sasl_secret_key"
}';

MongoDB® Example

Here's sample syntax for creating a link to a MongoDB® endpoint:

CREATE LINK <linkname> AS MONGODB
CONFIG '{
"mongodb.hosts":"<Hostname>",
"collection.include.list": "<Collection list>",
"mongodb.ssl.enabled":"true",
"mongodb.authsource":"admin"}'
CREDENTIALS '{
"mongodb.user":"<username>",
"mongodb.password":"<password>"}';

For more information, refer to Replicate Data from MongoDB®.

MySQL Example

Here's sample syntax for creating a link to a MySQL endpoint:

CREATE LINK <linkname> AS MYSQL
CONFIG '{
"database.hostname": "<Hostname>",
"database.exclude.list": "<database_list>",
"database.port": 3306,
"database.ssl.mode":"required"}'
CREDENTIALS '{
"database.password": "<password>",
"database.user": "<username>"}';

For more information, refer to Replicate Data from MySQL.

The following example demonstrates how to use the OR REPLACE clause to update an existing connection link.

Create an S3 connection link called analytics_link:

CREATE LINK analytics_link AS S3
CREDENTIALS '{"aws_access_key_id":"original_key_id",
"aws_secret_access_key":"original_secret_access_key"}'
CONFIG '{"region":"us-west-1"}'
DESCRIPTION 'Analytics data storage';

If the credentials or configuration need to be updated, use CREATE OR REPLACE to modify the link:

CREATE OR REPLACE LINK analytics_link AS S3
CREDENTIALS '{"aws_access_key_id":"updated_key_id",
"aws_secret_access_key":"updated_secret_access_key"}'
CONFIG '{"region":"us-east-1"}'
DESCRIPTION 'Updated analytics data storage in new region';

The CREATE OR REPLACE statement updates the credentials and configuration for analytics_link. Any pipelines or operations using this link continue to function with the updated connection details, and their cursor positions are preserved.

Last modified: March 9, 2026
