Security and Permissions

SQL Permissions

The Spark user must have access to the master aggregator/SingleStoreDB cluster.

Additionally, SingleStoreDB has a Permissions Matrix which describes the permissions required to run each command.

To perform SQL operations through the SingleStore Spark Connector, you must have the appropriate permissions for each type of operation. The following matrix lists the minimum permissions required for common operations. The ALL PRIVILEGES permission allows you to perform any operation.

Operation                      Min. Permission           Alternative Permission
READ from collection           SELECT                    ALL PRIVILEGES
WRITE to collection            SELECT, INSERT            ALL PRIVILEGES
DROP database or collection    SELECT, INSERT, DROP      ALL PRIVILEGES
CREATE database or collection  SELECT, INSERT, CREATE    ALL PRIVILEGES
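For example, to grant a user the minimum permissions needed for WRITE operations, you could run a GRANT statement like the following sketch (the database name db1 and user spark_user are illustrative, not from this document):

```sql
-- Grant the minimum permissions for WRITE operations (SELECT, INSERT)
-- on all tables in db1 to spark_user. Names are illustrative.
GRANT SELECT, INSERT ON db1.* TO 'spark_user'@'%';
```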

SSL Support

The SingleStore Spark Connector uses the SingleStore JDBC Driver under the hood and thus supports SSL configuration out of the box.

Ensure that your SingleStoreDB cluster has SSL configured. See SSL Secure Connections for more information.

Once you have set up SSL on your server, use the following options to enable SSL:

spark.conf.set("spark.datasource.singlestore.useSSL", "true")
spark.conf.set("spark.datasource.singlestore.serverSslCert", "PATH/TO/CERT")

Note: The serverSslCert option may specify the server's certificate in DER form, or the server's CA certificate. It can be used in one of the following three forms:

  • Full path to certificate: serverSslCert=/path/to/cert.pem

  • Relative to current classpath: serverSslCert=classpath:relative/cert.pem

  • Verbatim DER-encoded certificate string: -----BEGIN CERTIFICATE-----...

Depending on your SSL configuration, set these additional options:

spark.conf.set("spark.datasource.singlestore.trustServerCertificate", "true")
spark.conf.set("spark.datasource.singlestore.disableSslHostnameVerification", "true")
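Putting these options together, a minimal sketch of an SSL-enabled session might look like the following (the certificate path is illustrative; adjust the options to match your cluster's SSL configuration):

```scala
// Illustrative SSL setup for the SingleStore Spark Connector.
// The certificate path below is an example, not a required location.
val spark = SparkSession.builder()
  .config("spark.datasource.singlestore.useSSL", "true")
  .config("spark.datasource.singlestore.serverSslCert", "/etc/ssl/certs/singlestore-ca.pem")
  .getOrCreate()
```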

See The SingleStore JDBC Driver for more information. If you are still using the MariaDB JDBC driver, see MariaDB JDBC Connector for more information.

Connect with a Kerberos-authenticated User

You can use the SingleStore Spark Connector with a Kerberized user without any additional configuration. To use a Kerberized user, configure the connector with a SingleStoreDB database user that is authenticated with Kerberos (via the user option). See Kerberos Authentication for information on how to configure SingleStoreDB users with Kerberos.

Here is an example of configuring the Spark connector globally with a Kerberized SingleStoreDB user named krb_user.

val spark = SparkSession.builder()
  .config("spark.datasource.singlestore.user", "krb_user")
  .getOrCreate()

You do not need to provide a password when configuring a Kerberized Spark Connector user. The connector driver (the SingleStore JDBC driver) authenticates the Kerberos user from the ticket cache using the provided username. Other than omitting the password, using a Kerberized user with the connector is the same as using a standard user. If you provide a password, it is ignored.

Authenticate via JWTs

To authenticate your connection to a SingleStoreDB cluster using the SingleStore Spark connector with a JWT, specify the following parameters:

  • credentialType=JWT

  • password=<jwt-token>
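As a sketch, these parameters can be set globally on the Spark session using the same option prefix shown elsewhere in this document (the user name and token value are placeholders, and SSL must be enabled as described below):

```scala
// Illustrative JWT configuration; the user and <jwt-token> are
// placeholders for your actual user and issued token.
val spark = SparkSession.builder()
  .config("spark.datasource.singlestore.useSSL", "true")
  .config("spark.datasource.singlestore.credentialType", "JWT")
  .config("spark.datasource.singlestore.user", "email@example.com")
  .config("spark.datasource.singlestore.password", "<jwt-token>")
  .getOrCreate()
```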

Note

To authenticate your connection to the SingleStoreDB cluster using JWTs, the SingleStoreDB user must connect via SSL and use JWT for authentication. To create a SingleStoreDB user that can authenticate with a JWT, execute the following command:

CREATE USER 'email@example.com'@'%' IDENTIFIED WITH authentication_jwt REQUIRE SSL;

Last modified: May 3, 2023
