Security and Permissions

SQL Permissions

The Spark user must have access to the Master Aggregator of the SingleStore cluster.

Additionally, SingleStore has a Permissions Matrix which describes the permissions required to run each command.

To perform SQL operations through the SingleStore Spark Connector, you need specific permissions for each type of operation. The following matrix describes the minimum permissions required to perform each operation. The ALL PRIVILEGES permission allows you to perform any operation.

Operation                        Min. Permission           Alternative Permission

READ from collection             SELECT                    ALL PRIVILEGES

WRITE to collection              SELECT, INSERT            ALL PRIVILEGES

DROP database or collection      SELECT, INSERT, DROP      ALL PRIVILEGES

CREATE database or collection    SELECT, INSERT, CREATE    ALL PRIVILEGES
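For example, the minimum permissions for read and write workloads can be granted with standard GRANT statements. The database name example_db and user spark_user below are placeholders; substitute your own:

```sql
-- Minimum permission for a read-only Spark workload (hypothetical names)
GRANT SELECT ON example_db.* TO 'spark_user'@'%';

-- Writes additionally require INSERT
GRANT SELECT, INSERT ON example_db.* TO 'spark_user'@'%';
```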
SSL Support

The SingleStore Spark Connector uses the SingleStore JDBC Driver under the hood and thus supports SSL configuration out of the box.

Ensure that your SingleStore cluster has SSL configured. See SSL Secure Connections for more information.

Once you have set up SSL on your server, you can enable SSL in the connector by setting the following options:

spark.conf.set("spark.datasource.singlestore.useSSL", "true")
spark.conf.set("spark.datasource.singlestore.serverSslCert", "PATH/TO/CERT")

Note: The serverSslCert option may be set to the server’s certificate in DER form, or to the server’s CA certificate in one of the following three forms:

  • serverSslCert=/path/to/cert.pem (full path to certificate)

  • serverSslCert=classpath:relative/cert.pem (relative to current classpath)

  • or as a verbatim DER-encoded certificate string: -----BEGIN CERTIFICATE-----...

You may also want to set these additional options depending on your SSL configuration:

spark.conf.set("spark.datasource.singlestore.trustServerCertificate", "true")
spark.conf.set("spark.datasource.singlestore.disableSslHostnameVerification", "true")
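Putting these options together, a minimal SSL-enabled session and read might look like the following sketch. The endpoint, credentials, database, and table names are placeholders, and this assumes a running SingleStore cluster with SSL configured:

```python
from pyspark.sql import SparkSession

# Hypothetical connection details; replace with your own cluster settings.
spark = (
    SparkSession.builder
    .config("spark.datasource.singlestore.ddlEndpoint", "singlestore-host:3306")
    .config("spark.datasource.singlestore.user", "spark_user")
    .config("spark.datasource.singlestore.password", "secret")
    .config("spark.datasource.singlestore.useSSL", "true")
    .config("spark.datasource.singlestore.serverSslCert", "/path/to/cert.pem")
    .getOrCreate()
)

# Read a table over the SSL-enabled connection.
df = spark.read.format("singlestore").load("example_db.example_table")
```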

See The SingleStore JDBC Driver for more information. If you are still using the MariaDB JDBC driver, see MariaDB JDBC Connector for more information.

Connecting with a Kerberos-authenticated User

The SingleStore Spark Connector works with Kerberos-authenticated users without any additional connector-side configuration. To use a Kerberized user, configure the connector with a SingleStore database user that is authenticated with Kerberos (via the user option). See the Kerberos Authentication documentation to learn how to configure SingleStore users with Kerberos.

Here is an example of configuring the Spark connector globally with a Kerberized SingleStore user named krb_user.

spark = SparkSession.builder()
    .config("spark.datasource.singlestore.user", "krb_user")
    .getOrCreate()

You do not need to provide a password when configuring a Spark Connector user that is Kerberized. The connector driver (SingleStore JDBC driver) will be able to authenticate the Kerberos user from the cache by the provided username. Other than omitting a password with this configuration, using a Kerberized user with the connector is the same as using a standard user. Note that if you do provide a password, it will be ignored.
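Connector options can also be set per read instead of globally. A sketch with the same hypothetical krb_user and a placeholder table name, assuming a valid Kerberos ticket in the credential cache:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read as the Kerberized user; the SingleStore JDBC driver authenticates
# from the Kerberos ticket cache, so no password option is set.
df = (
    spark.read.format("singlestore")
    .option("user", "krb_user")
    .load("example_db.example_table")
)
```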