Connection Pooling Support

The SingleStore Spark connector supports connection pooling using the Apache Commons DBCP library. It implements the following configuration parameters:



| Description | Default Value |
| --- | --- |
| The maximum number of active connections that can be allocated from the specified pool at the same time. A negative value indicates an unlimited number of active connections. | |
| The maximum number of connections that can remain idle in the pool without extra ones being released. A negative value indicates an unlimited number of idle connections. | |
| The maximum number of milliseconds that the pool waits (when there are no available connections) for a connection to be returned before throwing an exception. If set to -1, the pool waits indefinitely. | |
| The minimum amount of time a connection may sit idle in the pool before it is eligible for eviction by the idle object evictor (if any). | 1000 * 60 * 30 |
| The maximum lifetime of a connection (in ms), after which the connection fails the next activation, passivation, or validation test. If set to 0 or a negative number, the connection has an infinite lifetime. | |
| The number of milliseconds to sleep between runs of the idle object evictor thread. If set to 0 or a negative number, no idle object evictor thread is run. | |
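The first and third parameters above interact: when the active-connection limit is reached, a request for a new connection blocks for at most the maximum wait time before failing. A minimal, self-contained sketch of that semantics (hypothetical names; the real connector delegates this logic to DBCP's BasicDataSource, which throws an exception on timeout):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Illustrative sketch only, not the connector's implementation:
// a semaphore bounds the number of active connections (the "max active"
// parameter), and acquire() blocks for at most maxWaitMillis (the
// "max wait" parameter) before giving up.
class TinyPool {
    private final Semaphore permits;   // one permit per allowed active connection
    private final long maxWaitMillis;  // how long acquire() may block; < 0 means forever

    TinyPool(int maxActive, long maxWaitMillis) {
        this.permits = new Semaphore(maxActive);
        this.maxWaitMillis = maxWaitMillis;
    }

    // Returns true if a "connection" was allocated within maxWaitMillis.
    boolean acquire() {
        try {
            if (maxWaitMillis < 0) {   // negative value: wait indefinitely
                permits.acquire();
                return true;
            }
            return permits.tryAcquire(maxWaitMillis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    // Returning a connection frees a slot for a blocked caller.
    void release() {
        permits.release();
    }
}
```

With a limit of one active connection, a second `acquire()` times out instead of blocking forever; DBCP raises an exception in the equivalent situation.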


The Spark connector maintains a global map that contains a BasicDataSource object for each distinct set of connection parameters; each executor has its own map. When a new connection is requested, the connector retrieves the BasicDataSource object for the specified connection parameters and creates a connection using its getConnection method. When all the connections for a given BasicDataSource object are closed, the object is removed from the map. Spark creates a separate JVM for each application, so each application has its own connection pools on the driver and executors. When the application finishes, all of its pools are destroyed and their connections are closed.
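The per-executor map described above can be sketched with a map keyed by the connection-parameter set, where a pool is dropped once its last connection closes. This is a minimal illustration under assumed names (PoolRegistry, CountedPool), not the connector's actual classes:

```java
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: one pool (a DBCP BasicDataSource in the real
// connector) per distinct set of connection parameters, removed from the
// registry once its last connection is closed.
class PoolRegistry {
    // One entry per unique parameter set; Properties provides equals/hashCode.
    private static final Map<Properties, CountedPool> pools = new ConcurrentHashMap<>();

    // Reuses an existing pool for these parameters, or creates one atomically.
    static CountedPool get(Properties connParams) {
        return pools.computeIfAbsent(connParams, CountedPool::new);
    }

    static int size() {
        return pools.size();
    }

    // Counts open connections so the pool can be evicted when none remain.
    static class CountedPool {
        final Properties params;
        final AtomicInteger open = new AtomicInteger();

        CountedPool(Properties params) {
            this.params = params;
        }

        void openConnection() {
            open.incrementAndGet();
        }

        void closeConnection() {
            // Last connection closed: drop this pool from the registry,
            // mirroring the deletion behavior described in the text.
            if (open.decrementAndGet() == 0) {
                pools.remove(params, this);
            }
        }
    }
}
```

Because each Spark executor runs in its own JVM, each one holds an independent instance of such a registry, which matches the per-application pool isolation described above.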

See Connection Pool Options for a list of connection pool configuration parameters.