Concurrent Multi-Insert Examples

To perform a trickle load into SingleStore, you can run concurrent processes that each load data in batches, using multi-row INSERT statements of up to several thousand rows each. When loading a large volume of data, inserting one row at a time is slow and resource-intensive because of per-statement overhead, so batching rows into each INSERT is preferred.
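The following Python sketch illustrates this pattern: several worker threads, each with its own connection, repeatedly issue multi-row INSERT statements through the MySQL-compatible `pymysql` driver. The table name `events`, its columns, and the connection settings are placeholders, not part of the examples referenced below.

```python
# Sketch of concurrent multi-row inserts. Assumes a table
# `events(id INT, payload TEXT)` exists and that `pymysql` is installed.
# Host, credentials, and database name are placeholders.
import pymysql
from concurrent.futures import ThreadPoolExecutor

BATCH_SIZE = 1000    # rows per INSERT statement
NUM_WORKERS = 8      # concurrent connections
TOTAL_ROWS = 100_000

def insert_batches(worker_id: int) -> None:
    # Each worker opens its own connection and issues multi-row
    # INSERT statements of BATCH_SIZE rows until its share is loaded.
    conn = pymysql.connect(host="127.0.0.1", port=3306,
                           user="admin", password="secret", database="demo")
    try:
        with conn.cursor() as cur:
            rows_per_worker = TOTAL_ROWS // NUM_WORKERS
            for start in range(0, rows_per_worker, BATCH_SIZE):
                batch = [
                    (worker_id * rows_per_worker + start + i, f"payload-{i}")
                    for i in range(BATCH_SIZE)
                ]
                # executemany() sends the batch as a single multi-row INSERT
                cur.executemany(
                    "INSERT INTO events (id, payload) VALUES (%s, %s)", batch)
                conn.commit()
    finally:
        conn.close()

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=NUM_WORKERS) as pool:
        list(pool.map(insert_batches, range(NUM_WORKERS)))
```

Tune `BATCH_SIZE` and `NUM_WORKERS` to your workload; larger batches reduce per-statement overhead, while more workers increase concurrency up to the limits of your cluster and client machine.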

For bulk loading into SingleStore, you can use the LOAD DATA command or pipelines. LOAD DATA works well for loading a small number of moderately sized files in a one-time step before you begin working with the data. For very large data sets where parallel loading is important, pipelines are preferred; pipelines can also load continuously as new files or messages arrive.
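For comparison, here is a minimal sketch of a one-time bulk load issued through the same driver. The CSV path, table name, and connection settings are placeholders; a continuous load would instead be set up with CREATE PIPELINE and START PIPELINE.

```python
# Sketch of a one-time bulk load with LOAD DATA LOCAL INFILE.
# Assumes a client-side CSV file and a table `events(id INT, payload TEXT)`.
# File path, table name, and connection settings are placeholders.
import pymysql

conn = pymysql.connect(host="127.0.0.1", port=3306,
                       user="admin", password="secret", database="demo",
                       local_infile=True)  # allow LOAD DATA LOCAL INFILE
try:
    with conn.cursor() as cur:
        cur.execute(r"""
            LOAD DATA LOCAL INFILE '/tmp/events.csv'
            INTO TABLE events
            FIELDS TERMINATED BY ','
            LINES TERMINATED BY '\n'
        """)
    conn.commit()
finally:
    conn.close()
```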

The following examples demonstrate how to perform concurrent multi-inserts using different tools and languages.
