INSERT

Inserts data into a table.

Syntax

INSERT [IGNORE] [INTO] <table_name> [(<column_name>,...)]
    {VALUES | VALUE} (expr,...),(...),...
    [ ON DUPLICATE KEY UPDATE
      <column_name> = insert_expr
        [, <column_name> = insert_expr] ... ]

INSERT [IGNORE] [INTO] <table_name> [(<column_name>,...)]
    SELECT [WITH(force_random_reshuffle=1)] ...
    [ ON DUPLICATE KEY UPDATE
      <column_name> = insert_expr
        [, <column_name> = insert_expr] ... ]

INSERT [IGNORE] [INTO] <table_name>
    SET <column_name>=expr, ...
    [ ON DUPLICATE KEY UPDATE
      <column_name> = insert_expr
        [, <column_name>= insert_expr] ... ]

 insert_expr:
      expr
    | VALUES(<column_name>)

Remarks

  • A field of type TIMESTAMP or TIMESTAMP(6) declared with the DEFAULT <value> modifier will be set to <value> if no explicit value is specified. Refer to the discussion of these types in the Data Types topic for more details.

  • SingleStore supports constants, DEFAULT, and nullary built-in functions such as NOW(), RAND(), or UNIX_TIMESTAMP() as expressions (expr) in INSERT statements (see the Examples section below).

  • INSERT queries will fail if the maximum_table_memory limit has been reached.

    Refer to Addressing the Maximum Table Memory Error for more information.

  • INSERT IGNORE discards rows with duplicate keys for both single-row and multi-row inserts: SingleStore discards the records with duplicate keys and, without rolling back, continues inserting the records with unique keys (see the Examples section below). This can improve the performance of multi-row INSERTs because it avoids an extra round trip between the aggregators and leaves.

  • Multi-inserts are atomic. For a multi-insert transaction, either all of the rows are committed or the transaction is rolled back.

  • Multi-inserts may or may not have consecutive AUTO_INCREMENT values. See AUTO_INCREMENT behavior for more information.

  • When you use the IGNORE modifier, data conversion errors for a row being inserted are ignored, and the row will be inserted with default values for fields where the conversion failed.

  • If the ON DUPLICATE KEY UPDATE clause is specified, and a row is to be inserted that would result in a duplicate value in a PRIMARY KEY or UNIQUE index, SingleStore will instead perform an UPDATE of the old row.

  • The IGNORE modifier and ON DUPLICATE KEY UPDATE clause cannot be used in the same INSERT query, because their semantics contradict each other. For more information, see Query Errors.

  • When using ON DUPLICATE KEY UPDATE, the affected row count is 1 if a new row is inserted, 2 if an existing row is updated, and 0 if an existing row is left unchanged because its values already match. See the sketch in the On Duplicate Key Update section below.

  • ON DUPLICATE KEY UPDATE cannot update unique key columns in columnstore tables. For rowstore tables, ON DUPLICATE KEY UPDATE is allowed as long as the updated column is not part of the table's explicitly defined shard key, or of its primary key if no shard key is explicitly defined.

  • This command must be run on the master aggregator or a child aggregator node (see Cluster Management Commands).

    Note that when running this command on reference tables you must connect to the master aggregator.

  • INSERT ... SELECT into AUTO_INCREMENT columns can only be pushed down to leaves if the source column is also AUTO_INCREMENT, because AUTO_INCREMENT values are generated on the aggregator. This prevents insertion of 0 or NULL values in the AUTO_INCREMENT column, if the source column isn’t AUTO_INCREMENT. For more information, see AUTO_INCREMENT Behavior.

  • SingleStore supports INSERT ... SELECT across databases. The column data types that are being inserted must match the column data types of the selected table:

    INSERT INTO db2.t2 (col1, col2, ...) SELECT col1, col2,... FROM db1.t1;

  • Writing to multiple databases in a transaction is not supported.

  • Scalar expressions are not supported as part of the VALUES clause. For example, the following statement is not allowed:

     INSERT INTO t1 (col1, col2) VALUES ((SELECT col1 FROM t2 LIMIT 1), 1);

  • Refer to the Permission Matrix for the required permission.

force_random_reshuffle

To reduce skew, you can force an INSERT ... SELECT to redistribute the data with the force_random_reshuffle query hint. A common use case for this hint is on keylessly sharded tables, where it forces the selected rows to be redistributed across partitions rather than inserted locally into the same partition.

On Duplicate Key Update

ON DUPLICATE KEY UPDATE allows you to perform an upsert of a row. For more information, see the Performing Upserts topic.
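
A minimal sketch of this upsert behavior, and of the affected-row counts described in the Remarks, using a hypothetical table counters (the table and column names are illustrative):

CREATE TABLE counters(id INT PRIMARY KEY, hits INT);

INSERT INTO counters VALUES (1, 1)
    ON DUPLICATE KEY UPDATE hits = hits + 1;
-- 1 row affected: a new row is inserted

INSERT INTO counters VALUES (1, 1)
    ON DUPLICATE KEY UPDATE hits = hits + 1;
-- 2 rows affected: the existing row is updated, so hits becomes 2

INSERT INTO counters VALUES (1, 2)
    ON DUPLICATE KEY UPDATE hits = 2;
-- 0 rows affected: the existing row already has hits = 2, so nothing changes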

Examples

Simple Insert

INSERT INTO mytbl (v) VALUES ("hello"), ("goodbye");
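
The INSERT ... SET form shown in the Syntax section is an alternative way to write a single-row insert. As a sketch, the following is equivalent to inserting one of the rows above:

INSERT INTO mytbl SET v = "hello";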

This example shows a successful insert even when converting a NULL value to an int NOT NULL type, with the NULL replaced by 0:

CREATE TABLE mytbl2(a int not null);
INSERT IGNORE mytbl2 VALUES(null);
SELECT * FROM mytbl2;
+---+
| a |
+---+
| 0 |
+---+
1 row in set (0.24 sec)
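
INSERT IGNORE with a Duplicate Key

This sketch shows the duplicate-key behavior described in the Remarks; the table mytbl3 and its rows are illustrative:

CREATE TABLE mytbl3(id int primary key, v varchar(20));
INSERT INTO mytbl3 VALUES(1, "first");
INSERT IGNORE INTO mytbl3 VALUES(1, "duplicate"), (2, "second");
-- The (1, "duplicate") row is discarded; (2, "second") is still inserted
-- and the statement is not rolled back.
SELECT * FROM mytbl3 ORDER BY id;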

ON DUPLICATE KEY UPDATE and force_random_reshuffle

This example shows an INSERT ... SELECT that uses both ON DUPLICATE KEY UPDATE and force_random_reshuffle. The rows from mytbl_new are inserted into mytbl and distributed across partitions.

INSERT INTO mytbl (column1, column2, column3)
SELECT WITH(force_random_reshuffle=1)
* FROM mytbl_new ON DUPLICATE KEY UPDATE
column1 = VALUES(column1),
column2 = VALUES(column2),
column3 = VALUES(column3);
Query OK, 29 rows affected (0.18 sec)
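
DEFAULT Values and Nullary Built-Ins

This sketch illustrates the expression kinds listed in the Remarks (constants, DEFAULT, and nullary built-ins such as NOW()), as well as the declared DEFAULT of a TIMESTAMP column; the table events and its columns are illustrative:

CREATE TABLE events(
  id int,
  note varchar(32) DEFAULT 'none',
  created timestamp DEFAULT '2024-01-01 00:00:00'
);

-- A constant, an explicit DEFAULT, and a nullary built-in as insert expressions.
INSERT INTO events (id, note, created) VALUES (1, DEFAULT, NOW());

-- When created is omitted, it is set to its declared DEFAULT value.
INSERT INTO events (id, note) VALUES (2, 'no timestamp given');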
