LOAD DATA
Important
SingleStore Helios only supports LOAD DATA with the LOCAL option. LOAD DATA LOCAL must be run from a SQL client running on a computer that can access your SingleStore Helios instance, such as the MySQL client (mysql-client) or the SingleStore client.
Important
SingleStore workspaces can be integrated with many third-party ETL and CDC tools.
Import data stored in a CSV, JSON, or Avro file into a SingleStore table (referred to as the destination table in this topic).
Remarks
The syntax and semantics of loading data from a CSV, JSON, or Avro file are detailed below.
REPLACE, SKIP CONSTRAINT ERRORS, and SKIP DUPLICATE KEY ERRORS are supported with non-CSV pipelines.
During the import of data stored in any of these files, you can optionally apply operations to the data as follows:
- Use the WHERE clause to do filtering on incoming data. Only rows that satisfy the expression in the WHERE clause will be loaded into SingleStore Helios. For an example of how to use the WHERE clause, see the examples section.
- Use the SET clause to set columns using specific values or expressions with variables. For example, if your input file has 9 columns but the table has a 10th column called foo, you can add SET foo=0 or SET foo=@myVariable. Note that column names may only be used on the left side of SET expressions.
- Use the CHARACTER SET clause to import files with any supported character set into SingleStore. For more information, see Character Encoding.
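For example, a minimal sketch that combines a column list with the SET and WHERE clauses, assuming a hypothetical table t(a INT, b INT, loaded_at DATETIME) and an input file data.csv whose rows contain two comma-separated values:
LOAD DATA INFILE 'data.csv'
INTO TABLE t
COLUMNS TERMINATED BY ','
(a, @b)
SET b = @b * 100,
    loaded_at = NOW()
WHERE a > 0;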
Refer to the Permission Matrix for the required permission.
Important
If a query uses @
in a LOAD DATA
statement, SingleStore Helios interprets it as a reference to a LOAD DATA
assignment to a variable, not as a reference to a user-defined variable.
The behavior of SingleStore Helios’s LOAD DATA
command has several functional differences from MySQL’s command:
- LOAD DATA will load the data into SingleStore Helios in parallel to maximize performance. This makes LOAD DATA in SingleStore Helios much faster on machines with a larger number of processors.
- LOAD DATA supports loading compressed .gz files.
- The only supported charset_name is utf8.
The mysqlimport utility can also be used to import data into SingleStore Helios. mysqlimport uses LOAD DATA internally.
SingleStore Helios stores information about errors encountered during each LOAD DATA operation, but the number of errors is limited to 1000 by default. Specify MAX_ERRORS at the end of the LOAD DATA statement to change this limit; the limit can be removed entirely by setting MAX_ERRORS to 0.
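For example, a statement like the following (hypothetical table and file names) raises the error limit and names an error handle that can later be used to query the error log:
LOAD DATA INFILE 'events.csv'
SKIP ALL ERRORS
INTO TABLE events
FIELDS TERMINATED BY ','
MAX_ERRORS 10000
ERRORS HANDLE 'events_load';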
You can also load data from Stage using the LOAD DATA
command.
Writing to multiple databases in a transaction is not supported.
CSV LOAD DATA
Syntax
LOAD DATA [LOCAL] INFILE '<file_name>'
[REPLACE | IGNORE | SKIP { ALL | CONSTRAINT | DUPLICATE KEY | PARSER } ERRORS]
INTO TABLE <table_name>
[CHARACTER SET <character_set_name>]
[{FIELDS | COLUMNS}
  [TERMINATED BY '<string>']
  [[OPTIONALLY] ENCLOSED BY '<char>']
  [ESCAPED BY '<char>']
]
[LINES
  [STARTING BY '<string>']
  [TERMINATED BY '<string>']
]
[TRAILING NULLCOLS]
[NULL DEFINED BY <string> [OPTIONALLY ENCLOSED]]
[IGNORE <number> LINES]
[ ({<column_name> | @<variable_name>}, ...) ]
[SET <column_name> = <expression>,...]
[WHERE <expression>,...]
[MAX_ERRORS <number>]
[ERRORS HANDLE <string>]
Remarks
- Error Logging and Error Handling are discussed at the end of this topic.
- To specify the compression type of an input file, use the COMPRESSION clause. See Handling Data Compression for more information.
- If a CSV file appears to have the incorrect number of fields in any line, you can use the SKIP PARSER ERRORS option to skip the line. LOAD DATA reports a warning for every line that is skipped. Important: Lines in a CSV file may appear to have the wrong number of fields if the FIELDS TERMINATED BY, FIELDS ENCLOSED BY, or ESCAPED BY clauses are incorrectly configured. If LOAD DATA incorrectly finds the start of the next line in a CSV file after a parser error, it may parse all the subsequent lines incorrectly. For these reasons, investigate the CSV input and the configuration settings mentioned above before using SKIP PARSER ERRORS.
- The SKIP ALL ERRORS option is inclusive of the SKIP PARSER ERRORS, SKIP DUPLICATE KEY ERRORS, and SKIP CONSTRAINT ERRORS options; that is, specifying the SKIP ALL ERRORS option in a LOAD DATA query applies the behavior of the other three options.
- The TERMINATED BY clause allows you to define field, column, and line delimiters so that the input data is interpreted and read correctly. For example, use the FIELDS TERMINATED BY ',' clause to load a CSV file where the fields are delimited by commas. Additionally, use the LINES TERMINATED BY '\r\n' clause if the lines in the CSV file are terminated by carriage return/newline pairs.
- The ENCLOSED BY or equivalent OPTIONALLY ENCLOSED BY clause allows you to specify a string that encloses the field values. For example, use the ENCLOSED BY '"' clause to load a CSV file where the fields are enclosed within double quotation marks. Note that LOAD DATA will still load a field value even if it is not enclosed.
- The ESCAPED BY clause allows you to specify the escape character. For example, if the input data contains special characters, you may need to escape those characters to avoid misinterpretation. You may also need to redefine the default escape character to load a data set that contains that character.
- Many characters can serve as the escape character. If the FIELDS ESCAPED BY clause is empty, character escape sequences have no effect.
- You can also load data from Stage using the LOAD DATA command. Refer to Stage for more information.
- The STARTING BY clause allows you to load only those lines of data that include a specified string (or prefix). While loading data, the STARTING BY clause skips the specified prefix and anything before it. It also skips the lines that do not contain the specified prefix. If no FIELDS or LINES clause is specified, then SingleStore uses the following defaults:
FIELDS TERMINATED BY '\t'
ENCLOSED BY ''
ESCAPED BY '\\'
LINES TERMINATED BY '\n'
STARTING BY ''
- The TRAILING NULLCOLS clause allows the input file to contain rows having fewer fields than the number of columns in the table. These missing fields must be trailing columns in the row; they are inserted as NULL values in the table. See Using the TRAILING NULLCOLS Clause.
- The NULL DEFINED BY <string_to_insert_as_null> clause inserts NULL field values in the table for fields in the input file having the value string_to_insert_as_null. The OPTIONALLY ENCLOSED option ensures that a quoted field is also treated as NULL, not an empty string. Refer to Using the NULL DEFINED BY Clause for more information. Note: If the string value 'NULL' is passed to a number-type column (for example, DECIMAL), it is parsed as a string and converted to 0. To insert NULL values instead, use the NULL DEFINED BY 'NULL' OPTIONALLY ENCLOSED clause. You can use the ENCLOSED BY clause in conjunction to specify the string that encloses the NULL values.
- The IGNORE <number> LINES clause ignores the specified number of lines from the beginning of the input file. For example, use IGNORE 1 LINES to skip the header line that contains the column names.
LOAD DATA from an AWS S3 Source
CSV files that are stored in an AWS S3 bucket can be loaded via a LOAD DATA query without a pipeline.
LOAD DATA S3 '<bucket name>'
CONFIG '{"region" : "<region_name>"}'
CREDENTIALS '{"aws_access_key_id" : "<key_id>",
"aws_secret_access_key": "<access_key>"}'
INTO TABLE <table_name>;
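For example, a filled-in sketch of the statement above, using hypothetical bucket, region, and table names (credentials are placeholders):
LOAD DATA S3 'my-bucket'
CONFIG '{"region" : "us-east-1"}'
CREDENTIALS '{"aws_access_key_id" : "<key_id>",
"aws_secret_access_key": "<access_key>"}'
INTO TABLE orders;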
Examples
Loading Data when the Order of the Columns in the Destination Table and Source File are Different
If the order of columns in the table is different from the order in the source file, you can name them explicitly.
LOAD DATA INFILE 'foo.tsv'INTO TABLE foo (fourth, third, second, first);
Skipping Columns in the Source File
You can skip columns in the source file using the @
sign.
LOAD DATA INFILE 'foo.tsv'INTO TABLE foo (bar, @, @, baz);
Specifying the Column Delimiter
The default column delimiter is the tab (\t) character, ASCII code 09. To specify a different delimiter, use the COLUMNS TERMINATED BY clause:
LOAD DATA INFILE 'foo.csv'INTO TABLE fooCOLUMNS TERMINATED BY ',';
In the following example, field and line delimiters are used to read a file that contains fields separated by commas and lines terminated by carriage return/newline pairs:
LOAD DATA INFILE 'foo.csv' INTO TABLE foo FIELDS TERMINATED BY ',' LINES TERMINATED BY '\r\n';
Source File with Unusual Column Separators
The following example demonstrates loading a file that has unusual column separators (|||
):
LOAD DATA INFILE 'foo.oddformat'INTO TABLE fooCOLUMNS TERMINATED BY '|||';
Loading Data from Multiple Files
Using globbing, you can load data from multiple files in a single LOAD DATA
query.
The following query loads data from all the .csv files with names starting with a digit:
LOAD DATA INFILE "[0-9]*.csv"INTO TABLE cust(ID,NAME,ORDERS);
The following query loads data from all the .csv files with filenames having four characters:
LOAD DATA INFILE "????.csv"INTO TABLE cust(ID,NAME,ORDERS);
The following query loads data from all the .csv files with filenames not starting with a number:
LOAD DATA INFILE "[!0-9]*.csv"INTO TABLE cust(ID,NAME,ORDERS);
Note
LOAD DATA LOCAL INFILE
does not support globbing.
LOAD DATA INFILE
supports globbing in filenames, but not in directory names.
CREATE PIPELINE contains a LOAD DATA clause. This LOAD DATA clause supports globbing, both in directory names and filenames.
Using the TRAILING NULLCOLS
Clause
The following example demonstrates how to use the TRAILING NULLCOLS clause using the file numbers.csv, with the following content:
1,2,3
4,5
6
Run the following commands:
CREATE TABLE foo(a INT, b INT, c INT);LOAD DATA INFILE 'numbers.csv' INTO TABLE foo COLUMNS TERMINATED BY ',' TRAILING NULLCOLS;SELECT * FROM foo;
+------+------+------+
| a | b | c |
+------+------+------+
| 1 | 2 | 3 |
| 4 | 5 | NULL |
| 6 | NULL | NULL |
+------+------+------+
Using the NULL DEFINED BY
Clause
The following example demonstrates how to use the NULL DEFINED BY clause using the data.csv file.
cat data.csv
DTB,'',25
SPD,,40
SELECT * FROM stockN;
+------+-------------+-------+
| ID | City | Count |
+------+-------------+-------+
| XCN | new york | 45 |
| ZDF | washington | 20 |
| XCN | chicago | 32 |
+------+-------------+-------+
The following query inserts the un-enclosed empty field as a NULL
value and the enclosed empty field as an empty string.
LOAD DATA INFILE '/data.csv'INTO TABLE stockNCOLUMNS TERMINATED BY ','OPTIONALLY ENCLOSED BY "'"NULL DEFINED BY '';SELECT * FROM stockN;
+------+-------------+-------+
| ID | City | Count |
+------+-------------+-------+
| XCN | new york | 45 |
| ZDF | washington | 20 |
| XCN | chicago | 32 |
| DTB | | 25 |
| SPD | NULL | 40 |
+------+-------------+-------+
If you add the OPTIONALLY ENCLOSED
option to the NULL DEFINED BY
clause in the query above, and run the following query instead, both the empty fields are inserted as a NULL
value:
LOAD DATA INFILE '/data.csv'INTO TABLE stockNCOLUMNS TERMINATED BY ','OPTIONALLY ENCLOSED BY "'"NULL DEFINED BY '' OPTIONALLY ENCLOSED;SELECT * FROM stockN;
+------+-------------+-------+
| ID | City | Count |
+------+-------------+-------+
| XCN | new york | 45 |
| ZDF | washington | 20 |
| XCN | chicago | 32 |
| DTB | NULL | 25 |
| SPD | NULL | 40 |
+------+-------------+-------+
Using the IGNORE LINES
Clause
In the following example, the IGNORE LINES
clause is used to skip the header line that contains column names in the source file:
LOAD DATA INFILE '/tmp/data.txt' INTO TABLE City IGNORE 1 LINES;
Using the ESCAPED BY
Clause
The following example demonstrates how to load data into the loadEsc table using the ESCAPED BY clause from the file contacts.csv, whose contents are shown below.
GALE\, ADAM, Brooklyn
FLETCHER\, RON, New York
WAKEFIELD\, CLARA, DC
DESC loadEsc;
+-------+-------------+------+------+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+-------------+------+------+---------+-------+
| Name | varchar(40) | YES | | NULL | |
| City | varchar(40) | YES | | NULL | |
+-------+-------------+------+------+---------+-------+
Execute the following query:
LOAD DATA INFILE '/contacts.csv'INTO TABLE loadEsc COLUMNS TERMINATED BY ',' ESCAPED BY '\\' ;SELECT * FROM loadEsc;
+-------------------+-----------+
| Name | City |
+-------------------+-----------+
| GALE, ADAM | Brooklyn |
| FLETCHER, RON | New York |
| WAKEFIELD, CLARA | DC |
+-------------------+-----------+
In this query, the \ character escapes the comma (,) between the first two fields of contacts.csv. \ (backslash) is the default escape character in a SQL query. \\ (double backslash) is used to escape the backslash itself inside the query.
Warning
If you (accidentally) escape the TERMINATED BY character in a file, the SQL query may return an error. For example, if you change the contents of contacts.csv to:
GALE\, ADAM\, Brooklyn
FLETCHER\, RON, New York
WAKEFIELD\, CLARA, DC
and then execute the following query
LOAD DATA INFILE '/contacts.csv'INTO TABLE loadEsc COLUMNS TERMINATED BY ',' ESCAPED BY '\\' ;
it returns the following error: ERROR 1261 (01000): Row 1 does not contain data for all columns. The \ (backslash) escapes both commas in the first row, so LOAD DATA perceives the first row as a single column.
Using the STARTING BY
Clause
The following example demonstrates how to skip the prefix ### in the stockUpd.txt data file using the STARTING BY clause.
cat stockUpd.txt
###1,"xcg",
3,"dfg"
new product###4,"rfk",5
LOAD DATA INFILE 'stockUpd.txt'INTO TABLE stockFIELDS TERMINATED BY ','LINES STARTING BY '###';SELECT * FROM stock;
+----+------+----------+
| ID | Code | Quantity |
+----+------+----------+
| 1 | xcg | 10 |
| 4 | rfk | 5 |
+----+------+----------+
In this example, the STARTING BY clause skips the prefix ### in the first and third lines and anything before it. The second line is skipped entirely because it does not contain the prefix ###.
Filtering out Rows from the Source File
You can also filter out unwanted rows using the WHERE
clause.
LOAD DATA INFILE 'foo.oddformat'INTO TABLE foo (bar, baz)WHERE bar = 5;
Filtering out and Transforming Rows From the Source File
Complex transformations can be performed in both the SET and WHERE clauses. For example, consider a source file that contains an EventDate field and an EventId field:
10-1-2016,1
4-15-2016,2
1-10-2017,3
4-10-2017,4
You want to only load the rows with a date that is within three months from a certain date, 10/15/2016, for instance.
CREATE TABLE foo (EventDate date, EventId int);
LOAD DATA INFILE 'date_event.csv'
INTO TABLE foo
FIELDS TERMINATED BY ','
(@EventDate, EventId)
SET EventDate = STR_TO_DATE(@EventDate, '%m-%d-%Y')
WHERE ABS(MONTHS_BETWEEN(EventDate, date('2016-10-15'))) < 3;
SELECT * FROM foo;
+------------+---------+
| EventDate | EventId |
+------------+---------+
| 2016-10-01 | 1 |
| 2017-01-10 | 3 |
+------------+---------+
While both column names and variables can be referenced in the WHERE clause, column names can only be assigned to in the SET clause. SELECT statements cannot be evaluated in these clauses.
Using REPLACE
This example uses the cust
table, which is defined as a columnstore table as follows:
CREATE TABLE cust(name VARCHAR(32), id INT(11), orders INT(11), SORT KEY(id), UNIQUE KEY(id) USING HASH, SHARD KEY(id));
Assume the directory /order_files has one file, orders.csv, which contains the following data:
Chris,7214,6
Elen,8301,4
Adam,3412,5
Rachel,9125,2
Susan,8301,7
George,3412,9
Create a LOAD DATA
statement with a REPLACE
clause:
LOAD DATA INFILE '/order_files/orders.csv' REPLACE INTO TABLE cust FIELDS TERMINATED BY ',';
As LOAD DATA ingests the data from orders.csv into the cust table, it encounters the fifth and sixth records in the file, which contain the duplicate keys 8301 and 3412. The existing rows in cust with those keys (inserted from the second and third records) are replaced with the fifth and sixth records.
SELECT * FROM cust ORDER BY name;
+--------+------+--------+
| name | id | orders |
+--------+------+--------+
| Chris | 7214 | 6 |
| George | 3412 | 9 |
| Rachel | 9125 | 2 |
| Susan | 8301 | 7 |
+--------+------+--------+
Note
If you want to see more examples of loading data with vectors, refer to How to Bulk Load Vectors.
Updating Duplicate Key Data
The following examples will show how to use the VALUES()
function and a SELECT
statement to update data when there are duplicate keys.
Using the VALUES() Function
Create a table:
CREATE TABLE orders(comp_name VARCHAR(32), comp_id INT(11), total_orders INT(11),SORT KEY(comp_id), UNIQUE KEY(comp_id) USING HASH, SHARD KEY(comp_id));
Add data using the VALUES() function; this will add the number of orders for duplicate keys.
INSERT INTO orders VALUES ('Feedfire',5246146,4),('Gabvine',4917885,8),('Devbug',5679096,12),('Zoomzone',6273216,0),('Browsecat',9803299,2),('Gabvine',4917885,2),('Devbug',5679096,7),('Feednation',7823499,4)ON DUPLICATE KEY UPDATE total_orders = VALUES(total_orders) + total_orders;
Verify the duplicate entries were added together.
SELECT * FROM orders;
+-----------+---------+--------------+
|comp_name | comp_id | total_orders |
+-----------+---------+--------------+
|Devbug | 5679096 | 19 |
|Feednation | 7823499 | 4 |
|Browsecat | 9803299 | 2 |
|Zoomzone | 6273216 | 0 |
|Gabvine | 4917885 | 10 |
|Feedfire | 5246146 | 4 |
+-----------+---------+--------------+
Using SELECT with ON DUPLICATE KEY UPDATE
The table in the previous example will be utilized along with a new table.
CREATE TABLE new_orders(comp_name VARCHAR(32), comp_id INT(11), total_orders INT(11),SORT KEY(comp_id), UNIQUE KEY(comp_id) USING HASH, SHARD KEY(comp_id));
Insert values into the newly created table.
INSERT INTO new_orders VALUES
('Skynoodle',9727555,4),
('Skynoodle',9727555,6),
('Tagchat',7124266,5),
('Zoomzone',6273216,0),
('Devpulse',6726155,1),
('Browsecat',9803299,3)
ON DUPLICATE KEY UPDATE total_orders = VALUES(total_orders) + total_orders;
Verify that the table does not have duplicate records.
SELECT * FROM new_orders;
+-----------+---------+--------------+
|comp_name | comp_id | total_orders |
+-----------+---------+--------------+
|Browsecat | 9803299 | 3|
|Devpulse |6726155 | 1|
|Zoomzone |6273216 | 0|
|Tagchat |7124266 | 5|
|Skynoodle |9727555 | 10|
+-----------+---------+--------------+
The following statement uses INSERT, SELECT, and ON DUPLICATE KEY DELETE to combine the data from both tables.
INSERT INTO orders (comp_name, comp_id, total_orders)SELECT * FROM new_ordersON DUPLICATE KEY DELETE WHEN VALUES(total_orders) = 0ELSE UPDATE comp_name = VALUES(comp_name),total_orders = VALUES(total_orders);
Verify that all records have been added to the orders table, that the duplicates were combined, and that any row with a zero in the total_orders column was deleted.
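For example, a verification query in the style of the earlier ones can be run (output not shown):
SELECT * FROM orders;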
Loading a Fixed Length File
This example demonstrates how to load the contents of the file fixed_length.csv, whose contents are shown below.
APE602020-06-01
TR 252019-08-07
HSW8 2019-10-11
YTR122020-09-02
LOAD DATA inserts each extracted row from fixed_length.csv into the table foo.
CREATE TABLE foo(a CHAR(3), b INT, c DATETIME);
Run the LOAD DATA
statement:
LOAD DATA INFILE '/fixed_length.csv'INTO TABLE foo (@current_row)SET a = TRIM(SUBSTR(@current_row,1,3)),b = TRIM(SUBSTR(@current_row,4,2)),c = TRIM(SUBSTR(@current_row,6,10));
SUBSTR() extracts a substring from a string, and TRIM() removes the padding (spaces in this case) from the beginning and the ending of a string. For example, when the LOAD DATA statement extracts the line HSW8 2019-10-11 in fixed_length.csv, it does the following to set b: it extracts, from HSW8 2019-10-11, the substring starting at position 4 having a length of 2, which is 8 followed by a space. It then trims the trailing space to yield 8.
Retrieve the data from foo
:
SELECT * from foo ORDER BY a;
+------+------+---------------------+
| a | b | c |
+------+------+---------------------+
| APE | 60 | 2020-06-01 00:00:00 |
| HSW | 8 | 2019-10-11 00:00:00 |
| TR | 25 | 2019-08-07 00:00:00 |
| YTR | 12 | 2020-09-02 00:00:00 |
+------+------+---------------------+
Loading Data using Hex Field Terminator Syntax
Loading data into a table via a pipeline can be performed using a hexadecimal field terminator.
Syntax
CREATE TABLE <table name>(a int, b int);
CREATE PIPELINE <pipeline name> AS
LOAD DATA S3 's3://<bucket name>/<file name>.csv'
CONFIG '{"region":"us-west-2"}'
CREDENTIALS '{"aws_access_key_id": "XXXXXXXXXXXXXXXXXX",
"aws_secret_access_key": "XXXXXXXXXXXXX"}'
INTO TABLE <table name>
(a, b) fields terminated by 0x2c;
START PIPELINE <pipeline name>;
SELECT * FROM <table name>;
+------+------+
| a | b |
+------+------+
| 1 | 2 |
+------+------+
JSON LOAD DATA
Syntax
LOAD DATA [LOCAL] INFILE 'file_name'
[REPLACE | SKIP { CONSTRAINT | DUPLICATE KEY } ERRORS]
INTO TABLE tbl_name
FORMAT JSON
subvalue_mapping
[SET col_name = expr,...]
[WHERE expr,...]
[MAX_ERRORS number]
[ERRORS HANDLE string]

subvalue_mapping:
( {col_name | @variable_name} <- subvalue_path [DEFAULT literal_expr], ...)

subvalue_path:
{% | [%::]ident [::ident ...]}
Semantics
Error Logging and Error Handling are discussed at the end of this topic.
Extract specified subvalues from each JSON value in file_name. Assign them to specified columns of a new row in tbl_name, or to variables used for a column assignment in a SET clause. If a subvalue can't be found in the input JSON, assign the DEFAULT clause literal instead. Discard rows that don't match the WHERE clause.
To specify the compression type of an input file, use the COMPRESSION
clause.
The file named by file_name must consist of concatenated UTF-8 encoded JSON values, optionally separated by whitespace.
Non-standard JSON values like NaN
, Infinity
, and -Infinity
must not occur in file_name.
If file_name ends in .gz or .lz4, it will be decompressed.
JSON LOAD DATA
supports a subset of the error recovery options allowed by CSV LOAD DATA
.
Like CSV LOAD DATA
, JSON LOAD DATA
allows you to use globbing to load data from multiple files.
Writing to multiple databases in a transaction is not supported.
Extracting JSON Values
subvalue_mapping specifies which subvalues are extracted and the column or variable to which each one is assigned.
LOAD DATA uses the ::-separated list of keys in a subvalue_path to perform successive key lookups in nested JSON objects, as if applying the :: SQL operator. Unlike the :: operator, subvalue_path may not be used to extract an element of a JSON array. % refers to the entire JSON value being processed. %:: may be omitted from paths which are otherwise non-empty.
If a path can’t be found in an input JSON value, then if the containing element of subvalue_mapping has a DEFAULT clause, its literal_expr will be assigned; otherwise, LOAD DATA will terminate with an error.
Path components containing whitespace or punctuation must be surrounded by backticks. For example, given an input object such as {"a.b":1}, the paths %::`a.b` and `a.b` will both extract 1.
Array elements may be indirectly extracted by applying a JSON_EXTRACT_<type> function in a SET clause.
Converting JSON Values
Before assignment or SET clause evaluation, the JSON value extracted according to a subvalue_path is converted to a binary collation SQL string whose value depends on the extracted JSON type as follows:
JSON Type | Converted Value
---|---
null | SQL NULL
true, false | "1"/"0"
number | Verbatim, from extracted string.
string | All JSON string escape sequences, including \u escape sequences, are converted to UTF-8.
object | Verbatim, from extracted string.
array | Verbatim, from extracted string.
Conversion is not recursive. For example, true is not converted to "1" when it is a subvalue of an object which is being extracted whole.
JSON LOAD DATA Examples
To use an ENCLOSED BY <char> as a terminating field, a TERMINATED BY clause is needed. An ENCLOSED BY <char> appearing within a field value can be duplicated, and it will be understood as a singular occurrence of the character.
If an ENCLOSED BY ""
is used, quotes are treated as follows:
-
"The ""NEW"" employee" → The "NEW" employee
-
The "NEW" employee → The "NEW" employee
-
The ""NEW"" employee → The ""NEW"" employee
Example 1
If example.json consists of:
{"a":{"b":1}, "c":null}{"a":{"b":2}, "d":null}
Then it can be loaded as follows:
CREATE TABLE t(a INT);LOAD DATA LOCAL INFILE "example.json" INTO TABLE t(a <- a::b) FORMAT JSON;SELECT * FROM t;
+------+
| a |
+------+
| 1 |
| 2 |
+------+
Example 2
If example2.json consists of:
{"b":true, "s":"A\u00AE\u0022A", "n":-1.4820790816978637e-25, "a":[1,2], "o":{"subobject":1}}{"b":false}"hello"
Then we can perform a more complicated LOAD DATA
:
CREATE TABLE t(b bool NOT NULL, s TEXT, n DOUBLE, a INT, o JSON NOT NULL, whole longblob);LOAD DATA LOCAL INFILE "example2.json" INTO TABLE t FORMAT JSON(b <- b default true,s <- s default NULL,n <- n default NULL,@avar <- a default NULL,o <- o default '{"subobject":"replaced"}',whole <- %)SET a = json_extract_double(@avar, 1)WHERE b = true;SELECT * FROM t;
+---+-------+-------------------------+------+--------------------------+-----------------------------------------------------------------------------------------------+
| b | s | n | a | o | whole |
+---+-------+-------------------------+------+--------------------------+-----------------------------------------------------------------------------------------------+
| 1 | A®"A | -1.4820790816978637e-25 | 2 | {"subobject":1} | {"b":true, "s":"A\u00AE\u0022A", "n":-1.4820790816978637e-25, "a":[1,2], "o":{"subobject":1}} |
| 1 | NULL | NULL | NULL | {"subobject":"replaced"} | hello |
+---+-------+-------------------------+------+--------------------------+-----------------------------------------------------------------------------------------------+
There are several things to note in the example above:
- true was converted to "1" for column b, but not for column whole. "1" was further converted to the BOOL value 1.
- The escapes "\u00AE" and "\u0022" were converted to UTF-8 for column s, but not for column whole. Note that whole would have become invalid JSON if we had translated "\u0022".
- The second row was discarded because it failed to match the WHERE clause.
- None of the paths in subvalue_mapping could be found in the third row, so DEFAULT literals like '{"subobject":"replaced"}' were assigned instead.
- We assigned a to an intermediate variable so that we could extract an array element in the SET clause.
- The top-level JSON values in example2.json were not all JSON objects. "hello" is a valid top-level JSON value.
Loading JSON Data from a CSV File
To use an ENCLOSED BY <char> as a terminating field, a TERMINATED BY clause is needed. An ENCLOSED BY <char> appearing within a field value can be duplicated, and it will be understood as a singular occurrence of the character.
If an ENCLOSED BY ""
is used, the quotes are treated as follows:
- "The ""New"" employee" → The "New" employee
- The "New" employee → The "New" employee
- The ""New"" employee → The ""New"" employee
Example 1
An ENCLOSED BY
clause is required when a csv file has a JSON column enclosed with double quotation marks (" ").
CREATE TABLE employees(emp_id int, data JSON);
csv file contents
emp_id,data
159,"{""name"": ""Damien Karras"", ""age"": 38, ""city"": ""New York""}"
LOAD DATA INFILE '/tmp/<file_name>.csv' INTO TABLE employeesFIELDS TERMINATED BY ','ENCLOSED BY '"'IGNORE 1 LINES;SELECT * FROM employees;
+--------+-----------------------------------------------------+
| emp_id | data |
+--------+-----------------------------------------------------+
| 159 | {"age":38,"city":"New York","name":"Damien Karras"} |
+--------+-----------------------------------------------------+
Example 2
An ESCAPED BY
clause is required when a character is specified as an escape character for a string.
csv file contents
emp_id,data
298,"{\"name\": \"Bill Denbrough\", \"age\": 25, \"city\": \"Bangor\"}"
LOAD DATA INFILE '/tmp/<file_name>.csv'
INTO TABLE employees
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
ESCAPED BY '\\'
IGNORE 1 LINES;
SELECT * FROM employees;
+--------+-----------------------------------------------------+
| emp_id | data |
+--------+-----------------------------------------------------+
| 298 | {"age":25,"city":"Bangor","name":"Bill Denbrough"} |
| 159 | {"age":38,"city":"New York","name":"Damien Karras"} |
+--------+-----------------------------------------------------+
Example 3
This example will fail as the JSON field in the csv file is not in the correct format.
csv file contents
emp_id,data
410,"{"name": "Annie Wilkes", "age": 45, "city":"Silver Creek"}"
LOAD DATA INFILE '/tmp/<file_name>.csv' INTO TABLE employeesFIELDS TERMINATED BY ','ENCLOSED BY '{'IGNORE 1 LINES;
ERROR 1262 (01000): Leaf Error (127.0.0.1:3307): Row 1 was truncated; it contained more data than there were input columns
Example 4
An ENCLOSED BY
clause is required when a csv file has a JSON column enclosed with curly brackets ({ }).
csv file contents
emp_id,data
089,{"name": "Wilbur Whateley","age": 62,"city": "Dunwich"}
LOAD DATA INFILE '/tmp/<file_name>.csv' INTO TABLE employeesFIELDS TERMINATED BY ','ENCLOSED BY '{'IGNORE 1 LINES;SELECT * FROM employees;
+--------+------------------------------------------------------+
| emp_id | data |
+--------+------------------------------------------------------+
| 298 | {"age":25,"city":"Bangor","name":"Bill Denbrough"} |
| 159 | {"age":38,"city":"New York","name":"Damien Karras"} |
| 89 | {"age":62,"city":"Dunwich","name":"Wilbur Whateley"} |
+--------+------------------------------------------------------+
LOAD DATA from an AWS S3 Source
JSON files that are stored in an AWS S3 bucket can be loaded via a LOAD DATA query without a pipeline.
LOAD DATA S3 '<bucket name>'
CONFIG '{"region" : "<region_name>"}'
CREDENTIALS '{"aws_access_key_id" : "<key_id>",
"aws_secret_access_key": "<access_key>"}'
INTO TABLE <table_name>;
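For example, a sketch that adds a subvalue mapping and the FORMAT JSON clause, using hypothetical bucket, table, and key names:
LOAD DATA S3 'my-bucket'
CONFIG '{"region" : "us-east-1"}'
CREDENTIALS '{"aws_access_key_id" : "<key_id>",
"aws_secret_access_key": "<access_key>"}'
INTO TABLE t
(a <- a::b)
FORMAT JSON;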
BSON LOAD DATA
The LOAD DATA command supports loading BSON data from files using the FORMAT BSON clause. The LOAD DATA ... FORMAT BSON SQL statement is similar to LOAD DATA ... FORMAT JSON, with the following exceptions:
- The FORMAT BSON clause does not support default values.
- The subvalue_mapping clause must be specified in the LOAD DATA ... FORMAT BSON SQL statement.
- The target columns in the subvalue_mapping clause must be BSON type columns. If the target columns are non-BSON type, they must be mapped to a user-defined variable and then assigned to the column using the SET clause.
Refer to JSON LOAD DATA for more information.
Syntax
LOAD DATA [LOCAL] INFILE 'file_name'
[REPLACE | SKIP { CONSTRAINT | DUPLICATE KEY } ERRORS]
INTO TABLE tbl_name
FORMAT BSON
subvalue_mapping
[SET col_name = expr,...]
[WHERE expr,...]
[MAX_ERRORS number]
[ERRORS HANDLE string]
subvalue_mapping:
( {col_name | @variable_name} <- subvalue_path, ...)
subvalue_path:
{% | [%::]ident [::ident ...]}
Loading BSON Data from a File
The following example restores a MongoDB® backup into SingleStore.
This example uses the following sample data set.
use dbm
db.bsonExport.insertMany( [
{ _id: 1, Code: "xv1f", Qty: 45 },
{ _id: 2, Code: "nm3w", Qty: 30 },
{ _id: 3, Code: "qoma", Qty: 20 },
{ _id: 4, Code: "hr3k", Qty: 15 } ] )
{ acknowledged: true,
insertedIds: { '0': 1, '1': 2, '2': 3, '3': 4 } }
Create a binary export of the MongoDB® data using the mongodump
tool:
mongodump --uri="mongodb://<username>:<password>@<mongodb-endpoint>:27017/?authMechanism=PLAIN&tls=true&loadBalanced=true" --db="dbm" --collection="bsonExport" --out="<path_to_output_directory>"
This command creates a bsonExport.bson file in the target output directory.
Create a table in your SingleStore database to store the BSON data:
CREATE TABLE bsonExport (_id BSON NOT NULL,_more BSON NOT NULL COMMENT 'KAI_MORE',`$_id` AS BSON_NORMALIZE_NO_ARRAY(`_id`) PERSISTED LONGBLOB COMMENT 'KAI_AUTO',SHARD KEY (`$_id`), PRIMARY KEY (`$_id`));
Load the bsonExport.bson file into SingleStore using the following command:
LOAD DATA INFILE '<path_to_output_directory>/bsonExport.bson'
INTO TABLE bsonExport FORMAT BSON
(_id <- %::_id, @V1 <- %)
SET _more = BSON_EXCLUDE_MASK(@V1,'{"_id":1}');
The BSON data has been ingested and is now stored in your SingleStore database.
SELECT _id:>JSON AS "_id", _more:>JSON AS "_more" FROM bsonExport;
+------+--------------------------+
| _id | _more |
+------+--------------------------+
| 4 | {"Code":"hr3k","Qty":15} |
| 3 | {"Code":"qoma","Qty":20} |
| 2 | {"Code":"nm3w","Qty":30} |
| 1 | {"Code":"xv1f","Qty":45} |
+------+--------------------------+
Avro LOAD DATA
Syntax for LOAD DATA Local Infile
LOAD DATA [LOCAL] INFILE 'file_name'
WHERE/SET/SKIP ERRORS
[REPLACE | SKIP { CONSTRAINT | DUPLICATE KEY } ERRORS]
INTO TABLE tbl_name
FORMAT AVRO SCHEMA REGISTRY {"IP" | "Hostname"}
subvalue_mapping
[SET col_name = expr,...]
[WHERE expr,...]
[MAX_ERRORS number]
[ERRORS HANDLE string]
[SCHEMA 'avro_schema']

subvalue_mapping:
( {col_name | @variable_name} <- subvalue_path, ...)

subvalue_path:
{% | [%::]ident [::ident ...]}
See the associated GitHub repo.
Syntax for LOAD DATA AWS S3 Source
Avro-formatted data stored in an AWS S3 bucket can use a LOAD DATA query without a pipeline.
LOAD DATA S3 '<bucket name>'
CONFIG '{"region" : "<region_name>"}'
CREDENTIALS '{"aws_access_key_id" : "<key_id>",
"aws_secret_access_key": "<access_key>"}'
INTO TABLE <table_name>
(`<col_a>` <- %,
`<col_b>` <- % DEFAULT NULL
) FORMAT AVRO;
This data can also be loaded from S3 with a connection link.
LOAD DATA LINK <link_name> '<bucket name>'
INTO TABLE <table_name>
(`<col_a>` <- %,
`<col_b>` <- % DEFAULT NULL
) FORMAT AVRO;
Semantics
Error Logging and Error Handling are discussed at the end of this topic.
LOAD DATA for Avro does not support file name globbing (for example: LOAD DATA INFILE '/data/nfs/gp1/*.avro' is not supported). LOAD DATA for Avro only supports loading a single file per statement.
Extract specified subvalues from each Avro value in file_name. Assign them to specified columns of a new row in tbl_name, or to variables used for a column assignment in a SET clause. Discard rows that don't match the WHERE clause.
To specify the compression type of an input file, use the COMPRESSION
clause.
Avro LOAD DATA
expects Avro data in one of two sub-formats
, depending on the SCHEMA
clause.
You can also load data from Stage using the LOAD DATA
command.
If no SCHEMA clause is provided, file_name must name an Avro Object Container File as described in version 1 of the Avro specification.
-
The compression codec of the file must be
null
. -
Array and map values must not have more than 16384 elements.
-
The type name of a
record
must not be used in a symbolicreference to previously defined name
in any of its fields.It may still be used in a symbolic reference outside the record definition, however. For example, self-referential schemas like the following are rejected by
LOAD DATA
:{"type": "record","name": "PseudoLinkedList","fields" : [{"name": "value", "type": "long"},{"name": "next", "type": ["null", "PseudoLinkedList"]}]}
If a SCHEMA clause is provided, the file must be a raw stream consisting of only the concatenated binary encodings of instances of avro_schema. avro_schema must be a SQL string containing a JSON Avro schema. The SCHEMA clause may only be used with raw stream files.
Warning
It’s an error to provide a SCHEMA
clause when loading an Object Container File because it contains metadata alongside the encoded values.
All optional Avro schema attributes except the namespace attribute are ignored. In particular, logicalType attributes are ignored.
If file_name ends in .gz or .lz4, it will be decompressed.
Avro LOAD DATA
supports a subset of the error recovery options allowed by CSV LOAD DATA
.
Writing to multiple databases in a transaction is not supported.
The SCHEMA REGISTRY {"IP" | "Hostname"}
option allows LOAD DATA
to pull the schema from a schema registry.
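A minimal sketch of this option, following the syntax above and using a hypothetical input file and registry hostname:
LOAD DATA INFILE 'data.avro'
INTO TABLE t
FORMAT AVRO SCHEMA REGISTRY "schema-registry.example.com"
( payload <- %::payload::string,
input_record <- % );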
Extracting Avro Values
subvalue_mapping specifies which subvalues are extracted and the column or variable to which each one is assigned.
LOAD DATA uses the ::-separated list of names in a subvalue_path to perform successive field name or union branch type name lookups in nested Avro records or unions. subvalue_path may not be used to extract elements of Avro arrays or maps. % refers to the entire Avro value being processed. %:: may be omitted from paths which are otherwise non-empty.
If a path can’t be found in an input Avro value, then: if a prefix of the path matches a record whose schema has no field matching the next name in the path, LOAD DATA will terminate with an error; if a prefix of the path matches a union whose schema has no branch matching the next name in the path, LOAD DATA will also terminate with an error; otherwise, the path names a union branch that was not selected in that value, null will be extracted instead, and LOAD DATA will continue.
Path components naming union branches must use the two-part fullname of the branch’s type if that type is in a namespace.
Path components containing whitespace or punctuation must be surrounded by backticks.
Array and map elements may be indirectly extracted by applying a JSON_EXTRACT_<type> function in a SET clause.
For example, consider two Avro records with the union schema:
["int",{ "type" : "record","name" : "a","namespace" : "n","fields" : [{ "name" : "f1","type" : "int" }]}]
The paths %::`n.a`::f1 and `n.a`::f1 will both extract 1 from an instance of this schema whose JSON encoding is {"n.a":{"f1":1}}.
They will extract null
from an instance whose encoding is {"int":2}
.
The paths %::int
and int
will extract 2
from the second instance and null
from the first.
Converting Avro Values
Before assignment or SET clause evaluation, the Avro value extracted according to a subvalue_path is converted to an unspecified SQL type which may be further explicitly or implicitly converted, as if from a SQL string whose value is as follows:
Avro Type | Converted Value
---|---
null | SQL NULL
boolean | "1"/"0"
int | The string representation of the value
long | The string representation of the value
float | SQL NULL if the value is not finite; otherwise, the string representation of the value
double | SQL NULL if the value is not finite; otherwise, the string representation of the value
enum | The string representation of the enum.
bytes | Verbatim, from input bytes
string | Verbatim, from input bytes
fixed | Verbatim, from input bytes
record | The JSON encoding of the value
map | The JSON encoding of the value
array | The JSON encoding of the value
union | The JSON encoding of the value
logicalType
attributes are ignored and have no effect on conversion.
Avro LOAD DATA Examples
Example 1
Consider an Avro Object Container File example.avro with the following schema:
{"type": "record","name": "data","fields": [{ "name": "id", "type": "long"},{ "name": "payload", "type": [ "null","string" ]}]}
example.avro contains three Avro values whose JSON encodings are:
{"id":1,"payload":{"string":"first"}}{"id":1,"payload":{"string":"second"}}{"id":1,"payload":null}
example.avro can be loaded as follows:
CREATE TABLE t(payload TEXT, input_record JSON);LOAD DATA LOCAL INFILE "example.avro"INTO TABLE tFORMAT AVRO( payload <- %::payload::string,input_record <- % );SELECT * FROM t;
+---------+----------------------------------------+
| payload | input_record |
+---------+----------------------------------------+
| first | {"id":1,"payload":{"string":"first"}} |
| second | {"id":1,"payload":{"string":"second"}} |
| NULL | {"id":1,"payload":null} |
+---------+----------------------------------------+
LOAD DATA was able to parse example.avro because Avro Object Container Files have a header which contains their schema.
Example 2
Consider a file named example.raw_avro, with the same values as example.avro from Example 1 but in the raw stream format. example.raw_avro consists of the binary encoded values and nothing else. Use the SCHEMA clause to tell LOAD DATA to expect a raw stream with the provided schema:
CREATE TABLE t(payload TEXT, input_record JSON);LOAD DATA LOCAL INFILE "example.raw_avro"INTO TABLE tFORMAT AVRO( payload <- %::payload::string,input_record <- % )schema'{"type": "record","name": "data","fields": [{ "name": "id", "type": "long"},{ "name": "payload", "type": [ "null", "string" ]}]}';SELECT * FROM t;
+---------+----------------------------------------+
| payload | input_record |
+---------+----------------------------------------+
| first | {"id":1,"payload":{"string":"first"}} |
| second | {"id":1,"payload":{"string":"second"}} |
| NULL | {"id":1,"payload":null} |
+---------+----------------------------------------+
Example 3
Consider an Object Container File example3.avro with a more complicated payload than Example 1.
{ "type": "record","namespace": "ns","name": "data","fields": [{ "name": "id", "type": "long" },{ "name": "payload", "type":[ "null",{ "type": "record","name": "payload_record","namespace": "ns","fields": [{ "name": "f_bytes", "type": "bytes"},{ "name": "f_string", "type": "string"},{ "name": "f_map", "type":{ "type": "map","values": { "type": "array","items": "int" }}}]}]}]}
The raw JSON encoding of the contents of this file can be seen in column c_whole_raw after the following LOAD DATA:
CREATE TABLE t (c_id bigint,c_bytes longblob,c_string longblob,c_array_second int,c_whole_raw longblob,c_whole_json json);LOAD DATA INFILE "example3.avro"INTO TABLE tFORMAT AVRO( c_id <- %::id,c_bytes <- %::payload::`ns.payload_record`::f_bytes,c_string <- %::payload::`ns.payload_record`::f_string,@v_map <- %::payload::`ns.payload_record`::f_map,c_whole_raw <- %,c_whole_json <- %)SET c_array_second = JSON_EXTRACT_JSON(@v_map, "a", 1);SELECT * FROM t;
*** 1. row ***
c_id: 1
c_bytes: NULL
c_string: NULL
c_array_second: NULL
c_whole_raw: {"id":1,"payload":null}
c_whole_json: {"id":1,"payload":null}
*** 2. row ***
c_id: 2
c_bytes: "A
c_string: "A
c_array_second: 2
c_whole_raw: {"id":2,"payload":{"ns.payload_record":{"f_bytes":"\u0022\u0041","f_string":"\"A","f_map":{"a":[1,2]}}}}
c_whole_json: {"id":2,"payload":{"ns.payload_record":{"f_bytes":"\"A","f_map":{"a":[1,2]},"f_string":"\"A"}}}
There are several things to note:
- We attempted to extract subvalues of the payload_record branch of the union-type payload field. Since that wasn't the selected member of the union in record 1, LOAD DATA assigned NULL to c_bytes and @v_map.
- We assigned the JSON encoding of f_map to @v_map and then performed JSON map and array lookups in the SET clause to ultimately extract 2.
- f_string and f_bytes had the same contents, but we can see how their different Avro types affected their JSON encodings and interacted with the SQL JSON type:
  - The JSON encoding of the Avro string value f_string, as seen in c_whole_raw, encodes special characters like " as the escape sequence \".
  - The JSON encoding of the Avro bytes value f_bytes, as seen in c_whole_raw, encodes every byte with a JSON escape.
  - When converting the JSON encoding of record 2 to the SQL JSON type while assigning to c_whole_json, LOAD DATA normalized both representations of the byte sequence "A to \"A.
Loading Parquet Data
The LOAD DATA command supports loading Parquet files from AWS S3 or from local files. Parquet data can also be loaded using the LOAD DATA clause in a CREATE PIPELINE statement.
Syntax for LOAD DATA AWS S3 or Local File Source
Parquet-formatted data stored in an AWS S3 bucket or the local filesystem can be loaded via a LOAD DATA query without a pipeline. Other LOAD DATA clauses (SET, WHERE, etc.) are also supported.
For S3:
LOAD DATA S3 '<bucket name>'
CONFIG '{"region" : "<region_name>"}'
CREDENTIALS '{"aws_access_key_id" : "<key_id>",
"aws_secret_access_key": "<access_key>"}'
INTO TABLE <table_name>
(`<col_a>` <- %,
`<col_b>` <- % DEFAULT NULL
) FORMAT PARQUET;
This data can also be loaded from S3 by using a connection link.
LOAD DATA LINK <link_name> '<bucket name>/<path>'
INTO TABLE <table_name>
(`<col_a>` <- %,
`<col_b>` <- % DEFAULT NULL
) FORMAT PARQUET;
For local file:
LOAD DATA INFILE '<path_to_file/file_name>'
INTO TABLE <table_name>
(val1 <- source1,
val2 <- source2
[ ... ])
[COMPRESSION { AUTO | NONE | LZ4 | GZIP }]
[ ... ]
FORMAT PARQUET;
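For example, a sketch that loads a local Parquet file into a two-column table (the file, table, and field names are hypothetical):
LOAD DATA INFILE '/data/orders.parquet'
INTO TABLE orders
(id <- id,
customer_name <- customer_name DEFAULT NULL)
FORMAT PARQUET;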
Handling Data Compression
The COMPRESSION
clause specifies how LOAD DATA
handles the compression of an input file.
Syntax for LOAD DATA Local Infile
LOAD DATA INFILE 'filename' COMPRESSION { AUTO | NONE | LZ4 | GZIP } INTO TABLE ...
LOAD DATA INFILE 'filename' INTO TABLE `tablename` COMPRESSION { AUTO | NONE | LZ4 | GZIP } ...
Arguments
- AUTO: This is the default setting; it tells LOAD DATA to identify the compression type from the input file's extension.
- NONE: Specifies that the input file is uncompressed.
- LZ4: Specifies that the input file is compressed with the LZ4 compression algorithm.
- GZIP: Specifies that the input file is compressed with the GZIP compression algorithm.
Remarks
- If COMPRESSION is set to NONE, LZ4, or GZIP, LOAD DATA will not use the extension of the input file to determine the type of compression. For example, if you load a file test.gz and specify the COMPRESSION as NONE, then LOAD DATA will handle test.gz as an uncompressed file.
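For example, to load a gzip-compressed CSV file explicitly (hypothetical file and table names):
LOAD DATA INFILE 'foo.csv.gz'
COMPRESSION GZIP
INTO TABLE foo
COLUMNS TERMINATED BY ',';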
LOCAL
LOCAL
affects the expected file location, the search behavior for relative path names, and Error Handling behavior.
When you specify LOCAL, the client reads file_name and sends it to the server. If file_name is a relative path, it is relative to the current working directory of the client.
When LOCAL
is not specified, the file is read by the server, and needs to be located on the related server host.
Because files need to be sent from the client to the server, specifying LOCAL can be slower. When LOCAL is not specified, the server needs access to the full data directory, meaning that any user who has the permissions to LOAD DATA or CREATE PIPELINE can read the directory. Refer to the FILE READ permission in the Permissions Matrix.
LOCAL
does not support globbing (such as using wildcards in directory or filenames).
An example of using LOCAL
follows:
LOAD DATA LOCAL INFILE '/example-directory/foo.csv'INTO TABLE fooCOLUMNS TERMINATED BY ',';
Error Logging
When you run the LOAD DATA command and use the ERRORS HANDLE clause, LOAD DATA logs errors to the information_schema.LOAD_DATA_ERRORS table. These are the errors that LOAD DATA encountered as it processed the input file.
See the next section for example data that LOAD DATA populates in the information_schema.LOAD_DATA_ERRORS table.
Use the CLEAR LOAD ERRORS command to remove errors from information_schema.LOAD_DATA_ERRORS.
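For example, after a load that was run with ERRORS HANDLE 'orders_errors' (as in the examples below), the logged errors can be inspected with a query like:
SELECT load_data_line_number, load_data_line, error_message
FROM information_schema.LOAD_DATA_ERRORS
WHERE handle = 'orders_errors'
ORDER BY load_data_line_number;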
Error Handling
LOAD DATA has several options to handle errors that it encounters as it processes the input file. When writing a LOAD DATA statement, you can decide which option to use.
- By default, LOAD DATA returns errors to the client application. Errors are returned one at a time.
- To ignore duplicate key/index value errors in the input file, use the REPLACE clause to replace existing rows with input rows. This clause first deletes the existing rows that have the same value for a primary key or unique index as the input rows, and then inserts the new row.
- To skip errors in the input file, use the SKIP ... ERRORS clause. Data in the erroneous lines will not be inserted into the destination table.
- To ignore errors in the input file, use the IGNORE clause. This clause replaces invalid values with their defaults, discards extra fields, or discards erroneous lines completely.
- When LOCAL is specified, duplicate-key and data interpretation errors do not stop the operation. When LOCAL is not specified, duplicate-key and data interpretation errors stop the operation.
Warning
In most cases, use SKIP ... ERRORS instead of IGNORE. If you use IGNORE without understanding how it behaves, LOAD DATA may produce unexpected results as it inserts data into the destination table.
The four error handling options are discussed in the following topics.
Default Error Handling
By default, LOAD DATA
returns errors to the client application.
Error Handling Example
Create a new table with a PRIMARY KEY
column:
CREATE TABLE orders(id BIGINT PRIMARY KEY,customer_id INT,item_description VARCHAR(255),order_time TIMESTAMP NOT NULL);
The following CSV file will be loaded into this table as orders.csv. It contains an extra column in line 2 and a duplicate primary key value 2 in line 4.
1,372,Apples,2016-05-09
3,307,Oranges,2016-07-31,1000
2,138,Pears,2016-07-14
2,236,Bananas,2016-06-23
Load the data into the table:
LOAD DATA INFILE 'orders.csv'INTO TABLE ordersFIELDS TERMINATED BY ',';
ERROR 1262 (01000): Row 2 was truncated; it contained more data than there were input columns
After removing the extra column from row 2:
ERROR 1062 (23000): Leaf Error (127.0.0.1:3308): Duplicate entry '2' for key 'PRIMARY'
After removing the duplicate primary key entry, the LOAD DATA
statement is successful and the input file is loaded into the table.
REPLACE
Error Handling
SingleStore Helios’s REPLACE
behavior allows you to replace the existing rows with the new rows; only those rows that have the same value for a primary key or unique index as the input rows are replaced.
LOAD DATA inserts source file rows into the destination table in the order in which the rows appear in the source file. When REPLACE is specified, source files that contain duplicate unique or primary key values will be handled in the following way:
-
If the destination table’s schema specifies a unique or primary key column, and
-
The source file contains a row with the same primary or unique key value as the destination table, then
-
The row in the destination table that has the same unique or primary key value as the row in the source file will be deleted and a new row from the source file that matches the primary key value will be inserted into the destination table.
Note: If the source file contains multiple rows with the same primary or unique key value as the destination table, then only the last row in the source file with the same primary or unique key value (as the destination table) replaces the existing row in the destination table.
Note: REPLACE cannot be combined with SKIP DUPLICATE KEY ERRORS. However, neither REPLACE nor SKIP DUPLICATE KEY ERRORS throws a duplicate key error; REPLACE replaces the old row with the new row, while SKIP DUPLICATE KEY ERRORS discards the new row and retains the old row.
REPLACE
Error Handling Example
Create a new table with a PRIMARY KEY
column:
CREATE TABLE orders(id BIGINT PRIMARY KEY,customer_id INT,item_description VARCHAR(255),order_time DATETIME NOT NULL);
A row with a primary key 4
is inserted as follows:
INSERT INTO orders VALUES(4,236,"Bananas",2016-06-23);
The following CSV file is loaded into the table as orders.csv. It contains a duplicate primary key value 4 in line 2:
1,372,Apples,2016-05-09
4,138,Pears,2016-07-14
3,307,Oranges,2016-07-31
Load the data into the table:
LOAD DATA INFILE 'orders.csv'REPLACEINTO TABLE ordersFIELDS TERMINATED BY ','ERRORS HANDLE 'orders_errors';
Line 2 in the source file contained a duplicate primary key 4. The REPLACE error handler deletes the row 4,236,"Bananas",2016-06-23 in the destination table and replaces it with the value 4,138,Pears,2016-07-14 from the source file.
SKIP . . . ERRORS
Error Handling
SingleStore Helios's SKIP ... ERRORS behavior allows you to specify an error scenario that, when encountered, discards an offending row.
-
SKIP DUPLICATE KEY ERRORS
: Any row in the source data that contains a duplicate unique or primary key will be discarded.If the row contains invalid data other than a duplicate key, an error will be generated. See SKIP DUPLICATE KEY ERRORS below. -
SKIP CONSTRAINT ERRORS
: Inclusive ofSKIP DUPLICATE KEY ERRORS
.If a row violates a column’s NOT NULL
constraint, or the row contains invalid JSON or Geospatial values, the row will be discarded.If the row contains invalid data outside the scope of constraint or invalid value errors, an error will be generated. See SKIP CONSTRAINT ERRORS below. -
SKIP ALL ERRORS
: Inclusive ofSKIP DUPLICATE KEY ERRORS
andSKIP CONSTRAINT ERRORS
.Also includes any parsing errors in the row caused by issues such as an invalid number of fields. See SKIP ALL ERRORS below.
SKIP DUPLICATE KEY ERRORS
When SKIP DUPLICATE KEY ERRORS
is specified, source files that contain duplicate unique or primary key values will be handled in the following way:
-
If the destination table’s schema specifies a unique or primary key column, and
-
The source file contains one or more rows with a duplicate key value that already exists in the destination table or exists elsewhere in the source file, then
-
Every duplicate row in the source file will be discarded and will not be inserted into the destination table.
SKIP DUPLICATE KEY ERRORS
cannot be combined with REPLACE
.
SKIP DUPLICATE KEY ERRORS Example
Create a new table with a PRIMARY KEY
column:
CREATE TABLE orders(id BIGINT PRIMARY KEY,customer_id INT,item_description VARCHAR(255),order_time TIMESTAMP NOT NULL);
The following CSV file will be loaded into this table as orders.csv. It contains a duplicate primary key value 2 in line 3:
1,372,Apples,2016-05-09
2,138,Pears,2016-07-14
2,236,Bananas,2016-06-23
3,307,Oranges,2016-07-31
Load the data into the table:
LOAD DATA INFILE 'orders.csv'SKIP DUPLICATE KEY ERRORSINTO TABLE ordersFIELDS TERMINATED BY ','ERRORS HANDLE 'orders_errors';
Note that only 3 rows were inserted even though 4 rows were present in the source file. The error can be examined by querying the INFORMATION_SCHEMA.LOAD_DATA_ERRORS table:
SELECT load_data_line_number, load_data_line, error_messageFROM INFORMATION_SCHEMA.LOAD_DATA_ERRORSWHERE handle = 'orders_errors'ORDER BY load_data_line_number;
+-----------------------+---------------------------+--------------------------------+
| load_data_line_number | load_data_line | error_message |
+-----------------------+---------------------------+--------------------------------+
| 3 | 2,236,Bananas,2016-06-23 | Duplicate entry for unique key |
+-----------------------+---------------------------+--------------------------------+
SKIP CONSTRAINT ERRORS
SKIP CONSTRAINT ERRORS is inclusive of SKIP DUPLICATE KEY ERRORS if REPLACE is not specified. It also applies to rows that violate a column's NOT NULL constraint and fields that contain invalid JSON or Geospatial values, and handles the offending rows in the following ways:
NOT NULL Constraint
-
If a column in the destination table specifies a
NOT NULL
constraint, and -
The source file contains one or more rows with a null value for the constraint column, then
-
The offending row(s) will be discarded and will not be inserted into the destination table.
Invalid JSON or Geospatial Data
-
If a column in the destination table specifies a
JSON
,GEOGRAPHYPOINT
, orGEOGRAPHY
data type, and -
The source file contains one or more rows with invalid values for fields of these types, then
-
The offending row(s) will be discarded and will not be inserted into the destination table.
SKIP CONSTRAINT ERRORS
can also be combined with the REPLACE
clause.
SKIP CONSTRAINT ERRORS Example
Create a new table with a JSON
column type that also has a NOT NULL
constraint:
CREATE TABLE orders(id BIGINT PRIMARY KEY,customer_id INT,item_description VARCHAR(255),order_properties JSON NOT NULL);
The following CSV file will be loaded into this table as orders.csv. It contains an invalid JSON value in line 2 and a null value (\N) for JSON in line 4:
1,372,Apples,{"order-date":"2016-05-09"}
2,138,Pears,{"order-date"}
3,236,Bananas,{"order-date":"2016-06-23"}
4,307,Oranges,\N
Load the data into the table:
LOAD DATA INFILE 'orders.csv'SKIP CONSTRAINT ERRORSINTO TABLE ordersFIELDS TERMINATED BY ','ERRORS HANDLE 'orders_errors';
Note that only 2 rows were inserted even though 4 rows were present in the source file. The errors can be examined by querying the INFORMATION_SCHEMA.LOAD_DATA_ERRORS table:
SELECT load_data_line_number, load_data_line, error_messageFROM INFORMATION_SCHEMA.LOAD_DATA_ERRORSWHERE handle = 'orders_errors'ORDER BY load_data_line_number;
+-----------------------+-----------------------------+--------------------------------------------------------------+
| load_data_line_number | load_data_line | error_message |
+-----------------------+-----------------------------+--------------------------------------------------------------+
| 2 | 2,138,Pears,{"order-date"} | Invalid JSON value for column 'order_properties' |
| 4 | 4,307,Oranges,\N | NULL supplied to NOT NULL column 'order_properties' at row 4 |
+-----------------------+-----------------------------+--------------------------------------------------------------+
SKIP ALL ERRORS
SKIP ALL ERRORS
is inclusive of SKIP DUPLICATE KEY ERRORS
and SKIP CONSTRAINT ERRORS
in addition to any parsing error.
- If one or more rows in the source file cause duplicate key or constraint errors, or
- If one or more rows in the source file cause parsing errors such as invalid delimiters or an invalid number of fields, then
- The offending row(s) will be discarded and will not be inserted into the destination table.
SKIP ALL ERRORS
can also be combined with REPLACE
.
SKIP ALL ERRORS Example
Create a new table with a JSON
column type that also has a NOT NULL
constraint:
CREATE TABLE orders(id BIGINT PRIMARY KEY,customer_id INT,item_description VARCHAR(255),order_properties JSON NOT NULL);
The following CSV file will be loaded into this table as orders.csv. It contains the following errors:
-
Line 2 contains only 3 fields
-
Line 3 has a duplicate primary key
-
Line 4 has a null value for a
NOT NULL
constraint
1,372,Apples,{"order-date":"2016-05-09"}
2,138,Pears
1,236,Bananas,{"order-date":"2016-06-23"}
4,307,Oranges,\N
Load the data into the table:
LOAD DATA INFILE 'orders.csv'SKIP ALL ERRORSINTO TABLE ordersFIELDS TERMINATED BY ','ERRORS HANDLE 'orders_errors';
Only 1 row was written, despite the source file containing 4 rows; the other rows were discarded due to a parsing error, a duplicate key, and a violated NOT NULL constraint. The errors can be examined by querying the INFORMATION_SCHEMA.LOAD_DATA_ERRORS table:
SELECT load_data_line_number, load_data_line, error_messageFROM INFORMATION_SCHEMA.LOAD_DATA_ERRORSWHERE handle = 'orders_errors'ORDER BY load_data_line_number;
+-----------------------+--------------------------------------------+--------------------------------------------------------------+
| load_data_line_number | load_data_line | error_message |
+-----------------------+--------------------------------------------+--------------------------------------------------------------+
| 2 | 2,138,Pears | Row 2 doesn't contain data for all columns |
| 3 | 1,236,Bananas,{"order-date":"2016-06-23"}. | Duplicate entry for unique key |
| 4 | 4,307,Oranges,\N | NULL supplied to NOT NULL column 'order_properties' at row 4 |
+-----------------------+--------------------------------------------+--------------------------------------------------------------+
IGNORE
Error Handling
SingleStore Helios's IGNORE behavior is identical to MySQL's IGNORE behavior, and exists only to support backwards compatibility with applications written for MySQL. IGNORE either discards malformed rows, discards extra fields, or replaces invalid values with default data type values. If an error would have been raised when IGNORE was not specified, it will be converted to a warning instead.
Consequences of Using IGNORE Instead of SKIP ERRORS
Unlike SKIP ... ERRORS, which discards offending rows, IGNORE may change the inserted row's data to ensure that it adheres to the table schema.
In a best case scenario where a malformed row uses the proper delimiters and contains the correct number of fields, the row can be partially salvaged.
However, the worst case scenario can be severe.
Due to the potential consequences of using IGNORE
, in most cases SKIP .
is a better option.IGNORE
’s behavior for each error scenario, continue reading the sections below:
Duplicate Unique or Primary Key Values
When IGNORE
is specified, source files that contain duplicate unique or primary key values will be handled in the following way:
- If the destination table’s schema specifies a unique or primary key column, and
- The source file contains one or more rows with a duplicate key value that already exists in the destination table or exists elsewhere in the source file, then
- Every duplicate row in the source file will be discarded (ignored) and will not be inserted into the destination table.
Duplicate Unique or Primary Key Values Example
Create a new table with a PRIMARY KEY
column:
CREATE TABLE orders(
  id BIGINT PRIMARY KEY,
  customer_id INT,
  item_description VARCHAR(255),
  order_time DATETIME NOT NULL);
The following CSV file will be loaded into this table as orders.csv. Note the duplicate primary key value of 2 in line 3:
1,372,Apples,2016-05-09
2,138,Pears,2016-07-14
2,236,Bananas,2016-06-23
3,307,Oranges,2016-07-31
Load the data into the table:
LOAD DATA INFILE 'orders.csv'
IGNORE
INTO TABLE orders
FIELDS TERMINATED BY ','
ERRORS HANDLE 'orders_errors';
Note that only 3 rows were inserted even though 4 rows were present in the source file. To see which row was discarded and why, query the INFORMATION_SCHEMA.LOAD_DATA_ERRORS table:
SELECT load_data_line_number, load_data_line, error_message
FROM INFORMATION_SCHEMA.LOAD_DATA_ERRORS
WHERE handle = 'orders_errors'
ORDER BY load_data_line_number;
+-----------------------+---------------------------+--------------------------------+
| load_data_line_number | load_data_line | error_message |
+-----------------------+---------------------------+--------------------------------+
| 3 | 2,236,Bananas,2016-06-23 | Duplicate entry for unique key |
+-----------------------+---------------------------+--------------------------------+
Line 3 in the source file contained a duplicate primary key and was discarded because line 2 was inserted first.
Values with Invalid Types According to the Destination Table’s Schema
When IGNORE
is specified, source files that contain rows with invalid types that violate the destination table’s schema will be handled in the following way:
- If the source file contains one or more rows with values that do not adhere to the destination table’s schema,
- Each value of an invalid type in a row will be replaced with the default value of the appropriate type, and
- The modified row(s) will be inserted into the destination table.
IGNORE behaves in a potentially unexpected way for columns that have a DEFAULT value specified: the declared DEFAULT value is ignored, and the default value for the column’s data type is inserted instead.
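A sketch of this caveat (the table name orders_with_default and the DEFAULT value shown are hypothetical): if a row in the source file supplies \N for order_time and IGNORE is specified, the inserted row receives the DATETIME type default (0000-00-00 00:00:00) rather than the declared DEFAULT.
-- Hypothetical table; the declared DEFAULT is not applied by IGNORE
CREATE TABLE orders_with_default(
  id BIGINT PRIMARY KEY,
  customer_id INT,
  item_description VARCHAR(255),
  order_time DATETIME NOT NULL DEFAULT '2016-01-01 00:00:00');

LOAD DATA INFILE 'orders.csv'
IGNORE
INTO TABLE orders_with_default
FIELDS TERMINATED BY ','
ERRORS HANDLE 'orders_default_errors';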
Example
Create a new table with a PRIMARY KEY
column:
CREATE TABLE orders(
  id BIGINT PRIMARY KEY,
  customer_id INT,
  item_description VARCHAR(255),
  order_time DATETIME NOT NULL);
The following CSV file will be loaded into this table as orders.csv. Line 4 contains a NULL value for order_time, whereas the table schema does not allow NULL values for this field:
1,372,Apples,2016-05-09
2,138,Pears,2016-07-14
3,236,Bananas,2016-06-23
4,307,Oranges,\N
Load the data into the table:
LOAD DATA INFILE 'orders.csv'
IGNORE
INTO TABLE orders
FIELDS TERMINATED BY ','
ERRORS HANDLE 'orders_errors';
Note that 4 rows were inserted despite the fact that line 4 in the source file contained a null value for a NOT NULL column. To see the warning that was generated, query the INFORMATION_SCHEMA.LOAD_DATA_ERRORS table:
SELECT load_data_line_number, load_data_line, error_message
FROM INFORMATION_SCHEMA.LOAD_DATA_ERRORS
WHERE handle = 'orders_errors'
ORDER BY load_data_line_number;
+-----------------------+------------------+--------------------------------------------------------+
| load_data_line_number | load_data_line | error_message |
+-----------------------+------------------+--------------------------------------------------------+
| 4 | 4,307,Oranges,\N | NULL supplied to NOT NULL column 'order_time' at row 4 |
+-----------------------+------------------+--------------------------------------------------------+
To see what was inserted by replacing the invalid DATETIME
value with a default value, query the table:
SELECT * FROM orders ORDER BY 1;
+----+-------------+------------------+---------------------+
| id | customer_id | item_description | order_time |
+----+-------------+------------------+---------------------+
| 1 | 372 | Apples | 2016-05-09 00:00:00 |
| 2 | 138 | Pears | 2016-07-14 00:00:00 |
| 3 | 236 | Bananas | 2016-06-23 00:00:00 |
| 4 | 307 | Oranges | 0000-00-00 00:00:00 |
+----+-------------+------------------+---------------------+
In this example, the invalid null DATETIME
value was replaced with its default value: 0000-00-00 00:00:00
.
Rows That Contain an Invalid Number of Fields
When IGNORE
is specified, source files that contain rows with an invalid number of fields will be handled in one of two ways:
Too Few Fields
- If the source file contains one or more rows with too few fields according to the destination table’s schema,
- Each row’s empty field(s) will be updated with default values, and
- The row will be inserted into the destination table.
Too Many Fields
- If the source file contains one or more rows with too many fields according to the destination table’s schema,
- Each extra field in the row(s) will be discarded (ignored), and
- The row will be inserted into the destination table.
Example
Create a new table with a PRIMARY KEY
column:
CREATE TABLE orders(
  id BIGINT PRIMARY KEY,
  customer_id INT,
  item_description VARCHAR(255),
  order_time DATETIME NOT NULL);
The following CSV file will be loaded into this table as orders.csv. Note the following issues:
- Line 2 contains only 3 fields instead of 4 and does not have a DATETIME value
- Line 4 contains an extra field, for a total of 5
1,372,Apples,2016-05-09
2,138,Pears
3,236,Bananas,2016-06-23
4,307,Oranges,2016-07-31,Berries
Load the data into the table:
LOAD DATA INFILE 'orders.csv'
IGNORE
INTO TABLE orders
FIELDS TERMINATED BY ','
ERRORS HANDLE 'orders_errors';
Note that 4 rows were inserted despite the invalid number of fields in two of the rows. To see the warnings that were generated, query the INFORMATION_SCHEMA.LOAD_DATA_ERRORS table:
SELECT load_data_line_number, load_data_line, error_message
FROM INFORMATION_SCHEMA.LOAD_DATA_ERRORS
WHERE handle = 'orders_errors'
ORDER BY load_data_line_number;
+-----------------------+----------------------------------+---------------------------------------------------------------------------+
| load_data_line_number | load_data_line | error_message |
+-----------------------+----------------------------------+---------------------------------------------------------------------------+
| 2 | 2,138,Pears | Row 2 doesn't contain data for all columns |
| 4 | 4,307,Oranges,2016-07-31,Berries | Row 4 was truncated; it contained more data than there were input columns |
+-----------------------+----------------------------------+---------------------------------------------------------------------------+
Note that there is a warning for the missing value in row 2 and the extra value in row 4. Query the table to see what was inserted:
SELECT * FROM orders ORDER BY 1;
+----+-------------+------------------+---------------------+
| id | customer_id | item_description | order_time |
+----+-------------+------------------+---------------------+
| 1 | 372 | Apples | 2016-05-09 00:00:00 |
| 2 | 138 | Pears | 0000-00-00 00:00:00 |
| 3 | 236 | Bananas | 2016-06-23 00:00:00 |
| 4 | 307 | Oranges | 2016-07-31 00:00:00 |
+----+-------------+------------------+---------------------+
Line 2 did not have a DATETIME
value, so the default value for its type was inserted instead.
Performance Considerations
Shard Keys
Loading data into a table with a shard key requires reading the necessary columns on the aggregator to compute the shard key before sending data to the leaves.
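For illustration, a minimal sketch (the table orders_sharded and its shard key are hypothetical): the aggregator must read the customer_id value from every input row to compute the shard key before routing the row to the owning partition on a leaf.
-- Hypothetical table with an explicit shard key
CREATE TABLE orders_sharded(
  id BIGINT,
  customer_id INT,
  item_description VARCHAR(255),
  order_time DATETIME,
  SHARD KEY (customer_id));

LOAD DATA INFILE 'orders.csv'
INTO TABLE orders_sharded
FIELDS TERMINATED BY ',';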
Keyless Sharding
Loading data into a keylessly sharded table (no shard key is declared, or SHARD KEY() is specified) will result in batches of data loaded into different partitions, in a round-robin fashion.
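By contrast, a keylessly sharded sketch (the table orders_keyless is hypothetical): no column values need to be read to route rows, and incoming batches are distributed across partitions in round-robin order.
-- Hypothetical table declared with an empty shard key
CREATE TABLE orders_keyless(
  id BIGINT,
  customer_id INT,
  item_description VARCHAR(255),
  order_time DATETIME,
  SHARD KEY ());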
Retrieve loading status
The information_schema.LMV_LOAD_DATA_STATUS table reports information about rows and bytes read by in-progress LOAD DATA queries.
It also reports activity and database names, which you can use to find corresponding rows in workload profiling tables.
Important
Result sets will only be returned if LMV_LOAD_DATA_STATUS is queried on the same aggregator as the in-progress LOAD DATA queries.
information_schema.LMV_LOAD_DATA_STATUS Table Schema
| Column Name | Description |
| --- | --- |
| ID | The connection ID. |
| ACTIVITY_NAME | The name of the database activity. |
| DATABASE_NAME | The name of the database associated with the file being loaded into the workspace. |
| BYTES_READ | Bytes read from the input file stream. |
| ROWS_READ | A count of rows read in from the source file (including skipped rows). |
SELECT * FROM information_schema.LMV_LOAD_DATA_STATUS;
+------+------------------------------------+---------------+------------+-----------+
| ID | ACTIVITY_NAME | DATABASE_NAME | BYTES_READ | ROWS_READ |
+------+------------------------------------+---------------+------------+-----------+
| 2351 | load_data_company_0e8dec6d07d9cba5 | trades | 94380647 | 700512 |
+------+------------------------------------+---------------+------------+-----------+
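To correlate an in-progress load with workload profiling data, you can join on the activity name. A minimal sketch, assuming the information_schema.MV_ACTIVITIES workload profiling table (which also exposes an ACTIVITY_NAME column):
-- Join in-progress load status to aggregated workload profiling rows
SELECT s.ID, s.DATABASE_NAME, s.BYTES_READ, s.ROWS_READ, a.*
FROM information_schema.LMV_LOAD_DATA_STATUS s
JOIN information_schema.MV_ACTIVITIES a ON a.ACTIVITY_NAME = s.ACTIVITY_NAME;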
Last modified: November 18, 2024