CREATE TABLE
Creates a new table.
As of version 7.3 of SingleStore, Columnstore is the default table storage format. Columnstore is also known as Universal Storage. See the Columnstore section for more information.
Syntax
CREATE [ROWSTORE] [REFERENCE | TEMPORARY | GLOBAL TEMPORARY] TABLE [IF NOT EXISTS] <table_name>
  (<create_definition>,...)
  [<table_options>]
  [[AS] SELECT ...]

CREATE TABLE [IF NOT EXISTS] tbl_name
  { LIKE old_tbl_name | (LIKE old_tbl_name) }

<create_definition>:
  <column_name> { <column_definition> | AS <computed_column_definition> }
  | [CONSTRAINT [symbol]] PRIMARY KEY [<index_type>] (<index_column_name>,...) [<index_option>] ...
  | { INDEX | KEY } [<index_name>] [<index_type>] (<index_col_name>,...) [<index_option>] ...
  | [CONSTRAINT [symbol]] UNIQUE [INDEX | KEY] [<index_name>] [<index_type>] (<index_column_name>,...) [<index_option>] ...
  | [CONSTRAINT [symbol]] SHARD KEY [<index_type>] (<index_column_name>,...) [METADATA_ONLY] [<index_option>] ...
  | SORT KEY (<index_column_name>,... [DESC])
  | COLUMN GROUP [column_group_name] ( * )
  | FULLTEXT [<index_name>] (<index_column_name>,...)

<column_definition>:
  <data_type> [NOT NULL | NULL] [DEFAULT <default_value>] [ON UPDATE <update_value>]
  [AUTO_INCREMENT] [UNIQUE [KEY] | [PRIMARY] KEY] [SPARSE] [SERIES TIMESTAMP]

<computed_column_definition>:
  computed_column_expression PERSISTED [data_type | AUTO]

<data_type>:
  BIT[(<length>)]
  | TINYINT[(<length>)] [UNSIGNED]
  | SMALLINT[(<length>)] [UNSIGNED]
  | INT[(<length>)] [UNSIGNED]
  | INTEGER[(<length>)] [UNSIGNED]
  | BIGINT[(<length>)] [UNSIGNED]
  | REAL[(<length>,<decimals>)] [UNSIGNED]
  | DOUBLE[(<length>,<decimals>)] [UNSIGNED]
  | DECIMAL[(<length>[,<decimals>])] [UNSIGNED]
  | NUMERIC[(<length>[,<decimals>])] [UNSIGNED]
  | DATETIME | DATETIME(6) | TIMESTAMP | TIMESTAMP(6) | DATE | TIME
  | CHAR[(<length>)] [CHARACTER SET <character_set_name>] [COLLATE <collation_name>]
  | VARCHAR(<length>) [CHARACTER SET <character_set_name>] [COLLATE <collation_name>]
  | TINYBLOB | BLOB | MEDIUMBLOB | LONGBLOB
  | TINYTEXT [BINARY] | TEXT [BINARY] | MEDIUMTEXT [BINARY] | LONGTEXT [BINARY]
  | ENUM(<value1>,<value2>,<value3>,...)
  | SET(<value1>,<value2>,<value3>,...)
  | JSON [COLLATE <collation_name>]
  | GEOGRAPHY | GEOGRAPHYPOINT

<index_column_name>:
  <column_name> [(<length>)] [ASC | DESC]

<index_type>:
  USING { BTREE | HASH }

<index_option>:
  KEY_BLOCK_SIZE [=] <value>
  | <index_type>
  | COMMENT '<string>'
  | BUCKET_COUNT [=] <value>
  | WITH (<index_kv_options>)
  | UNENFORCED [RELY | NORELY]

<index_kv_options>:
  <index_kv_option> [, <index_kv_option>] ...

<index_kv_option>:
  RESOLUTION = <value>
  | COLUMNSTORE_SEGMENT_ROWS = <value>
  | COLUMNSTORE_FLUSH_BYTES = <value>

<table_options>:
  <table_option> [[,] <table_option>] ...

<table_option>:
  AUTO_INCREMENT [=] <value>
  | COMMENT [=] '<string>'
  | AUTOSTATS_ENABLED = { TRUE | FALSE }
  | AUTOSTATS_CARDINALITY_MODE = { INCREMENTAL | PERIODIC | OFF }
  | AUTOSTATS_HISTOGRAM_MODE = { CREATE | UPDATE | OFF }
  | AUTOSTATS_SAMPLING = { ON | OFF }
  | COMPRESSION = SPARSE
Remarks
Note
Unless CREATE ROWSTORE TABLE ... or SORT KEY() is specified, the value of the default_table_type engine variable determines the type of table (columnstore or rowstore) that is created. When default_table_type is set to columnstore, you can create a columnstore table using standard CREATE TABLE syntax. default_table_type is set to columnstore by default.
The setting of default_table_type applies to temporary tables. For TEMPORARY or GLOBAL TEMPORARY tables, if default_table_type is set to columnstore, you must use CREATE ROWSTORE TEMPORARY TABLE and CREATE ROWSTORE GLOBAL TEMPORARY TABLE to create temporary tables.
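For example, with default_table_type set to columnstore, a plain CREATE TABLE produces a columnstore table, while the ROWSTORE keyword overrides the default. The table and column names in this sketch are illustrative:
-- Uses the default table type (columnstore when default_table_type = 'columnstore')
CREATE TABLE events_cs (id BIGINT, payload JSON);
-- Explicitly creates a rowstore table regardless of default_table_type
CREATE ROWSTORE TABLE events_rs (id BIGINT, payload JSON);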
For more information about the data types listed above, and for an explanation of UNSIGNED, refer to the Data Types topic.
The SET data type restricts the values that can be inserted for a table column. Only the set of strings that are listed for a column at the time of table creation can be inserted.
<table_name> is the name of the table to create in the SingleStoreDB database.
The following note applies when the engine variable table_name_case_sensitivity is set to OFF: After you create a table, you cannot create another table having the same table name with a different case. Refer to the Database Object Case Sensitivity topic for more information.
CREATE TABLE is slower in SingleStoreDB than in MySQL. See Code Generation for more information.
The KEY syntax is equivalent to using INDEX syntax when used in CREATE TABLE. The convention is to use the KEY syntax. INDEX syntax is generally used when creating an index on an existing table. See CREATE INDEX for more information.
The BTREE index type creates a skip list index in SingleStoreDB. This index has very similar characteristics to a BTREE index.
If you do not want to specify a column (or columns) to sort on, or do not care about the sort order for your data, you can specify an empty key (e.g. SORT KEY()).
The SORT KEY() order can be specified as ascending (SORT KEY(index_column_name)) or descending (SORT KEY(index_column_name DESC)). SingleStoreDB does not support scanning a SORT KEY() in reverse order to its sort order:
in reverse order to its sort order:CREATE TABLE t1(col1 int, SORT KEY(col1) DESC); **** EXPLAIN SELECT * FROM t1 ORDER BY col1 DESC; +------------------------------------------------------------------------------------------------------------------+ | EXPLAIN | +------------------------------------------------------------------------------------------------------------------+ | GatherMerge [remote_0.col1 DESC] partitions:all alias:remote_0 | | Project [t1.col1] | | OrderedColumnStoreScan test1.t1, KEY col1 (col1 DESC) USING CLUSTERED COLUMNSTORE table_type:sharded_columnstore | +------------------------------------------------------------------------------------------------------------------+ 3 rows in set (0.00 sec) EXPLAIN SELECT * FROM t1 ORDER BY col1; +-----------------------------------------------------------------------------------------------------------+ | EXPLAIN | +-----------------------------------------------------------------------------------------------------------+ | GatherMerge [remote_0.col1] partitions:all alias:remote_0 | | Project [t1.col1] | | Sort [t1.col1] | | ColumnStoreScan test1.t1, KEY col1 (col1 DESC) USING CLUSTERED COLUMNSTORE table_type:sharded_columnstore | +-----------------------------------------------------------------------------------------------------------+ 4 rows in set (0.01 sec)
SORT KEY() is not allowed when using CREATE ROWSTORE TABLE.
KEY() USING CLUSTERED COLUMNSTORE is a legacy syntax that is equivalent to SORT KEY(). We recommend using SORT KEY().
BUCKET_COUNT is specific to the HASH index type. It controls the bucket count of the hash table. It applies to rowstore hash indexes only and does not affect columnstore hash indexes.
The UNENFORCED index option can be used on a UNIQUE constraint to specify that the unique constraint is unenforced. See Unenforced Unique Constraints.
RESOLUTION is specific to indexes on geospatial columns. See Working with Geospatial Features for more information.
COLUMNSTORE_SEGMENT_ROWS and COLUMNSTORE_FLUSH_BYTES control configuration variables specific to columnstore tables. See Advanced Columnstore Configuration Options for more information.
SingleStore supports binary, utf8, and utf8mb4 character sets. The COLLATE clause in the CREATE TABLE statement can be used to override the server character set and collation, which are used as default values for the table. For more information, see Table-Level Override.
AUTOSTATS_ENABLED controls whether automatic statistics should be collected on this table. There are three categories of autostats: AUTOSTATS_CARDINALITY_MODE, AUTOSTATS_HISTOGRAM_MODE, and AUTOSTATS_SAMPLING. SingleStoreDB allows you to independently control how each category of statistics is automatically gathered. Multiple autostats settings can be combined in a single CREATE TABLE statement. See Automatic Statistics for more information.
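As a sketch of how these table options combine in one statement (the table name, columns, and option values here are illustrative, not from the reference above):
CREATE TABLE clicks (
  id BIGINT,
  url VARCHAR(2048)
)
AUTOSTATS_ENABLED = TRUE,
AUTOSTATS_HISTOGRAM_MODE = UPDATE,
COMMENT = 'click events with custom autostats settings';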
This command can be run on the master aggregator node, or a child aggregator node (see Node Requirements for SingleStoreDB Commands).
This command causes implicit commits. See COMMIT for more information.
Specify the AUTO option instead of the <data_type> in the <computed_column_definition> clause to automatically infer the data type of the column from the <computed_column_expression>.
<computed_column_expression> defines the value of a computed column using other columns in the table, constants, built-in functions, operators, and combinations thereof. For more information, see Persistent Computed Columns.
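For illustration, the following sketch (table and column names are hypothetical) declares one persisted computed column with an explicit data type and one whose type is inferred with AUTO:
CREATE TABLE orders (
  price DECIMAL(18,2),
  quantity INT,
  total AS price * quantity PERSISTED DECIMAL(18,2),
  total_auto AS price * quantity PERSISTED AUTO
);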
Temporary tables, created with the TEMPORARY option, will be deleted when the client session terminates. For ODBC/JDBC, this is when the connection closes. For interactive client sessions, it is when the user terminates the client program.
Global temporary tables, created with the GLOBAL TEMPORARY option, exist beyond the duration of a client session. If failover occurs, the global temporary tables lose data and enter an errored state; they need to be dropped and recreated. This command can be run only on the master aggregator. See Global Temporary Tables for details.
The SERIES TIMESTAMP clause specifies a table column as the default column defining time order for implicit use by time series functions. This setting can be specified only for a single table column. The column can be one of the following data types: DATE, TIME, DATETIME, DATETIME(6), TIMESTAMP, or TIMESTAMP(6). It is recommended to use either of the DATETIME or DATETIME(6) types instead of one of the TIMESTAMP types because the automatic update behavior of TIMESTAMP is subject to change. See Timestamp Behavior for details.
The SERIES TIMESTAMP clause does not affect the data type of a table column; rather, it specifies the behavior of the column in time-series-specific functions like FIRST(), LAST(), and TIME_BUCKET().
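A minimal sketch of a table that designates a DATETIME(6) column as the series timestamp (the table and column names are illustrative):
CREATE TABLE sensor_readings (
  sensor_id BIGINT,
  reading DOUBLE,
  ts DATETIME(6) SERIES TIMESTAMP,
  SORT KEY (ts)
);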
Keyless sharding distributes data across partitions uniformly at random, but with the limitation that it does not allow single-partition queries or local joins, since rows are not assigned to specific partitions. Keyless sharding is the default for tables that do not have a primary key or an explicit shard key. You can explicitly declare a table as keyless sharded by specifying a shard key with an empty list of columns in the SHARD KEY() constraint in the table definition.
The METADATA_ONLY option on the SHARD KEY syntax prevents an index from being created on the shard key. It decreases overall memory usage but can cause queries to run slower. It can only be used when creating your table.
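For illustration (table names are hypothetical), the first statement below creates a keyless sharded table with an empty SHARD KEY(), and the second uses the METADATA_ONLY option so that no index is built on the shard key:
CREATE TABLE log_keyless (msg VARCHAR(255), SHARD KEY());
CREATE TABLE log_meta (user_id BIGINT, msg VARCHAR(255), SHARD KEY(user_id) METADATA_ONLY);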
The optional COLUMN GROUP clause creates a materialized copy of each row as a separate index. It is supported on columnstore tables only. This structure can be used to accelerate full-row retrievals and updates for wide columnstore tables.
The column_group_name argument is optional. If a column group name is not specified when creating a table, one is chosen by the engine. Using COLUMN GROUP [column_group_name] (*) creates a column group index on all columns in the table. The following is an example of a columnstore table that creates a column group on all columns:
CREATE TABLE col_group1(id BIGINT, col1 VARCHAR(10), col2 VARCHAR(10), ..., coln INT, COLUMN GROUP col_gp_inx (*));
See the Permission Matrix for the required permission.
MySQL Compatibility
SingleStoreDB’s syntax differs from MySQL mainly in the data types and storage formats it supports, and in some specific index hints.
KEY_BLOCK_SIZE [=] <value>: the value is currently ignored.
DEFAULT Behavior
If DEFAULT <default_value> is specified in <column_definition> and no value is inserted into the column, then <default_value> is placed in the column during an INSERT operation. If the column is of type TIMESTAMP, TIMESTAMP(6), DATETIME, or DATETIME(6), then you can set <default_value> to one of the following values: CURRENT_TIMESTAMP(), CURRENT_TIMESTAMP(6), NOW(), or NOW(6). For more information, see Data Types.
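As a sketch (table and column names are illustrative), a DATETIME(6) column can default to the current time, and a row inserted without an explicit value for that column receives the declared default:
CREATE TABLE audit_log (
  id BIGINT AUTO_INCREMENT PRIMARY KEY,
  note VARCHAR(100) DEFAULT 'n/a',
  created_at DATETIME(6) DEFAULT CURRENT_TIMESTAMP(6)
);
-- created_at is filled with the current timestamp; id is auto-generated
INSERT INTO audit_log (note) VALUES ('created');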
ON UPDATE Behavior
If ON UPDATE <update_value> is specified in <column_definition>, and any other column is updated while the specified column is not explicitly updated, then <update_value> is placed in the column during an UPDATE operation. If the column is of type TIMESTAMP, TIMESTAMP(6), DATETIME, or DATETIME(6), then you can set <update_value> to one of the following values: CURRENT_TIMESTAMP(), CURRENT_TIMESTAMP(6), NOW(), or NOW(6).
ON UPDATE can be used with these TIMESTAMP/DATETIME[(6)] types only, and you can only use one of the time functions as the argument. For more information, see Data Types.
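A sketch (names are illustrative) of a column that is refreshed whenever the row is updated without explicitly setting that column:
CREATE TABLE profiles (
  id BIGINT PRIMARY KEY,
  bio VARCHAR(500),
  updated_at DATETIME(6) DEFAULT CURRENT_TIMESTAMP(6) ON UPDATE CURRENT_TIMESTAMP(6)
);
-- Updating bio also sets updated_at to the current time
UPDATE profiles SET bio = 'hello' WHERE id = 1;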
AUTO_INCREMENT Behavior
AUTO_INCREMENT can be used to automatically generate a unique value for new rows. When you insert a new row and the AUTO_INCREMENT field is DEFAULT, NULL, or 0, SingleStoreDB will automatically assign a value. It’s important to understand that AUTO_INCREMENT only guarantees that automatically-generated values are unique. In general, it does not guarantee that they:
are consecutive or sequential
are monotonically increasing
start from any particular value
are distinct from explicitly-set values
If you explicitly set a value in an INSERT or UPDATE statement, it may collide with past or future automatically-generated values.
A table can have only one AUTO_INCREMENT column. The AUTO_INCREMENT column must be included in an index (not necessarily a PRIMARY or UNIQUE key; a regular key is also allowed).
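For example (illustrative names), the auto-increment column below is covered by a regular key rather than a primary or unique key:
CREATE TABLE seq_demo (
  seq BIGINT AUTO_INCREMENT,
  payload VARCHAR(50),
  KEY (seq)
);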
If syntax such as CREATE TABLE table_1 SELECT * FROM table_2 is used to create table_1, where table_2 has an AUTO_INCREMENT column, that column is created as a non-auto-increment column in table_1.
See LAST_INSERT_ID for more information on AUTO_INCREMENT behavior.
If the AUTO_INCREMENT behavior described here does not satisfy your requirements, you can create your own sequence generator using LAST_INSERT_ID. See the sequence generator stored procedure example.
Warning
Restarting an aggregator, such as during upgrades or host machine maintenance, will introduce a large gap between any AUTO_INCREMENT values inserted before the restart and any values inserted after. In the case of reference tables, this same behavior might also occur when a child aggregator is promoted to master aggregator. Depending on how often you restart your aggregators, you could see many jumps in values from a specific aggregator.
These jumps occur because each aggregator defines and manages its own range of values to start incrementing from, to prevent collisions in a table. With each restart, a new batch of values is used. For sharded tables, the range of AUTO_INCREMENT values increases to the next 1,000,000 after each restart (e.g., 2,430,403 before restart -> 3,000,000 after). For reference tables, the batch size jumps to the next 1,000. And as with previous versions of SingleStoreDB, these values are also encoded with the aggregator ID, as described in the next section.
AUTO_INCREMENT in Sharded Tables
On a sharded (distributed) table, AUTO_INCREMENT can only be used on a BIGINT column (as these values usually use the entire 64 bits). Each aggregator computes and tracks its own AUTO_INCREMENT values and uses those values when new rows are added to a table. AUTO_INCREMENT values in sharded tables are assigned using the high 14 bits to encode the aggregator ID and the bottom 50 bits for a per-aggregator unique value. The values on each aggregator are usually, but not always, sequential; therefore, inserts on an individual aggregator generate values which are unique and usually sequential. And because each aggregator manages its own AUTO_INCREMENT values, the automatically-generated values from inserts across multiple aggregators are only unique, never sequential.
Here is an example that illustrates how AUTO_INCREMENT values are generated across aggregators in a cluster as new rows are inserted into table tb:
SELECT * FROM tb ORDER BY b;
+-------------------+------+------------+
| a                 | b    | c          |
+-------------------+------+------------+
|                 1 |    1 | from MA    |
|                 2 |    2 | from MA    |
|                 3 |    3 | from MA    |
|                 4 |    4 | from MA    |
|                 5 |    5 | from MA    |
| 13510798882111489 |    6 | from CA 96 |
| 13510798882111490 |    7 | from CA 96 |
| 13510798882111491 |    8 | from CA 96 |
| 13510798882111492 |    9 | from CA 96 |
| 13510798882111493 |   10 | from CA 96 |
| 14636698788954113 |   11 | from CA 20 |
| 14636698788954114 |   12 | from CA 20 |
| 14636698788954115 |   13 | from CA 20 |
| 14636698788954116 |   14 | from CA 20 |
| 14636698788954117 |   15 | from CA 20 |
|                 6 |   16 | from MA    |
| 15762598695796737 |   17 | from CA 17 |
| 13510798882111494 |   18 | from CA 96 |
|                 7 |   19 | from MA    |
| 14636698788954118 |   20 | from CA 20 |
+-------------------+------+------------+
As shown in the example above, automatically-generated AUTO_INCREMENT values can differ depending on which aggregator you run the inserts on. Of course, if you ran some inserts on one aggregator and some inserts on another aggregator, you would get different automatically generated values. Also note that automatically-generated values and explicitly-set values can collide in sharded tables.
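Given the layout described above (aggregator ID in the high 14 bits), a query such as the following illustrative one shifts away the low 50 bits to show which aggregator generated each value:
SELECT a, a >> 50 AS aggregator_id, c FROM tb ORDER BY b;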
AUTO_INCREMENT in Reference Tables
The AUTO_INCREMENT value for a reference table is tracked by the master aggregator. It is guaranteed that the next AUTO_INCREMENT value will always be greater than any value previously seen in this column. These generated values are usually sequential, but not always. Contrary to the behavior for sharded tables, explicitly setting a value in an INSERT or UPDATE statement will not create a collision with future automatically generated values.
The next example shows some queries using AUTO_INCREMENT fields on reference tables.
CREATE REFERENCE TABLE t(id INT AUTO_INCREMENT PRIMARY KEY);
INSERT INTO t values();
INSERT INTO t values(5);
INSERT INTO t values();

SELECT id FROM t ORDER BY id;
+----+
| id |
+----+
|  1 |
|  5 |
|  6 |
+----+

UPDATE t SET id=9 WHERE id=5;
INSERT INTO t values();

SELECT id FROM t ORDER BY id;
+----+
| id |
+----+
|  1 |
|  6 |
|  9 |
| 10 |
+----+

DELETE FROM t;
INSERT INTO t values();

SELECT id FROM t ORDER BY id;
+----+
| id |
+----+
| 11 |
+----+
Setting AUTO_INCREMENT Starting Values
It is possible to override the starting AUTO_INCREMENT value for reference tables by setting the AUTO_INCREMENT option on a CREATE TABLE statement.
The following example shows how to set the AUTO_INCREMENT start value during table creation:
CREATE REFERENCE TABLE t (id int AUTO_INCREMENT PRIMARY KEY) AUTO_INCREMENT = 7;
INSERT INTO t VALUES (), ();

SELECT * FROM t;
+----+
| id |
+----+
|  7 |
|  8 |
+----+
This syntax has no effect on sharded tables. For compatibility with external tools, it does not return an error; instead, it presents a warning and performs no operation.
AUTO_INCREMENT During Replication
When replicating data between clusters, the secondary cluster has all the replicated AUTO_INCREMENT values from the primary cluster. When you fail over to a secondary cluster, SingleStoreDB synchronizes the secondary cluster by looking for the maximum value in the range of AUTO_INCREMENT values on every aggregator.
Examples
CREATE TABLE IF NOT EXISTS my_MemSQL_table(id INT PRIMARY KEY AUTO_INCREMENT, v VARCHAR(10) NOT NULL);

CREATE REFERENCE TABLE pages(
  page_id INT PRIMARY KEY AUTO_INCREMENT,
  page_url VARCHAR(1000)
);
CREATE TEMPORARY
CREATE TEMPORARY or CREATE ROWSTORE TEMPORARY (if default_table_type is set to columnstore, you must use the latter syntax) creates a table that will be deleted when the client session terminates.
Examples
CREATE TEMPORARY TABLE IF NOT EXISTS temp1 (id INT AUTO_INCREMENT PRIMARY KEY, a INT, b INT, SHARD KEY(id));

CREATE ROWSTORE TEMPORARY TABLE IF NOT EXISTS temp1 (id INT AUTO_INCREMENT PRIMARY KEY, a INT, b INT, SHARD KEY(id));
CREATE GLOBAL TEMPORARY
CREATE GLOBAL TEMPORARY or CREATE ROWSTORE GLOBAL TEMPORARY (if default_table_type is set to columnstore, you must use the latter syntax) creates a table that exists beyond the duration of a client session. If a failover occurs, a global temporary table loses data and enters an errored state; the global temporary table needs to be dropped and recreated.
Examples
CREATE GLOBAL TEMPORARY TABLE IF NOT EXISTS temp1 (id INT AUTO_INCREMENT PRIMARY KEY, a INT, b INT, SHARD KEY(id));

CREATE ROWSTORE GLOBAL TEMPORARY TABLE IF NOT EXISTS temp1 (id INT AUTO_INCREMENT PRIMARY KEY, a INT, b INT, SHARD KEY(id));
CREATE TABLE AS SELECT
CREATE TABLE AS SELECT (also referred to as CREATE TABLE ... SELECT) creates a new table from the results of a SELECT query.
Here is the basic syntax. You can create the new table and set shard keys, sort keys, and other indexes:
CREATE [ROWSTORE] [REFERENCE | TEMPORARY | GLOBAL TEMPORARY] TABLE [IF NOT EXISTS] <table_name_2>
  (column_name(s), [SHARD KEY(column_name)] | [SORT KEY(column_name)] | [KEY(column_name)])
  AS SELECT [*] | [column_name(s)] FROM table_name_1;
Here is an example of a CREATE TABLE AS SELECT command with a shard key, sort key, and an index:
CREATE TABLE ctas_table (a BIGINT, b BIGINT, SHARD KEY(a), SORT KEY(b), KEY(a)) AS SELECT * FROM orig_table;
The table will include a column for each column of the SELECT query. You can define indexes, additional columns, and other parts of the table definition in the create_definition. Persisted computed columns can also be specified this way. Some examples:
CREATE TABLE table_1 (PRIMARY KEY (a, b)) AS SELECT * FROM table_2;
CREATE TABLE table_1 (SORT KEY (a, b)) AS SELECT * FROM table_2;
CREATE TABLE table_1 (a int, b int) AS SELECT c, d FROM table_2;
CREATE TABLE table_1 (b AS a+1 PERSISTED int) AS SELECT a FROM table_2;
In the case that the old table (table_2 in the above examples) has an AUTO_INCREMENT column, it will be created as a non-auto-increment column in the new table (table_1).
Example 1
Extract the time column from the events table to build a times table.
CREATE TABLE events(type VARCHAR(256),time TIMESTAMP);
INSERT INTO events VALUES('WRITE', NOW());
CREATE TABLE times(id INT AUTO_INCREMENT KEY) AS SELECT time FROM events;

SELECT * FROM times;
+----+---------------------+
| id | time                |
+----+---------------------+
|  1 | 2016-03-25 15:38:12 |
+----+---------------------+
Example 2
SELECT * FROM courses ORDER BY course_code, section_number;
+-------------+----------------+-----------------+
| course_code | section_number | number_students |
+-------------+----------------+-----------------+
| CS-101      |              1 |              20 |
| CS-101      |              2 |              16 |
| CS-101      |              3 |              22 |
| CS-101      |              4 |              25 |
| CS-101      |              5 |              22 |
| CS-150      |              1 |              10 |
| CS-150      |              2 |              16 |
| CS-150      |              3 |              11 |
| CS-150      |              4 |              17 |
| CS-150      |              5 |               9 |
| CS-201      |              1 |              14 |
| CS-201      |              2 |              17 |
| CS-301      |              1 |               7 |
| CS-301      |              2 |              10 |
+-------------+----------------+-----------------+
CREATE TABLE IF NOT EXISTS distinct_courses (PRIMARY KEY(course_code)) AS SELECT DISTINCT(course_code) FROM courses;
SELECT * FROM distinct_courses ORDER BY course_code;
+-------------+
| course_code |
+-------------+
| CS-101      |
| CS-150      |
| CS-201      |
| CS-301      |
+-------------+
COMPRESSION = SPARSE and SPARSE behavior
SingleStoreDB supports sparse data compression for rowstore tables. Nullable structured columns can use sparse data compression. The data types of these columns include numbers, dates, datetimes, timestamps, times, and varchars.
Columns that use sparse data compression only store non-NULL data values. Example 4 discusses an excellent sparse data compression use case, which also includes the query to retrieve actual memory usage of rowstore tables that use sparse data compression.
Sparse compression has the following limitations:
The SPARSE clause cannot be used for key columns. However, if a rowstore table uses sparse data compression using the COMPRESSION = SPARSE clause, then the key columns are stored in-row.
The SPARSE clause cannot be used for columns where the non-NULL size of the column is greater than 15 bytes.
Refer to the Data Types topic for details.
Examples
Example 1: Creating a Rowstore Table Having All Sparse Columns
The following example demonstrates the COMPRESSION = SPARSE clause. This clause indicates that all columns in the table will use sparse data compression.
CREATE ROWSTORE TABLE transaction(
  id BIGINT NOT NULL,
  explanation VARCHAR(70),
  shares DECIMAL(18, 2),
  share_price DECIMAL(18, 2),
  total_amount AS shares * share_price PERSISTED DECIMAL(18,2),
  transaction_date DATE,
  dividend_exdate DATE,
  misc_expenses DECIMAL(18, 2),
  country_abbreviation CHAR(6),
  correction_date DATE,
  settlement_date DATE
) COMPRESSION = SPARSE;
Example 2: Creating a Rowstore Table Having Selected Sparse Columns
The following example demonstrates the SPARSE clause. This clause is applied to the columns that will use sparse data compression.
CREATE ROWSTORE TABLE transaction(
  id BIGINT NOT NULL,
  explanation VARCHAR(70) SPARSE,
  shares DECIMAL(18, 2) SPARSE,
  share_price DECIMAL(18, 2),
  total_amount AS shares * share_price PERSISTED DECIMAL(18,2),
  transaction_date DATE,
  dividend_exdate DATE SPARSE,
  misc_expenses DECIMAL(18, 2) SPARSE,
  country_abbreviation CHAR(6),
  correction_date DATE SPARSE,
  settlement_date DATE SPARSE
);
Example 3: Listing Whether Columns use Sparse Compression
The following query lists the columns in the transaction table that was created in Example 2. The query indicates, for each column, whether the column uses sparse compression.
SELECT column_name, is_sparse FROM information_schema.columns WHERE table_name = 'transaction';
+----------------------+-----------+
| column_name          | is_sparse |
+----------------------+-----------+
| id                   | NO        |
| explanation          | YES       |
| shares               | YES       |
| share_price          | NO        |
| total_amount         | NO        |
| transaction_date     | NO        |
| dividend_exdate      | YES       |
| misc_expenses        | YES       |
| country_abbreviation | NO        |
| correction_date      | YES       |
| settlement_date      | YES       |
+----------------------+-----------+
Example 4: An Excellent Sparse Compression Use Case
Sparse rowstore compression works best on a wide table with more than half NULL values. The distribution of the NULL values in the table does not contribute to the amount of memory used.
For example, consider this wide table t having three-hundred columns:
CREATE ROWSTORE TABLE t ( c1 double, c2 double, … c300 double) COMPRESSION = SPARSE;
In SingleStore 7.3, table t was loaded with 1.05 million rows, two-thirds of which are NULL. To retrieve the actual memory usage (in GB) of table t, run the following command:
SELECT table_name, SUM(memory_use) memory_usage FROM information_schema.table_statistics WHERE table_name = 't' GROUP BY table_name;
+-------------+--------------+
| table_name  | memory_usage |
+-------------+--------------+
| t           |         1.23 |
+-------------+--------------+
The following table lists the memory usage of table t, with and without sparse compression:
Compression Setting | Memory Use | Savings (Percent)
---|---|---
NONE | 2.62 GB | NA
SPARSE | 1.23 GB | 53%
For this wide table with two-thirds NULL values, you can store more than twice the data in the same amount of RAM.
FULLTEXT behavior
SingleStoreDB supports full-text search across text columns in a columnstore table using the FULLTEXT index type. A full-text index can only be added during CREATE TABLE and only on the text types CHAR, VARCHAR, TEXT, and LONGTEXT (see Data Types).
Warning
A FULLTEXT index cannot be dropped or altered after the table is created, and if the table is dropped, the index is deleted automatically.
If you query a column c that is part of a multi-column FULLTEXT index, where the query uses a FULLTEXT MATCH on c, the index on c will be applied.
This differs from a multi-column non-FULLTEXT index, where behavior is as follows: if you query column c that is part of index i, where the query uses an equality filter on c, the index on c will only be applied if c is the leftmost column in i.
Any column that is part of a FULLTEXT index can be queried, even if it is not the leftmost. Searches across FULLTEXT columns are done using the SELECT ... MATCH AGAINST syntax. For more information, see MATCH.
Errors
These are the possible errors you may encounter when using FULLTEXT.
Error | Error String
---|---
Invalid type specified for column | Invalid type specified for FULLTEXT column
Specifying the same column multiple times | Column may only be specified once in a FULLTEXT index
Specifying a column that is not defined on the table | Column not defined
Specifying FULLTEXT on a rowstore table | Only column store tables may have a FULLTEXT index
Examples
This example creates a FULLTEXT index for both the title column and the body column. Either column could be queried separately using MATCH <column_name>, and the index on the column would be applied.
CREATE TABLE articles ( id INT UNSIGNED, year int UNSIGNED, title VARCHAR(200), body TEXT, SORT KEY (id), FULLTEXT (title,body));
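For instance, either column in the index above can be searched on its own with MATCH ... AGAINST; the search terms here are illustrative:
SELECT id, title FROM articles WHERE MATCH (title) AGAINST ('distributed databases');
SELECT id, title FROM articles WHERE MATCH (body) AGAINST ('columnstore');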
USING HASH behavior
The USING HASH clause creates a hash index in a table.
If a rowstore or columnstore table is being created, the following applies:
You can create single-column or multi-column hash indexes.
When you create a unique single-column hash index, the shard key can contain only one column and that column must be the same column that you have created the index on. When you create a unique multi-column hash index, the shard key must be a subset of the columns that you have created the index on.
You can create multiple single-column hash indexes on a reference table.
If a columnstore table is being created, the following applies:
You can create at most one unique hash index. You can create multiple multi-column hash indexes.
You cannot create a unique hash index on a FLOAT, REAL, or DOUBLE column.
Example
The following example creates a columnstore table with three hash indexes. One of these indexes has a multi-column key.
CREATE TABLE articles ( id INT UNSIGNED, month int UNSIGNED, year int UNSIGNED, title VARCHAR(200), body TEXT, SHARD KEY(title), SORT KEY (id), KEY (id) USING HASH, UNIQUE KEY (title) USING HASH, KEY (month,year) USING HASH);
The query SELECT * FROM articles WHERE title = 'Interesting title here'; uses title’s hash index since the query uses an equality predicate. The query runs faster than if the hash index had not been used.
The query SELECT * FROM articles WHERE year > 2010 AND month > 5; does not use the hash index on month and year since the query does not use an equality predicate.
See the ColumnstoreFilter in the Query Plan Operations topic for an example EXPLAIN plan for a columnstore query that uses a hash index.
See Highly Selective Joins for an example of a columnstore query with a join that uses a hash index.
Using HASH behavior for multiple columns
KEY(<column 1 name>,<column 2 name>, ... <column n name>) is equivalent to KEY(<column 1 name>,<column 2 name>, ... <column n name>) USING HASH. For example,
CREATE TABLE t(a INT, b INT, c INT, KEY(a,b));
is equivalent to:
CREATE TABLE t(a INT, b INT, SORT KEY(), KEY(a,b) USING HASH);
A query against t with an equality filter on a, an equality filter on b, or equality filters on both a and b could benefit from KEY(a,b) USING HASH. A query that uses both equality filters would be the most efficient.
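For example, queries with the following shapes (the filter values are illustrative) can use KEY(a,b) USING HASH, with the last one benefiting the most:
SELECT * FROM t WHERE a = 10;
SELECT * FROM t WHERE b = 20;
SELECT * FROM t WHERE a = 10 AND b = 20;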
Depending on the cardinality, the performance of the query may be worse than the performance of the same query where t is a rowstore table and KEY(a,b) is defined on that table.
Storage of CHAR(<length>) as VARCHAR(<length>)
A column defined as type CHAR of length len is stored as a VARCHAR of length len if len is greater than or equal to the value of the engine variable varchar_column_string_optimization_length. If the value of the variable is 0, the column is not stored as a VARCHAR.
Storing a CHAR of length len as a VARCHAR of length len allows the column to realize the performance benefit of a VARCHAR in many cases. For example, CHARs use 3 bytes of memory per character, and VARCHARs use 1 byte. VARCHAR can have better performance and use less memory during query execution because SingleStoreDB does not need to allocate and process full-length strings with 3 bytes per character, as it does when using CHAR.
Example
Suppose the value of varchar_column_string_optimization_length is 3 and you run:
CREATE TABLE t(a CHAR(4));
The column a is stored as a VARCHAR(4).