Query Errors
ISSUE: Inconsistent Query Run Times
If you receive very different response times when you run the same query (or the same group of queries on the same leaf) repeatedly, Transparent Huge Pages (THP) may be the cause.
Linux organizes RAM into pages that are ordinarily 4 KB; with THP enabled, the kernel transparently backs memory with much larger (2 MB) huge pages, which can lead to unpredictable query run times.
You can check whether THP is enabled by running cat /sys/kernel/mm/*transparent_hugepage/enabled.
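For example, a quick check on a typical Linux host might look like the following; the value shown in brackets is the active setting:
cat /sys/kernel/mm/*transparent_hugepage/enabled
cat /sys/kernel/mm/*transparent_hugepage/defrag
Output such as [always] madvise never means THP is enabled; always madvise [never] means it is already disabled.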
Note: Having THP enabled may also be the cause of high system CPU (sometimes called red CPU).
How to Disable THP
SingleStore recommends disabling transparent huge pages (THP) at boot time on all nodes (master aggregator, child aggregators, and leaves) in the cluster.
To disable THP, add the following lines to the end of the rc.local file, before the exit line (if present), and reboot the host machines.
For Debian-based distributions, add the following to /etc/rc.local:
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo 0 > /sys/kernel/mm/transparent_hugepage/khugepaged/defrag
echo no > /sys/kernel/mm/transparent_hugepage/khugepaged/defrag
For Red Hat/CentOS distributions, add the following to /etc/rc.local:
echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
echo 0 > /sys/kernel/mm/redhat_transparent_hugepage/khugepaged/defrag
echo no > /sys/kernel/mm/redhat_transparent_hugepage/khugepaged/defrag
Note
The khugepaged/defrag option will be 1 or 0 on newer Linux versions, but yes or no on older versions. Check which form your kernel uses by running cat /sys/kernel/mm/*transparent_hugepage/khugepaged/defrag and keep only the matching setting: if it reports 1 or 0, update the khugepaged/defrag line to use echo 0; if it reports yes or no, update it to use echo no.
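For reference, a minimal sketch of the tail of an edited /etc/rc.local on a newer kernel (assuming the 1/0 form of khugepaged/defrag; the placeholder comment stands in for whatever your file already contains, and the exit 0 line, if your file has one, stays last):
# ... existing contents of /etc/rc.local ...
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo 0 > /sys/kernel/mm/transparent_hugepage/khugepaged/defrag
exit 0
After the hosts reboot, rerun the cat check above; both enabled and defrag should report [never].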
ISSUE: Long Running Queries Blocking DDL Operations and Workload
SingleStore supports execution of online DDL statements, which means that you can perform read and write operations while the DDL command is being executed on a table. However, a long running query can block a DDL operation; in that case, the DDL statement waits until default_distributed_ddl_timeout is reached (default value is 180000 milliseconds, or 3 minutes), after which it throws a timeout error.
Increasing the timeout duration for a DDL operation is not recommended, because it only delays the start of the operation.
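To confirm the timeout currently in effect on your cluster, you can check the engine variable named above. The sketch below assumes the standard mysql client with placeholder connection details:
mysql -h 127.0.0.1 -P 3306 -u root -p -e "SHOW VARIABLES LIKE 'default_distributed_ddl_timeout';"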
Solution
- Optimize your workload. If you have long running queries (one way to identify them is sketched after this list), you can optimize your current workload by running frequent maintenance and backups, or by using a different workload. For example, run the analytical queries in your workload during the lowest-traffic time. Additionally, you can use cluster replication to replicate the data to a secondary read-only cluster, and then run all long running or analytical queries on the secondary cluster. This drastically reduces the workload on the primary cluster and improves workload performance.
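As a starting point for spotting long running queries, a minimal sketch using any MySQL-compatible client follows; the host, port, credentials, and the query Id 42 are placeholders for your own values:
# List currently running queries and how long each has been executing.
mysql -h 127.0.0.1 -P 3306 -u root -p -e "SHOW PROCESSLIST;"
# If one of them is blocking a DDL statement, terminate it by its Id.
mysql -h 127.0.0.1 -P 3306 -u root -p -e "KILL QUERY 42;"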
In this section
- ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
- ERROR 1706 (HY000): Feature Multi-table UPDATE/DELETE with a reference table as target table is not supported by MemSQL
- ERROR 1706 (HY000): Leaf Error (127.0.0.1:3307): Feature INSERT IGNORE … ON DUPLICATE KEY UPDATE is not supported by MemSQL
- ERROR 2408 (HY000): ER_COMPILATION_TIMEOUT: Query compilation timed out and cannot be executed
- ERROR 1064 ER_PARSE_ERROR: Unhandled exception Type: ER_PARSE_ERROR (1064)
- ERROR 1956 ER_PARAMETER_CAPACITY_EXCEEDED: Query cannot be completed because the parameter array capacity of 1048576 was exceeded.
- Error: "Query execution memory use (XXX MB) has reached the maximum memory for its resource pool 'xxx' (XX%)."
- Why do I get errors about UNIQUE KEYs?
Last modified: November 22, 2022