ERROR 1158 (08S01): Leaf Error (): Error reading packet ### from the connection socket (): Connection timed out
An extremely large number of duplicates in combination with
LOAD DATA IGNORE can cause the leaves to wait so long that they time out.
If you see this error when running
LOAD DATA IGNORE, verify that the data does not contain a large number of duplicates.
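As a quick pre-check before loading, a script along the following lines can estimate how duplicate-heavy a data file is. This is a sketch, not part of SingleStoreDB: the `duplicate_ratio` helper and the assumption that the first CSV column is the key are ours.

```python
from collections import Counter
import csv
import io

def duplicate_ratio(lines, key_column=0):
    """Return the fraction of rows whose key value duplicates an earlier row."""
    keys = [row[key_column] for row in csv.reader(lines) if row]
    if not keys:
        return 0.0
    counts = Counter(keys)
    duplicates = sum(n - 1 for n in counts.values())
    return duplicates / len(keys)

# Example: 4 rows, key "1" appears three times -> 2 of 4 rows are duplicates.
sample = io.StringIO("1,a\n1,b\n2,c\n1,d\n")
print(duplicate_ratio(sample))  # 0.5
```

A high ratio suggests the LOAD DATA IGNORE timeout described above is a risk for that file.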
A SingleStoreDB node is unable to connect to another SingleStoreDB node.
Here are some possible solutions to this problem:
Ensure that all nodes are able to connect to all other nodes on the configured port (the default is 3306).
Update any firewall rules that block connectivity between the nodes.
One way to verify connectivity is to run the command
FILL CONNECTION POOLS on all SingleStoreDB nodes.
If this fails with the same error, then a node is unable to connect to another node.
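Basic network reachability between nodes can also be checked outside the database. The sketch below attempts a raw TCP connection to each node's port; the `can_connect` helper and the hostnames in `nodes` are illustrative, not part of any SingleStoreDB tooling.

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical node list; replace with your cluster's hosts and configured ports.
nodes = [("leaf-1", 3306), ("leaf-2", 3306)]
for host, port in nodes:
    status = "reachable" if can_connect(host, port) else "UNREACHABLE"
    print(f"{host}:{port} {status}")
```

Run this from each node toward every other node: a TCP failure here points at firewall rules or routing rather than at SingleStoreDB itself.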
Some queries require different amounts of connectivity.
For example, some queries only require aggregator-leaf connections while others require aggregator-leaf as well as leaf-leaf connections. As a result, it is possible for some queries to succeed while others fail with this error.
If all nodes are able to connect to all other nodes, the error is likely because your query or queries require opening too many connections at once.
Run FILL CONNECTION POOLS on all SingleStoreDB nodes to pre-fill connection pools.
If the connection pool size is too small for your workload, adjust the max_pooled_connections configuration variable, which controls the number of pooled connections between each pair of nodes.
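For example, assuming a pool size of 1024 suits your workload, the variable could be raised along these lines (a sketch; the value is illustrative, and you should confirm against the SingleStoreDB documentation how max_pooled_connections is set in your version and deployment tooling):

```sql
-- Sketch: raise the per-node-pair connection pool size (value is illustrative)
SET GLOBAL max_pooled_connections = 1024;
```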
ERROR 1970 (HY000): Subprocess /var/lib/memsql/master-3306/extractors/kafka-extract --get-offsets --kafka-version=0.8.2.2 timed out
This error occurs when there are connectivity issues between a SingleStoreDB node and the data source (e.g., the Kafka cluster).
To solve this issue, edit the value of the relevant timeout setting.
When the MySQL client connects to
localhost, it attempts to use a socket file instead of TCP/IP.
The socket file is defined in /etc/mysql/my.cnf when the MySQL client is installed on the system, which means that connecting to
localhost attempts to connect to MySQL and not SingleStoreDB.
There are two solutions to this problem:
Use 127.0.0.1 as the host instead of localhost. For example, run
mysql -h 127.0.0.1 -u root instead of
mysql -h localhost -u root.
If you omit the host (
mysql -u root), the MySQL client will implicitly use localhost.
For SingleStoreDB, change the
socket value in the
/etc/mysql/my.cnf file to the location of your SingleStoreDB socket file as shown in the example below:
[client]
port = 3306
socket = /var/lib/memsql/data/memsql.sock
This error occurs when an incorrect path is provided for the ca-cert.pem file when using the
--ssl flag in the connection string to the SingleStoreDB node.
The solution is to verify that you are using the correct path to the ca-cert.pem file.
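A quick way to catch a bad path before retrying the connection is to confirm the file exists and looks like a PEM certificate. This is a sketch of our own; the `looks_like_pem_cert` helper is not part of SingleStoreDB, and the example path is hypothetical.

```python
import os

def looks_like_pem_cert(path):
    """Check that path exists and appears to contain a PEM certificate."""
    if not os.path.isfile(path):
        return False
    with open(path, "r", errors="replace") as f:
        return "-----BEGIN CERTIFICATE-----" in f.read()

# Hypothetical path; substitute the path you pass in your connection string.
print(looks_like_pem_cert("/etc/memsql/ssl/ca-cert.pem"))
```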
This error occurs when you attempt to create a connection to the affected memsql node and either you did not add the required SSL configuration to the
memsql.cnf file, or you did add the required SSL configuration to the
memsql.cnf file but you did NOT restart the target memsql node.
Check to make sure the correct SSL configuration has been written to the
memsql.cnf file of the target memsql node.
Check to make sure the target memsql node has been restarted since its memsql.cnf file was updated.
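The first of the two checks above can be automated with a small scan of the config file's text. This is a sketch under assumptions: the `missing_ssl_settings` helper is ours, and `ssl_cert`, `ssl_key`, and `ssl_ca` are the variable names assumed here; confirm the exact names required by your deployment against the SingleStoreDB documentation.

```python
# Assumed SSL variable names; verify against your version's documentation.
REQUIRED = ("ssl_cert", "ssl_key", "ssl_ca")

def missing_ssl_settings(config_text):
    """Return the required SSL keys that do not appear in the config text."""
    present = set()
    for line in config_text.splitlines():
        line = line.strip()
        if line.startswith("#") or line.startswith(";"):
            continue  # skip commented-out settings
        key = line.split("=", 1)[0].strip()
        if key in REQUIRED:
            present.add(key)
    return [k for k in REQUIRED if k not in present]

cfg = "ssl_cert = /etc/memsql/server-cert.pem\nssl_key = /etc/memsql/server-key.pem\n"
print(missing_ssl_settings(cfg))  # ['ssl_ca']
```

An empty result means the keys are present; it does not confirm the node was restarted afterward, so the second check still applies.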
When a distributed join occurs, the leaves within the cluster must reshuffle data amongst themselves, which requires the leaves to connect to one another.
Use the following steps to troubleshoot this scenario:
Confirm you are able to access SingleStoreDB from one leaf to another in the cluster.
This will eliminate network connection issues.
Note that you may be able to connect manually from one leaf to another even when queries fail, because a manual connection does not utilize the DNS cache on the leaf.
Run SHOW LEAVES on an affected leaf (e.g., leaf X) in the cluster. The
Opened_Connections column should reveal which leaves the affected leaf has open connections with.
Verify that leaf Y is not in this list.
When leaves connect to each other, they cache connection information (leaf-1 is at IP 192.0.2.1, leaf-2 is at IP 192.0.2.2, etc.). If the IPs of these leaves ever change, the cache will not automatically update. This ultimately results in unsuccessful connection attempts because the other leaves in the cluster are using old IP address information. The solution is to flush the DNS cache and connection pools on all affected nodes. You can do so by running the following:
```sql
FLUSH HOSTS;
FLUSH CONNECTION POOLS;
```
FLUSH HOSTS clears the DNS cache on the node.
This must be performed on all affected nodes in the cluster.
FLUSH CONNECTION POOLS shuts down all existing connections and closes idle pooled connections.
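Before flushing, you can confirm that stale DNS is actually the cause by comparing what each leaf hostname currently resolves to against the IPs you expect. This is a sketch of our own; the hostnames and IPs in `expected` are hypothetical placeholders for your cluster's values.

```python
import socket

def resolve(host):
    """Return the IPv4 address the host currently resolves to, or None."""
    try:
        return socket.gethostbyname(host)
    except socket.gaierror:
        return None

# Hypothetical leaf hostnames and their expected IPs; replace with your own.
expected = {"leaf-1": "192.0.2.1", "leaf-2": "192.0.2.2"}
stale = {h: ip for h, ip in expected.items() if resolve(h) != ip}
print(stale)  # hosts whose current DNS answer differs from the expected IP
```

Any host listed in `stale` is a candidate for the FLUSH HOSTS / FLUSH CONNECTION POOLS treatment described above.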
Last modified: November 22, 2022