SingleStore transactions use standard two-phase locking for concurrency control to ensure serializability. Row locks are acquired as rows are written and are held until the transaction that acquired them commits or rolls back.
SingleStore takes row locks on write operations (for example, UPDATE). These row locks resolve write conflicts in the user's workload; they are not used to make the underlying data structure backing the index (skiplist or hashtable) thread safe. Row locks are not taken on read operations, and there is no other locking mechanism in the SingleStore storage engine beyond these transaction locks. This differs from traditional databases, which also take locks inside the data structures that back their indexes. Unlike in SingleStore, those in-index locks are taken on both read and write operations and are a source of contention in highly concurrent workloads.
Columnstore tables lock at either the row level or the partition level, depending on the columnstore_table_lock_threshold value. See the Locking in Columnstores topic for more information. SingleStore takes locks before changing the metadata for a blob file (there is one blob file per column, storing 100K rows of compressed data). The file contents themselves are not updated; instead, the bitmap metadata that tracks the deleted rows in the file is updated.
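A minimal sketch of inspecting and adjusting this threshold, assuming the standard engine-variable syntax (the value shown is illustrative, not a recommendation):

```sql
-- Check the current row-count threshold at which a columnstore write
-- escalates from row-level locks to a partition-level lock.
SHOW VARIABLES LIKE 'columnstore_table_lock_threshold';

-- Raise the threshold so larger writes keep using row-level locks
-- before escalating to a partition lock (illustrative value).
SET GLOBAL columnstore_table_lock_threshold = 10000;
```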
SingleStore doesn't implement a deadlock detection algorithm to break deadlocks; deadlocks instead surface as row lock timeout errors. A transaction that cannot acquire its locks within the lock_wait_timeout duration raises ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction, and the current statement is rolled back.
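A short sketch of tuning this behavior for a session, assuming the lock_wait_timeout variable is settable at session scope (the 10-second value is illustrative):

```sql
-- Fail contended statements after 10 seconds instead of the default wait.
SET SESSION lock_wait_timeout = 10;

-- A statement that cannot acquire its row locks within that window fails:
-- ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
-- and only the current statement is rolled back, not the whole transaction.
```

A shorter timeout converts potential deadlocks into fast failures that the application can retry.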
Acquiring Row Locks
Consider the following example table:
CREATE TABLE tabs (c1 INT PRIMARY KEY, c2 INT, c3 INT, KEY(c2));
There are two different ways queries acquire row locks in SingleStore:
- The query is scanning a secondary key to run the write operation.
For example, the query DELETE FROM tabs WHERE c2 = 1 will seek and scan the secondary index on c2 to run the DELETE operation. When the query finds a row where c2 = 1, it will seek into the primary key to acquire the row lock before marking the row as deleted.
- The query is scanning the primary key to run the write operation.
For example, the query DELETE FROM tabs WHERE c1 = 1 will seek and scan the primary key on c1 to run the DELETE operation. Since the query is already scanning the primary key, the index scan itself will acquire row locks, avoiding the extra seek into the primary key that is needed when the DELETE scans a secondary key.
While scanning the primary key, the lock is acquired before any filters in the WHERE clause are run, because the index scan operation itself cannot run those filters: the scan runs inside the storage layer, while the filters run inside the query execution layer. This means even rows that will not be deleted are locked. For example, the query DELETE FROM tabs WHERE c1 = 1 AND c3 = 1 locks rows where c3 is not 1. SingleStore releases the row lock if the filter doesn't match (for example, where c3 is not 1), but for applications running concurrent DELETE queries this release is often not fast enough to avoid deadlocks. For example, the queries DELETE FROM tabs WHERE c1 = 1 AND c3 = 1 and DELETE FROM tabs WHERE c1 = 1 AND c3 = 2 modify disjoint sets of rows, yet they can still contend with each other because rows matching c1 = 1 are locked before the c3 filter is evaluated.
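The lock-before-filter behavior can be seen in a hypothetical two-session walkthrough against the tabs table above:

```sql
-- Session 1: locks the row(s) matching c1 = 1 as the primary key is
-- scanned, even if the c3 = 1 filter ultimately does not match.
BEGIN;
DELETE FROM tabs WHERE c1 = 1 AND c3 = 1;

-- Session 2: targets logically disjoint rows, but blocks on the same
-- row lock for c1 = 1 until Session 1 releases it (on a filter miss)
-- or commits.
DELETE FROM tabs WHERE c1 = 1 AND c3 = 2;

-- Session 1:
COMMIT;
```

Under light load Session 2 only waits briefly; under heavy concurrency these waits can accumulate into lock wait timeouts.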
Extra Deadlocks in SingleStore Due to Sharding
In a single-box database, concurrent write operations on a table that scan the same index in a specific direction (either forward or reverse) cannot deadlock. For example, consider a transaction A that acquires a lock on row(n) and is waiting for a lock on row(n+1), and another transaction B that acquires a lock on row(n+1) and is waiting for a lock on row(n). If both transactions scan the index in the same order, they see the rows in the same order, so this deadlock ordering is impossible.
In SingleStore, the index is partitioned among shards that are spread across a cluster of leaf nodes. Even if the threads scan the index in the same order on each partition, they can still deadlock. For example, consider two DELETE commands that run at the same time, starting on two different partitions (say partitions 0 and 1). If one of the two DELETE operations runs first on partition 0 and the other runs first on partition 1, they may hit the lock-acquisition deadlock described in the example above.
Single-partition DELETE operations don't have this issue. You can achieve a single-partition DELETE operation by using a shard key filter in the WHERE clause.
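A sketch of constraining a DELETE to one partition, assuming c1 serves as the shard key (by default, a table's primary key acts as its shard key when no explicit SHARD KEY is declared):

```sql
-- Filtering on an exact shard key value routes the DELETE to a single
-- partition, so it cannot participate in the cross-partition deadlock
-- described above.
DELETE FROM tabs WHERE c1 = 42;

-- By contrast, filtering only on a non-shard-key column fans the DELETE
-- out to every partition, reopening the possibility of deadlock.
DELETE FROM tabs WHERE c2 = 42;
```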
If the deadlocks are too frequent, see the Lock wait timeout exceeded; try restarting transaction topic.