SingleStore DB

Run UPDATEs and DELETEs as Distributed Transactions

The engine variable disable_update_delete_distributed_transactions controls whether UPDATEs and DELETEs run as distributed transactions. When this variable is set to OFF (the default), UPDATEs and DELETEs are atomic operations; the changes made by these operations either succeed or fail completely across all partitions.
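For example, the variable can be inspected and set as follows. This is an assumed sketch using standard SET GLOBAL syntax; verify the exact command against the engine variables reference for your version:

```sql
-- Check the current value:
SELECT @@disable_update_delete_distributed_transactions;

-- Leave UPDATEs and DELETEs running as distributed (atomic) transactions:
SET GLOBAL disable_update_delete_distributed_transactions = OFF;
```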

Caution

When disable_update_delete_distributed_transactions is set to OFF, UPDATEs and DELETEs run more slowly than when the variable is set to ON. However, when disable_update_delete_distributed_transactions is set to ON, UPDATEs and DELETEs are not atomic operations, and consistency across all partitions is not guaranteed.

Locking

Running multiple UPDATEs and DELETEs as distributed transactions increases the likelihood that these operations will block one another, potentially leading to deadlock.

The following scenario demonstrates how deadlock can occur when the wait-die algorithm is not enabled. This algorithm is explained in the next section.

Two UPDATEs (call them u1 and u2) are scheduled to run at nearly the same time. u1 and u2 each run in their own distributed transaction, and both update the same rows in the same two database partitions. These events occur sequentially:

  • u1 locks the rows in partition 1 and starts updating the rows in partition 1.

  • u2 locks the rows in partition 2 and starts updating the rows in partition 2.

  • u1 attempts to lock the rows in partition 2, but cannot, because the rows have already been locked by u2.

  • u2 attempts to lock the rows in partition 1, but cannot, because the rows have already been locked by u1.

  • Deadlock has occurred, because neither u1 nor u2 can make any further progress.
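The scenario above can be sketched as two overlapping multi-partition UPDATEs. The table and column names below are hypothetical, and the order in which each statement locks the partitions depends on internal scheduling:

```sql
-- Hypothetical table whose matching rows are spread across partitions 1 and 2.
-- Session 1 runs u1:
UPDATE accounts SET balance = balance - 10 WHERE status = 'active';

-- Session 2 runs u2 at nearly the same time, touching the same rows:
UPDATE accounts SET balance = balance + 5 WHERE status = 'active';
```

Because each UPDATE acquires its row locks partition by partition, u1 and u2 can each lock one partition first and then block waiting on the other, producing the deadlock described above.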

Avoiding Deadlock by Using the Wait-Die Algorithm

SingleStore uses a wait-die algorithm to avoid deadlocks that could occur when multiple UPDATEs and DELETEs run as distributed transactions. When such a transaction t attempts to acquire a row lock that is already held by another transaction, the algorithm decides whether t should wait for the row lock to be freed, or whether t should be rolled back and retried later. This decision is based on the transaction's age: older transactions always wait, while newer transactions do not wait and are rolled back.

In the scenario described in the previous section, if the wait-die algorithm were enabled, u2 would die because it is a newer transaction than u1. u1 would then update the rows in partition 2, and following that, u2 would be retried in a new distributed transaction.

The next section explains how to enable the wait-die algorithm.

Engine Variables for Configuring How the Wait-Die Algorithm Operates

You can configure engine variables to specify whether the wait-die algorithm is enabled and how the algorithm operates.

deadlock_avoidance_strategy

Possible values are wait_die (the default) and none. With wait_die, the wait-die algorithm is enabled. This is a global variable that can sync to all aggregators and leaves.
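For example, the strategy can be set with standard SET GLOBAL syntax (an assumed sketch; confirm the command form for your version):

```sql
-- Keep the wait-die algorithm enabled (the default):
SET GLOBAL deadlock_avoidance_strategy = 'wait_die';

-- Disable deadlock avoidance:
-- SET GLOBAL deadlock_avoidance_strategy = 'none';
```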

Caution

deadlock_avoidance_strategy should be set to wait_die, unless you are certain that distributed UPDATE and DELETE transactions will not deadlock, or disable_update_delete_distributed_transactions is set to ON.

wait_die_retry_on_die_sleep_factor

This variable is used to compute how long the wait-die algorithm waits for the locks that a distributed transaction needs to acquire to be released, before rolling back the transaction. The wait-die algorithm waits wait_die_retry_on_die_sleep_factor milliseconds multiplied by the number of locks the distributed transaction has already acquired. The default value of this variable is 250 milliseconds. If deadlocks in distributed UPDATE and DELETE transactions are likely, set this variable to a much lower value, near 0. This is a global variable that can sync to all aggregators and leaves.
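As a worked example: with the default factor of 250 milliseconds, a transaction that has already acquired 4 locks waits 250 * 4 = 1000 milliseconds before it is rolled back. Lowering the factor shortens that wait (assumed SET GLOBAL syntax):

```sql
-- wait = wait_die_retry_on_die_sleep_factor * locks_already_acquired
-- Default: 250 ms * 4 locks = 1000 ms. With the setting below: 10 ms * 4 locks = 40 ms.
SET GLOBAL wait_die_retry_on_die_sleep_factor = 10;
```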

lock_wait_timeout

If deadlock_avoidance_strategy is set to wait_die, the wait-die algorithm will honor lock_wait_timeout. See the list of sync variables for more information on lock_wait_timeout.