SingleNodeLockingModel

Version 6

    Problem

    The current approach of acquiring locks on all data owners in order to ensure consistency is deadlock prone when multiple transactions access the same key concurrently.

    E.g. a key “a” such that consistentHash(“a”) = {N3, N4} (see diagram below).

    Two transactions write to key “a” from nodes N1 and N2 at the same time. With the right timing, the RPCs can take place in the order indicated in the diagram below. RPCs 3 and 4 then deadlock:

    • both transactions wait for (potentially) the entire lockAcquisitionTimeout (10s), only to roll back afterwards
    • the threads running the transactions are also blocked for the same time, doing no useful work
    • other nodes trying to access the keys already locked by the two transactions are blocked as well, potentially generating more deadlocks

    1.png
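This deadlock can be reproduced in miniature on a single JVM. The sketch below is not Infinispan code: the two ReentrantLocks stand in for the lock on key “a” on owners N3 and N4, a latch forces the unlucky interleaving from the diagram, and a short timeout stands in for lockAcquisitionTimeout. Both transactions time out and must roll back:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class DeadlockSketch {
    // Stand-ins for the lock on key "a" on the two data owners N3 and N4.
    static final ReentrantLock lockOnN3 = new ReentrantLock();
    static final ReentrantLock lockOnN4 = new ReentrantLock();
    // Forces the interleaving from the diagram: each transaction grabs its
    // first lock before either tries for the second one.
    static final CountDownLatch bothHoldFirstLock = new CountDownLatch(2);
    static volatile boolean tx1Prepared, tx2Prepared;

    // Locks the owners in the given order; a short timeout stands in for
    // lockAcquisitionTimeout (10s in the text). Returns false on rollback.
    static boolean runTx(ReentrantLock first, ReentrantLock second)
            throws InterruptedException {
        first.lock();
        try {
            bothHoldFirstLock.countDown();
            bothHoldFirstLock.await();
            if (second.tryLock(200, TimeUnit.MILLISECONDS)) {
                second.unlock();
                return true;   // prepared on both owners
            }
            return false;      // timed out -> transaction rolls back
        } finally {
            first.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread tx1 = new Thread(() -> {
            try { tx1Prepared = runTx(lockOnN3, lockOnN4); } catch (InterruptedException ignored) { }
        });
        Thread tx2 = new Thread(() -> {
            try { tx2Prepared = runTx(lockOnN4, lockOnN3); } catch (InterruptedException ignored) { }
        });
        tx1.start();
        tx2.start();
        tx1.join();
        tx2.join();
        // Neither transaction could prepare: both waited out the timeout.
        System.out.println("tx1 prepared: " + tx1Prepared + ", tx2 prepared: " + tx2Prepared);
    }
}
```

Both threads block for the full timeout while holding their first lock, which is exactly the wasted work the bullets above describe.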

    Solution

    Regardless of the number of replicas (numOwners) to which the data needs to be replicated, a single node is elected from among these replicas to coordinate lock acquisition. The election is made using the consistent hash function: the node responsible for coordinating lock acquisition is the main data owner.

    The main data owner of a key is defined as follows: mainDataOwner(key) = consistentHash(key).get(0).

    After it acquires the lock locally, the main data owner is also in charge of propagating the changes to the remaining owners. The diagram below depicts this scenario.
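A minimal sketch of the election, assuming a toy consistent hash over a four-node cluster with numOwners = 2 (the hash function and node names here are illustrative, not Infinispan's actual implementation):

```java
import java.util.Arrays;
import java.util.List;

public class MainDataOwner {
    // Hypothetical stand-in for the cluster's consistent hash: maps a key
    // to its ordered list of data owners (numOwners = 2 in this sketch).
    static List<String> consistentHash(String key) {
        List<String> nodes = Arrays.asList("N1", "N2", "N3", "N4");
        int first = Math.abs(key.hashCode()) % nodes.size();
        return Arrays.asList(nodes.get(first), nodes.get((first + 1) % nodes.size()));
    }

    // mainDataOwner(key) = consistentHash(key).get(0): the first owner in
    // the consistent-hash list coordinates all lock acquisitions for key.
    static String mainDataOwner(String key) {
        return consistentHash(key).get(0);
    }

    public static void main(String[] args) {
        String key = "a";
        System.out.println("owners(" + key + ") = " + consistentHash(key)
                + ", main data owner = " + mainDataOwner(key));
    }
}
```

Because every node computes the same consistent hash, every node elects the same coordinator for a given key without any extra communication.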

    2.png

    Transactional scenario

    This section discusses the situation in which arrows 1 and 3 in the diagram above are transactions.


    1. at 1, N1 sends a prepare message to the main data owner N3
    2. N3 acquires a lock on “a” on behalf of the transaction originated at N1
    3. N3 then propagates the prepare to N4 (arrow 2). From a locking perspective this call is guaranteed to succeed, because the lock on “a” has already been acquired on the main data owner (N3)
    4. N3 acknowledges lock acquisition to N1
    5. at this point N1 confirms the prepare to the transaction manager and expects a commit/rollback from it
    6. when it receives the commit/rollback, it multicasts a commit/rollback message to all the nodes on which the transaction was prepared; in the above example that is N3 and N4
    7. when they receive the commit/rollback message, the data owners N3 and N4 apply the changes and then release the locks
    8. the transaction originated at N2 can now acquire the lock (arrow 3) and then runs through steps 1-7.
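The locking behaviour of these steps can be condensed into a single-JVM sketch, assuming (hypothetically) that the main data owner keeps a per-key table of lock holders; the node and transaction names follow the example above:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Single-JVM sketch of the transactional steps: each key's lock lives only
// on its main data owner, so the backup owner never contends for it.
public class LockCoordinationSketch {
    // On the main data owner: key -> id of the transaction holding the lock.
    static final Map<String, String> locksOnMainOwner = new ConcurrentHashMap<>();

    // Steps 1-2: the originator's prepare reaches the main data owner,
    // which acquires the key lock on behalf of the transaction.
    static boolean prepareOnMainOwner(String key, String txId) {
        return locksOnMainOwner.putIfAbsent(key, txId) == null;
    }

    // Step 3: the main owner forwards the prepare to the backup owner.
    // No lock is acquired here, so from a locking perspective it cannot fail.
    static void propagateToBackup(String key, String txId) {
        // apply the prepared modification on the backup owner
    }

    // Steps 6-7: on commit/rollback the owners apply or discard the changes
    // and the main owner releases the lock.
    static void finishOnMainOwner(String key, String txId) {
        locksOnMainOwner.remove(key, txId);
    }

    public static void main(String[] args) {
        // The transaction from N1 (arrow 1) prepares first and wins the lock.
        boolean tx1 = prepareOnMainOwner("a", "tx-from-N1");
        // The transaction from N2 (arrow 3) cannot prepare yet.
        boolean tx2 = prepareOnMainOwner("a", "tx-from-N2");
        propagateToBackup("a", "tx-from-N1"); // arrow 2, cannot deadlock
        finishOnMainOwner("a", "tx-from-N1"); // commit releases the lock
        // Step 8: N2's transaction now acquires the lock and proceeds.
        boolean tx2Retry = prepareOnMainOwner("a", "tx-from-N2");
        System.out.println(tx1 + " " + tx2 + " " + tx2Retry); // true false true
    }
}
```

Since all lock acquisitions for “a” funnel through one node, no two nodes can ever hold the lock in conflicting orders, which removes the deadlock from the Problem section.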


    Non-transactional scenario

    The non-transactional scenario is a simplification of the transactional one, in which the last phase (commit) is skipped. Because locks are acquired in the same way, i.e. coordinated by the main data owner, in both the transactional and non-transactional scenarios, update consistency is assured for mixed concurrent access: e.g. when a transaction and a thread that is not in the scope of a transaction try to update the same data.
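A minimal sketch of the non-transactional path, assuming the same per-key lock on the main data owner that the transactional path uses (the names here are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch: a non-transactional put on the main data owner acquires the key
// lock, applies and propagates the change, and releases the lock in one
// step, i.e. the separate commit phase of the transactional flow is skipped.
public class NonTxSketch {
    // The same per-key lock on the main data owner that transactions use,
    // so transactional and non-transactional writers serialize on one lock.
    static final ReentrantLock keyLockOnMainOwner = new ReentrantLock();
    static String valueOfA = "old";

    static void nonTxPut(String newValue) {
        keyLockOnMainOwner.lock();        // acquire, as a prepare would
        try {
            valueOfA = newValue;          // apply on the main owner
            // ...propagate to the remaining owners here...
        } finally {
            keyLockOnMainOwner.unlock();  // no separate commit phase
        }
    }

    public static void main(String[] args) {
        nonTxPut("new");
        System.out.println(valueOfA);
    }
}
```

Because both paths contend for the same lock on the same node, a non-transactional writer can never slip an update in between a transaction's prepare and commit.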


    Related

    The JIRA that tracks this change in Infinispan is: ISPN-1137

    OptimisticLockingInInfinispan should also be in place before going on and implementing this; it is tracked by ISPN-1131.