4 Replies Latest reply on May 9, 2014 2:27 AM by rvansa

    Locking mechanism

    tomas11

      Hi

       

What locking mechanism would help me to pass the following test? I tried REPEATABLE_READ and READ_COMMITTED, with no success.

       

      There is Cache<UniqueID, Set<UniqueID>> cache;

The test runs 4 threads, each adding additional UniqueIDs to the Set&lt;UniqueID&gt; under the same key. Since each thread adds 1000 values, at the end the cache should hold 4000 UniqueIDs. But that doesn't happen - there are fewer. I guess the reason is that a stale "old" Set&lt;UniqueID&gt; is obtained during the read.

       

Is there some mechanism to cover this case, i.e. to lock the cache value for the read operation?

       

      Thanks

       

      @Test
      public void concurrentWriteTest() throws InterruptedException {
          UniqueID key = new UniqueID();
          Thread thread1 = generateThread(key);
          Thread thread2 = generateThread(key);
          Thread thread3 = generateThread(key);
          Thread thread4 = generateThread(key);
          thread1.start();
          thread2.start();
          thread3.start();
          thread4.start();
          // wait for all threads to finish (join instead of a busy-wait loop)
          thread1.join();
          thread2.join();
          thread3.join();
          thread4.join();
         
          Assert.assertEquals(4000, cache.get(key).size());
      }
      
      private Thread generateThread(final UniqueID key) {
           return new Thread(new Runnable() {
                  @Override
                  public void run() {
                      int i = 0;
                      while (i<1000) {
                          i++;
                          Set<UniqueID> keySet = cache.get(key);
                          if(keySet == null) {
                               keySet = new HashSet<>();
                          }
                          keySet.add(new UniqueID());
                          cache.put(key, keySet);
                      }
                  }
              });
      }
      
      
      
        • 1. Re: Locking mechanism
          rvansa

          It's no wonder that there are fewer IDs: the set obtained from the cache is always a copy, so concurrent read-modify-write cycles overwrite each other's updates.

          a) Use transactions http://infinispan.org/docs/6.0.x/user_guide/user_guide.html#_configuring_transactions

          b) Use atomic operations such as cache.putIfAbsent(...) and cache.replace(...)

          • 2. Re: Locking mechanism
            mircea.markus

            Between the point where you read the set (the cache.get(key) line) and the point where you write it back (the cache.put(key, keySet) line), another thread can update it, and you then overwrite that update.

            I see two ways for you to handle this:

            1. Use pessimistic locking: lock the key before reading it and release the lock after writing it. This guarantees that only one thread updates the value at a time.
            2. Use DeltaAware (or (FineGrained)AtomicHashMap) to update only parts of the map
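            Option 2 could be sketched roughly as follows, assuming Infinispan 5.x/6.x with transactions enabled. AtomicMapLookup and getFineGrainedAtomicMap are the real Infinispan API; storing the IDs as the map's keys with a dummy Boolean value (and a dedicated cache for the atomic map, rather than the Set-valued one) is my assumption, since a FineGrainedAtomicMap locks per entry rather than per whole value:

            ```
            import java.util.Map;
            import org.infinispan.atomic.AtomicMapLookup;

            // Inside a transaction: each entry of the fine-grained atomic map is
            // locked individually, so concurrent threads adding different IDs
            // do not conflict and no thread overwrites another's additions.
            Map<UniqueID, Boolean> ids = AtomicMapLookup.getFineGrainedAtomicMap(cache, key);
            ids.put(new UniqueID(), Boolean.TRUE);
            ```

            The test would then assert on ids.size() instead of cache.get(key).size().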
            • 3. Re: Locking mechanism
              tomas11

              Thanks for answers.

               

              I use this transaction configuration:

               

              <transaction transactionMode="TRANSACTIONAL" useEagerLocking="true"
                           transactionManagerLookupClass="org.infinispan.transaction.lookup.GenericTransactionManagerLookup"
                           cacheStopTimeout="30000" lockingMode="PESSIMISTIC">
                  <recovery enabled="true"/>
              </transaction>

               

              but this does not seem to do the job. Is declaring it in the configuration file enough, or is there a need to explicitly call lock in code?

               

              How could atomic operations such as cache.putIfAbsent(...) and cache.replace(...) help me in this case?

              • 4. Re: Re: Locking mechanism
                rvansa

                Configuring transactions is not enough; you have to use them:

                 

                TransactionManager tm = cache.getAdvancedCache().getTransactionManager();

                tm.begin();
                try {
                    cache.getAdvancedCache().lock(key);
                    Set<UniqueID> keySet = cache.get(key);
                    if (keySet == null) {
                        keySet = new HashSet<>();
                    }
                    keySet.add(new UniqueID());
                    cache.put(key, keySet);
                    tm.commit();
                } catch (Exception e) {
                    tm.rollback();
                }

                 

                This could also work with optimistic transactions, but you'd have to enable write skew check in the configuration. Pessimistic transactions are more appropriate here, though.
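                For reference, the optimistic variant would need something like this in the cache configuration. This is a sketch against the Infinispan 6.x XML schema; write skew check requires REPEATABLE_READ isolation and entry versioning:

                ```xml
                <transaction transactionMode="TRANSACTIONAL" lockingMode="OPTIMISTIC"/>
                <locking isolationLevel="REPEATABLE_READ" writeSkewCheck="true"/>
                <versioning enabled="true" versioningScheme="SIMPLE"/>
                ```

                With this, a transaction whose read set was concurrently modified fails at commit instead of silently overwriting the other update.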

                 

                As for the atomic operations, you could just do the same as for ConcurrentHashMap:

                 

                UniqueID newId = new UniqueID();
                for (;;) {
                    Set<UniqueID> originalSet = cache.get(key);
                    if (originalSet == null) {
                        Set<UniqueID> newSet = new HashSet<UniqueID>(Collections.singleton(newId));
                        if (cache.putIfAbsent(key, newSet) == null) break;
                    } else {
                        Set<UniqueID> newSet = new HashSet<UniqueID>(originalSet);
                        newSet.add(newId);
                        if (cache.replace(key, originalSet, newSet)) break;
                    }
                }
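                The same retry loop can be exercised against a plain java.util.concurrent.ConcurrentHashMap, which is a handy way to convince yourself the pattern is free of lost updates. A standalone sketch, using java.util.UUID in place of your UniqueID and a String key:

                ```java
                import java.util.Collections;
                import java.util.HashSet;
                import java.util.Set;
                import java.util.UUID;
                import java.util.concurrent.ConcurrentHashMap;
                import java.util.concurrent.ConcurrentMap;

                public class CasSetDemo {

                    // Compare-and-swap retry loop mirroring cache.putIfAbsent / cache.replace:
                    // each successful replace installs a copy of the old set plus one new ID,
                    // so no concurrent addition is ever overwritten.
                    static void addId(ConcurrentMap<String, Set<UUID>> map, String key, UUID newId) {
                        for (;;) {
                            Set<UUID> originalSet = map.get(key);
                            if (originalSet == null) {
                                Set<UUID> newSet = new HashSet<>(Collections.singleton(newId));
                                if (map.putIfAbsent(key, newSet) == null) break; // won the race
                            } else {
                                Set<UUID> newSet = new HashSet<>(originalSet);
                                newSet.add(newId);
                                if (map.replace(key, originalSet, newSet)) break; // no concurrent change
                            }
                            // another thread updated the entry first: reread and retry
                        }
                    }

                    public static void main(String[] args) throws InterruptedException {
                        ConcurrentMap<String, Set<UUID>> map = new ConcurrentHashMap<>();
                        String key = "shared";
                        Thread[] threads = new Thread[4];
                        for (int t = 0; t < threads.length; t++) {
                            threads[t] = new Thread(() -> {
                                for (int i = 0; i < 1000; i++) {
                                    addId(map, key, UUID.randomUUID());
                                }
                            });
                            threads[t].start();
                        }
                        for (Thread thread : threads) {
                            thread.join();
                        }
                        System.out.println(map.get(key).size()); // prints 4000: no lost updates
                    }
                }
                ```

                Note that the published sets are never mutated after being installed (copy-on-write), which is what makes sharing them across threads safe here.
                
                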
