0 Replies Latest reply on Apr 29, 2013 7:27 AM by Deyan Pandulev

    Deadlock using pessimistic locking mode

    Deyan Pandulev Newbie

      System Information:

      OS: Debian Squeeze

      JDK: 1.6.0_32

      AS: Glassfish 3.1.1 Open Source

      ISPN: 5.2.1


      Below is test code that uses the DummyTransactionManager to exercise Infinispan's pessimistic locking. I am testing with a replicated cluster. Whenever more than one node forms the cluster, deadlocks are produced.


      import java.util.Random;
      
      import javax.transaction.Status;
      import javax.transaction.TransactionManager;
      
      import org.infinispan.AdvancedCache;
      import org.infinispan.Cache;
      import org.infinispan.manager.DefaultCacheManager;
      
      public class SimpleClusterWriter {
      
          public static void main(String[] args) throws Exception {
              DefaultCacheManager dcm = new DefaultCacheManager("simple_cluster_node1.xml");
              Cache<String, String> c = dcm.getCache("simpleCache");
      
              final AdvancedCache<String, String> ac = c.getAdvancedCache();
              final TransactionManager tm = ac.getTransactionManager();
              final String key = "k";
      
              final Random r = new Random();
              while (true) {
                  final String value = Integer.toString(r.nextInt(999));
      
                  tm.begin();
                  try {
                      String oldValue = c.get(key);
                      System.out.println("Will change old value => " + oldValue + " to the new value => " + value);
      
                      ac.lock(key);
                      ac.put(key, value);
      
                      System.out.println("[W] [r] for rollback?");
      
                      Thread.sleep(r.nextInt(5) * 1000L);
      
                      if (r.nextBoolean()) {
                          throw new RuntimeException("Should rollback value => " + value);
                      }
                  } catch (Exception e) {
                      System.err.println("Executing rollback...");
                      tm.setRollbackOnly();
                      System.out.println(e);
                  } finally {
                      System.out.println("Transaction => " + tm.getTransaction() + " has status => " + tm.getStatus());
      
                      if (tm.getStatus() == Status.STATUS_ACTIVE) {
                          tm.commit();
                      } else {
                          tm.rollback();
                      }
                  }
              }
          }
      }
      

       

      The XML configuration is below:


      <infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="urn:infinispan:config:5.2 http://docs.jboss.org/infinispan/schemas/infinispan-config-5.2.xsd"
          xmlns="urn:infinispan:config:5.2">
      
          <global>
              <transport clusterName="test" nodeName="node1" machineId="Linux01">
                  <properties>
                      <property name="configurationFile" value="jgroups-udp.xml" />
                  </properties>
              </transport>
      
          </global>
      
          <namedCache name="simpleCache">
              <clustering mode="replication">
                  <stateTransfer chunkSize="0" fetchInMemoryState="true"
                      timeout="620000" />
                  <sync replTimeout="20000" />
              </clustering>
      
              <locking isolationLevel="REPEATABLE_READ" concurrencyLevel="5000"
                  writeSkewCheck="false" lockAcquisitionTimeout="60000" />
      
              <transaction
                  transactionManagerLookupClass="org.infinispan.transaction.lookup.GenericTransactionManagerLookup"
                  autoCommit="false" transactionMode="TRANSACTIONAL" lockingMode="PESSIMISTIC" />
      
              <deadlockDetection enabled="true" />
          </namedCache>
      
      </infinispan>
      
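
      For completeness, the same cache can be defined programmatically with Infinispan's fluent builders. This is a sketch of an (assumed-equivalent) configuration; the cache, cluster, and node names are taken from the XML above, and the class name `ProgrammaticConfig` is hypothetical:

      ```java
      import org.infinispan.configuration.cache.CacheMode;
      import org.infinispan.configuration.cache.Configuration;
      import org.infinispan.configuration.cache.ConfigurationBuilder;
      import org.infinispan.configuration.global.GlobalConfiguration;
      import org.infinispan.configuration.global.GlobalConfigurationBuilder;
      import org.infinispan.manager.DefaultCacheManager;
      import org.infinispan.manager.EmbeddedCacheManager;
      import org.infinispan.transaction.LockingMode;
      import org.infinispan.transaction.TransactionMode;
      import org.infinispan.transaction.lookup.GenericTransactionManagerLookup;
      import org.infinispan.util.concurrent.IsolationLevel;

      public class ProgrammaticConfig {

          public static EmbeddedCacheManager build() {
              // Mirrors the <global><transport> element.
              GlobalConfiguration gc = new GlobalConfigurationBuilder()
                  .transport().defaultTransport()
                      .clusterName("test").nodeName("node1").machineId("Linux01")
                      .addProperty("configurationFile", "jgroups-udp.xml")
                  .build();

              // Mirrors the <namedCache name="simpleCache"> element.
              Configuration cfg = new ConfigurationBuilder()
                  .clustering().cacheMode(CacheMode.REPL_SYNC)
                      .sync().replTimeout(20000)
                      .stateTransfer().chunkSize(0).fetchInMemoryState(true).timeout(620000)
                  .locking().isolationLevel(IsolationLevel.REPEATABLE_READ)
                      .concurrencyLevel(5000).writeSkewCheck(false)
                      .lockAcquisitionTimeout(60000)
                  .transaction().transactionMode(TransactionMode.TRANSACTIONAL)
                      .lockingMode(LockingMode.PESSIMISTIC)
                      .transactionManagerLookup(new GenericTransactionManagerLookup())
                      .autoCommit(false)
                  .deadlockDetection().enable()
                  .build();

              EmbeddedCacheManager cm = new DefaultCacheManager(gc);
              cm.defineConfiguration("simpleCache", cfg);
              return cm;
          }
      }
      ```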

       

      After I was unable to solve the problem, I decided to deploy the same logic in the Glassfish AS to see whether the TransactionManager used there behaves differently.

       

      Below is the Config class used to inject the EmbeddedCacheManager:


      import java.io.IOException;
      
      import javax.enterprise.context.ApplicationScoped;
      import javax.enterprise.inject.Produces;
      
      import org.infinispan.manager.DefaultCacheManager;
      import org.infinispan.manager.EmbeddedCacheManager;
      
      public class Config {
      
          @Produces
          @ApplicationScoped
          public EmbeddedCacheManager defaultClusteredCacheManager() throws IOException {
              return new DefaultCacheManager("simple_cluster_node1.xml");
          }
      }
      

       

      The XML configuration is the same as above.

       

      The @Singleton into which the manager is injected:

       

      import java.util.Random;
      
      import javax.annotation.PostConstruct;
      import javax.annotation.PreDestroy;
      import javax.ejb.ConcurrencyManagement;
      import javax.ejb.ConcurrencyManagementType;
      import javax.ejb.EJBException;
      import javax.ejb.Singleton;
      import javax.ejb.Startup;
      import javax.ejb.TransactionAttribute;
      import javax.ejb.TransactionAttributeType;
      import javax.inject.Inject;
      
      import org.infinispan.Cache;
      import org.infinispan.manager.EmbeddedCacheManager;
      
      @Startup
      @Singleton
      @ConcurrencyManagement(ConcurrencyManagementType.BEAN)
      @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
      public class SimpleClusterSingleton implements ClusterEJBInterface {
      
          @Inject
          private EmbeddedCacheManager ecm;
      
          private Cache<String, String> cache;
      
          private final Random r = new Random();
      
          public SimpleClusterSingleton() {
          }
      
          @PostConstruct
          private void init() {
              this.cache = ecm.getCache("simpleCache");
          }
      
          @PreDestroy
          private void uninit() {
          }
      
          public String getClusterValue(String key) {
              return this.cache.get(key);
          }
      
          public void setClusterValue(String key, String value) {
              this.cache.getAdvancedCache().lock(key);
      
              synchronized (this.r) {
                  try {
                      Thread.sleep(r.nextInt(4) * 1000L);
                  } catch (InterruptedException e) {
                      e.printStackTrace();
                  }
              }
      
              this.cache.put(key, value);
      
              synchronized (this.r) {
                  if (r.nextInt(100) < 10) {
                      throw new EJBException("For key => " + key + " value => " + value + " should be rolled back.");
                  }
              }
          }
      }
      

      As you can see, the logic here is almost the same as in the standalone example: rollbacks are triggered on a random basis. After deploying the app and testing with a simple standalone reader for the values (again forming the cluster), I observe strange behavior.

      Steps to reproduce:

      1. Deploy the app on the application server and run a standalone client that uses the same XML config, forming a replicated cluster of two nodes.
      2. Put some (key, value) pairs to test rollback. Everything works correctly.
      3. Undeploy the app and stop the application server. Only node 2, the standalone client, remains.
      4. Start the application server and deploy the app again.
      5. Repeat step 2. After the first rollback, I get the following error on every subsequent put operation.


      Apr 29, 2013 1:58:22 PM org.infinispan.interceptors.InvocationContextInterceptor handleAll
      ERROR: ISPN000136: Execution error
      org.infinispan.util.concurrent.TimeoutException: Unable to acquire lock after [1 seconds] on key [k] for requestor [DldGlobalTransaction{coinToss=-8384596737959352603, lockIntention=k, affectedKeys=[], locksAtOrigin=[]} GlobalTransaction:<node1-16423>:13:remote]! Lock held by [DldGlobalTransaction{coinToss=3078967057874863995, lockIntention=null, affectedKeys=[], locksAtOrigin=[]} GlobalTransaction:<node1-16423>:12:remote]
          at org.infinispan.util.concurrent.locks.LockManagerImpl.lock(LockManagerImpl.java:213)
          at org.infinispan.util.concurrent.locks.LockManagerImpl.acquireLock(LockManagerImpl.java:186)
          at org.infinispan.interceptors.locking.AbstractTxLockingInterceptor.lockKeyAndCheckOwnership(AbstractTxLockingInterceptor.java:186)
          at org.infinispan.interceptors.locking.AbstractTxLockingInterceptor.lockAndRegisterBackupLock(AbstractTxLockingInterceptor.java:123)
          at org.infinispan.interceptors.locking.PessimisticLockingInterceptor.visitLockControlCommand(PessimisticLockingInterceptor.java:250)
      

      I have reason to believe that the rollback is not behaving properly: there should be no unfinished transactions at this point. Any help will be greatly appreciated.
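
      The failure mode looks like a lock whose owner is gone: once the transaction (or node) that holds the key lock disappears without releasing it, every later acquisition attempt simply times out. A minimal, Infinispan-free sketch of that symptom using only java.util.concurrent (the class name StaleLockDemo is hypothetical):

      ```java
      import java.util.concurrent.TimeUnit;
      import java.util.concurrent.locks.ReentrantLock;

      public class StaleLockDemo {

          public static void main(String[] args) throws InterruptedException {
              final ReentrantLock lock = new ReentrantLock();

              // Simulated "node 1" transaction: acquires the lock and terminates
              // without ever releasing it (as a crashed or undeployed node would).
              Thread staleOwner = new Thread(new Runnable() {
                  public void run() {
                      lock.lock();
                  }
              });
              staleOwner.start();
              staleOwner.join();

              // Every later "transaction" now times out on acquisition, which is
              // the same symptom as the ISPN000136 TimeoutException above.
              boolean acquired = lock.tryLock(1, TimeUnit.SECONDS);
              System.out.println("acquired=" + acquired); // prints acquired=false
          }
      }
      ```

      In the Infinispan case the analogous fix is making sure the stale lock owner's transaction is actually rolled back cluster-wide, rather than increasing lockAcquisitionTimeout.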