1 reply. Latest reply on Aug 20, 2009 7:25 AM by manik

    Problems with Transactions under medium load

    egeesken

      Hi, I am working with JBoss PojoCache (PojoCache 3.0 GA "Naga", Core Cache 3.1.0 "Cascabel"), but I am using it in WebLogic Server 10.2 with JDK 1.5 on a Solaris 64-bit environment.
      Everything was working fine! But now we have entered the stress-test phase and we are running into a huge number of transaction and lock timeout exceptions, such as:

      org.jboss.cache.lock.TimeoutException: read lock for /G/VALIDSHEETS/1000/M could not be acquired by Thread[[ACTIVE] ExecuteThread: '23' for queue: 'weblogic.kernel.Default (self-tuning)',5,Pooled Threads] after 50000 ms. Locks: Read lock owners: []
      Write lock owner: null
      

      java.lang.IllegalStateException: Cannot mark the transaction for rollback. xid=BEA1-10EDB1C915B28181CB68, status=Rolled back. [Reason=weblogic.transaction.internal.TimedOutException: Transaction timed out after 30 seconds
      


      I have changed the nodeLockingScheme from mvcc to pessimistic; now the lock exceptions occur less often, but they are not gone under stress.
      By stress I mean 20 concurrent users, and our target is more than 100 concurrent users.

      My Cache Configuration is:
      <?xml version="1.0" encoding="UTF-8"?>
      <jbosscache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:jboss:jbosscache-core:config:3.1">
      
       <!-- Configure the TransactionManager -->
       <transaction transactionManagerLookupClass="org.jboss.cache.transaction.GenericTransactionManagerLookup"/>
      
       <jmxStatistics enabled="false"/>
      
       <locking
       isolationLevel="READ_COMMITTED"
       lockParentForChildInsertRemove="false"
       lockAcquisitionTimeout="50000"
       nodeLockingScheme="pessimistic"
       writeSkewCheck="false"
       useLockStriping="true"
       concurrencyLevel="500"/>
      
      
       <eviction wakeUpInterval="5000">
       <default algorithmClass="org.jboss.cache.eviction.LRUAlgorithm" eventQueueSize="2000000">
       <property name="maxNodes" value="3000000" />
       <property name="timeToLive" value="1000000" />
       </default>
      
       <!-- configurations for various regions-->
       <region name="/_default_">
       <property name="maxNodes" value="3000000" />
       <property name="timeToLive" value="1000000" />
       </region>
       <region name="/G">
       <property name="maxNodes" value="500000" />
       <property name="timeToLive" value="6000000" />
       <property name="maxAge" value="6000000" />
       </region>
       <region name="/U">
       <property name="maxNodes" value="3000000" />
       <property name="timeToLive" value="3600000" />
       <property name="maxAge" value="3600000" />
       </region>
       </eviction>
      </jbosscache>
      


      Is there anything I can do to avoid these lock exceptions? Any ideas?

      The process of our application is the following:

      The user requests data.
      The BusinessObjectManager looks for the data in the cache.
      If it cannot be found in the cache, the BusinessObjectManager requests the data from SAP and then attaches it to the cache.

      If the user changes the data, the cached records and the dependent ones are detached, and if requested again, they are read from SAP and attached to the cache again.
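
      In code the flow looks roughly like the following sketch (the class layout and loadFromSap() are simplified placeholders, not our real implementation):

      import org.jboss.cache.pojo.PojoCache;
      import org.jboss.cache.pojo.PojoCacheFactory;

      // Simplified sketch of the cache-aside flow described above.
      public class BusinessObjectManager {

          // The cache is created from the XML configuration shown above and started immediately.
          private final PojoCache cache = PojoCacheFactory.createCache("cache-config.xml", true);

          public Object getSheet(String id) {
              String fqn = "/G/VALIDSHEETS/" + id;
              Object sheet = cache.find(fqn);        // look for the data in the cache
              if (sheet == null) {
                  sheet = loadFromSap(id);           // cache miss: read the data from SAP
                  cache.attach(fqn, sheet);          // attach it to the cache
              }
              return sheet;
          }

          public void onSheetChanged(String id) {
              // The changed record (and its dependent records) is detached; the next
              // request reads fresh data from SAP and attaches it to the cache again.
              cache.detach("/G/VALIDSHEETS/" + id);
          }

          private Object loadFromSap(String id) {
              // placeholder for the real SAP lookup
              return new Object();
          }
      }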

      Can anyone help?
      I am really under pressure; these stress-test errors will jeopardize our project.

      Any help would be greatly appreciated.

      Regards, Edmund





        • 1. Re: Problems with Transactions under medium load
          manik

          I recommend using MVCC: it performs a lot better than pessimistic locking and removes the possibility of deadlocks. Pessimistic locking is also legacy code. Note that the solutions below all relate to MVCC and have no effect on pessimistic locking.
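
          For the configuration you posted, that would just mean switching the locking element back, e.g. something like:

          <locking
             isolationLevel="READ_COMMITTED"
             lockParentForChildInsertRemove="false"
             lockAcquisitionTimeout="50000"
             nodeLockingScheme="mvcc"
             writeSkewCheck="false"
             useLockStriping="true"
             concurrencyLevel="500"/>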

          If you see frequent lock acquisition timeouts, it could be because

          a) Your transactions take too long. Solution: increase the lock acquisition timeout (lockAcquisitionTimeout in the cache configuration) and the transaction timeout in your WebLogic transaction manager.

          b) Your node-to-lock ratio is too high. Solution: increase concurrencyLevel. As a rule of thumb, make sure your concurrency level is greater than the number of fqns in the cache; e.g., if you expect 10000 fqns, set your concurrency level to a larger number. Note that this will increase your memory footprint, though.

          Another approach is to disable striped locking altogether (set useLockStriping to false; see the documentation). This will increase concurrency, but again causes memory usage to go up.
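
          Applied to the configuration you posted, the non-striped variant would look roughly like this (alternatively, keep useLockStriping="true" and raise concurrencyLevel well above the number of fqns you expect to hold):

          <!-- With striping disabled each node gets its own lock; concurrencyLevel is no longer the limiting factor. -->
          <locking
             isolationLevel="READ_COMMITTED"
             lockParentForChildInsertRemove="false"
             lockAcquisitionTimeout="50000"
             nodeLockingScheme="mvcc"
             writeSkewCheck="false"
             useLockStriping="false"
             concurrencyLevel="500"/>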

          HTH
          Manik