7 Replies Latest reply on Jul 4, 2008 5:03 AM by manik

    Excessive ReentrantLock$NonfairSync objects

    molsen-ee

      I'm using JBoss Cache (jbosscache-core-2.1.1.GA) to implement a cache in a Tomcat/Apache environment, and I'm running into memory usage issues.

      When the isolation level is set to anything other than "NONE", an excessive number of ConcurrentHashMap-related objects are created and stay resident in memory. With roughly 400k nodes created there are over 7 million ConcurrentHashMap structures, approximately 16 per node. See the jmap histo dump below for the memory usage.

      num  #instances      #bytes  class name
      ------------------------------------------
        1:    7058684   225877888  java.util.concurrent.ConcurrentHashMap$Segment
        2:    7058738   169409712  java.util.concurrent.locks.ReentrantLock$NonfairSync
        3:    7058684   116611080  [Ljava.util.concurrent.ConcurrentHashMap$HashEntry;
        4:     443647    35648184  [Ljava.util.HashMap$Entry;
        5:     441170    35293456  [Ljava.util.concurrent.ConcurrentHashMap$Segment;
        6:     441115    24702440  org.jboss.cache.lock.ReadWriteLockWithUpgrade
        7:     441115    21173520  org.jboss.cache.UnversionedNode
        8:     443357    17734280  java.util.HashMap
        9:     441170    17646800  java.util.concurrent.ConcurrentHashMap
       10:     441115    17644600  org.jboss.cache.invocation.NodeInvocationDelegate
       11:     455905    11396456  [Ljava.lang.Object;
       12:     460983    11063592  java.util.HashMap$Entry
       13:     270774    11019496  [C
       14:     442649    10623576  java.util.ArrayList
       15:     441377    10593048  java.util.concurrent.ConcurrentHashMap$HashEntry
       16:     441142    10587408  java.util.RegularEnumSet
       17:     441116    10586784  org.jboss.cache.Fqn
       18:     441115    10586760  org.jboss.cache.lock.IdentityLock
       19:     441117     7057872  org.jboss.cache.util.concurrent.ConcurrentHashSet
       20:     441115     7057840  org.jboss.cache.lock.LockStrategyRepeatableRead
       21:     441115     7057840  org.jboss.cache.lock.ReadWriteLockWithUpgrade$WriterLock
       22:     441115     7057840  org.jboss.cache.lock.ReadWriteLockWithUpgrade$ReaderLock
       23:     441115     7057840  org.jboss.cache.lock.LockMap
      


      The cache is initialized and populated in a Tomcat request filter when the Tomcat server first starts. The cache seems to work properly, but it consumes an enormous amount of memory.
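
      For reference, the setup looks roughly like this (a simplified sketch of what the filter does, assuming the 2.1 factory/Configuration API; the node layout and values are just illustrative):

        import org.jboss.cache.Cache;
        import org.jboss.cache.DefaultCacheFactory;
        import org.jboss.cache.Fqn;
        import org.jboss.cache.config.Configuration;
        import org.jboss.cache.lock.IsolationLevel;

        // Simplified version of what the filter does at startup.
        Configuration config = new Configuration();
        config.setIsolationLevel(IsolationLevel.REPEATABLE_READ); // anything other than NONE

        Cache<String, String> cache = new DefaultCacheFactory<String, String>().createCache(config);

        // Roughly 400k nodes, each holding a small amount of data.
        for (int i = 0; i < 400000; i++)
        {
           cache.put(Fqn.fromString("/data/" + i), "key", "value" + i);
        }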

        • 1. Re: Excessive ReentrantLock$NonfairSync objects
          manik

          16 per node doesn't sound correct. I can see one per node for the child map, plus another for the locks, but that's about it. Bear in mind, though, that this is per NODE - so an Fqn like /a/b/c involves 3 nodes.
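
          For example (a quick sketch against the 2.x API; the Fqn and values here are arbitrary):

            // A single put against /a/b/c implicitly creates the intermediate
            // nodes /a and /a/b as well, so one call allocates three nodes'
            // worth of lock and child-map structures.
            cache.put(Fqn.fromString("/a/b/c"), "key", "value");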

          A lot of this stuff is due for major improvements in JBC 3.x where there won't be a CHM per node for locks, just one for children.

          Perhaps the CHMs you see are uncollected objects? Does your app frequently create and delete nodes? Does this change significantly after a System.gc() call?

          • 2. Re: Excessive ReentrantLock$NonfairSync objects
            brian.stansberry

            The 16 refers to the internal data structures (e.g., java.util.concurrent.ConcurrentHashMap$Segment), of which CHM creates 16 per map by default.
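
            For illustration, this is plain JDK behaviour - the three-argument constructor is what controls the segment count (the second map's numbers below are arbitrary, just lower than the defaults):

              // The no-arg constructor uses a concurrencyLevel of 16, so 16
              // Segment objects are allocated, each with its own ReentrantLock
              // and HashEntry[] table.
              ConcurrentHashMap<String, String> heavy = new ConcurrentHashMap<String, String>();

              // A lower concurrencyLevel caps the segment count, trading write
              // concurrency for a much smaller footprint.
              ConcurrentHashMap<String, String> light = new ConcurrentHashMap<String, String>(16, 0.75f, 2);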

            • 3. Re: Excessive ReentrantLock$NonfairSync objects
              molsen-ee

              After populating the cache I made a call to System.gc(); there was no change in the memory footprint.

              So is there any way around this while still keeping an isolation level higher than 'NONE'?

              If it takes 511,898,680 bytes of overhead (the space used by the three ~7-million-instance classes at the top of the histo:live dump) to cache 441,115 nodes (approx 9,000,000 bytes of cached data) - over 1,100 bytes of overhead per node against roughly 20 bytes of data - then we won't be able to use this system.

              • 4. Re: Excessive ReentrantLock$NonfairSync objects
                manik

                I have a striped lock manager in 3.x (currently in development; an alpha is due out soon) which could potentially be backported to 2.2.X. This will limit the number of locks created and the corresponding overhead.

                If you think you can wait for 3.x (it should be released in a couple of months, with alphas and betas in the coming weeks) then great. Otherwise, raise a JIRA to backport this striped lock manager to 2.2.X and it could be in 2.2.1 as an option.
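
                In case "striped" needs unpacking, here is a rough sketch of the idea - not the actual JBC class, and the names are made up:

                  import java.util.concurrent.locks.Lock;
                  import java.util.concurrent.locks.ReentrantLock;

                  // All nodes share a fixed pool of locks; an Fqn hashes to one
                  // of them. Memory is O(pool size) instead of O(number of
                  // nodes), at the cost of unrelated Fqns sharing a lock.
                  public class StripedLockSketch
                  {
                     private final Lock[] stripes;

                     public StripedLockSketch(int concurrency)
                     {
                        stripes = new Lock[concurrency];
                        for (int i = 0; i < concurrency; i++) stripes[i] = new ReentrantLock();
                     }

                     public Lock getLock(Object fqn)
                     {
                        return stripes[(fqn.hashCode() & Integer.MAX_VALUE) % stripes.length];
                     }
                  }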

                • 5. Re: Excessive ReentrantLock$NonfairSync objects
                  jason.greene

                  As Brian said, the locks come from CHM, which by default uses 16 segments (16 hash tables + 16 reentrant locks). This should probably be configurable.

                  • 6. Re: Excessive ReentrantLock$NonfairSync objects
                    manik

                    Yes, but the CHMs used in the nodes to hold children don't use the default 16 segments:

                    
                     // Less segments to save memory
                     children = new ConcurrentHashMap<Object, Node<K, V>>(4, .75f, 4);

                    Ah, but the LockMaps do!! Each LockMap - used to hold information on concurrent readers when using pessimistic locking - uses a ConcurrentHashSet, which is a wrapper around a CHM that implements Set.

                    JBCACHE-1383
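
                    The gist of the fix, sketched with JDK 6's Collections.newSetFromMap rather than the actual ConcurrentHashSet change (my illustration, not the patch itself):

                      import java.util.Collections;
                      import java.util.Set;
                      import java.util.concurrent.ConcurrentHashMap;

                      // A per-node reader set sees little write contention, so a
                      // CHM with 2 segments is plenty - versus the 16 Segment +
                      // ReentrantLock pairs the default constructor allocates.
                      Set<Object> readers = Collections.newSetFromMap(
                            new ConcurrentHashMap<Object, Boolean>(2, 0.75f, 2));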

                    • 7. Re: Excessive ReentrantLock$NonfairSync objects
                      manik

                      I've patched this in trunk and branch 2.2.X.