16 per node doesn't sound correct. I can see 1 per node for the child map, as well as another one for the locks, but that's about it. You must remember though that this is per NODE - so an Fqn like /a/b/c involves 3 nodes.
A lot of this stuff is due for major improvements in JBC 3.x where there won't be a CHM per node for locks, just one for children.
Perhaps the CHMs you see are uncollected objects? Does your app frequently create and delete nodes? Does this change significantly after a System.gc() call?
The 16 refers to CHM's internal data structures (e.g., java.util.concurrent.ConcurrentHashMap$Segment), of which a CHM creates 16 per map by default.
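For context, that default comes from the concurrencyLevel parameter of CHM's constructor; a map built with a smaller level allocates fewer segments (and their locks) at the cost of write concurrency. A minimal sketch using only the standard java.util.concurrent API (no JBoss Cache code):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class SegmentDemo {
    // Default constructor: initialCapacity=16, loadFactor=0.75, concurrencyLevel=16,
    // so 16 Segment objects (each with its own lock and table) are allocated.
    static ConcurrentMap<String, String> defaultMap =
            new ConcurrentHashMap<String, String>();

    // Explicit concurrencyLevel=2: only 2 segments, trading write
    // concurrency for a smaller per-map memory footprint.
    static ConcurrentMap<String, String> smallMap =
            new ConcurrentHashMap<String, String>(16, 0.75f, 2);

    public static void main(String[] args) {
        defaultMap.put("a", "1");
        smallMap.put("a", "1");
        // Both behave identically as maps; only the internal segmentation differs.
        System.out.println(smallMap.get("a").equals(defaultMap.get("a"))); // true
    }
}
```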
After populating the Cache I made a call to System.gc(), no change in the memory footprint.
So is there any way around this and still have an Isolation level higher than 'NONE'?
If it takes 511,898,680 bytes of overhead (the space used by the 7 million objects at the top of the histo:live dump) to cache 441,115 nodes (approx 9,000,000 bytes of cached data) then we won't be able to use this system.
I have a striped lock manager in 3.x (currently in dev, alpha soon out) which can potentially be backported to 2.2.X. This will limit the locks created and the corresponding overhead.
If you think you can wait for 3.x (it should be released in a couple of months, with alphas and betas in the coming weeks) then great. Otherwise, raise a JIRA to backport this striped lock manager to 2.2.X, and it could be in 2.2.1 as an option.
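For readers unfamiliar with the technique, lock striping maps an unbounded number of keys onto a small fixed pool of locks. The sketch below is a hypothetical illustration of the idea, not the actual JBC 3.x implementation; the class and method names are invented:

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class StripedLockManager {
    private final Lock[] stripes;

    public StripedLockManager(int concurrency) {
        stripes = new Lock[concurrency];
        for (int i = 0; i < concurrency; i++) {
            stripes[i] = new ReentrantLock();
        }
    }

    // Many keys (e.g. Fqns) share one of the fixed locks, bounding the
    // overhead at 'concurrency' ReentrantLocks regardless of node count.
    public Lock lockFor(Object key) {
        int h = key.hashCode() & Integer.MAX_VALUE; // non-negative hash
        return stripes[h % stripes.length];
    }

    public static void main(String[] args) {
        StripedLockManager mgr = new StripedLockManager(8);
        Lock l = mgr.lockFor("/a/b/c");
        l.lock();
        try {
            // The same key always maps to the same stripe.
            System.out.println(l == mgr.lockFor("/a/b/c")); // true
        } finally {
            l.unlock();
        }
    }
}
```

The trade-off is that unrelated keys hashing to the same stripe contend with each other, which is exactly the memory-versus-concurrency bargain being discussed.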
As Brian said, the locks come from CHM, which by default uses 16 segments (16 hashmaps + 16 reentrant locks). This should probably be configurable.
Yes, but the CHMs used in the nodes to hold children don't use the default 16 segments:
// Less segments to save memory
children = new ConcurrentHashMap<Object, Node<K, V>>(4, .75f, 4);
Ah, but the LockMaps do!! Each LockMap - used to hold information on concurrent readers when using pessimistic locking - uses a ConcurrentHashSet, which is a wrapper around a CHM that implements Set.
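A Set-on-top-of-CHM wrapper of the kind described can be sketched as follows. This is an illustrative stand-in, not JBoss Cache's actual ConcurrentHashSet; it also shows how the backing map's concurrencyLevel could be lowered to avoid the default 16 segments per set:

```java
import java.util.AbstractSet;
import java.util.Iterator;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative concurrent Set backed by a CHM, similar in spirit to the
// LockMap's ConcurrentHashSet. Passing a small concurrencyLevel to the
// backing map avoids allocating 16 segments for every set instance.
public class SmallConcurrentHashSet<E> extends AbstractSet<E> {
    private static final Object PRESENT = new Object();

    private final ConcurrentHashMap<E, Object> map =
            new ConcurrentHashMap<E, Object>(4, 0.75f, 2);

    public boolean add(E e)           { return map.put(e, PRESENT) == null; }
    public boolean remove(Object o)   { return map.remove(o) != null; }
    public boolean contains(Object o) { return map.containsKey(o); }
    public int size()                 { return map.size(); }
    public Iterator<E> iterator()     { return map.keySet().iterator(); }
}
```

On Java 6 and later, Collections.newSetFromMap(new ConcurrentHashMap<E, Boolean>(4, 0.75f, 2)) achieves the same effect without a custom class.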
I've patched this in trunk and branch 2.2.X.