-
1. Re: Additional memory consumption | ConcurrentHashMap
sannegrinovero Feb 24, 2012 9:55 AM (in response to agnihere)Yes, those "infrastructure" objects take quite some space, but for them to actually double the amount of memory used you must be storing many very small values; is that the case?
There is an upcoming patch for version 5.2 which will make it possible to replace the actual map implementation used internally: Java 8 will have a new concurrent map with no internal segments at all, and the implementation is already available, so it can be backported in a separate jar for us to use.
Are you measuring a well-warmed-up JVM? Those objects are mostly a cost before the JIT decides to compile the map implementation. And of course profiling is going to lie as well: one needs to look at heap sizes with a good number of entries in the map, but without instrumenting those entries, since instrumentation prevents the JIT from reaching the optimal code.
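A rough way to sanity-check heap usage without an instrumenting profiler is to compare `Runtime` readings before and after populating the map. This is only a sketch: the class name and value sizes are illustrative, and `System.gc()` is just a hint to the JVM, so the reading is approximate.

```java
import java.util.concurrent.ConcurrentHashMap;

// Illustrative probe: estimate heap retained by a populated map
// without attaching a profiler (which would skew JIT behavior).
public class MapMemoryProbe {
    static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        System.gc(); // best-effort hint; readings remain approximate
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long before = usedHeap();
        ConcurrentHashMap<Integer, byte[]> map = new ConcurrentHashMap<>();
        for (int i = 0; i < 10_000; i++) {
            map.put(i, new byte[64]); // many small values, the worst case for overhead
        }
        long after = usedHeap();
        System.out.println("Approx. bytes retained: " + (after - before));
    }
}
```

Running this loop a few times before taking the measurement also gives the JIT a chance to compile the map code, per the warm-up advice above.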
-
2. Re: Additional memory consumption | ConcurrentHashMap
agnihere Feb 24, 2012 11:01 AM (in response to sannegrinovero)No, our "values" are in fact quite coarse-grained: each key holds about 5 MB of data. We have used a heap dump only for the memory analysis.
-
3. Re: Additional memory consumption | ConcurrentHashMap
sannegrinovero Feb 24, 2012 11:17 AM (in response to agnihere)Very interesting. It would be nice if you could run a comparison with the new ConcurrentHashMap once the patch to replace implementations is merged. I'll ping Manik, who has an experimental branch; maybe you could try it out.
-
4. Re: Additional memory consumption | ConcurrentHashMap
agnihere Mar 15, 2012 2:05 AM (in response to sannegrinovero)Hi Sanne,
Has this issue been observed and fixed in 5.1.2.Final? If not, can you please point me to the experimental branch you mentioned above?
Thanks!
-
5. Re: Additional memory consumption | ConcurrentHashMap
sannegrinovero Mar 15, 2012 5:12 PM (in response to agnihere)Hi,
No, we never observed such high memory consumption, but yes, you can already try the new map as explained at http://infinispan.blogspot.com/2012/03/jdk-8-backported-concurrenthashmaps-in.html
While some of these internal objects are "bulky", taking anywhere close to 5 MB per entry is far from reasonable; there must be something wrong. I'm bringing this up with teammates, but I'd appreciate some help from you: is there any test you could share, or the dumps?
-
6. Re: Additional memory consumption | ConcurrentHashMap
manik Mar 19, 2012 8:31 AM (in response to sannegrinovero)This does sound pretty improbable. If you do have heap dumps or profiler snapshots you could share, preferably on 5.1.2, that would be much appreciated.
-
7. Re: Additional memory consumption | ConcurrentHashMap
agnihere Mar 19, 2012 9:41 AM (in response to manik)Hi,
Please find attached the heap dump results.
Thanks.
-
15Mar_Dump_Anonymised.xlsx 275.6 KB
-
8. Re: Additional memory consumption | ConcurrentHashMap
dan.berindei Mar 21, 2012 5:17 PM (in response to agnihere)From the heap dump report you posted it would appear that you have about 2124 ConcurrentHashMaps, and between them those maps have 11186942 Segments. Since the number of segments is derived from the concurrencyLevel (rounded up to the next power of two), that would imply a concurrency level of ~5000, which is a bit excessive (it's supposed to be the number of threads concurrently writing to the map).
Almost all of the ConcurrentHashMap$HashEntry arrays and ReentrantLock$NonfairSyncs are also retained by the CHM Segments (because Segment extends ReentrantLock).
If I got it right, decreasing your concurrency level from 5000 to 500 should make your memory usage much more reasonable. If I didn't, please post your config and we'll look further.
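To see why a concurrency level of 5000 is so costly, here is an illustrative reimplementation of the pre-JDK-8 ConcurrentHashMap's rounding rule (the `segmentsFor` helper is mine, not JDK code, but it mirrors the documented behavior): the segment count is the concurrency level rounded up to the next power of two, and each segment carries its own lock and `HashEntry[]` table.

```java
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of how concurrencyLevel drives per-map overhead
// in the segmented (pre-JDK-8) ConcurrentHashMap.
public class SegmentOverhead {
    // Mirrors CHM's rule: round concurrencyLevel up to a power of two.
    static int segmentsFor(int concurrencyLevel) {
        int ssize = 1;
        while (ssize < concurrencyLevel) ssize <<= 1;
        return ssize;
    }

    public static void main(String[] args) {
        System.out.println(segmentsFor(5000)); // 8192 segments per map
        System.out.println(segmentsFor(500));  // 512 segments per map

        // On a segmented CHM, passing a sane concurrencyLevel at
        // construction time bounds the per-map segment overhead:
        ConcurrentHashMap<String, byte[]> map =
            new ConcurrentHashMap<>(16, 0.75f, 500);
    }
}
```

With ~2124 maps, the difference between 8192 and 512 segments per map is millions of segment objects, which matches the counts seen in the heap dump.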