7 Replies Latest reply on Mar 15, 2016 6:54 AM by Sanne Grinovero

    Metaspace memory leak with infinispan

    Andrej Dmitrenko Newbie

      Hi all

      I use Infinispan for an embedded cache and criteria queries on the cache. It works great!

      I have two questions:

      1) I start infinispan in my application and call embeddedCacheManager.stop() at application destroy.

      I run my application in oracle weblogic 12.1.3 server. And after redeploy my application I see metaspace memory leak.

      I have checked the heap dump in a memory analyzer.

      The cause is the thread-local variable threadCounterHashCode in the org.infinispan.commons.util.concurrent.jdk8backported.EquivalentConcurrentHashMapV8$CounterHashCode class.

      After undeploying the application there is a thread which keeps a thread-local reference to threadCounterHashCode, and threadCounterHashCode keeps a link to the classloader; that is why the classloader cannot be removed from metaspace memory.

      I cannot control this thread.

      How should I clean memory in this case? Is it possible?

       

      2) Cache initialization in one thread takes nearly 50 seconds for 40,000 entities.

      I would like to speed up this process. How can I do it?

        • 1. Re: Metaspace memory leak with infinispan
          William Burns Expert

          Andrej Dmitrenko wrote:

           

          1) I start infinispan in my application and call embeddedCacheManager.stop() at application destroy.

          I run my application in oracle weblogic 12.1.3 server. And after redeploy my application I see metaspace memory leak.

          I have checked the heap dump in a memory analyzer.

          The cause is the thread-local variable threadCounterHashCode in the org.infinispan.commons.util.concurrent.jdk8backported.EquivalentConcurrentHashMapV8$CounterHashCode class.

          After undeploying the application there is a thread which keeps a thread-local reference to threadCounterHashCode, and threadCounterHashCode keeps a link to the classloader; that is why the classloader cannot be removed from metaspace memory.

          I cannot control this thread.

          How should I clean memory in this case? Is it possible?

          Unfortunately this class was a copied version of the ConcurrentHashMap from Doug Lea's workbench, so we didn't write the code directly.  I don't think you can clear this out without using reflection.  But what you can do for sure is log a JIRA for us to upgrade to the new version of the file, which no longer has this class.

           

          Andrej Dmitrenko wrote:

           

          2) Cache initialization in one thread takes nearly 50 seconds for 40,000 entities.

          I would like to speed up this process. How can I do it?

          Unfortunately there isn't much detail here.  How many nodes do you have installed, and what type of cache do you have (REPL, DIST)?  How are you loading the data?  Are you using the putAll method on the Cache?  If you are doing individual puts, that will be a lot slower.  Also, I would recommend using multiple threads if possible.
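          As a sketch of the putAll-vs-put difference: Infinispan's Cache interface extends ConcurrentMap, so the pattern can be shown against that interface, with a plain ConcurrentHashMap standing in for the cache (class and method names here are illustrative, not from the thread):

          ```java
          import java.util.HashMap;
          import java.util.Map;
          import java.util.concurrent.ConcurrentHashMap;
          import java.util.concurrent.ConcurrentMap;

          public class PutAllVsPut {

              // Individual puts: one cache operation (and, for an indexed cache,
              // one index commit) per entry.
              static void loadOneByOne(ConcurrentMap<String, String> cache,
                                       Map<String, String> data) {
                  for (Map.Entry<String, String> e : data.entrySet()) {
                      cache.put(e.getKey(), e.getValue());
                  }
              }

              // putAll: the whole map goes in as a single bulk operation.
              static void loadBulk(ConcurrentMap<String, String> cache,
                                   Map<String, String> data) {
                  cache.putAll(data);
              }

              public static void main(String[] args) {
                  Map<String, String> data = new HashMap<>();
                  for (int i = 0; i < 40_000; i++) {
                      data.put("key-" + i, "value-" + i);
                  }
                  ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();
                  loadBulk(cache, data);
                  System.out.println(cache.size()); // prints 40000
              }
          }
          ```

          Against a real indexed Infinispan cache the gap is much larger than against a plain map, because each individual put also pays for an index update.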

          • 2. Re: Metaspace memory leak with infinispan
            Andrej Dmitrenko Newbie

            Hi, William

            Thank you for fast reply!

            I have created a JIRA ticket: [ISPN-6375] Metaspace memory leak with infinispan - JBoss Issue Tracker

            Sorry for not providing enough information for the second question.

            I have only one node with a local indexed cache; my configuration looks like:

            <local-cache name="white-list-cache" statistics="false" statistics-available="false">
               <indexing index="LOCAL">
                  <property name="default.directory_provider">ram</property>
               </indexing>
            </local-cache>

            I load data from the DB and use the putAll method in one thread.

            If I use multiple threads for loading, it loads about 4 times faster. Thank you for the recommendations. Is that all I can do to speed up cache initialization?

            • 3. Re: Metaspace memory leak with infinispan
              Tristan Tarrant Master

              I do not know if it still matters (our indexing has improved greatly), but bulk loading was usually faster when loading the data with the SKIP_INDEXING flag set and then launching the MassIndexer at the end. sannegrinovero might say otherwise though
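              A sketch of that pattern, assuming the infinispan-query module is on the classpath (API names are from the Infinispan 8.x era discussed here; check them against your version — this is not meant as the definitive implementation):

              ```java
              import org.infinispan.AdvancedCache;
              import org.infinispan.Cache;
              import org.infinispan.context.Flag;
              import org.infinispan.query.MassIndexer;
              import org.infinispan.query.Search;

              import java.util.Map;

              public class SkipIndexingLoad {

                  // Bulk load without touching the index, then rebuild the index once.
                  static void load(Cache<String, Object> cache, Map<String, Object> entries) {
                      // SKIP_INDEXING suppresses the per-put index update during the load.
                      AdvancedCache<String, Object> noIndex =
                              cache.getAdvancedCache().withFlags(Flag.SKIP_INDEXING);
                      noIndex.putAll(entries);

                      // Reindex everything in one pass; note the entry set is
                      // processed a second time, as Sanne points out below.
                      MassIndexer massIndexer = Search.getSearchManager(cache).getMassIndexer();
                      massIndexer.start(); // blocks until reindexing completes
                  }
              }
              ```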

              • 4. Re: Metaspace memory leak with infinispan
                Gustavo Fernandes Apprentice

                Regarding the indexing process, the more threads are used to put, the faster indexing will be.

                 

                If you can live with a small delay between the time data is put in the cache and when it is available for searches, you can enable async indexing, and you should experience a large increase in throughput.

                 

                <property name="default.worker.execution">async</property>
                

                 

                 For your case, bulk loading 40K entries, this means that after all threads have finished their puts, there will be a small delay until the index actually contains all 40K entries.

                 This delay should not be big in your case; I imagine something around a few hundred milliseconds.
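                 Applied to the configuration Andrej posted above, enabling the async worker is one extra property inside the existing indexing element (a sketch; the property name follows the Hibernate Search conventions already used in this thread):

                 ```xml
                 <indexing index="LOCAL">
                    <property name="default.directory_provider">ram</property>
                    <!-- index writes are applied by a background worker
                         instead of in the thread doing the put -->
                    <property name="default.worker.execution">async</property>
                 </indexing>
                 ```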

                • 5. Re: Metaspace memory leak with infinispan
                  Andrej Dmitrenko Newbie

                  Is it the same as calling putAllAsync?

                  In the javadoc of putAllAsync I see:

                  Asynchronous version of {@link #putAll(Map)}. This method does not block on remote calls, even if your cache mode is synchronous. Has no benefit over {@link #putAll(Map)} if used in LOCAL mode.


                  I use local mode

                  • 6. Re: Metaspace memory leak with infinispan
                    Gustavo Fernandes Apprentice

                    It is not the same thing; the async worker applies only to the index writing.

                    • 7. Re: Metaspace memory leak with infinispan
                      Sanne Grinovero Master

                      Regarding the metaspace issue, I suspect that might be caused by ISPN-4390.

                       

                      About indexing: we made sure the indexing speed delay of a single entry scales well with the number of threads (i.e. parallel writes don't interfere with each other). Each individual put operation will still trigger an index commit, although we'll merge the commits of multiple threads.

                       

                      Following Tristan's suggestion to use SKIP_INDEXING and then trigger the MassIndexer will result in only a single commit at the end, but it will make you process the entry set twice, so which solution is better might depend on many other factors.

                       

                      The best approach, I think, is to combine batching with multiple threads: each batch will still be only one commit. The trick to remember is that most of the time is spent in the "index commit", so having multiple entries processed per commit should give you an almost-linear speedup (a batch of 50 entries will be roughly 50 times faster than 50 individual puts).
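                      A minimal sketch of that batching-plus-threads approach, written against ConcurrentMap (which Infinispan's Cache extends), so a plain ConcurrentHashMap can stand in for the cache here; the batch size and thread count are illustrative assumptions:

                      ```java
                      import java.util.ArrayList;
                      import java.util.HashMap;
                      import java.util.List;
                      import java.util.Map;
                      import java.util.concurrent.ConcurrentHashMap;
                      import java.util.concurrent.ConcurrentMap;
                      import java.util.concurrent.ExecutorService;
                      import java.util.concurrent.Executors;
                      import java.util.concurrent.TimeUnit;

                      public class ParallelBatchLoader {

                          // Split the entries into fixed-size batches and load each
                          // batch with a single putAll (= one index commit per batch).
                          static <K, V> void load(ConcurrentMap<K, V> cache, Map<K, V> entries,
                                                  int batchSize, int threads)
                                  throws InterruptedException {
                              List<Map<K, V>> batches = new ArrayList<>();
                              Map<K, V> current = new HashMap<>();
                              for (Map.Entry<K, V> e : entries.entrySet()) {
                                  current.put(e.getKey(), e.getValue());
                                  if (current.size() == batchSize) {
                                      batches.add(current);
                                      current = new HashMap<>();
                                  }
                              }
                              if (!current.isEmpty()) {
                                  batches.add(current);
                              }

                              ExecutorService pool = Executors.newFixedThreadPool(threads);
                              for (Map<K, V> batch : batches) {
                                  pool.submit(() -> cache.putAll(batch));
                              }
                              pool.shutdown();
                              pool.awaitTermination(5, TimeUnit.MINUTES);
                          }

                          public static void main(String[] args) throws InterruptedException {
                              Map<String, String> data = new HashMap<>();
                              for (int i = 0; i < 40_000; i++) {
                                  data.put("key-" + i, "value-" + i);
                              }
                              ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();
                              load(cache, data, 50, 4);
                              System.out.println(cache.size()); // prints 40000
                          }
                      }
                      ```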

                       

                      N.B. there are commits generated by us explicitly, and Lucene will also auto-flush and commit when the buffers are full. So make sure you configure it to use large write buffers, e.g. via configuration properties such as

                      default.indexwriter.ram_buffer_size = 256
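                      In the cache configuration shown earlier in the thread, that property would sit alongside the other indexing properties, e.g. (the value is in MB and illustrative):

                      ```xml
                      <indexing index="LOCAL">
                         <property name="default.directory_provider">ram</property>
                         <!-- larger Lucene write buffer => fewer automatic flushes/commits -->
                         <property name="default.indexwriter.ram_buffer_size">256</property>
                      </indexing>
                      ```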

                       

                      Hibernate Search