      • 15. Re: Re: Unable to acquire lock exception
        sannegrinovero

        You're having timeouts, which are caused either by some deadlock in your application or in our code - but we know of only one possible deadlock in Infinispan as I suggested above, and you can avoid it by tuning.

        The other reason you could have timeouts is that it's all too slow and inefficient for the load you're putting on the system - which is also solvable by some tuning.

         

        So you need to fix the overall efficiency. Upgrading the version is a very reliable way to do that, since we just improved efficiency for this very same kind of configuration. The improvements were very significant (higher than two orders of magnitude in our tests), but I can't tell if that's enough for what you're doing.

         

        And now you're posting a stacktrace with a NullPointerException on your model. That's also something you'll need to fix: if your system is going to throw thousands of exceptions per second, performance will be horrible and you'll have a good reason to expect Timeouts.

        • 16. Re: Unable to acquire lock exception
          jeetu

          Thanks for the response Sanne.

           

          Tuning is exactly where we need help. We are also analyzing our application for any deadlock situations and dealing with them.

           

          Our caching solution consists of proactive jobs that load the distributed indexed cache with inventory data obtained from a webservice interface; the application then queries the cache. The loading of data is tracked by a lookup-table framework (a distributed cache of keys and values) that records the loading progress, and based on that status (STARTED or COMPLETED) the application queries the cache only once loading is complete for a given enterprise's data. All of this is driven through EJB methods invoked by jobs scheduled with a timer EJB.
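          Roughly, it follows the pattern sketched below (class and method names are illustrative, not our actual framework code):

          import org.infinispan.Cache;

          // Simplified sketch of the load-tracking pattern described above.
          public class LoadTracker {

              public enum LoadStatus { STARTED, COMPLETED }

              private final Cache<String, LoadStatus> statusCache; // the distributed lookup table

              public LoadTracker(Cache<String, LoadStatus> statusCache) {
                  this.statusCache = statusCache;
              }

              // A loading job marks an enterprise's load as STARTED, puts the data, then marks it COMPLETED.
              public void markStarted(String enterpriseId) {
                  statusCache.put(enterpriseId, LoadStatus.STARTED);
              }

              public void markCompleted(String enterpriseId) {
                  statusCache.put(enterpriseId, LoadStatus.COMPLETED);
              }

              // The application queries the data cache only once loading is COMPLETED.
              public boolean isQueryable(String enterpriseId) {
                  return statusCache.get(enterpriseId) == LoadStatus.COMPLETED;
              }
          }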

           

          What I observed is that with the Infinispan configuration I uploaded today, the putAll (sync) is taking quite a long time to complete - in this case around 2 hrs for 10k objects. Our custom framework that proactively loads things into the cache tried to putAll the same records again (the framework is written so that it looks at the load status, and if the load status stays in the STARTED state for too long, it tries to load things again), which is when the "Unable to acquire Lock" exceptions are thrown.

           

          Is there anything in the latest configuration that you think can be tuned to speed up the putAll?

           

          Regards

          Jithendra

          • 17. Re: Unable to acquire lock exception
            sannegrinovero

            I'm sorry, I'd like to help, but the problem is too vague. I can't guess what's wrong with your system; you need to at least be able to narrow down the problem. For example, verify whether you have a network latency problem, which would need some platform-level tuning (plenty of kernel options) or JGroups parameter tuning. Or you might have excessive locks in the JVM, in which case you'd need to identify those and narrow it down to a specific issue. Or you might have a heap configuration issue... Too many options for me to guess from the stacktrace.

             

            If you were able to describe exactly what your problem is, then we could suggest appropriate configuration changes.

             

            The only thing I can say for sure is that you need to avoid those NullPointerException(s) and upgrade Infinispan.

            • 18. Re: Unable to acquire lock exception
              rvansa

              Haven't followed the full discussion, but if you're using a non-transactional cache, I would strongly discourage the use of putAll - the implementation is simply inefficient, resulting in a quadratic (in the number of nodes) number of commands in the system. See https://www.mail-archive.com/infinispan-dev@lists.jboss.org/msg06008.html for a closer description - nothing has improved since then. I'd recommend using a couple of putAsync calls instead.

              • 19. Re: Unable to acquire lock exception
                jeetu

                Thanks Radim for your input.

                 

                I read through your mail-archive thread on the putAll performance. Is there a JIRA that I can follow so I know when this is resolved in the future? This is exactly what we are doing in our app.

                We use a non-transactional cache and we put a bunch of keys (up to 30k at a time), which is taking forever to complete (though we have only two nodes in the distributed cluster).

                 

                You also suggested, as a workaround, using a couple of putAsyncs. What workaround would you propose for our case? Creating chunks of items and doing multiple putAllAsyncs? Please suggest.

                 

                Regards

                Jithendra

                • 20. Re: Unable to acquire lock exception
                  rvansa

                  30k is really a lot. What I was suggesting is to execute cache.putAsync() commands, or cache.put() from a thread pool - no putAll/putAllAsync (those differ only in the thread that is waiting). The system can usually handle a couple of hundred parallel operations per node.
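                  A minimal sketch of that approach (class and method names are illustrative; kept Java 7-friendly since your stack traces show a 1.7 JVM):

                  import java.util.Map;
                  import java.util.concurrent.ExecutorService;
                  import java.util.concurrent.Executors;
                  import java.util.concurrent.TimeUnit;

                  import org.infinispan.Cache;

                  // Instead of one huge putAll, issue individual puts from a small fixed-size
                  // thread pool so only a bounded number of operations is in flight at a time.
                  public class ParallelCacheLoader {

                      public static <K, V> void load(final Cache<K, V> cache, Map<K, V> batch, int parallelism)
                              throws InterruptedException {
                          ExecutorService pool = Executors.newFixedThreadPool(parallelism);
                          for (Map.Entry<K, V> entry : batch.entrySet()) {
                              final K key = entry.getKey();
                              final V value = entry.getValue();
                              pool.submit(new Runnable() {
                                  @Override
                                  public void run() {
                                      cache.put(key, value); // each put is a single, independent remote operation
                                  }
                              });
                          }
                          pool.shutdown();                          // stop accepting tasks, let the queued puts finish
                          pool.awaitTermination(1, TimeUnit.HOURS); // arbitrary upper bound for the whole batch
                      }
                  }

                  cache.putAsync() achieves the same effect without dedicating a pool thread to each waiting put.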

                   

                  Another workaround would be to find manually who are the owners (Infinispan exposes this through DistributionManager) and use DistributedExecutors to send those chunks to be inserted on given nodes manually with LOCAL_ONLY flag. However, this is not a straightforward solution (especially it's not resilient against topology change during the loading).

                  • 21. Re: Unable to acquire lock exception
                    jeetu

                    We are going ahead with your first suggestion. I will let you know of the outcome. Thanks a lot for your response.

                     

                    Regards

                    Jithendra

                    • 22. Re: Unable to acquire lock exception
                      jeetu

                      Hi Radim,

                       

                      Following your suggestion, I created a thread pool with 35 threads and did a cache.put of the items. With this, I see a lot of these exceptions in our e2e environment. I am also attaching the infinispan-config and jgroups-tcp configs for your reference. Please let us know what's happening. We are in a very critical state. Please help.

                       

                      04:28:59,028 ERROR [stderr] (pool-23-thread-67) Exception in thread "pool-23-thread-67" org.infinispan.util.concurrent.TimeoutException: Node lxomavmtceap615-46391 timed out

                      04:28:59,028 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:174)

                      04:28:59,028 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:536)

                      04:28:59,028 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:290)

                      04:28:59,028 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.query.indexmanager.RemoteIndexingBackend.sendCommand(RemoteIndexingBackend.java:116)

                      04:28:59,028 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.query.indexmanager.RemoteIndexingBackend.applyWork(RemoteIndexingBackend.java:64)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.query.indexmanager.InfinispanBackendQueueProcessor.applyWork(InfinispanBackendQueueProcessor.java:80)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.hibernate.search.indexes.impl.DirectoryBasedIndexManager.performOperations(DirectoryBasedIndexManager.java:113)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.hibernate.search.backend.impl.WorkQueuePerIndexSplitter.commitOperations(WorkQueuePerIndexSplitter.java:49)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.hibernate.search.backend.impl.BatchedQueueingProcessor.performWorks(BatchedQueueingProcessor.java:82)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.hibernate.search.backend.impl.TransactionalWorker.performWork(TransactionalWorker.java:86)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.query.backend.QueryInterceptor.performSearchWorks(QueryInterceptor.java:235)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.query.backend.QueryInterceptor.performSearchWork(QueryInterceptor.java:229)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.query.backend.QueryInterceptor.updateIndexes(QueryInterceptor.java:223)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.query.backend.QueryInterceptor.processPutKeyValueCommand(QueryInterceptor.java:446)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.query.backend.QueryInterceptor.visitPutKeyValueCommand(QueryInterceptor.java:164)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.interceptors.locking.AbstractLockingInterceptor.visitPutKeyValueCommand(AbstractLockingInterceptor.java:46)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:35)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.interceptors.MarshalledValueInterceptor.visitPutKeyValueCommand(MarshalledValueInterceptor.java:125)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:35)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.statetransfer.StateTransferInterceptor.handleNonTxWriteCommand(StateTransferInterceptor.java:172)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.statetransfer.StateTransferInterceptor.visitPutKeyValueCommand(StateTransferInterceptor.java:95)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.interceptors.CacheMgmtInterceptor.updateStoreStatistics(CacheMgmtInterceptor.java:148)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.interceptors.CacheMgmtInterceptor.visitPutKeyValueCommand(CacheMgmtInterceptor.java:134)

                      04:28:59,029 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71)

                      04:28:59,030 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)

                      04:28:59,030 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:102)

                      04:28:59,030 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:71)

                      04:28:59,030 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:35)

                      04:28:59,030 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71)

                      04:28:59,030 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:333)

                      04:28:59,030 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.cache.impl.CacheImpl.executeCommandAndCommitIfNeeded(CacheImpl.java:1576)

                      04:28:59,030 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.cache.impl.CacheImpl.putInternal(CacheImpl.java:1054)

                      04:28:59,030 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.cache.impl.CacheImpl.put(CacheImpl.java:1046)

                      04:28:59,030 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.cache.impl.CacheImpl.put(CacheImpl.java:1646)

                      04:28:59,030 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.cache.impl.CacheImpl.put(CacheImpl.java:245)

                      04:28:59,030 ERROR [stderr] (pool-23-thread-67)         at com.ctl.vipr.chain.load.WorkerThread.run(WorkerThread.java:34)

                      04:28:59,030 ERROR [stderr] (pool-23-thread-67)         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

                      04:28:59,030 ERROR [stderr] (pool-23-thread-67)         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

                      04:28:59,030 ERROR [stderr] (pool-23-thread-67)         at java.lang.Thread.run(Thread.java:745)

                      04:28:59,030 ERROR [stderr] (pool-23-thread-67) Caused by: org.jgroups.TimeoutException: timeout waiting for response from lxomavmtceap615-46391, request: org.jgroups.blocks.UnicastRequest@174ac196, req_id=51480, mode=GET_ALL, target=lxomavmtceap615-46391

                      04:28:59,030 ERROR [stderr] (pool-23-thread-67)         at org.jgroups.blocks.MessageDispatcher.sendMessage(MessageDispatcher.java:429)

                      04:28:59,030 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:372)

                      04:28:59,030 ERROR [stderr] (pool-23-thread-67)         at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:167)

                      04:28:59,030 ERROR [stderr] (pool-23-thread-67)         ... 52 more

                      • 23. Re: Unable to acquire lock exception
                        rvansa

                        I thought you were already aware that the stack trace does not show much info. It carries the information that a message is sent to the indexer node: 'org.infinispan.query.indexmanager.RemoteIndexingBackend.sendCommand(RemoteIndexingBackend.java:116)' - there's a thread that should process this command, but it's too slow/busy.

                         

                        Sanne has already given you pointers on how to speed it up, using asynchronous execution and fine-tuning some parameters. If you've already optimized it as well as you can, the only thing you can do here is apply backpressure to the writers: when an exception is thrown, wait a while and try again.
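                        A rough illustration of that kind of backpressure, assuming a plain blocking put and an exponential back-off (the helper name and timings are arbitrary):

                        import org.infinispan.Cache;
                        import org.infinispan.util.concurrent.TimeoutException;

                        // Retry a put with a growing pause whenever the cluster reports a timeout,
                        // so writers back off instead of piling more load on the busy indexing node.
                        public class BackpressurePut {

                            public static <K, V> void putWithRetry(Cache<K, V> cache, K key, V value, int maxAttempts)
                                    throws InterruptedException {
                                long pauseMillis = 500;              // arbitrary initial back-off
                                for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                                    try {
                                        cache.put(key, value);
                                        return;                      // success, stop retrying
                                    } catch (TimeoutException e) {
                                        if (attempt == maxAttempts) {
                                            throw e;                 // give up after the last attempt
                                        }
                                        Thread.sleep(pauseMillis);   // wait a while before trying again
                                        pauseMillis *= 2;            // back off more aggressively each time
                                    }
                                }
                            }
                        }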

                        • 24. Re: Unable to acquire lock exception
                          jeetu

                          Hi Radim,

                           

                          Yes, I understand that the stack trace does not show everything; that's why I thought the config files might help. But anyway, let me explain what's happening now, which I can consistently replicate.

                           

                          To set the context, the cache in question is a distributed, indexed cache in SYNC mode. Here is the configuration snippet of the cache that I will be talking about.

                          <distributed-cache name="qci-cache" mode="SYNC">
                                <store-as-binary />
                                <locking isolation="REPEATABLE_READ" striping="false" acquire-timeout="30000" concurrency-level="500" write-skew="false" />
                                <state-transfer timeout="480000" await-initial-transfer="false"/>
                                <indexing index="LOCAL">
                                  <property name="hibernate.search.default.locking_cachename">ViprLuceneQciIndexesLocking</property>
                                  <property name="hibernate.search.default.data_cachename">ViprLuceneQciIndexesData</property>
                                  <property name="hibernate.search.default.metadata_cachename">ViprLuceneQciIndexesMetadata</property>
                                  <property name="hibernate.search.analyzer">com.ctl.vnom.lib.cache.analyzer.StandardAnalyzerWithNoStopWords</property>
                                  <property name="hibernate.search.model_mapping">com.ctl.vnom.cache.searchmapping.ViprQciSearchMappingFactory</property>
                                  <property name="hibernate.search.lucene_version">LUCENE_4_10_2</property>
                                  <property name="hibernate.search.default.indexmanager">org.infinispan.query.indexmanager.InfinispanIndexManager</property>
                                  <property name="hibernate.search.default.indexwriter.merge_factor">30</property>
                                </indexing>
                              </distributed-cache>

                           

                               The three index caches are replicated caches with just a default configuration.

                              <replicated-cache name="ViprLuceneQciIndexesData" >
                              </replicated-cache>
                              <replicated-cache name="ViprLuceneQciIndexesLocking" >
                              </replicated-cache>
                              <replicated-cache name="ViprLuceneQciIndexesMetadata" >
                              </replicated-cache>

                           

                          I followed your suggestion and created a thread pool that does the cache.put for the various items I give it. The cluster has two nodes. The node that's doing the puts is doing a nice job (I guess!) of putting the items in the cache and indexing them. The index is being updated locally, and the index manager is shipping the updates across to the second node in the cluster.

                           

                          But what I see on the second node are a lot of the logs below, with an exception logged right after each of these sets of "async deletion of index segments" logs.

                          03:12:25,607 DEBUG [org.infinispan.lucene.impl.DirectoryImplementor] (Hibernate Search: async deletion of index segments-4) Removed file: _2asd.fdt from index: com.ctl.vnom.api.common.InventoryItem

                          03:12:25,608 DEBUG [org.infinispan.lucene.impl.DirectoryImplementor] (Hibernate Search: async deletion of index segments-1) Removed file: _2asd.cfs from index: com.ctl.vnom.api.common.InventoryItem

                          03:12:25,608 DEBUG [org.infinispan.lucene.impl.DirectoryImplementor] (Hibernate Search: async deletion of index segments-2) Removed file: _2asd.cfe from index: com.ctl.vnom.api.common.InventoryItem

                          03:12:25,608 DEBUG [org.infinispan.lucene.impl.DirectoryImplementor] (Hibernate Search: async deletion of index segments-4) Removed file: _2asd.fdx from index: com.ctl.vnom.api.common.InventoryItem

                          03:12:25,606 ERROR [org.hibernate.search.exception.impl.LogErrorHandler] (Hibernate Search: Index updates queue processor for index com.ctl.vnom.api.common.InventoryItem-1) HSEARCH000058: HSEARCH000117: IOException on the IndexWriter: java.io.FileNotFoundException: Error loading metadata for index file: _2asd_Lucene41_0.tim|M|com.ctl.vnom.api.common.InventoryItem

                            at org.infinispan.lucene.impl.DirectoryImplementor.openInput(DirectoryImplementor.java:134) [infinispan-lucene-directory.jar:7.0.0.Final]

                            at org.infinispan.lucene.impl.DirectoryLuceneV4.openInput(DirectoryLuceneV4.java:101) [infinispan-lucene-directory.jar:7.0.0.Final]

                            at org.apache.lucene.store.Directory.copy(Directory.java:193) [lucene-core-4.10.2.jar:4.10.2 1634293 - mike - 2014-10-26 05:51:56]

                            at org.apache.lucene.store.TrackingDirectoryWrapper.copy(TrackingDirectoryWrapper.java:50) [lucene-core-4.10.2.jar:4.10.2 1634293 - mike - 2014-10-26 05:51:56]

                            at org.apache.lucene.index.IndexWriter.createCompoundFile(IndexWriter.java:4719) [lucene-core-4.10.2.jar:4.10.2 1634293 - mike - 2014-10-26 05:51:56]

                            at org.apache.lucene.index.DocumentsWriterPerThread.sealFlushedSegment(DocumentsWriterPerThread.java:509) [lucene-core-4.10.2.jar:4.10.2 1634293 - mike - 2014-10-26 05:51:56]

                            at org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:476) [lucene-core-4.10.2.jar:4.10.2 1634293 - mike - 2014-10-26 05:51:56]

                            at org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:510) [lucene-core-4.10.2.jar:4.10.2 1634293 - mike - 2014-10-26 05:51:56]

                            at org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:621) [lucene-core-4.10.2.jar:4.10.2 1634293 - mike - 2014-10-26 05:51:56]

                            at org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2942) [lucene-core-4.10.2.jar:4.10.2 1634293 - mike - 2014-10-26 05:51:56]

                            at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3097) [lucene-core-4.10.2.jar:4.10.2 1634293 - mike - 2014-10-26 05:51:56]

                            at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3064) [lucene-core-4.10.2.jar:4.10.2 1634293 - mike - 2014-10-26 05:51:56]

                            at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.commitIndexWriter(IndexWriterHolder.java:153) [hibernate-search-engine-5.0.0.Beta1.jar:5.0.0.Beta1]

                            at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.commitIndexWriter(IndexWriterHolder.java:166) [hibernate-search-engine-5.0.0.Beta1.jar:5.0.0.Beta1]

                            at org.hibernate.search.backend.impl.lucene.PerChangeSetCommitPolicy.onChangeSetApplied(PerChangeSetCommitPolicy.java:29) [hibernate-search-engine-5.0.0.Beta1.jar:5.0.0.Beta1]

                            at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.afterTransactionApplied(AbstractWorkspaceImpl.java:88) [hibernate-search-engine-5.0.0.Beta1.jar:5.0.0.Beta1]

                            at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:107) [hibernate-search-engine-5.0.0.Beta1.jar:5.0.0.Beta1]

                            at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:50) [hibernate-search-engine-5.0.0.Beta1.jar:5.0.0.Beta1]

                            at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [rt.jar:1.7.0_71]

                            at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_71]

                            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_71]

                            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_71]

                            at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_71]

                           

                          Editing my reply here: when I traced back in the logs, I saw this when the index was getting created - the first occurrence I could see.

                           

                          02:57:15,444 DEBUG [org.hibernate.search.indexes.serialization.avro.impl.AvroSerializationProvider] (remote-thread--p3-t63) HSEARCH000079: Serialization protocol version 1.1 initialized

                          02:57:15,444 INFO  [org.hibernate.search.indexes.impl.DirectoryBasedIndexManager] (remote-thread--p3-t63) HSEARCH000168: Serialization service Avro SerializationProvider v1.1 being used for index 'com.ctl.vnom.api.common.InventoryItem'

                          02:57:15,522 DEBUG [org.infinispan.query.indexmanager.IndexManagerBasedLockController] (remote-thread--p3-t101) ISPN014012: Waiting for index lock was successfull: 'true'

                          02:57:15,537 DEBUG [org.hibernate.search.backend.impl.lucene.WorkspaceFactory] (remote-thread--p3-t101) Starting workspace for index com.ctl.vnom.api.common.InventoryItem using an exclusive index strategy

                          02:57:15,537 DEBUG [org.infinispan.query.indexmanager.ClusteredSwitchingBackend] (remote-thread--p3-t101) Swapping from backend LockAcquiringBackend to LocalIndexingBackend

                          02:57:15,615 DEBUG [org.infinispan.lucene.impl.DirectoryImplementor] (Hibernate Search: async deletion of index segments-1) Removed file: _0.cfs from index: com.ctl.vnom.api.common.InventoryItem

                          02:57:15,615 DEBUG [org.infinispan.lucene.impl.DirectoryImplementor] (Hibernate Search: async deletion of index segments-1) Removed file: _0.cfe from index: com.ctl.vnom.api.common.InventoryItem

                          02:57:15,631 DEBUG [org.infinispan.lucene.impl.DirectoryImplementor] (Hibernate Search: async deletion of index segments-1) Removed file: _0.fdt from index: com.ctl.vnom.api.common.InventoryItem

                          02:57:15,631 DEBUG [org.infinispan.lucene.impl.DirectoryImplementor] (Hibernate Search: async deletion of index segments-1) Removed file: _0.fdx from index: com.ctl.vnom.api.common.InventoryItem

                          02:57:15,600 ERROR [org.hibernate.search.exception.impl.LogErrorHandler] (Hibernate Search: Index updates queue processor for index com.ctl.vnom.api.common.InventoryItem-1) HSEARCH000058: HSEARCH000117: IOException on the IndexWriter: java.io.FileNotFoundException: Error loading metadata for index file: _0_Lucene41_0.pos|M|com.ctl.vnom.api.common.InventoryItem

                            at org.infinispan.lucene.impl.DirectoryImplementor.openInput(DirectoryImplementor.java:134) [infinispan-lucene-directory.jar:7.0.0.Final]

                            at org.infinispan.lucene.impl.DirectoryLuceneV4.openInput(DirectoryLuceneV4.java:101) [infinispan-lucene-directory.jar:7.0.0.Final]

                            at org.apache.lucene.store.Directory.copy(Directory.java:193) [lucene-core-4.10.2.jar:4.10.2 1634293 - mike - 2014-10-26 05:51:56]

                            at org.apache.lucene.store.TrackingDirectoryWrapper.copy(TrackingDirectoryWrapper.java:50) [lucene-core-4.10.2.jar:4.10.2 1634293 - mike - 2014-10-26 05:51:56]

                            at org.apache.lucene.index.IndexWriter.createCompoundFile(IndexWriter.java:4719) [lucene-core-4.10.2.jar:4.10.2 1634293 - mike - 2014-10-26 05:51:56]

                            at org.apache.lucene.index.DocumentsWriterPerThread.sealFlushedSegment(DocumentsWriterPerThread.java:509) [lucene-core-4.10.2.jar:4.10.2 1634293 - mike - 2014-10-26 05:51:56]

                            at org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:476) [lucene-core-4.10.2.jar:4.10.2 1634293 - mike - 2014-10-26 05:51:56]

                            at org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:510) [lucene-core-4.10.2.jar:4.10.2 1634293 - mike - 2014-10-26 05:51:56]

                            at org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:621) [lucene-core-4.10.2.jar:4.10.2 1634293 - mike - 2014-10-26 05:51:56]

                            at org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2942) [lucene-core-4.10.2.jar:4.10.2 1634293 - mike - 2014-10-26 05:51:56]

                            at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3097) [lucene-core-4.10.2.jar:4.10.2 1634293 - mike - 2014-10-26 05:51:56]

                            at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3064) [lucene-core-4.10.2.jar:4.10.2 1634293 - mike - 2014-10-26 05:51:56]

                            at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.commitIndexWriter(IndexWriterHolder.java:153) [hibernate-search-engine-5.0.0.Beta1.jar:5.0.0.Beta1]

                            at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.commitIndexWriter(IndexWriterHolder.java:166) [hibernate-search-engine-5.0.0.Beta1.jar:5.0.0.Beta1]

                            at org.hibernate.search.backend.impl.lucene.PerChangeSetCommitPolicy.onChangeSetApplied(PerChangeSetCommitPolicy.java:29) [hibernate-search-engine-5.0.0.Beta1.jar:5.0.0.Beta1]

                            at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.afterTransactionApplied(AbstractWorkspaceImpl.java:88) [hibernate-search-engine-5.0.0.Beta1.jar:5.0.0.Beta1]

                            at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:107) [hibernate-search-engine-5.0.0.Beta1.jar:5.0.0.Beta1]

                            at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:50) [hibernate-search-engine-5.0.0.Beta1.jar:5.0.0.Beta1]

                            at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [rt.jar:1.7.0_71]

                            at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_71]

                            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_71]

                            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_71]

                            at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_71]

                           

                          What I don't understand is why the async deleter is deleting the segments while they are still needed. What's wrong with the index creation?

                           

                          Please let me know.

                           

                          Regards

                          Jithendra

                          • 25. Re: Unable to acquire lock exception
                            gustavonalle

                            Hi, in order to help troubleshoot the issue, could you please collect logs from the two nodes and send them to us as requested in the ISPN-2981 discussion?

                            org.infinispan should be set to DEBUG, and org.infinispan.lucene to TRACE.
