25 Replies. Latest reply on Feb 6, 2015 9:27 AM by Gustavo Fernandes

    Unable to acquire lock exception

    Jithendra reddy Newbie

      Hi Team,

       

      I have been trying to get Infinispan working for our project, with distribution and Hibernate Search querying, for quite a while now; more than two years, I must say. We keep upgrading, and something or other comes up at a critical time, so we drop the ball on getting this solution into production.

       

      Having said all that, we are on Infinispan 7.0.0.Final. Querying works OK, but there are these irritating locking exceptions that keep showing up, testing our confidence in pushing the solution to production.

       

      I have made sure that we are using the correct configuration, following whatever documentation and tips were available from the community and elsewhere. We have lock striping set to false, which was widely suggested to get rid of these locking exceptions, but we still get them. Could somebody please look into this and advise us? We don't want to miss getting this solution into production this time.

       

      I have attached our cache configuration file. The lock exception is below. I am worried about the "null" part highlighted below.

       

      04:00:47,122 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (transport-thread--p2-t12) ISPN000136: Execution error: org.infinispan.util.concurrent.TimeoutException: ISPN000299: Unable to acquire lock after 30 seconds for key 4809212330822~CRIS and requestor Thread[transport-thread--p2-t12,5,main]. Lock is held by Thread[transport-thread--p2-t2,5,main], while request came from null

              at org.infinispan.util.concurrent.locks.LockManagerImpl.lock(LockManagerImpl.java:198) [infinispan-core.jar:7.0.0.Final]

              at org.infinispan.util.concurrent.locks.LockManagerImpl.acquireLockNoCheck(LockManagerImpl.java:181) [infinispan-core.jar:7.0.0.Final]

              at org.infinispan.interceptors.locking.AbstractLockingInterceptor.lockKey(AbstractLockingInterceptor.java:127) [infinispan-core.jar:7.0.0.Final]

              at org.infinispan.interceptors.locking.NonTransactionalLockingInterceptor.visitPutMapCommand(NonTransactionalLockingInterceptor.java:64) [infinispan-core.jar:7.0.0.Final]

              at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:47) [infinispan-core.jar:7.0.0.Final]

              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) [infinispan-core.jar:7.0.0.Final]

              at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112) [infinispan-core.jar:7.0.0.Final]

              at org.infinispan.commands.AbstractVisitor.visitPutMapCommand(AbstractVisitor.java:55) [infinispan-core.jar:7.0.0.Final]

              at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:47) [infinispan-core.jar:7.0.0.Final]

              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) [infinispan-core.jar:7.0.0.Final]

              at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112) [infinispan-core.jar:7.0.0.Final]

              at org.infinispan.commands.AbstractVisitor.visitPutMapCommand(AbstractVisitor.java:55) [infinispan-core.jar:7.0.0.Final]

              at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:47) [infinispan-core.jar:7.0.0.Final]

              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) [infinispan-core.jar:7.0.0.Final]

              at org.infinispan.statetransfer.StateTransferInterceptor.handleNonTxWriteCommand(StateTransferInterceptor.java:172) [infinispan-core.jar:7.0.0.Final]

              at org.infinispan.statetransfer.StateTransferInterceptor.visitPutMapCommand(StateTransferInterceptor.java:100) [infinispan-core.jar:7.0.0.Final]

              at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:47) [infinispan-core.jar:7.0.0.Final]

              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) [infinispan-core.jar:7.0.0.Final]

              at org.infinispan.interceptors.CacheMgmtInterceptor.visitPutMapCommand(CacheMgmtInterceptor.java:117) [infinispan-core.jar:7.0.0.Final]

              at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:47) [infinispan-core.jar:7.0.0.Final]

              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) [infinispan-core.jar:7.0.0.Final]

        • 1. Re: Unable to acquire lock exception
          Jithendra reddy Newbie

          Attaching the infinispan configuration file.

          • 2. Re: Unable to acquire lock exception
            Sanne Grinovero Master

            Hi, could you paste a longer stack trace?

            And if the stack is triggered while handling an incoming remote call, it would also be very useful to have the stack from the invoker side.

            From the section above I can see that a lock is being held, but we would need to know which lock (which entry) and which component is requesting it.

            • 3. Re: Unable to acquire lock exception
              Jithendra reddy Newbie

              Hi Sanne,

               

              Apologies for not getting back to you on this sooner. Here's the full stack trace, though not from the same instance. I looked into the other node but did not see anything prominent there. Mind you, these are my e2e server nodes and not much debug logging is turned on.

               

              05:31:17,526 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (transport-thread--p2-t18) ISPN000136: Execution error: org.infinispan.util.concurrent.TimeoutException: ISPN000299: Unable to acquire lock after 30 seconds for key NTM000012620153 and requestor Thread[transport-thread--p2-t18,5,main]. Lock is held by Thread[transport-thread--p2-t20,5,main], while request came from null

                      at org.infinispan.util.concurrent.locks.LockManagerImpl.lock(LockManagerImpl.java:198) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.util.concurrent.locks.LockManagerImpl.acquireLockNoCheck(LockManagerImpl.java:181) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.interceptors.locking.AbstractLockingInterceptor.lockKey(AbstractLockingInterceptor.java:127) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.interceptors.locking.NonTransactionalLockingInterceptor.visitPutMapCommand(NonTransactionalLockingInterceptor.java:64) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:47) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.commands.AbstractVisitor.visitPutMapCommand(AbstractVisitor.java:55) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:47) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.commands.AbstractVisitor.visitPutMapCommand(AbstractVisitor.java:55) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:47) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.statetransfer.StateTransferInterceptor.handleNonTxWriteCommand(StateTransferInterceptor.java:172) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.statetransfer.StateTransferInterceptor.visitPutMapCommand(StateTransferInterceptor.java:100) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:47) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.interceptors.CacheMgmtInterceptor.visitPutMapCommand(CacheMgmtInterceptor.java:117) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:47) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:102) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:71) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.commands.AbstractVisitor.visitPutMapCommand(AbstractVisitor.java:55) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:47) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:333) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.cache.impl.CacheImpl.executeCommandAndCommitIfNeeded(CacheImpl.java:1576) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.cache.impl.CacheImpl.putAllInternal(CacheImpl.java:1098) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.cache.impl.CacheImpl.access$300(CacheImpl.java:121) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.cache.impl.CacheImpl$3.call(CacheImpl.java:1227) [infinispan-core.jar:7.0.0.Final]

                      at org.infinispan.cache.impl.CacheImpl$3.call(CacheImpl.java:1222) [infinispan-core.jar:7.0.0.Final]

                      at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_65]

                      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_65]

                      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_65]

                      at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_65]

               

              Let me know what else is needed.

               

              Regards

              Jithendra

              • 4. Re: Unable to acquire lock exception
                Sanne Grinovero Master

                Hi,

                What I understand is that you're doing some large "putAll" operations, using the async APIs, with indexing enabled.

                Some things to keep in mind:

                - putAll will attempt to lock all of its keys

                - any write will acquire the lock on the entry being written, for the time it takes to perform the operation

                - having indexing enabled slows down each write

                 

                So the three points above might exacerbate a lock-contention problem you may have on those keys. Is this only reproducible when you run multiple such operations in parallel?
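Not something from this thread, but one common way to shorten the window during which a putAll holds its locks is to break the map into smaller batches, so each PutMapCommand locks fewer keys at a time. A minimal sketch against plain java.util.Map (an org.infinispan.Cache can be passed the same way, since it implements Map); the helper and its names are mine, not an Infinispan API, and the batch size is an arbitrary example:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class BatchedPutAll {

    /**
     * Copies src into dest in batches of at most batchSize entries,
     * so each putAll call only needs to lock that batch's keys.
     * Returns the number of batches applied.
     */
    public static <K, V> int putAllInBatches(Map<K, V> dest, Map<K, V> src, int batchSize) {
        Map<K, V> batch = new LinkedHashMap<>();
        int batches = 0;
        for (Map.Entry<K, V> e : src.entrySet()) {
            batch.put(e.getKey(), e.getValue());
            if (batch.size() == batchSize) {
                dest.putAll(batch); // locks only this batch's keys
                batch.clear();
                batches++;
            }
        }
        if (!batch.isEmpty()) { // apply the remainder
            dest.putAll(batch);
            batches++;
        }
        return batches;
    }

    public static void main(String[] args) {
        Map<String, String> src = new HashMap<>();
        for (int i = 0; i < 10; i++) src.put("k" + i, "v" + i);
        Map<String, String> cache = new HashMap<>(); // stand-in for an Infinispan cache
        int batches = putAllInBatches(cache, src, 3);
        System.out.println(batches + " batches, " + cache.size() + " entries");
        // prints "4 batches, 10 entries"
    }
}
```

The trade-off is that the overall operation is no longer atomic across all keys, which is usually acceptable for bulk cache loading.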

                 

                Another explanation is that your executors are running out of threads; a fairly recent performance test showed that the indexing backend could deadlock under high load when delegating to a different node, and a solution for that is in the works; see also: [infinispan-dev] Indexing deadlock (solution suggestion)

                If this is indeed the same problem, there are some possible workarounds you can apply: tuning indexing performance makes the deadlock less likely, and configuring your thread pools to be larger will also prevent it.


                I'm sorry you've had this problem for so long while keeping up with upgrades; still, I would suggest upgrading again: in Infinispan 7.1 we applied some very significant performance improvements to the index-writing component. The real deadlock cause is not resolved yet, but the improvements make it very unlikely in practice, and it won't trigger unless the load on the system can exhaust all executors. It seems we're working on the same subject now, so please either be patient a little longer, or I can help you figure out a workaround, assuming that you are indeed hitting the same problem. I'm assuming you are, because a symptom is the stack trace you posted on Infinispan - hibernate search - Update exception


                Happy to discuss alternative configurations; you'd need to give me some more details of the general goal and requirements. For example, enabling the "async" backend avoids all of these issues.

                • 5. Re: Unable to acquire lock exception
                  Daniel Chapman Newbie

                  Hi - Jithendra is on my team. To answer the use-case question: we have to use embedded mode. We're trying to store all our inventory in a distributed cache to reduce our memory footprint; we have 4 nodes in production at the moment, with plans to scale to 8, where distribution really starts to show its benefit. The front-end pages allow an all-you-can-eat search facility, which is why we went with Infinispan in the first place: the ease and flexibility of Lucene's search capability. In our experience these issues have been very difficult to debug, and it seems 90% or more of Infinispan users are either on local caches or not using the index/search capabilities, so finding solutions has been rough.

                   

                  We're in load testing for a February release. We would like to hear your workarounds, because we cannot spare any more time getting a solid cache solution in place. If we don't feel comfortable with tonight's load test, we'll need to back out and go with our backup plan. Any help you can provide based on this config is extremely appreciated.

                   

                  Thanks!

                    Dan C>

                  • 6. Re: Unable to acquire lock exception
                    Sanne Grinovero Master

                    OK, so let me start with more specific suggestions.

                    Did you try these options?

                     

                    hibernate.search.default.worker.execution = async

                    hibernate.search.default.index_flush_interval = [value]

                     

                    They are documented in this table here: Hibernate Search

                    You'll need latest Infinispan 7.1.0.CR2 combined with Hibernate Search 5.0.1.Final.

                    These two enable a new, much improved async flush policy, in which you get the benefits of the async approach but can bound its drawbacks. For example, if you could state that the all-you-can-eat search facility is fine with a 5-second delay between a write to Infinispan and the change becoming visible in search results, this would massively improve performance and also avoid the deadlock issue.

                     

                    Even if you can't allow use of "async", the update includes several performance improvements for the sync backend as well. It's possible the update alone could solve your problem.
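In the <property> notation used by the configuration quoted later in this post, those two settings would look roughly like this. The 5000 value is my example, assuming the flush interval is given in milliseconds; check the linked documentation table for the exact unit:

```xml
<property name="hibernate.search.default.worker.execution">async</property>
<property name="hibernate.search.default.index_flush_interval">5000</property>
```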

                     

                    Also, I just noticed you are using the jgroups backend. This is OK but not optimal: it was designed for using Hibernate Search as a possible alternative to the JMS backend (for people using the Hibernate Search engine combined with Hibernate ORM). When using Infinispan directly, I would highly recommend the ad-hoc "infinispan" IndexManager.

                    It's possible I previously suggested the jgroups backend, but that was probably long before this IndexManager was created.

                    Taking one of your caches as an example, configured as:

                     

                            <property name="hibernate.search.default.locking_cachename">ViprLuceneTmsIndexesLocking</property>

                            <property name="hibernate.search.default.data_cachename">ViprLuceneTmsIndexesData</property>

                            <property name="hibernate.search.default.metadata_cachename">ViprLuceneTmsIndexesMetadata</property>

                            <property name="hibernate.search.model_mapping">com.ctl.vnom.cache.searchmapping.ViprTmsSearchMappingFactory</property>

                            <property name="hibernate.search.analyzer">com.ctl.vnom.lib.cache.analyzer.StandardAnalyzerWithNoStopWords</property>

                            <property name="hibernate.search.default.directory_provider">infinispan</property>

                            <property name="hibernate.search.lucene_version">LUCENE_4_10_2</property>

                            <property name="hibernate.search.default.worker.execution">sync</property>

                            <property name="hibernate.search.services.jgroups.clusterName">tms-search-jgroups-cluster</property>

                            <property name="hibernate.search.default.indexmanager">directory-based</property>

                            <property name="hibernate.search.default.worker.thread_pool">5</property>

                            <property name="hibernate.search.default.worker.buffer_queue.max">10000</property>

                            <property name="hibernate.search.default.worker.backend">jgroups</property>

                            <property name="hibernate.search.default.indexwriter.merge_factor">10</property>

                            <property name="hibernate.search.INCIDENT_INDEX.exclusive_index_use">true</property>

                     

                    I would make these changes:

                    • Remove the "hibernate.search.default.directory_provider" property
                    • Remove the "hibernate.search.services.jgroups.clusterName" property
                    • Remove all properties related to "worker" - especially the jgroups backend
                    • Use a larger merge_factor; 30 would be a good start
                    • Remove "hibernate.search.INCIDENT_INDEX.exclusive_index_use"
                    • Set the property "hibernate.search.default.indexmanager" to value "org.infinispan.query.indexmanager.InfinispanIndexManager"
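Applying those changes to the quoted cache configuration would leave roughly the following. This is my rendering of the list of changes above, not a tested configuration:

```xml
<property name="hibernate.search.default.locking_cachename">ViprLuceneTmsIndexesLocking</property>
<property name="hibernate.search.default.data_cachename">ViprLuceneTmsIndexesData</property>
<property name="hibernate.search.default.metadata_cachename">ViprLuceneTmsIndexesMetadata</property>
<property name="hibernate.search.model_mapping">com.ctl.vnom.cache.searchmapping.ViprTmsSearchMappingFactory</property>
<property name="hibernate.search.analyzer">com.ctl.vnom.lib.cache.analyzer.StandardAnalyzerWithNoStopWords</property>
<property name="hibernate.search.lucene_version">LUCENE_4_10_2</property>
<property name="hibernate.search.default.indexmanager">org.infinispan.query.indexmanager.InfinispanIndexManager</property>
<property name="hibernate.search.default.indexwriter.merge_factor">30</property>
```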

                     

                    Regarding the specific workarounds I was referring to: the workaround for the lock timeout issues is to size your executors with larger thread pools. But try the above configuration first, as the timeout is often caused by excessive load relative to the number of available executors, and the above suggestions should significantly lower the load on your servers.

                    • 7. Re: Unable to acquire lock exception
                      Sanne Grinovero Master

                      I'm sorry, I just noticed you're storing the Infinispan indexes in a set of LOCAL caches? What are you trying to accomplish?

                      • 8. Re: Unable to acquire lock exception
                        Saimethun G Newbie

                        Hi Sanne,

                        Jithendra, Dan and myself work in same team.

                        We were using a DISTRIBUTED index cache and ended up getting lock exceptions from the Lucene index update task, so we switched to a LOCAL index cache with index="ALL" to avoid LockObtainFailedException. We tried setting worker execution to async, but this takes more time to update the indexes, and queries return no results at all.

                        Do you suggest we use DISTRIBUTED index caches and try your recommendations?

                         

                        -Saimethun G

                        • 9. Re: Unable to acquire lock exception
                          Sanne Grinovero Master

                          The effect of those options is semantically different; I'm afraid there is some confusion.

                          Your data caches are set up as distributed, so each indexing engine will not see the full dataset but only the entries it stores as primary owner or as a replica. Assuming your cluster size is larger than the number of replicas (as is common with distribution), there are other entries stored only on different nodes.

                           

                          With the indexing option <indexing index="ALL"> you make sure that the engine reindexes the entries it is primary owner for, and also those it is a backup owner for, but it will not generate indexing events for the other entries. To accommodate that, you can set up a custom indexing backend to make sure indexing events get forwarded to all nodes that need to update their index.

                          So indeed you have to enable a backend, but the "jgroups" backend forwards all locally generated events to a SINGLE master node: this master node then assumes that whatever it writes will be visible to the other nodes, so that all querying indexes can read it.

                          This sharing of reads can be arranged in various ways; a network-shared filesystem is enough for most users, and Infinispan storage is another option. I'm sorry if this is confusing: the framework simply can't tell whether writing to a filesystem is good enough, as there is no way from Java to know whether the filesystem is shared, so we can't implement automatic validation of such a configuration.

                           

                          The configuration option "exclusive_index_use" takes a full pessimistic lock on the index and won't release it until the node is shut down. Using it with the jgroups backend is only possible because each index store is separate.

                           

                          So with your configuration there are two problems:

                          • Each and every owner is sending an update command to the master node (indexing="ALL"), so you're sending far more indexing commands than you need (one per owner instead of one per write). This will hurt performance, but you won't notice the problem in functional tests, as these commands are idempotent.
                          • All these commands are processed by a single node only, and the other nodes won't get an updated index.

                           

                          A consequence, I suspect, is that you won't be able to find all entries on each node, unless you are using the experimental ClusteredQuery API [SearchManager (Infinispan Distribution 7.1.0.CR2 API)]?

                          The ClusteredQuery API is able to broadcast queries to each node and merge the resulting lists, but for that you wouldn't need the jgroups backend.

                           

                          So you have different options:

                          1) Either you replace the configuration properties as I suggested above AND change the caches used for index storage to be clustered (you'd want locking_cachename and metadata_cachename to use REPL, and data_cachename to use either REPL or DIST, but I recommend REPL as a good default).

                          2) You switch to a filesystem-based directory (property directory_provider) and read about the master/slave pattern in the Hibernate Search documentation. This is an alternative way to get a shared index to solution 1: it uses less memory but requires slower disk storage and setting up network shares.

                          3) Switch those jgroups backends for the default backend and use the ClusteredQuery API (experimental!). This approach makes writes to the index blazing fast, but each query is slower: each index write becomes a local-only operation with linear scalability, but each query then needs to execute on multiple nodes to fetch all results.

                          4) You keep using the jgroups backend but make sure the indexes are stored in Infinispan using clustered caches (this is similar to combining approaches 1 and 2).

                          [Remember that both the backend and the directory_provider options allow plugging in a custom implementation.]
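For option 1, the three index-storage caches named in the properties above would need to be declared as clustered caches. A rough sketch in Infinispan 7 XML, with an assumed cache-container name; verify the element and attribute names against the Infinispan 7 configuration schema:

```xml
<cache-container name="index-storage">
    <!-- Locking and metadata caches: REPL as recommended above -->
    <replicated-cache name="ViprLuceneTmsIndexesLocking" mode="SYNC"/>
    <replicated-cache name="ViprLuceneTmsIndexesMetadata" mode="SYNC"/>
    <!-- Data cache: REPL or DIST; REPL suggested as the default -->
    <replicated-cache name="ViprLuceneTmsIndexesData" mode="SYNC"/>
</cache-container>
```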

                           

                          Each of these options describes a different architecture; to pick the best one for your use case you have to weigh many factors, for example whether you prefer to speed up index writes at the cost of slower reads, or vice versa.

                          To help making a choice:

                          - I normally use architecture 1. That's because in the use cases I test for there are not many writes (or we write a big burst of data overnight), and then we want top performance for queries. This works well in Infinispan 7.1, although write performance is still not great with the SYNC backend.

                          - Solution 2 is well proven and widely used among Hibernate Search users. It's reliable and doesn't need complex JVM memory tuning nor much JGroups/network tuning, but it's suited to ASYNC replication only, as the indexes are copied over at intervals.

                          - The jgroups backend is marked experimental; indeed, as you noted, we haven't had much user feedback yet, so I'm not suggesting it, especially if there is some urgency.

                           

                          All in all I hope that clarifies your options related to indexing configuration.

                          But I have some bad news too: considering you were using the jgroups backend and storing the index in LOCAL caches, I am no longer confident that the TimeoutException reported in the first post is related to indexing.

                          Do you have the same issues when turning indexing off? (See the "blackhole" backend in the Hibernate Search documentation.)

                          Assuming you already tested for that, it's possible the indexing overhead is simply pushing your hardware too hard and needs tuning across all components. For example, the "indexwriter.infostream" option you have on some caches is very verbose and will slow you down significantly. If you have more loggers enabled on the system, that would explain the timeouts too.
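For reference, switching to the "blackhole" backend for such a test is a one-property change, shown here in the same <property> notation as the rest of the configuration (remember to remove it again afterwards, since it discards all index updates):

```xml
<property name="hibernate.search.default.worker.backend">blackhole</property>
```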

                          • 10. Re: Re: Unable to acquire lock exception
                            Jithendra reddy Newbie

                            Thanks for such a detailed response. We also can't afford a filesystem for now, so we took option 1 and made the changes you suggested.

                            We performed a load test and still see the exceptions. I have also seen a couple of other exceptions in addition to the "Unable to acquire lock" exceptions; here they are:

                             

                            There are only two nodes in the cluster being load tested, each running with 8GB of memory. I am also attaching the latest configuration with which the load test was run. Please help.

                             

                            01:22:39,777 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (transport-thread--p2-t24) ISPN000136: Execution error: org.infinispan.util.concurrent.TimeoutException: Node lxomavmtceap615-33617 timed out

                                    at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:174) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:536) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:290) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.query.indexmanager.RemoteIndexingBackend.sendCommand(RemoteIndexingBackend.java:116) [infinispan-query.jar:7.0.0.Final]

                                    at org.infinispan.query.indexmanager.RemoteIndexingBackend.applyWork(RemoteIndexingBackend.java:64) [infinispan-query.jar:7.0.0.Final]

                                    at org.infinispan.query.indexmanager.InfinispanBackendQueueProcessor.applyWork(InfinispanBackendQueueProcessor.java:80) [infinispan-query.jar:7.0.0.Final]

                                    at org.hibernate.search.indexes.impl.DirectoryBasedIndexManager.performOperations(DirectoryBasedIndexManager.java:113) [hibernate-search-engine-5.0.0.Beta1.jar:5.0.0.Beta1]

                                    at org.hibernate.search.backend.impl.WorkQueuePerIndexSplitter.commitOperations(WorkQueuePerIndexSplitter.java:49) [hibernate-search-engine-5.0.0.Beta1.jar:5.0.0.Beta1]

                                    at org.hibernate.search.backend.impl.BatchedQueueingProcessor.performWorks(BatchedQueueingProcessor.java:82) [hibernate-search-engine-5.0.0.Beta1.jar:5.0.0.Beta1]

                                    at org.hibernate.search.backend.impl.TransactionalWorker.performWork(TransactionalWorker.java:86) [hibernate-search-engine-5.0.0.Beta1.jar:5.0.0.Beta1]

                                    at org.infinispan.query.backend.QueryInterceptor.performSearchWorks(QueryInterceptor.java:235) [infinispan-query.jar:7.0.0.Final]

                                    at org.infinispan.query.backend.QueryInterceptor.performSearchWork(QueryInterceptor.java:229) [infinispan-query.jar:7.0.0.Final]

                                    at org.infinispan.query.backend.QueryInterceptor.updateIndexes(QueryInterceptor.java:223) [infinispan-query.jar:7.0.0.Final]

                                    at org.infinispan.query.backend.QueryInterceptor.processPutMapCommand(QueryInterceptor.java:418) [infinispan-query.jar:7.0.0.Final]

                                    at org.infinispan.query.backend.QueryInterceptor.visitPutMapCommand(QueryInterceptor.java:186) [infinispan-query.jar:7.0.0.Final]

                                    at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:47) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.interceptors.locking.NonTransactionalLockingInterceptor.visitPutMapCommand(NonTransactionalLockingInterceptor.java:67) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:47) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.commands.AbstractVisitor.visitPutMapCommand(AbstractVisitor.java:55) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:47) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.commands.AbstractVisitor.visitPutMapCommand(AbstractVisitor.java:55) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:47) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.statetransfer.StateTransferInterceptor.handleNonTxWriteCommand(StateTransferInterceptor.java:172) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.statetransfer.StateTransferInterceptor.visitPutMapCommand(StateTransferInterceptor.java:100) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:47) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.interceptors.CacheMgmtInterceptor.visitPutMapCommand(CacheMgmtInterceptor.java:117) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:47) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:102) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:71) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.commands.AbstractVisitor.visitPutMapCommand(AbstractVisitor.java:55) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:47) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:333) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.cache.impl.CacheImpl.executeCommandAndCommitIfNeeded(CacheImpl.java:1576) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.cache.impl.CacheImpl.putAllInternal(CacheImpl.java:1098) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.cache.impl.CacheImpl.access$300(CacheImpl.java:121) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.cache.impl.CacheImpl$3.call(CacheImpl.java:1227) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.cache.impl.CacheImpl$3.call(CacheImpl.java:1222) [infinispan-core.jar:7.0.0.Final]

                                    at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_65]

                                    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_65]

                                    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_65]

                                    at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_65]

                            Caused by: org.jgroups.TimeoutException: timeout waiting for response from lxomavmtceap615-33617, request: org.jgroups.blocks.UnicastRequest@1e1ece2f, req_id=524945, mode=GET_ALL, target=lxomavmtceap615-33617

                                    at org.jgroups.blocks.MessageDispatcher.sendMessage(MessageDispatcher.java:429) [jgroups-3.6.0.Final.jar:3.6.0.Final]

                                    at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:372) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:167) [infinispan-core.jar:7.0.0.Final]

                                    ... 48 more


                            01:56:14,089 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (Incoming-1,lxomavmtceap615-34710) ISPN000136: Execution error: java.lang.IllegalStateException: Could not get property value

                                    at org.hibernate.search.util.impl.ReflectionHelper.getMemberValue(ReflectionHelper.java:82) [hibernate-search-engine-5.0.0.Beta1.jar:5.0.0.Beta1]

                                    at org.hibernate.search.engine.spi.DocumentBuilderIndexedEntity.buildDocumentFields(DocumentBuilderIndexedEntity.java:409) [hibernate-search-engine-5.0.0.Beta1.jar:5.0.0.Beta1]

                                    at org.hibernate.search.engine.spi.DocumentBuilderIndexedEntity.buildDocumentFields(DocumentBuilderIndexedEntity.java:467) [hibernate-search-engine-5.0.0.Beta1.jar:5.0.0.Beta1]

                                    at org.hibernate.search.engine.spi.DocumentBuilderIndexedEntity.getDocument(DocumentBuilderIndexedEntity.java:359) [hibernate-search-engine-5.0.0.Beta1.jar:5.0.0.Beta1]

                                    at org.hibernate.search.engine.spi.DocumentBuilderIndexedEntity.createUpdateWork(DocumentBuilderIndexedEntity.java:288) [hibernate-search-engine-5.0.0.Beta1.jar:5.0.0.Beta1]

                                    at org.hibernate.search.engine.spi.DocumentBuilderIndexedEntity.addWorkToQueue(DocumentBuilderIndexedEntity.java:230) [hibernate-search-engine-5.0.0.Beta1.jar:5.0.0.Beta1]

                                    at org.hibernate.search.engine.impl.WorkPlan$PerEntityWork.enqueueLuceneWork(WorkPlan.java:486) [hibernate-search-engine-5.0.0.Beta1.jar:5.0.0.Beta1]

                                    at org.hibernate.search.engine.impl.WorkPlan$PerClassWork.enqueueLuceneWork(WorkPlan.java:261) [hibernate-search-engine-5.0.0.Beta1.jar:5.0.0.Beta1]

                                    at org.hibernate.search.engine.impl.WorkPlan.getPlannedLuceneWork(WorkPlan.java:147) [hibernate-search-engine-5.0.0.Beta1.jar:5.0.0.Beta1]

                                    at org.hibernate.search.backend.impl.WorkQueue.prepareWorkPlan(WorkQueue.java:114) [hibernate-search-engine-5.0.0.Beta1.jar:5.0.0.Beta1]

                                    at org.hibernate.search.backend.impl.BatchedQueueingProcessor.prepareWorks(BatchedQueueingProcessor.java:56) [hibernate-search-engine-5.0.0.Beta1.jar:5.0.0.Beta1]

                                    at org.hibernate.search.backend.impl.TransactionalWorker.performWork(TransactionalWorker.java:85) [hibernate-search-engine-5.0.0.Beta1.jar:5.0.0.Beta1]

                                    at org.infinispan.query.backend.QueryInterceptor.performSearchWorks(QueryInterceptor.java:235) [infinispan-query.jar:7.0.0.Final]

                                    at org.infinispan.query.backend.QueryInterceptor.performSearchWork(QueryInterceptor.java:229) [infinispan-query.jar:7.0.0.Final]

                                    at org.infinispan.query.backend.QueryInterceptor.updateIndexes(QueryInterceptor.java:223) [infinispan-query.jar:7.0.0.Final]

                                    at org.infinispan.query.backend.QueryInterceptor.processPutMapCommand(QueryInterceptor.java:418) [infinispan-query.jar:7.0.0.Final]

                                    at org.infinispan.query.backend.QueryInterceptor.visitPutMapCommand(QueryInterceptor.java:186) [infinispan-query.jar:7.0.0.Final]

                                    at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:47) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.interceptors.locking.NonTransactionalLockingInterceptor.visitPutMapCommand(NonTransactionalLockingInterceptor.java:67) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:47) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.commands.AbstractVisitor.visitPutMapCommand(AbstractVisitor.java:55) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:47) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.commands.AbstractVisitor.visitPutMapCommand(AbstractVisitor.java:55) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:47) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.statetransfer.StateTransferInterceptor.handleNonTxWriteCommand(StateTransferInterceptor.java:166) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.statetransfer.StateTransferInterceptor.visitPutMapCommand(StateTransferInterceptor.java:100) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:47) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.interceptors.CacheMgmtInterceptor.visitPutMapCommand(CacheMgmtInterceptor.java:117) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:47) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:102) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:71) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.commands.AbstractVisitor.visitPutMapCommand(AbstractVisitor.java:55) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.commands.write.PutMapCommand.acceptVisitor(PutMapCommand.java:47) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:333) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.commands.remote.BaseRpcInvokingCommand.processVisitableCommand(BaseRpcInvokingCommand.java:39) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.commands.remote.SingleRpcCommand.perform(SingleRpcCommand.java:48) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.remoting.InboundInvocationHandlerImpl.handleInternal(InboundInvocationHandlerImpl.java:97) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithWaitForBlocks(InboundInvocationHandlerImpl.java:218) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:86) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromLocalCluster(CommandAwareRpcDispatcher.java:267) [infinispan-core.jar:7.0.0.Final]

                                    at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:211) [infinispan-core.jar:7.0.0.Final]

                                    at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:460) [jgroups-3.6.0.Final.jar:3.6.0.Final]

                                    at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:377) [jgroups-3.6.0.Final.jar:3.6.0.Final]

                                    at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:250) [jgroups-3.6.0.Final.jar:3.6.0.Final]

                                    at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:677) [jgroups-3.6.0.Final.jar:3.6.0.Final]

                                    at org.jgroups.JChannel.up(JChannel.java:733) [jgroups-3.6.0.Final.jar:3.6.0.Final]

                                    at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1029) [jgroups-3.6.0.Final.jar:3.6.0.Final]

                                    at org.jgroups.protocols.pbcast.StreamingStateTransfer.up(StreamingStateTransfer.java:231) [jgroups-3.6.0.Final.jar:3.6.0.Final]

                                    at org.jgroups.protocols.RSVP.up(RSVP.java:201) [jgroups-3.6.0.Final.jar:3.6.0.Final]

                                    at org.jgroups.protocols.pbcast.FLUSH.up(FLUSH.java:505) [jgroups-3.6.0.Final.jar:3.6.0.Final]

                                    at org.jgroups.protocols.FRAG2.up(FRAG2.java:182) [jgroups-3.6.0.Final.jar:3.6.0.Final]

                                    at org.jgroups.protocols.FlowControl.up(FlowControl.java:447) [jgroups-3.6.0.Final.jar:3.6.0.Final]

                                    at org.jgroups.protocols.FlowControl.up(FlowControl.java:447) [jgroups-3.6.0.Final.jar:3.6.0.Final]

                                    at org.jgroups.stack.Protocol.up(Protocol.java:420) [jgroups-3.6.0.Final.jar:3.6.0.Final]

                                    at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:294) [jgroups-3.6.0.Final.jar:3.6.0.Final]

                                    at org.jgroups.protocols.UNICAST3.deliverBatch(UNICAST3.java:1087) [jgroups-3.6.0.Final.jar:3.6.0.Final]

                                    at org.jgroups.protocols.UNICAST3.removeAndDeliver(UNICAST3.java:886) [jgroups-3.6.0.Final.jar:3.6.0.Final]

                                    at org.jgroups.protocols.UNICAST3.handleBatchReceived(UNICAST3.java:867) [jgroups-3.6.0.Final.jar:3.6.0.Final]

                                    at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:517) [jgroups-3.6.0.Final.jar:3.6.0.Final]

                                    at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:674) [jgroups-3.6.0.Final.jar:3.6.0.Final]

                                    at org.jgroups.stack.Protocol.up(Protocol.java:420) [jgroups-3.6.0.Final.jar:3.6.0.Final]

                                    at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:213) [jgroups-3.6.0.Final.jar:3.6.0.Final]

                                    at org.jgroups.stack.Protocol.up(Protocol.java:420) [jgroups-3.6.0.Final.jar:3.6.0.Final]

                                    at org.jgroups.stack.Protocol.up(Protocol.java:420) [jgroups-3.6.0.Final.jar:3.6.0.Final]

                                    at org.jgroups.stack.Protocol.up(Protocol.java:420) [jgroups-3.6.0.Final.jar:3.6.0.Final]

                                    at org.jgroups.protocols.TP.passBatchUp(TP.java:1605) [jgroups-3.6.0.Final.jar:3.6.0.Final]

                                    at org.jgroups.protocols.TP$BatchHandler.run(TP.java:1855) [jgroups-3.6.0.Final.jar:3.6.0.Final]

                                    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_65]

                                    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_65]

                                    at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_65]

                            Caused by: java.lang.IllegalArgumentException: Invoking locationId on a  null object

                                    at org.hibernate.annotations.common.reflection.java.JavaXProperty.invoke(JavaXProperty.java:81) [hibernate-commons-annotations-4.0.1.Final.jar:4.0.1.Final]

                                    at org.hibernate.search.util.impl.ReflectionHelper.getMemberValue(ReflectionHelper.java:79) [hibernate-search-engine-5.0.0.Beta1.jar:5.0.0.Beta1]

                                    ... 77 more

                            Caused by: java.lang.NullPointerException

                                    at sun.reflect.GeneratedMethodAccessor384.invoke(Unknown Source) [:1.7.0_65]

                                    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.7.0_65]

                                    at java.lang.reflect.Method.invoke(Method.java:606) [rt.jar:1.7.0_65]

                                    at org.hibernate.annotations.common.reflection.java.JavaXProperty.invoke(JavaXProperty.java:74) [hibernate-commons-annotations-4.0.1.Final.jar:4.0.1.Final]

                                    ... 78 more
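                            The final `NullPointerException` above is the actual root cause of the second error: while indexing an entry, Hibernate Search invokes the `locationId` getter via reflection, but the object it invokes it on is null. A reflective call on a null target always fails this way, as the minimal sketch below shows (the `Trade` class and `getLocationId` getter are hypothetical stand-ins for whatever indexed entity the cache actually stores):

```java
import java.lang.reflect.Method;

// Hypothetical stand-in: mirrors the entity/getter named in the trace.
class Trade {
    private String locationId;
    public String getLocationId() { return locationId; }
}

public class NullTargetDemo {
    // Reproduces the failure mode inside JavaXProperty.invoke:
    // calling an instance method reflectively with a null target.
    static String invokeOnNullTarget() throws Exception {
        Method getter = Trade.class.getMethod("getLocationId");
        Trade target = null; // e.g. an embedded object that was never set
        try {
            getter.invoke(target);
            return "no exception";
        } catch (NullPointerException e) {
            // Hibernate Search wraps this as IllegalArgumentException:
            // "Invoking locationId on a null object"
            return "NPE on null target";
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(invokeOnNullTarget());
    }
}
```

                            In other words, independently of the locking problem, some value being written to the cache reaches the indexing interceptor with a null object where the `locationId` property is expected, which is worth checking on the application side as well.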

                            • 11. Re: Re: Unable to acquire lock exception
                              Tristan Tarrant Master

                              Jithendra, you must use Infinispan 7.1.0.CR2 as Sanne explained.

                              • 12. Re: Re: Unable to acquire lock exception
                                Jithendra reddy Newbie

                                Is it a must? It will be very difficult at this stage (the release is in two weeks). Will the upgrade definitely help? We are in a situation where it's either with or without Infinispan querying, and that has to be decided today. If we put in the effort for the upgrade at this critical time and it still does not resolve these issues, it will be time lost that we could have used on a backup solution.

                                 

                                Please advise.

                                 

                                Regards

                                Jithendra

                                • 13. Re: Re: Unable to acquire lock exception
                                  Tristan Tarrant Master

                                  I cannot control your project timing and decisions, so my statement is purely technical:

                                   all of the changes that Sanne described above have gone into Infinispan 7.1 and Hibernate Search 5.0.1, so that is what you need in order to get the improvements.
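                                   In build terms, the upgrade Tristan describes amounts to bumping the dependency coordinates, for example in Maven (a sketch only: the versions come from this thread, the group/artifact IDs are inferred from the jar names in the stack traces, and the `.Final` qualifier on the Hibernate Search version is an assumption; verify against the Infinispan 7.1 BOM before committing):

```xml
<!-- Sketch: verify exact versions/qualifiers before use -->
<dependency>
    <groupId>org.infinispan</groupId>
    <artifactId>infinispan-query</artifactId>
    <version>7.1.0.CR2</version>
</dependency>
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-search-engine</artifactId>
    <version>5.0.1.Final</version> <!-- assumed qualifier for "5.0.1" -->
</dependency>
```

                                   Note that `infinispan-query` normally pulls in a compatible Hibernate Search version transitively, so the second dependency may be redundant in practice.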

                                  • 14. Re: Re: Unable to acquire lock exception
                                    Jithendra reddy Newbie

                                    Thanks for the advice, Tristan. So, without upgrading, there is no way to work around this problem?

                                     

                                    Regards

                                    Jithendra
