
    Cluster hangs waiting for State Transfer lock

    gustavonalle

      Hi, we are currently observing a strange behaviour in Infinispan.

      Basically we have a 4-node cluster whose nodes are stopped and started simultaneously by Puppet, and sometimes the whole cluster hangs during start (the more nodes in the cluster, the easier it is to reproduce). A thread dump shows the problematic section:

       

      "AsyncHttpClient-Callback" daemon prio=10 tid=0x0000000000a8d800 nid=0x39f8 in Object.wait() [0x00007fa8962df000]

         java.lang.Thread.State: TIMED_WAITING (on object monitor)

              at java.lang.Object.wait(Native Method)

              - waiting on <0x00000007dd22ac48> (a java.lang.Object)

              at org.infinispan.statetransfer.StateTransferLockImpl.acquireLockForWriteCommand(StateTransferLockImpl.java:309)

              - locked <0x00000007dd22ac48> (a java.lang.Object)

              at org.infinispan.statetransfer.StateTransferLockImpl.acquireForCommand(StateTransferLockImpl.java:128)

              at org.infinispan.interceptors.StateTransferLockInterceptor.handleWriteCommand(StateTransferLockInterceptor.java:176)

              at org.infinispan.interceptors.StateTransferLockInterceptor.visitPutKeyValueCommand(StateTransferLockInterceptor.java:152)

              at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)

              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:116)

              at org.infinispan.interceptors.CacheMgmtInterceptor.visitPutKeyValueCommand(CacheMgmtInterceptor.java:125)

              at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)

              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:116)

              at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:132)

              at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:91)

              at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:62)

              at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)

              at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:345)

              at org.infinispan.CacheImpl.executeCommandAndCommitIfNeeded(CacheImpl.java:1006)

              at org.infinispan.CacheImpl.put(CacheImpl.java:702)

              at org.infinispan.CacheImpl.put(CacheImpl.java:694)

              at org.infinispan.CacheSupport.put(CacheSupport.java:128)

             

       

      Looking at the TRACE logs on more than one node, they show a replication failure with the same sender:

       

      23:53:55.801 [asyncTransportThread-0] TRACE o.i.remoting.rpc.RpcManagerImpl - replication exception:

      org.infinispan.remoting.transport.jgroups.SuspectException: One or more nodes have left the cluster while replicating command StateTransferControlCommand{cache=c.t.e.search.s.service.permissions.serviceCache, type=APPLY_STATE, sender=uk2x-rscnode12-52574, viewId=4, state=0, locks=N/A}

              at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:456) ~[sc-2012.1.1-SNAPSHOT.jar:na]

              at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:161) ~[sc-2012.1.1-SNAPSHOT.jar:na]

              at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:183) ~[sc-2012.1.1-SNAPSHOT.jar:na]

              at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:240) ~[sc-2012.1.1-SNAPSHOT.jar:na]

              at org.infinispan.remoting.rpc.RpcManagerImpl.access$000(RpcManagerImpl.java:78) ~[sc-2012.1.1-SNAPSHOT.jar:na]

              at org.infinispan.remoting.rpc.RpcManagerImpl$1.call(RpcManagerImpl.java:274) [sc-2012.1.1-SNAPSHOT.jar:na]

              at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) [na:1.7.0_03]

              at java.util.concurrent.FutureTask.run(FutureTask.java:166) [na:1.7.0_03]

              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) [na:1.7.0_03]

              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) [na:1.7.0_03]

              at java.lang.Thread.run(Thread.java:722) [na:1.7.0_03]

       

      If I stop ONLY the sender, the whole cluster resurrects and the write locks are released on all nodes. Otherwise, the whole cluster remains frozen.

       

      The caches are REPL_ASYNC using the embedded jgroups-udp.xml. Any suggestion on where I should be looking to troubleshoot this, or maybe any configuration changes to avoid this cluster-wide locking?
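
      For reference, the caches are wired up roughly like this (a minimal programmatic sketch, not our exact code; the cache name and value types are illustrative):

      import org.infinispan.Cache;
      import org.infinispan.configuration.cache.CacheMode;
      import org.infinispan.configuration.cache.Configuration;
      import org.infinispan.configuration.cache.ConfigurationBuilder;
      import org.infinispan.configuration.global.GlobalConfiguration;
      import org.infinispan.configuration.global.GlobalConfigurationBuilder;
      import org.infinispan.manager.DefaultCacheManager;

      // Transport uses the jgroups-udp.xml stack bundled with Infinispan.
      GlobalConfiguration global = new GlobalConfigurationBuilder()
            .transport().defaultTransport()
            .addProperty("configurationFile", "jgroups-udp.xml")
            .build();

      // Every cache is replicated with asynchronous replication.
      Configuration replAsync = new ConfigurationBuilder()
            .clustering().cacheMode(CacheMode.REPL_ASYNC)
            .build();

      DefaultCacheManager manager = new DefaultCacheManager(global, replAsync);
      Cache<String, Object> serviceCache = manager.getCache("serviceCache");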

       

       

      Thanks,

      Gustavo

        • 1. Re: Cluster hangs waiting for State Transfer lock
          gustavonalle

          More detail on this.

           

          * I'm using Infinispan 5.1.5.Final, Java 7, SLES 11.1

           

          * 2 of the 4 nodes enter an infinite loop trying to install and roll back the cache view; the view number keeps increasing constantly:

           

          10:12:54.133 [OOB-15,uk2y-gtsesp01-51111] DEBUG o.i.s.BaseStateTransferTask - Node uk2y-gtsesp01-51111 cancelled state transfer for view 1822 after 29 milliseconds!

          10:12:54.407 [OOB-2,uk2y-gtsesp01-51111] DEBUG o.i.s.BaseStateTransferTask - Node finished pushing data for cache views 1810.

          10:12:54.407 [OOB-25,uk2y-gtsesp01-51111] DEBUG o.i.s.BaseStateTransferTask - Node finished pushing data for cache views 1817.

          10:12:54.407 [OOB-9,uk2y-gtsesp01-51111] DEBUG o.i.s.BaseStateTransferTask - Node finished pushing data for cache views 1821.

          10:12:54.408 [OOB-9,uk2y-gtsesp01-51111] DEBUG o.i.cacheviews.CacheViewsManagerImpl - access: Rolling back to cache view 1809, new view id is 1811

          10:12:54.408 [OOB-9,uk2y-gtsesp01-51111] DEBUG o.i.s.BaseStateTransferTask - Node uk2y-gtsesp01-51111 cancelled state transfer for view 1810 after 304 milliseconds!

          10:12:54.408 [OOB-25,uk2y-gtsesp01-51111] DEBUG o.i.cacheviews.CacheViewsManagerImpl - userPermissionsCache: Rolling back to cache view 1816, new view id is 1818

          10:12:54.408 [OOB-25,uk2y-gtsesp01-51111] DEBUG o.i.s.BaseStateTransferTask - Node uk2y-gtsesp01-51111 cancelled state transfer for view 1817 after 304 milliseconds!

          10:12:54.409 [OOB-25,uk2y-gtsesp01-51111] DEBUG o.i.cacheviews.CacheViewsManagerImpl - c.t.e.s.s.service.permissions.userCache: Rolling back to cache view 1820, new view id is 1822

           

          * The other 2 nodes keep spitting the exception shown in the previous post ("One or more nodes have left the cluster while replicating command")

           

          * Although it's relatively easy to reproduce the problem with REPL_ASYNC, so far I cannot reproduce it with DIST_ASYNC

           

          Thanks,

          Gustavo

          • 2. Re: Cluster hangs waiting for State Transfer lock
            mircea.markus

             Can you reproduce this with TRACE logging enabled?

            • 3. Re: Cluster hangs waiting for State Transfer lock
              dan.berindei

              Gustavo, I didn't quite understand your comment about multiple nodes showing a replication error with the same sender - AFAIK this replication error is only logged on the sender.

               

               If you can post your TRACE logs here we could analyze them. If not, you should see which node the APPLY_STATE command was sent to, and check whether the JGroups view logged by JGroupsTransport at the time the command was sent contains that particular node.

               

               We have seen a similar problem with continuously incrementing cache view ids, but we have never gotten a TRACE log of it so far.

              • 4. Re: Cluster hangs waiting for State Transfer lock
                gustavonalle

                 @Dan, during the freeze, this is what I see:

                 

                 

                Server uk2x-rscnode02:

                java.util.concurrent.ExecutionException: org.infinispan.remoting.transport.jgroups.SuspectException: One or more nodes have left the cluster while replicating command StateTransferControlCommand{cache=c.t.e.s.s.service.permissions.serviceCache, type=APPLY_STATE, sender=uk2x-rscnode02-50849, viewId=4, state=0, locks=N/A}

                        at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:262) ~[na:1.7.0_03]

                        at java.util.concurrent.FutureTask.get(FutureTask.java:119) ~[na:1.7.0_03]

                        at org.infinispan.util.concurrent.AggregatingNotifyingFutureBuilder.get(AggregatingNotifyingFutureBuilder.java:93) ~[s-c-2012.1.1-SNAPSHOT.jar:na]

                        at org.infinispan.statetransfer.BaseStateTransferTask.finishPushingState(BaseStateTransferTask.java:139) ~[s-c-2012.1.1-SNAPSHOT.jar:na]

                        at org.infinispan.statetransfer.ReplicatedStateTransferTask.doPerformStateTransfer(ReplicatedStateTransferTask.java:116) ~[s-c-2012.1.1-SNAPSHOT.jar:na]

                 

                Server uk2x-rcsnode15:

                java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: org.infinispan.remoting.transport.jgroups.SuspectException: One or more nodes have left the cluster while replicating command StateTransferControlCommand{cache=c.t.e.s.s.service.permissions.serviceCache, type=APPLY_STATE, sender=uk2x-rscnode02-50849, viewId=4, state=0, locks=N/A}

                        at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:262) ~[na:1.7.0_03]

                        at java.util.concurrent.FutureTask.get(FutureTask.java:119) ~[na:1.7.0_03]

                        at org.infinispan.cacheviews.CacheViewsManagerImpl.clusterPrepareView(CacheViewsManagerImpl.java:318) ~[s-c-2012.1.1-SNAPSHOT.jar:na]

                        at org.infinispan.cacheviews.CacheViewsManagerImpl.clusterInstallView(CacheViewsManagerImpl.java:249) ~[s-c-2012.1.1-SNAPSHOT.jar:na]

                        at org.infinispan.cacheviews.CacheViewsManagerImpl$ViewInstallationTask.call(CacheViewsManagerImpl.java:875) ~[s-c-2012.1.1-SNAPSHOT.jar:na]

                 

                 And it keeps repeating, incrementing the cache view ids.

                 

                 @Mircea I'll prepare some traces to send to you.

                • 5. Re: Cluster hangs waiting for State Transfer lock
                  dan.berindei

                  @Gustavo, I see, the second stack trace is a bit different: node uk2x-rscnode02 sends the exception to the coordinator (uk2x-rcsnode15), and the coordinator fails the entire cache view installation because of it. At some point, though, it should receive a new JGroups view that doesn't contain the leaver and install a new cache view that won't trigger this SuspectException any more.

                   

                   Above the error in uk2x-rscnode02's log, you should see the node that's the target of the StateTransferControlCommand (something like dests=xxx). We need to find out if that node is still in the cluster according to uk2x-rscnode02, and according to uk2x-rcsnode15 (i.e. the "Received new cluster view" messages in their logs before the failure).
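
                   A throwaway scan like the one below can pull the relevant lines out of each node's log for comparison (illustrative only; the class name is made up and the patterns may need adjusting to your log format):

                   import java.io.BufferedReader;
                   import java.io.FileReader;
                   import java.io.IOException;

                   // Prints the cluster views each node saw plus the dests= lines of
                   // outgoing commands, so the two can be cross-checked by eye.
                   public class ViewScan {
                       public static void main(String[] args) throws IOException {
                           for (String log : args) {
                               System.out.println("== " + log);
                               BufferedReader in = new BufferedReader(new FileReader(log));
                               try {
                                   String line;
                                   while ((line = in.readLine()) != null) {
                                       if (line.contains("Received new cluster view")
                                               || line.contains("dests=")) {
                                           System.out.println(line);
                                       }
                                   }
                               } finally {
                                   in.close();
                               }
                           }
                       }
                   }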

                  • 6. Re: Cluster hangs waiting for State Transfer lock
                    gustavonalle

                    Please find the traces at https://dl.dropbox.com/u/36481334/trace.zip

                     

                     It is a 10-node cluster, REPL_ASYNC, Infinispan 5.1.5.Final. The first SuspectException occurs about 10s after starting. After 3 minutes of inactivity, I stopped two nodes of the cluster, and finally stopped the remaining nodes. Please let me know if you need any other information.

                     

                    Thanks,

                    Gustavo

                    • 7. Re: Cluster hangs waiting for State Transfer lock
                      gustavonalle

                       I had the same problem with DIST_ASYNC. It's much harder to reproduce, but when it happens the whole cluster gets stuck and several threads become blocked at this point:

                       

                      Thread 30221: (state = BLOCKED)

                      - java.lang.Object.wait(long) @bci=0 (Interpreted frame)

                      - org.infinispan.statetransfer.StateTransferLockImpl.acquireLockForWriteCommand(org.infinispan.context.InvocationContext) @bci=67, line=309 (Interpreted frame)

                      - org.infinispan.statetransfer.StateTransferLockImpl.acquireForCommand(org.infinispan.context.InvocationContext, org.infinispan.commands.write.WriteCommand) @bci=13, line=128 (Interpreted frame)

                      - org.infinispan.interceptors.StateTransferLockInterceptor.handleWriteCommand(org.infinispan.context.InvocationContext, org.infinispan.commands.write.WriteCommand) @bci=6, line=176 (Interpreted frame)

                      - org.infinispan.interceptors.StateTransferLockInterceptor.visitPutKeyValueCommand(org.infinispan.context.InvocationContext, org.infinispan.commands.write.PutKeyValueCommand) @bci=3, line=152 (Interpreted frame)

                      - org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(org.infinispan.context.InvocationContext, org.infinispan.commands.Visitor) @bci=3, line=77 (Compiled frame)

                      - org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(org.infinispan.context.InvocationContext, org.infinispan.commands.VisitableCommand) @bci=6, line=116 (Compiled frame)

                      - org.infinispan.interceptors.CacheMgmtInterceptor.visitPutKeyValueCommand(org.infinispan.context.InvocationContext, org.infinispan.commands.write.PutKeyValueCommand) @bci=7, line=125 (Interpreted frame)

                      - org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(org.infinispan.context.InvocationContext, org.infinispan.commands.Visitor) @bci=3, line=77 (Compiled frame)

                      - org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(org.infinispan.context.InvocationContext, org.infinispan.commands.VisitableCommand) @bci=6, line=116 (Compiled frame)

                      - org.infinispan.interceptors.InvocationContextInterceptor.handleAll(org.infinispan.context.InvocationContext, org.infinispan.commands.VisitableCommand) @bci=178, line=132 (Interpreted frame)

                      - org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(org.infinispan.context.InvocationContext, org.infinispan.commands.VisitableCommand) @bci=3, line=91 (Interpreted frame)

                      - org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(org.infinispan.context.InvocationContext, org.infinispan.commands.write.PutKeyValueCommand) @bci=3, line=62 (Compiled frame)

                      - org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(org.infinispan.context.InvocationContext, org.infinispan.commands.Visitor) @bci=3, line=77 (Compiled frame)

                      - org.infinispan.interceptors.InterceptorChain.invoke(org.infinispan.context.InvocationContext, org.infinispan.commands.VisitableCommand) @bci=6, line=345 (Interpreted frame)

                      - org.infinispan.CacheImpl.executeCommandAndCommitIfNeeded(org.infinispan.context.InvocationContext, org.infinispan.commands.VisitableCommand) @bci=33, line=1006 (Interpreted frame)

                      - org.infinispan.CacheImpl.put(java.lang.Object, java.lang.Object, long, java.util.concurrent.TimeUnit, long, java.util.concurrent.TimeUnit, java.util.EnumSet, java.lang.ClassLoader) @bci=55, line=702 (Interpreted frame)

                      - org.infinispan.CacheImpl.put(java.lang.Object, java.lang.Object, long, java.util.concurrent.TimeUnit, long, java.util.concurrent.TimeUnit) @bci=12, line=694 (Interpreted frame)

                      - org.infinispan.CacheSupport.put(java.lang.Object, java.lang.Object, long, java.util.concurrent.TimeUnit) @bci=13, line=128 (Interpreted frame)

                      - c.t.e.s.s.s.p.o.ODCServiceCacheImpl.handleSubscription(c.t.e.s.o.SubscriptionListResponse, java.lang.String) @bci=17, line=168 (Interpreted frame)

                       

                      Thanks,

                      Gustavo

                      • 8. Re: Cluster hangs waiting for State Transfer lock
                        gustavonalle

                        More info about the DIST_ASYNC hang.

                         

                         I've seen in the logs that there was a period (approximately 1h) where the cluster suffered several network partitions and subsequent merges.

                         

                        But the aftermath is that the cluster never recovered from that.

                         

                         One node was throwing the exception below for days:

                         

                        14:46:05.226 [AsyncHttpClient-Callback] ERROR o.i.i.InvocationContextInterceptor - ISPN000136: Execution error

                        org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 27

                                at org.infinispan.interceptors.StateTransferLockInterceptor.signalStateTransferInProgress(StateTransferLockInterceptor.java:201) ~[synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.interceptors.StateTransferLockInterceptor.handleWriteCommand(StateTransferLockInterceptor.java:177) ~[synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.interceptors.StateTransferLockInterceptor.visitPutKeyValueCommand(StateTransferLockInterceptor.java:152) ~[synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77) ~[synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:116) ~[synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.interceptors.CacheMgmtInterceptor.visitPutKeyValueCommand(CacheMgmtInterceptor.java:125) ~[synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77) ~[synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:116) ~[synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:132) ~[synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:91) ~[synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:62) ~[synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77) ~[synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:345) [synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.CacheImpl.executeCommandAndCommitIfNeeded(CacheImpl.java:1006) [synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.CacheImpl.put(CacheImpl.java:702) [synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.CacheImpl.put(CacheImpl.java:694) [synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.CacheSupport.put(CacheSupport.java:128) [synapse-core-1.0-rc1.jar:na]

                         

                        In other nodes, the same exception, but with a different view id:

                         

                        14:48:01.021 [AsyncHttpClient-Callback] ERROR o.i.i.InvocationContextInterceptor - ISPN000136: Execution error

                        org.infinispan.statetransfer.StateTransferInProgressException: Timed out waiting for the state transfer lock, state transfer in progress for view 9

                                at org.infinispan.interceptors.StateTransferLockInterceptor.signalStateTransferInProgress(StateTransferLockInterceptor.java:201) ~[synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.interceptors.StateTransferLockInterceptor.handleWriteCommand(StateTransferLockInterceptor.java:177) ~[synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.interceptors.StateTransferLockInterceptor.visitPutKeyValueCommand(StateTransferLockInterceptor.java:152) ~[synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77) ~[synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:116) ~[synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.interceptors.CacheMgmtInterceptor.visitPutKeyValueCommand(CacheMgmtInterceptor.java:125) ~[synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77) ~[synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:116) ~[synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:132) ~[synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:91) ~[synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:62) ~[synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77) ~[synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:345) [synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.CacheImpl.executeCommandAndCommitIfNeeded(CacheImpl.java:1006) [synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.CacheImpl.put(CacheImpl.java:702) [synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.CacheImpl.put(CacheImpl.java:694) [synapse-core-1.0-rc1.jar:na]

                                at org.infinispan.CacheSupport.put(CacheSupport.java:128) [synapse-core-1.0-rc1.jar:na]

                            

                         

                         

                         The biggest drawback is that if a thread from our web server for some reason decides to do a put on the cache, it blocks at the point shown in my previous post. Since we are dealing with a very low-latency system, blocking of any kind is not welcome.
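
                         In the meantime, we are considering bounding how long a web thread can wait, by doing the put through putAsync and giving up on the Future after a short timeout (a sketch; the helper name and the 50 ms budget are made up):

                         import java.util.concurrent.ExecutionException;
                         import java.util.concurrent.TimeUnit;
                         import java.util.concurrent.TimeoutException;

                         import org.infinispan.Cache;
                         import org.infinispan.util.concurrent.NotifyingFuture;

                         public final class BoundedPut {
                             // Hypothetical helper: never park a web thread on the state transfer lock.
                             public static void put(Cache<String, Object> cache, String key, Object value) {
                                 // putAsync hands the write to Infinispan's async executor, so the
                                 // caller is not the thread that waits in acquireLockForWriteCommand.
                                 NotifyingFuture<Object> future = cache.putAsync(key, value);
                                 try {
                                     future.get(50, TimeUnit.MILLISECONDS); // bounded wait
                                 } catch (TimeoutException e) {
                                     // Give up waiting; the put still completes (or fails) in the background.
                                 } catch (ExecutionException e) {
                                     // Replication failed; log and decide whether to retry.
                                 } catch (InterruptedException e) {
                                     Thread.currentThread().interrupt();
                                 }
                             }
                         }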

                         

                         Any suggestions (maybe configuration changes) for avoiding contention during state transfer? For example, would something like the sketch below be a sensible direction?
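
                         (A sketch against the 5.1 fluent API; whether fetchInMemoryState(false) actually sidesteps the state transfer lock, rather than just shrinking its window, is an assumption on our part.)

                         import org.infinispan.configuration.cache.CacheMode;
                         import org.infinispan.configuration.cache.Configuration;
                         import org.infinispan.configuration.cache.ConfigurationBuilder;

                         // Sketch: skip pushing in-memory state to joiners, so new nodes start
                         // empty and warm up lazily instead of taking part in state transfer.
                         Configuration config = new ConfigurationBuilder()
                               .clustering().cacheMode(CacheMode.DIST_ASYNC)
                               .stateTransfer().fetchInMemoryState(false)
                               .build();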