15 Replies · Latest reply on Nov 11, 2011 8:59 AM by galder.zamarreno

    Cannot initialize or sync Infinispan/hotrod cluster

    fealves78

      I am spinning up 2 Infinispan/Hot Rod servers on the same machine, but both of them crash right after (or during) the initialization process. The servers are started as follows:

       

      Server #1: ubuntu@ip-10-81-0-54:~/dev/infinispan-5.1.0.ALPHA2/bin$ ./startServer.sh -r hotrod -c configa.xml -Dlog4j.configuration=../etc/log4j.xml

      Server #2: ubuntu@ip-10-81-0-54:~/dev/infinispan-5.1.0.ALPHA2/bin$ ./startServer.sh -r hotrod -c configb.xml -Dlog4j.configuration=../etc/log4j.xml

       

      Then I get the following outputs:

       

      Server #1:

       

      ubuntu@ip-10-81-0-54:~/dev/infinispan-5.1.0.ALPHA2/bin$ ./startServer.sh -r hotrod -c configa.xml -Dlog4j.configuration=../etc/log4j.xml

      -------------------------------------------------------------------
      GMS: address=ip-10-81-0-54-42407, cluster=testcluster, physical address=10.81.0.54:7900
      -------------------------------------------------------------------
      2011-10-07 23:14:39,881 WARN  [DefaultCacheManager] (InfinispanServer-Main) ISPN000156: You are not starting all your caches at the same time. This can lead to problems as asymmetric clusters are not supported, see ISPN-658. We recommend using EmbeddedCacheManager.startCaches() to start all your caches upfront.
      2011-10-07 23:14:39,893 WARN  [DefaultCacheManager] (InfinispanServer-Main) ISPN000156: You are not starting all your caches at the same time. This can lead to problems as asymmetric clusters are not supported, see ISPN-658. We recommend using EmbeddedCacheManager.startCaches() to start all your caches upfront.
      2011-10-07 23:14:50,116 ERROR [InvocationContextInterceptor] (InfinispanServer-Main) ISPN000136: Execution error
      org.infinispan.util.concurrent.TimeoutException: Replication timeout for ip-10-81-0-54-7296
              at org.infinispan.remoting.transport.AbstractTransport.parseResponseAndAddToResponseList(AbstractTransport.java:71)
              at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:413)
              at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:131)
              at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:155)
              at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:200)
              at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:187)
              at org.infinispan.remoting.rpc.RpcManagerImpl.broadcastRpcCommand(RpcManagerImpl.java:170)
              at org.infinispan.remoting.rpc.RpcManagerImpl.broadcastRpcCommand(RpcManagerImpl.java:163)
              at org.infinispan.interceptors.ReplicationInterceptor.handleCrudMethod(ReplicationInterceptor.java:116)
              at org.infinispan.interceptors.ReplicationInterceptor.visitPutKeyValueCommand(ReplicationInterceptor.java:79)
              at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
              at org.infinispan.interceptors.LockingInterceptor.visitPutKeyValueCommand(LockingInterceptor.java:295)
              at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
              at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:133)
              at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:60)
              at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
              at org.infinispan.interceptors.TxInterceptor.enlistWriteAndInvokeNext(TxInterceptor.java:214)
              at org.infinispan.interceptors.TxInterceptor.visitPutKeyValueCommand(TxInterceptor.java:162)
              at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
              at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:133)
              at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:60)
              at org.infinispan.interceptors.StateTransferLockInterceptor.visitPutKeyValueCommand(StateTransferLockInterceptor.java:110)
              at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
              at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:104)
              at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:64)
              at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:60)
              at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
              at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:318)
              at org.infinispan.CacheImpl.putIfAbsent(CacheImpl.java:533)
              at org.infinispan.CacheSupport.putIfAbsent(CacheSupport.java:74)
              at org.infinispan.server.hotrod.HotRodServer$$anonfun$1$$anonfun$3.apply(HotRodServer.scala:123)
              at org.infinispan.server.hotrod.HotRodServer$$anonfun$1$$anonfun$3.apply(HotRodServer.scala:123)
              at org.infinispan.server.hotrod.HotRodServer.org$infinispan$server$hotrod$HotRodServer$$updateTopologyCacheEntry(HotRodServer.scala:137)
              at org.infinispan.server.hotrod.HotRodServer$$anonfun$1.apply(HotRodServer.scala:122)
              at org.infinispan.server.hotrod.HotRodServer$$anonfun$1.apply(HotRodServer.scala:110)
              at org.infinispan.server.hotrod.HotRodServer.isViewUpdated(HotRodServer.scala:212)
              at org.infinispan.server.hotrod.HotRodServer.org$infinispan$server$hotrod$HotRodServer$$updateTopologyView(HotRodServer.scala:207)
              at org.infinispan.server.hotrod.HotRodServer.addSelfToTopologyView(HotRodServer.scala:110)
              at org.infinispan.server.hotrod.HotRodServer.startTransport(HotRodServer.scala:98)
              at org.infinispan.server.core.AbstractProtocolServer.start(AbstractProtocolServer.scala:99)
              at org.infinispan.server.hotrod.HotRodServer.start(HotRodServer.scala:79)
              at org.infinispan.server.core.Main$.boot(Main.scala:140)
              at org.infinispan.server.core.Main$$anon$1.call(Main.scala:94)
              at org.infinispan.server.core.Main$$anon$1.call(Main.scala:91)
              at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
              at java.util.concurrent.FutureTask.run(FutureTask.java:138)
              at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
              at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
              at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
              at java.lang.Thread.run(Thread.java:662)
      2011-10-07 23:14:50,188 ERROR [InvocationContextInterceptor] (OOB-1,testcluster,ip-10-81-0-54-42407) ISPN000136: Execution error
      org.infinispan.distribution.RehashInProgressException: Timed out waiting for the transaction lock
              at org.infinispan.interceptors.StateTransferLockInterceptor.visitPutKeyValueCommand(StateTransferLockInterceptor.java:107)
              at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
              at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:104)
              at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:64)
              at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:60)
              at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
              at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:318)
              at org.infinispan.commands.remote.BaseRpcInvokingCommand.processVisitableCommand(BaseRpcInvokingCommand.java:68)
              at org.infinispan.commands.remote.SingleRpcCommand.perform(SingleRpcCommand.java:66)
              at org.infinispan.remoting.InboundInvocationHandlerImpl.handleInternal(InboundInvocationHandlerImpl.java:173)
              at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithWaitForBlocks(InboundInvocationHandlerImpl.java:181)
              at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithRetry(InboundInvocationHandlerImpl.java:265)
              at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:159)
              at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:160)
              at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:139)
              at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:447)
              at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:354)
              at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:230)
              at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:556)
              at org.jgroups.JChannel.up(JChannel.java:720)
              at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1026)
              at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
              at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
              at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
              at org.jgroups.protocols.pbcast.GMS.up(GMS.java:870)
              at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:245)
              at org.jgroups.protocols.UNICAST.up(UNICAST.java:298)
              at org.jgroups.protocols.pbcast.NAKACK.handleMessage(NAKACK.java:764)
              at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:626)
              at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:132)
              at org.jgroups.protocols.FD.up(FD.java:270)
              at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:273)
              at org.jgroups.protocols.MERGE2.up(MERGE2.java:208)
              at org.jgroups.protocols.Discovery.up(Discovery.java:335)
              at org.jgroups.protocols.TP.passMessageUp(TP.java:1093)
              at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1649)
              at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1631)
              at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
              at java.lang.Thread.run(Thread.java:662)
      ^CException in thread "ShutdownHookThread" java.lang.RuntimeException: Exception encountered in shutting down the server
              at org.infinispan.server.core.ShutdownHook.run(Main.scala:350)
      Caused by: java.util.concurrent.ExecutionException: java.lang.NullPointerException
              at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
              at java.util.concurrent.FutureTask.get(FutureTask.java:83)
              at org.infinispan.server.core.ShutdownHook.run(Main.scala:346)
      Caused by: java.lang.NullPointerException
              at org.infinispan.jmx.JmxUtil.unregisterMBean(JmxUtil.java:109)
              at org.infinispan.server.core.AbstractProtocolServer.stop(AbstractProtocolServer.scala:141)
              at org.infinispan.server.hotrod.HotRodServer.stop(HotRodServer.scala:225)
              at org.infinispan.server.core.ShutdownHook$$anon$5.call(Main.scala:340)
              at org.infinispan.server.core.ShutdownHook$$anon$5.call(Main.scala:337)
              at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
              at java.util.concurrent.FutureTask.run(FutureTask.java:138)
              at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
              at java.lang.Thread.run(Thread.java:662)

      Server #2:

       

      ubuntu@ip-10-81-0-54:~/dev/infinispan-5.1.0.ALPHA2/bin$ ./startServer.sh -r hotrod -c configb.xml -Dlog4j.configuration=../etc/log4j.xml

      -------------------------------------------------------------------
      GMS: address=ip-10-81-0-54-7296, cluster=testcluster, physical address=10.81.0.54:7920
      -------------------------------------------------------------------
      2011-10-07 23:14:40,377 WARN  [DefaultCacheManager] (InfinispanServer-Main) ISPN000156: You are not starting all your caches at the same time. This can lead to problems as asymmetric clusters are not supported, see ISPN-658. We recommend using EmbeddedCacheManager.startCaches() to start all your caches upfront.
      2011-10-07 23:14:40,407 WARN  [DefaultCacheManager] (InfinispanServer-Main) ISPN000156: You are not starting all your caches at the same time. This can lead to problems as asymmetric clusters are not supported, see ISPN-658. We recommend using EmbeddedCacheManager.startCaches() to start all your caches upfront.
      2011-10-07 23:14:50,214 ERROR [InvocationContextInterceptor] (InfinispanServer-Main) ISPN000136: Execution error
      org.infinispan.distribution.RehashInProgressException: Timed out waiting for the transaction lock
              at org.infinispan.interceptors.StateTransferLockInterceptor.visitPutKeyValueCommand(StateTransferLockInterceptor.java:107)
              at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
              at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:104)
              at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:64)
              at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:60)
              at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
              at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:318)
              at org.infinispan.commands.remote.BaseRpcInvokingCommand.processVisitableCommand(BaseRpcInvokingCommand.java:68)
              at org.infinispan.commands.remote.SingleRpcCommand.perform(SingleRpcCommand.java:66)
              at org.infinispan.remoting.InboundInvocationHandlerImpl.handleInternal(InboundInvocationHandlerImpl.java:173)
              at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithWaitForBlocks(InboundInvocationHandlerImpl.java:181)
              at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithRetry(InboundInvocationHandlerImpl.java:265)
              at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:159)
              at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:160)
              at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:139)
              at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:447)
              at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:354)
              at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:230)
              at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:556)
              at org.jgroups.JChannel.up(JChannel.java:720)
              at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1026)
              at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
              at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
              at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
              at org.jgroups.protocols.pbcast.GMS.up(GMS.java:870)
              at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:245)
              at org.jgroups.protocols.UNICAST.up(UNICAST.java:298)
              at org.jgroups.protocols.pbcast.NAKACK.handleMessage(NAKACK.java:764)
              at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:626)
              at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:132)
              at org.jgroups.protocols.FD.up(FD.java:270)
              at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:273)
              at org.jgroups.protocols.MERGE2.up(MERGE2.java:208)
              at org.jgroups.protocols.Discovery.up(Discovery.java:335)
              at org.jgroups.protocols.TP.passMessageUp(TP.java:1093)
              at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1649)
              at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1631)
              at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
              at java.lang.Thread.run(Thread.java:662)
      Failed to boot JBoss:
      org.infinispan.distribution.RehashInProgressException: Timed out waiting for the transaction lock
              at org.infinispan.interceptors.StateTransferLockInterceptor.visitPutKeyValueCommand(StateTransferLockInterceptor.java:107)
              at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
              at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:104)
              at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:64)
              at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:60)
              at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
              at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:318)
              at org.infinispan.commands.remote.BaseRpcInvokingCommand.processVisitableCommand(BaseRpcInvokingCommand.java:68)
              at org.infinispan.commands.remote.SingleRpcCommand.perform(SingleRpcCommand.java:66)
              at org.infinispan.remoting.InboundInvocationHandlerImpl.handleInternal(InboundInvocationHandlerImpl.java:173)
              at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithWaitForBlocks(InboundInvocationHandlerImpl.java:181)
              at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithRetry(InboundInvocationHandlerImpl.java:265)
              at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:159)
              at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:160)
              at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:139)
              at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:447)
              at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:354)
              at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:230)
              at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:556)
              at org.jgroups.JChannel.up(JChannel.java:720)
              at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1026)
              at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
              at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
              at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
              at org.jgroups.protocols.pbcast.GMS.up(GMS.java:870)
              at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:245)
              at org.jgroups.protocols.UNICAST.up(UNICAST.java:298)
              at org.jgroups.protocols.pbcast.NAKACK.handleMessage(NAKACK.java:764)
              at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:626)
              at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:132)
              at org.jgroups.protocols.FD.up(FD.java:270)
              at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:273)
              at org.jgroups.protocols.MERGE2.up(MERGE2.java:208)
              at org.jgroups.protocols.Discovery.up(Discovery.java:335)
              at org.jgroups.protocols.TP.passMessageUp(TP.java:1093)
              at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1649)
              at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1631)
              at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
              at java.lang.Thread.run(Thread.java:662)
      Exception in thread "main" java.util.concurrent.ExecutionException: org.infinispan.distribution.RehashInProgressException: Timed out waiting for the transaction lock
              at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
              at java.util.concurrent.FutureTask.get(FutureTask.java:83)
              at org.infinispan.server.core.Main$.main(Main.scala:112)
              at org.infinispan.server.core.Main.main(Main.scala)
      Caused by: org.infinispan.distribution.RehashInProgressException: Timed out waiting for the transaction lock
              at org.infinispan.interceptors.StateTransferLockInterceptor.visitPutKeyValueCommand(StateTransferLockInterceptor.java:107)
              at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
              at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:104)
              at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:64)
              at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:60)
              at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
              at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:318)
              at org.infinispan.commands.remote.BaseRpcInvokingCommand.processVisitableCommand(BaseRpcInvokingCommand.java:68)
              at org.infinispan.commands.remote.SingleRpcCommand.perform(SingleRpcCommand.java:66)
              at org.infinispan.remoting.InboundInvocationHandlerImpl.handleInternal(InboundInvocationHandlerImpl.java:173)
              at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithWaitForBlocks(InboundInvocationHandlerImpl.java:181)
              at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithRetry(InboundInvocationHandlerImpl.java:265)
              at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:159)
              at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:160)
              at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:139)
              at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:447)
              at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:354)
              at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:230)
              at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:556)
              at org.jgroups.JChannel.up(JChannel.java:720)
              at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1026)
              at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
              at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
              at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
              at org.jgroups.protocols.pbcast.GMS.up(GMS.java:870)
              at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:245)
              at org.jgroups.protocols.UNICAST.up(UNICAST.java:298)
              at org.jgroups.protocols.pbcast.NAKACK.handleMessage(NAKACK.java:764)
              at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:626)
              at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:132)
              at org.jgroups.protocols.FD.up(FD.java:270)
              at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:273)
              at org.jgroups.protocols.MERGE2.up(MERGE2.java:208)
              at org.jgroups.protocols.Discovery.up(Discovery.java:335)
              at org.jgroups.protocols.TP.passMessageUp(TP.java:1093)
              at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1649)
              at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1631)
              at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
              at java.lang.Thread.run(Thread.java:662)
      ^CException in thread "ShutdownHookThread" java.lang.RuntimeException: Exception encountered in shutting down the server
              at org.infinispan.server.core.ShutdownHook.run(Main.scala:350)
      Caused by: java.util.concurrent.ExecutionException: java.lang.NullPointerException
              at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
              at java.util.concurrent.FutureTask.get(FutureTask.java:83)
              at org.infinispan.server.core.ShutdownHook.run(Main.scala:346)
      Caused by: java.lang.NullPointerException
              at org.infinispan.jmx.JmxUtil.unregisterMBean(JmxUtil.java:109)
              at org.infinispan.server.core.AbstractProtocolServer.stop(AbstractProtocolServer.scala:141)
              at org.infinispan.server.hotrod.HotRodServer.stop(HotRodServer.scala:225)
              at org.infinispan.server.core.ShutdownHook$$anon$5.call(Main.scala:340)
              at org.infinispan.server.core.ShutdownHook$$anon$5.call(Main.scala:337)
              at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
              at java.util.concurrent.FutureTask.run(FutureTask.java:138)
              at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
              at java.lang.Thread.run(Thread.java:662)

       

      I wonder what I am doing wrong. Any ideas? 

       

      Thank you in advance for your help.

       

      Fealves78

        • 1. Re: Cannot initialize or sync Infinispan/hotrod cluster
          galder.zamarreno

          I think you're encountering a state transfer bug that was introduced in ALPHA2 with the state transfer refactoring. This is solved in 5.1.0.BETA1. Could you try again with this version?

          • 2. Re: Cannot initialize or sync Infinispan/hotrod cluster
            fealves78

            Galder,

             

            First of all, thank you for your prompt reply.

             

            I have downloaded Infinispan 5.1.0.BETA1 and re-ran the cache instances with the same configuration files as before. Now, instead of getting errors in both instances, I am only getting errors on the second instance when it starts. The first instance keeps running and no errors are reported. Following is the output of each instance:

             

            Server #1:

             

            ubuntu@ip-10-81-0-54:~/dev/infinispan-5.1.0.BETA1/bin$ ./startServer.sh -r hotrod -c configa.xml -Dlog4j.configuration=../etc/log4j.xml

            -------------------------------------------------------------------
            GMS: address=ip-10-81-0-54-13172, cluster=testcluster, physical address=10.81.0.54:7900
            -------------------------------------------------------------------
            2011-10-10 16:55:38,683 WARN  [DefaultCacheManager] (InfinispanServer-Main) ISPN000156: You are not starting all your caches at the same time. This can lead to problems as asymmetric clusters are not supported, see ISPN-658. We recommend using EmbeddedCacheManager.startCaches() to start all your caches upfront.
            2011-10-10 16:55:38,754 WARN  [DefaultCacheManager] (InfinispanServer-Main) ISPN000156: You are not starting all your caches at the same time. This can lead to problems as asymmetric clusters are not supported, see ISPN-658. We recommend using EmbeddedCacheManager.startCaches() to start all your caches upfront.

             

             

            Server #2:

             

            ubuntu@ip-10-81-0-54:~/dev/infinispan-5.1.0.BETA1/bin$ ./startServer.sh -r hotrod -c configb.xml -Dlog4j.configuration=../etc/log4j.xml

            -------------------------------------------------------------------
            GMS: address=ip-10-81-0-54-11602, cluster=testcluster, physical address=10.81.0.54:7950
            -------------------------------------------------------------------
            2011-10-10 17:09:10,882 WARN  [DefaultCacheManager] (InfinispanServer-Main) ISPN000156: You are not starting all your caches at the same time. This can lead to problems as asymmetric clusters are not supported, see ISPN-658. We recommend using EmbeddedCacheManager.startCaches() to start all your caches upfront.
            2011-10-10 17:09:10,916 WARN  [DefaultCacheManager] (InfinispanServer-Main) ISPN000156: You are not starting all your caches at the same time. This can lead to problems as asymmetric clusters are not supported, see ISPN-658. We recommend using EmbeddedCacheManager.startCaches() to start all your caches upfront.
            Failed to boot JBoss:
            org.jboss.netty.channel.ChannelException: Failed to bind to: /127.0.0.1:11222
                    at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:303)
                    at org.infinispan.server.core.transport.NettyTransport.start(NettyTransport.scala:95)
                    at org.infinispan.server.core.AbstractProtocolServer.startTransport(AbstractProtocolServer.scala:121)
                    at org.infinispan.server.hotrod.HotRodServer.startTransport(HotRodServer.scala:101)
                    at org.infinispan.server.core.AbstractProtocolServer.start(AbstractProtocolServer.scala:99)
                    at org.infinispan.server.hotrod.HotRodServer.start(HotRodServer.scala:79)
                    at org.infinispan.server.core.Main$.boot(Main.scala:140)
                    at org.infinispan.server.core.Main$$anon$1.call(Main.scala:94)
                    at org.infinispan.server.core.Main$$anon$1.call(Main.scala:91)
                    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
                    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
                    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
                    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
                    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
                    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
                    at java.lang.Thread.run(Thread.java:662)
            Caused by: java.net.BindException: Address already in use
                    at sun.nio.ch.Net.bind(Native Method)
                    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
                    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
                    at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.bind(NioServerSocketPipelineSink.java:148)
                    at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.handleServerSocket(NioServerSocketPipelineSink.java:100)
                    at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:74)
                    at org.jboss.netty.channel.Channels.bind(Channels.java:468)
                    at org.jboss.netty.channel.AbstractChannel.bind(AbstractChannel.java:200)
                    at org.jboss.netty.bootstrap.ServerBootstrap$Binder.channelOpen(ServerBootstrap.java:348)
                    at org.jboss.netty.channel.Channels.fireChannelOpen(Channels.java:176)
                    at org.jboss.netty.channel.socket.nio.NioServerSocketChannel.<init>(NioServerSocketChannel.java:85)
                    at org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory.newChannel(NioServerSocketChannelFactory.java:142)
                    at org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory.newChannel(NioServerSocketChannelFactory.java:90)
                    at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:282)
                    ... 15 more
            Exception in thread "main" java.util.concurrent.ExecutionException: org.jboss.netty.channel.ChannelException: Failed to bind to: /127.0.0.1:11222
                    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
                    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
                    at org.infinispan.server.core.Main$.main(Main.scala:112)
                    at org.infinispan.server.core.Main.main(Main.scala)
            Caused by: org.jboss.netty.channel.ChannelException: Failed to bind to: /127.0.0.1:11222
                    at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:303)
                    at org.infinispan.server.core.transport.NettyTransport.start(NettyTransport.scala:95)
                    at org.infinispan.server.core.AbstractProtocolServer.startTransport(AbstractProtocolServer.scala:121)
                    at org.infinispan.server.hotrod.HotRodServer.startTransport(HotRodServer.scala:101)
                    at org.infinispan.server.core.AbstractProtocolServer.start(AbstractProtocolServer.scala:99)
                    at org.infinispan.server.hotrod.HotRodServer.start(HotRodServer.scala:79)
                    at org.infinispan.server.core.Main$.boot(Main.scala:140)
                    at org.infinispan.server.core.Main$$anon$1.call(Main.scala:94)
                    at org.infinispan.server.core.Main$$anon$1.call(Main.scala:91)
                    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
                    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
                    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
                    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
                    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
                    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
                    at java.lang.Thread.run(Thread.java:662)
            Caused by: java.net.BindException: Address already in use
                    at sun.nio.ch.Net.bind(Native Method)
                    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
                    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
                    at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.bind(NioServerSocketPipelineSink.java:148)
                    at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.handleServerSocket(NioServerSocketPipelineSink.java:100)
                    at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:74)
                    at org.jboss.netty.channel.Channels.bind(Channels.java:468)
                    at org.jboss.netty.channel.AbstractChannel.bind(AbstractChannel.java:200)
                    at org.jboss.netty.bootstrap.ServerBootstrap$Binder.channelOpen(ServerBootstrap.java:348)
                    at org.jboss.netty.channel.Channels.fireChannelOpen(Channels.java:176)
                    at org.jboss.netty.channel.socket.nio.NioServerSocketChannel.<init>(NioServerSocketChannel.java:85)
                    at org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory.newChannel(NioServerSocketChannelFactory.java:142)
                    at org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory.newChannel(NioServerSocketChannelFactory.java:90)
                    at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:282)
                    ... 15 more

             

             

            Also, I would like to know why I get these messages from the hotrod server:

             

            2011-10-10 16:55:38,683 WARN  [DefaultCacheManager] (InfinispanServer-Main) ISPN000156: You are not starting all your caches at the same time. This can lead to problems as asymmetric clusters are not supported, see ISPN-658. We recommend using EmbeddedCacheManager.startCaches() to start all your caches upfront.
            2011-10-10 16:55:38,754 WARN  [DefaultCacheManager] (InfinispanServer-Main) ISPN000156: You are not starting all your caches at the same time. This can lead to problems as asymmetric clusters are not supported, see ISPN-658. We recommend using EmbeddedCacheManager.startCaches() to start all your caches upfront.

             

            Wasn't the Hot Rod server responsible for initializing the caches described in configa.xml and configb.xml? How do I avoid these messages?
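            (For context, the ISPN000156 warning refers to the embedded API. What it suggests would look roughly like the sketch below if you were bootstrapping the cache manager yourself in Java rather than through startServer.sh; the cache name "test" is taken from the configuration used in this thread, and any other named caches would be listed the same way. This is only an illustration, not something the stock server script exposes.)

            import org.infinispan.manager.DefaultCacheManager;
            import org.infinispan.manager.EmbeddedCacheManager;

            public class StartCachesUpfront {
               public static void main(String[] args) throws Exception {
                  // Build the cache manager from the same declarative config used on the command line
                  EmbeddedCacheManager cm = new DefaultCacheManager("configa.xml");
                  // Start all named caches in one call, which is what ISPN000156 recommends
                  // (list every cache defined in the configuration here)
                  cm.startCaches("test");
                  // ... use the caches ...
                  cm.stop();
               }
            }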

             

            Please let me know.

             

            Fealves78

            • 3. Re: Cannot initialize or sync Infinispan/hotrod cluster
              sannegrinovero

              Hi,

              quoting your stacktrace:

              "org.jboss.netty.channel.ChannelException: Failed to bind to: /127.0.0.1:11222"

               

              So you're starting two servers on the same node, and both are attempting to bind to the same port. You either have to run the second server on a different node or change the port.

              • 4. Re: Cannot initialize or sync Infinispan/hotrod cluster
                fealves78

                Hello Sanne,

                 

                I thought I had done that in the configuration files tcpa.xml and tcpb.xml. If you look into these files, you will see that Server #1 is supposed to be running on 10.81.0.54:7900 and Server #2 on 10.81.0.54:7920. Isn't that correct? If not, how do I do that declaratively? Or do I always need to pass the host and port arguments on the command line for the Hot Rod server?

                 

                Please let me know.

                 

                Thank you in advance for your help.

                 

                Fealves78

                 

                • 5. Re: Cannot initialize or sync Infinispan/hotrod cluster
                  galder.zamarreno

                  There are two sets of communication paths being established here.

                   

                  1. How your client connects to Infinispan Hot Rod servers (client connectivity)

                  2. How Hot Rod servers connect with each other (clustering connectivity)

                   

                  tcp*.xml files control clustering connectivity.

                   

                  You still need to pass the --host and --port parameters rather than relying on the defaults, but do not use port 7900! That's the clustering port. So, start one of them with:

                   

                  --host 10.81.0.54 --port 11222

                   

                  and the other with:

                   

                  --host 10.81.0.54 --port 11223

                   

                  That's of course assuming you want clustering and inbound invocations to go through the same network. You might want to change that at some point and have a dedicated clustering network, separate from the network over which client communications arrive.

                  • 6. Re: Cannot initialize or sync Infinispan/hotrod cluster
                    fealves78

                    Galder,

                    So, if ports 7900 (Server #1 - clustering) and 7920 (Server #2 - clustering) are what I have set up in the tcp*.xml files, and let's assume I will be using ports 11222 (Server #1 - Hot Rod) and 11223 (Server #2 - Hot Rod) for the Hot Rod servers, is there a way to set up the Hot Rod server configuration (host, port, etc…) declaratively?

                    Also, let's say that I have both servers up and running and 3 remote clients connected to the first server (10.81.0.54:11222). If the connection to that server goes down, how will the clients know to switch to the next server (10.81.0.54:11223)? Is this something I should implement the logic for, or is there a way to configure the clients with a list of available servers?

                    Please let me know.

                    Thank you for all your help.

                     

                    • 7. Re: Cannot initialize or sync Infinispan/hotrod cluster
                      galder.zamarreno

                      Re: Q1 - Nope, unless you start the Hot Rod servers programmatically yourself.
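                      (A rough illustration of what "programmatically" could look like: embedding the Hot Rod server in your own Java code. This is a sketch only; it assumes the 5.1-era HotRodServer.start(Properties, EmbeddedCacheManager) signature and the infinispan.server.host / infinispan.server.port property keys that back the --host/--port options, so verify both against the version you are running.)

                      import java.util.Properties;
                      import org.infinispan.manager.DefaultCacheManager;
                      import org.infinispan.manager.EmbeddedCacheManager;
                      import org.infinispan.server.hotrod.HotRodServer;

                      public class EmbeddedHotRodServer {
                         public static void main(String[] args) throws Exception {
                            // Same declarative config as passed to startServer.sh with -c
                            final EmbeddedCacheManager cacheManager = new DefaultCacheManager("configa.xml");

                            // Assumed property keys mirroring the --host/--port CLI options
                            Properties props = new Properties();
                            props.setProperty("infinispan.server.host", "10.81.0.54");
                            props.setProperty("infinispan.server.port", "11222");

                            final HotRodServer server = new HotRodServer();
                            server.start(props, cacheManager); // assumed 5.1-era signature

                            Runtime.getRuntime().addShutdownHook(new Thread() {
                               @Override
                               public void run() {
                                  server.stop();
                                  cacheManager.stop();
                               }
                            });
                         }
                      }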

                       

                      Re: Q2 - Nothing to do. Once the clients connect to the 1st server, they should find out about any other servers in the cluster, and so should be able to load balance and fail over accordingly. That's why we came up with our own custom binary protocol, in order to support these scenarios. A detailed view of the protocol and the rationale behind it can be found at: http://www.slideshare.net/galderz/infinispan-servers-beyond-peertopeer-data-grids
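                      (To make that concrete, a minimal Java client sketch: hand the Hot Rod client an initial server list via the infinispan.client.hotrod.server_list property and it picks up the rest of the cluster topology on its own. The host/port values and the cache name "test" are taken from earlier in this thread.)

                      import java.util.Properties;
                      import org.infinispan.client.hotrod.RemoteCache;
                      import org.infinispan.client.hotrod.RemoteCacheManager;

                      public class HotRodClientExample {
                         public static void main(String[] args) {
                            Properties props = new Properties();
                            // Initial contact points; topology updates from the servers
                            // let the client load balance and fail over from here on
                            props.setProperty("infinispan.client.hotrod.server_list",
                                  "10.81.0.54:11222;10.81.0.54:11223");

                            RemoteCacheManager rcm = new RemoteCacheManager(props);
                            RemoteCache<String, String> cache = rcm.getCache("test");
                            cache.put("greeting", "hello");
                            System.out.println(cache.get("greeting"));
                            rcm.stop();
                         }
                      }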

                      • 8. Re: Cannot initialize or sync Infinispan/hotrod cluster
                        fealves78

                        Hello,

                         

                        I have just downloaded the latest version of Infinispan (5.1.0.BETA2 from 10/19/2011). Using the same configuration files mentioned earlier in this thread, I started the first server, and after it initialized I started the second server (about 10 seconds later). Both servers initialized correctly and, looking at the logs, everything seems to be running just fine. However, if I start both servers at the same time (less than 2 seconds apart), I get an error.

                         

                        Here is how I initialized the servers:

                         

                        Server #1: ubuntu@ip-10-81-0-54:~/dev/infinispan-5.1.0.BETA2/bin$ ./startServer.sh -r hotrod -c configa.xml -Dlog4j.configuration=../etc/log4j.xml -l 10.81.0.54 -p 11222

                         

                        Server #2: ubuntu@ip-10-81-0-54:~/dev/infinispan-5.1.0.BETA2/bin$ ./startServer.sh -r hotrod -c configb.xml -Dlog4j.configuration=../etc/log4j.xml -l 10.81.0.54 -p 11223

                         

                        Here is the initialization log for both servers:

                         

                        2011-10-25 16:45:02,510 INFO (main) [org.infinispan.server.core.Main$] ISPN005001: Start main with args: -r, hotrod, -c, configb.xml, -Dlog4j.configuration=../etc/log4j.xml, -l, 10.81.0.54, -p, 11223

                        2011-10-25 16:45:02,636 DEBUG (InfinispanServer-Main) [org.infinispan.util.FileLookup] Unable to find file configb.xml in classpath; searching for this file on the filesystem instead.

                        2011-10-25 16:45:02,647 DEBUG (InfinispanServer-Main) [org.infinispan.config.InfinispanConfiguration] Using schema schema/infinispan-config-5.1.xsd

                        2011-10-25 16:45:04,531 DEBUG (InfinispanServer-Main) [org.infinispan.manager.DefaultCacheManager] Started cache manager testcluster on null

                        2011-10-25 16:45:04,565 DEBUG (InfinispanServer-Main) [org.infinispan.server.hotrod.HotRodServer] Starting server with basic settings: host=10.81.0.54, port=11223, masterThreads=-1, workerThreads=40, idleTimeout=-1, tcpNoDelay=true, sendBufSize=0, recvBufSize=0

                        2011-10-25 16:45:04,939 INFO (InfinispanServer-Main) [org.infinispan.remoting.transport.jgroups.JGroupsTransport] ISPN000078: Starting JGroups Channel

                        2011-10-25 16:45:04,976 INFO (InfinispanServer-Main) [org.jgroups.JChannel] JGroups version: 3.0.0.CR5

                        2011-10-25 16:45:04,981 DEBUG (InfinispanServer-Main) [org.jgroups.conf.ClassConfigurator] Using jg-magic-map.xml as magic number file and jg-protocol-ids.xml for protocol IDs

                        2011-10-25 16:45:05,445 DEBUG (InfinispanServer-Main) [org.jgroups.protocols.pbcast.GMS] changed role to org.jgroups.protocols.pbcast.ClientGmsImpl

                        2011-10-25 16:45:05,511 DEBUG (InfinispanServer-Main) [org.jgroups.stack.Configurator] set property TCP.diagnostics_addr to default value /224.0.75.75

                        2011-10-25 16:45:05,553 DEBUG (InfinispanServer-Main) [org.jgroups.protocols.FRAG2] received CONFIG event: {bind_addr=/10.81.0.54}

                        2011-10-25 16:45:08,597 DEBUG (InfinispanServer-Main) [org.jgroups.protocols.pbcast.GMS] initial_mbrs are [own_addr=ip-10-81-0-54-56511, is_server=false, is_coord=false, logical_name=ip-10-81-0-54-56511, physical_addrs=10.81.0.54:7900]

                        2011-10-25 16:45:08,597 DEBUG (InfinispanServer-Main) [org.jgroups.protocols.pbcast.GMS] election results: {}

                        2011-10-25 16:45:09,110 DEBUG (InfinispanServer-Main) [org.jgroups.protocols.pbcast.GMS] initial_mbrs are [own_addr=ip-10-81-0-54-56511, view_id=[ip-10-81-0-54-56511|0], is_server=true, is_coord=true, logical_name=ip-10-81-0-54-56511, physical_addrs=10.81.0.54:7900]

                        2011-10-25 16:45:09,110 DEBUG (InfinispanServer-Main) [org.jgroups.protocols.pbcast.GMS] election results: {ip-10-81-0-54-56511=1}

                        2011-10-25 16:45:09,110 DEBUG (InfinispanServer-Main) [org.jgroups.protocols.pbcast.GMS] sending handleJoin(ip-10-81-0-54-64618) to ip-10-81-0-54-56511

                        2011-10-25 16:45:09,192 DEBUG (InfinispanServer-Main) [org.jgroups.protocols.pbcast.NAKACK]

                        [setDigest()]

                        existing digest: []

                        new digest: ip-10-81-0-54-56511: [1 (1)], ip-10-81-0-54-64618: [0 (0)]

                        resulting digest: ip-10-81-0-54-56511: [1 (1)], ip-10-81-0-54-64618: [0 (0)]

                        2011-10-25 16:45:09,192 DEBUG (InfinispanServer-Main) [org.jgroups.protocols.pbcast.GMS] [ip-10-81-0-54-64618]: JoinRsp=[ip-10-81-0-54-56511|1] [ip-10-81-0-54-56511, ip-10-81-0-54-64618] [size=2]

                          

                        2011-10-25 16:45:09,192 DEBUG (InfinispanServer-Main) [org.jgroups.protocols.pbcast.GMS] new_view=[ip-10-81-0-54-56511|1] [ip-10-81-0-54-56511, ip-10-81-0-54-64618]

                        2011-10-25 16:45:09,192 DEBUG (InfinispanServer-Main) [org.jgroups.protocols.pbcast.GMS] ip-10-81-0-54-64618: view is [ip-10-81-0-54-56511|1] [ip-10-81-0-54-56511, ip-10-81-0-54-64618]

                        2011-10-25 16:45:09,193 DEBUG (InfinispanServer-Main) [org.jgroups.protocols.FD_SOCK] VIEW_CHANGE received: [ip-10-81-0-54-56511, ip-10-81-0-54-64618]

                        2011-10-25 16:45:09,195 DEBUG (FD_SOCK pinger,testcluster,ip-10-81-0-54-64618) [org.jgroups.protocols.FD_SOCK] ping_dest is ip-10-81-0-54-56511, pingable_mbrs=[ip-10-81-0-54-56511, ip-10-81-0-54-64618]

                        2011-10-25 16:45:09,196 DEBUG (InfinispanServer-Main) [org.jgroups.protocols.pbcast.STABLE] [ergonomics] setting max_bytes to 800KB (2 members)

                        2011-10-25 16:45:09,197 DEBUG (InfinispanServer-Main) [org.infinispan.remoting.transport.jgroups.JGroupsTransport] New view accepted: [ip-10-81-0-54-56511|1] [ip-10-81-0-54-56511, ip-10-81-0-54-64618]

                        2011-10-25 16:45:09,197 INFO (InfinispanServer-Main) [org.infinispan.remoting.transport.jgroups.JGroupsTransport] ISPN000094: Received new cluster view: [ip-10-81-0-54-56511|1] [ip-10-81-0-54-56511, ip-10-81-0-54-64618]

                        2011-10-25 16:45:09,198 DEBUG (InfinispanServer-Main) [org.jgroups.protocols.pbcast.GMS] ip-10-81-0-54-64618:

                        2011-10-25 16:45:09,241 INFO (InfinispanServer-Main) [org.infinispan.remoting.transport.jgroups.JGroupsTransport] ISPN000079: Cache local address is ip-10-81-0-54-64618, physical addresses are [10.81.0.54:7950]

                        2011-10-25 16:45:09,241 DEBUG (InfinispanServer-Main) [org.infinispan.remoting.transport.jgroups.JGroupsTransport] Waiting on view being accepted

                        2011-10-25 16:45:09,275 DEBUG (InfinispanServer-Main) [org.infinispan.interceptors.InterceptorChain] Interceptor chain size: 7

                        2011-10-25 16:45:09,276 DEBUG (InfinispanServer-Main) [org.infinispan.interceptors.InterceptorChain] Interceptor chain is:

                        >> org.infinispan.interceptors.InvocationContextInterceptor

                        >> org.infinispan.interceptors.StateTransferLockInterceptor

                        >> org.infinispan.interceptors.NotificationInterceptor

                        >> org.infinispan.interceptors.locking.NonTransactionalLockingInterceptor

                        >> org.infinispan.interceptors.EntryWrappingInterceptor

                        >> org.infinispan.interceptors.ReplicationInterceptor

                        >> org.infinispan.interceptors.CallInterceptor

                        2011-10-25 16:45:09,277 DEBUG (InfinispanServer-Main) [org.infinispan.cacheviews.CacheViewsManagerImpl] ___defaultcache: Node ip-10-81-0-54-64618 is joining

                        2011-10-25 16:45:09,446 DEBUG (OOB-2,testcluster,ip-10-81-0-54-64618) [org.infinispan.statetransfer.BaseStateTransferManagerImpl] Applying new state from ip-10-81-0-54-56511: received 0 keys

                        2011-10-25 16:45:09,452 DEBUG (OOB-1,testcluster,ip-10-81-0-54-64618) [org.infinispan.statetransfer.ReplicatedStateTransferTask] Commencing state transfer 2 on node: ip-10-81-0-54-64618. Before start, data container had 0 entries

                        2011-10-25 16:45:09,455 DEBUG (OOB-1,testcluster,ip-10-81-0-54-64618) [org.infinispan.cacheviews.CacheViewsManagerImpl] ___defaultcache: Committing cache view 2

                        2011-10-25 16:45:09,456 DEBUG (InfinispanServer-Main) [org.infinispan.CacheImpl] Started cache ___defaultcache on ip-10-81-0-54-64618

                        2011-10-25 16:45:09,461 DEBUG (OOB-1,testcluster,ip-10-81-0-54-64618) [org.infinispan.statetransfer.ReplicatedStateTransferTask] Node ip-10-81-0-54-64618 completed rehash for view 2 in 9 milliseconds!

                        2011-10-25 16:45:09,469 DEBUG (InfinispanServer-Main) [org.infinispan.interceptors.InterceptorChain] Interceptor chain size: 7

                        2011-10-25 16:45:09,469 DEBUG (InfinispanServer-Main) [org.infinispan.interceptors.InterceptorChain] Interceptor chain is:

                        >> org.infinispan.interceptors.InvocationContextInterceptor

                        >> org.infinispan.interceptors.StateTransferLockInterceptor

                        >> org.infinispan.interceptors.NotificationInterceptor

                        >> org.infinispan.interceptors.locking.NonTransactionalLockingInterceptor

                        >> org.infinispan.interceptors.EntryWrappingInterceptor

                        >> org.infinispan.interceptors.ReplicationInterceptor

                        >> org.infinispan.interceptors.CallInterceptor

                        2011-10-25 16:45:09,469 DEBUG (InfinispanServer-Main) [org.infinispan.cacheviews.CacheViewsManagerImpl] test: Node ip-10-81-0-54-64618 is joining

                        2011-10-25 16:45:09,475 DEBUG (OOB-1,testcluster,ip-10-81-0-54-64618) [org.infinispan.statetransfer.ReplicatedStateTransferTask] Commencing state transfer 2 on node: ip-10-81-0-54-64618. Before start, data container had 0 entries

                        2011-10-25 16:45:09,475 DEBUG (OOB-2,testcluster,ip-10-81-0-54-64618) [org.infinispan.statetransfer.BaseStateTransferManagerImpl] Applying new state from ip-10-81-0-54-56511: received 0 keys

                        2011-10-25 16:45:09,480 DEBUG (OOB-2,testcluster,ip-10-81-0-54-64618) [org.infinispan.cacheviews.CacheViewsManagerImpl] test: Committing cache view 2

                        2011-10-25 16:45:09,480 DEBUG (OOB-2,testcluster,ip-10-81-0-54-64618) [org.infinispan.statetransfer.ReplicatedStateTransferTask] Node ip-10-81-0-54-64618 completed rehash for view 2 in 5 milliseconds!

                        2011-10-25 16:45:09,485 DEBUG (InfinispanServer-Main) [org.infinispan.CacheImpl] Started cache test on ip-10-81-0-54-64618

                        2011-10-25 16:45:09,492 DEBUG (InfinispanServer-Main) [org.infinispan.interceptors.InterceptorChain] Interceptor chain size: 7

                        2011-10-25 16:45:09,492 DEBUG (InfinispanServer-Main) [org.infinispan.interceptors.InterceptorChain] Interceptor chain is:

                        >> org.infinispan.interceptors.InvocationContextInterceptor

                        >> org.infinispan.interceptors.StateTransferLockInterceptor

                        >> org.infinispan.interceptors.NotificationInterceptor

                        >> org.infinispan.interceptors.locking.NonTransactionalLockingInterceptor

                        >> org.infinispan.interceptors.EntryWrappingInterceptor

                        >> org.infinispan.interceptors.ReplicationInterceptor

                        >> org.infinispan.interceptors.CallInterceptor

                        2011-10-25 16:45:09,492 DEBUG (InfinispanServer-Main) [org.infinispan.cacheviews.CacheViewsManagerImpl] ___hotRodTopologyCache: Node ip-10-81-0-54-64618 is joining

                        2011-10-25 16:45:09,497 DEBUG (OOB-2,testcluster,ip-10-81-0-54-64618) [org.infinispan.statetransfer.ReplicatedStateTransferTask] Commencing state transfer 2 on node: ip-10-81-0-54-64618. Before start, data container had 0 entries

                        2011-10-25 16:45:09,499 DEBUG (OOB-2,testcluster,ip-10-81-0-54-64618) [org.infinispan.cacheviews.CacheViewsManagerImpl] ___hotRodTopologyCache: Committing cache view 2

                        2011-10-25 16:45:09,499 DEBUG (OOB-2,testcluster,ip-10-81-0-54-64618) [org.infinispan.statetransfer.ReplicatedStateTransferTask] Node ip-10-81-0-54-64618 completed rehash for view 2 in 3 milliseconds!

                        2011-10-25 16:45:09,500 DEBUG (InfinispanServer-Main) [org.infinispan.CacheImpl] Started cache ___hotRodTopologyCache on ip-10-81-0-54-64618

                        2011-10-25 16:45:09,501 DEBUG (InfinispanServer-Main) [org.infinispan.server.hotrod.HotRodServer] Externally facing address is 10.81.0.54:11223

                        2011-10-25 16:45:09,519 DEBUG (InfinispanServer-Main) [org.infinispan.server.hotrod.HotRodServer] Local topology address is TopologyAddress(10.81.0.54,11223,Map(),ip-10-81-0-54-64618)

                        2011-10-25 16:45:12,194 DEBUG (Timer-3,testcluster,ip-10-81-0-54-64618) [org.jgroups.protocols.FD] sending are-you-alive msg to ip-10-81-0-54-56511 (own address=ip-10-81-0-54-64618)

                        2011-10-25 16:45:15,194 DEBUG (Timer-4,testcluster,ip-10-81-0-54-64618) [org.jgroups.protocols.FD] sending are-you-alive msg to ip-10-81-0-54-56511 (own address=ip-10-81-0-54-64618)

                        2011-10-25 16:45:18,195 DEBUG (Timer-4,testcluster,ip-10-81-0-54-64618) [org.jgroups.protocols.FD] sending are-you-alive msg to ip-10-81-0-54-56511 (own address=ip-10-81-0-54-64618)

                        2011-10-25 16:45:19,545 ERROR (InfinispanServer-Main) [org.infinispan.interceptors.InvocationContextInterceptor] ISPN000136: Execution error

                        org.infinispan.util.concurrent.TimeoutException: Replication timeout for ip-10-81-0-54-56511

                        at org.infinispan.remoting.transport.AbstractTransport.parseResponseAndAddToResponseList(AbstractTransport.java:100)

                        at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:419)

                        at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:130)

                        at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:154)

                        at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:199)

                        at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:186)

                        at org.infinispan.remoting.rpc.RpcManagerImpl.broadcastRpcCommand(RpcManagerImpl.java:169)

                        at org.infinispan.remoting.rpc.RpcManagerImpl.broadcastRpcCommand(RpcManagerImpl.java:162)

                        at org.infinispan.interceptors.ReplicationInterceptor.handleCrudMethod(ReplicationInterceptor.java:201)

                        at org.infinispan.interceptors.ReplicationInterceptor.visitPutKeyValueCommand(ReplicationInterceptor.java:163)

                        at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)

                        at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)

                        at org.infinispan.interceptors.EntryWrappingInterceptor.invokeNextAndApplyChanges(EntryWrappingInterceptor.java:181)

                        at org.infinispan.interceptors.EntryWrappingInterceptor.visitPutKeyValueCommand(EntryWrappingInterceptor.java:136)

                        at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)

                        at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)

                        at org.infinispan.interceptors.locking.NonTransactionalLockingInterceptor.visitPutKeyValueCommand(NonTransactionalLockingInterceptor.java:59)

                        at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)

                        at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)

                        at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:133)

                        at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:60)

                        at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)

                        at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)

                        at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:133)

                        at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:60)

                        at org.infinispan.interceptors.StateTransferLockInterceptor.visitPutKeyValueCommand(StateTransferLockInterceptor.java:110)

                        at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)

                        at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)

                        at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:104)

                        at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:64)

                        at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:60)

                        at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)

                        2011-10-25 16:45:21,175 DEBUG (Timer-2,testcluster,ip-10-81-0-54-56511) [org.jgroups.protocols.FD] sending are-you-alive msg to ip-10-81-0-54-64618 (own address=ip-10-81-0-54-56511)

                        2011-10-25 16:45:24,176 DEBUG (Timer-3,testcluster,ip-10-81-0-54-56511) [org.jgroups.protocols.FD] sending are-you-alive msg to ip-10-81-0-54-64618 (own address=ip-10-81-0-54-56511)

                        2011-10-25 16:45:27,176 DEBUG (Timer-5,testcluster,ip-10-81-0-54-56511) [org.jgroups.protocols.FD] sending are-you-alive msg to ip-10-81-0-54-64618 (own address=ip-10-81-0-54-56511)

                        2011-10-25 16:45:30,177 DEBUG (Timer-3,testcluster,ip-10-81-0-54-56511) [org.jgroups.protocols.FD] sending are-you-alive msg to ip-10-81-0-54-64618 (own address=ip-10-81-0-54-56511)

                        2011-10-25 16:45:33,178 DEBUG (Timer-5,testcluster,ip-10-81-0-54-56511) [org.jgroups.protocols.FD] sending are-you-alive msg to ip-10-81-0-54-64618 (own address=ip-10-81-0-54-56511)

                        2011-10-25 16:45:36,178 DEBUG (Timer-2,testcluster,ip-10-81-0-54-56511) [org.jgroups.protocols.FD] sending are-you-alive msg to ip-10-81-0-54-64618 (own address=ip-10-81-0-54-56511)

                        at org.infinispan.server.hotrod.HotRodServer.addSelfToTopologyView(HotRodServer.scala:110)

                        at org.infinispan.server.hotrod.HotRodServer.startTransport(HotRodServer.scala:98)

                        at org.infinispan.server.core.AbstractProtocolServer.start(AbstractProtocolServer.scala:99)

                        at org.infinispan.server.hotrod.HotRodServer.start(HotRodServer.scala:79)

                        at org.infinispan.server.core.Main$.boot(Main.scala:140)

                        at org.infinispan.server.core.Main$$anon$1.call(Main.scala:94)

                        at org.infinispan.server.core.Main$$anon$1.call(Main.scala:91)

                        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)

                        at java.util.concurrent.FutureTask.run(FutureTask.java:138)

                        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)

                        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)

                        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

                        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

                        at java.lang.Thread.run(Thread.java:662)

                        2011-10-25 16:45:19,560 DEBUG (InfinispanServer-Main) [org.infinispan.server.hotrod.HotRodServer] Timed out while trying to update new view [TopologyView(1,List(TopologyAddress(10.81.0.54,11223,Map(),ip-10-81-0-54-64618)))]

                        org.infinispan.util.concurrent.TimeoutException: Replication timeout for ip-10-81-0-54-56511

                        at org.infinispan.remoting.transport.AbstractTransport.parseResponseAndAddToResponseList(AbstractTransport.java:100)

                        at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:419)

                        at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:130)

                        at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:154)

                        at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:199)

                        at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:186)

                        at org.infinispan.remoting.rpc.RpcManagerImpl.broadcastRpcCommand(RpcManagerImpl.java:169)

                        at org.infinispan.remoting.rpc.RpcManagerImpl.broadcastRpcCommand(RpcManagerImpl.java:162)

                        at org.infinispan.interceptors.ReplicationInterceptor.handleCrudMethod(ReplicationInterceptor.java:201)

                        at org.infinispan.interceptors.ReplicationInterceptor.visitPutKeyValueCommand(ReplicationInterceptor.java:163)

                        at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)

                        at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)

                        at org.infinispan.interceptors.EntryWrappingInterceptor.invokeNextAndApplyChanges(EntryWrappingInterceptor.java:181)

                        at org.infinispan.interceptors.EntryWrappingInterceptor.visitPutKeyValueCommand(EntryWrappingInterceptor.java:136)

                        at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)

                        at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)

                        at org.infinispan.interceptors.locking.NonTransactionalLockingInterceptor.visitPutKeyValueCommand(NonTransactionalLockingInterceptor.java:59)

                        at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)

                        at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)

                        at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:133)

                        at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:60)

                        at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)

                        at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)

                        at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:133)

                        at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:60)

                        at org.infinispan.interceptors.StateTransferLockInterceptor.visitPutKeyValueCommand(StateTransferLockInterceptor.java:110)

                        at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)

                        at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)

                        at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:104)

                        at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:64)

                        at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:60)

                        at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)

                        at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:318)

                        at org.infinispan.CacheImpl.executeCommandAndCommitIfNeeded(CacheImpl.java:903)

                        at org.infinispan.CacheImpl.putIfAbsent(CacheImpl.java:633)

                        at org.infinispan.CacheImpl.putIfAbsent(CacheImpl.java:624)

                        at org.infinispan.CacheSupport.putIfAbsent(CacheSupport.java:73)

                        at org.infinispan.server.hotrod.HotRodServer$$anonfun$1$$anonfun$3.apply(HotRodServer.scala:123)

                        at org.infinispan.server.hotrod.HotRodServer$$anonfun$1$$anonfun$3.apply(HotRodServer.scala:123)

                        at org.infinispan.server.hotrod.HotRodServer.org$infinispan$server$hotrod$HotRodServer$$updateTopologyCacheEntry(HotRodServer.scala:137)

                        at org.infinispan.server.hotrod.HotRodServer$$anonfun$1.apply(HotRodServer.scala:122)

                        at org.infinispan.server.hotrod.HotRodServer$$anonfun$1.apply(HotRodServer.scala:110)

                        at org.infinispan.server.hotrod.HotRodServer.isViewUpdated(HotRodServer.scala:212)

                        at org.infinispan.server.hotrod.HotRodServer.org$infinispan$server$hotrod$HotRodServer$$updateTopologyView(HotRodServer.scala:207)

                        at org.infinispan.server.hotrod.HotRodServer.addSelfToTopologyView(HotRodServer.scala:110)

                        at org.infinispan.server.hotrod.HotRodServer.startTransport(HotRodServer.scala:98)

                        at org.infinispan.server.core.AbstractProtocolServer.start(AbstractProtocolServer.scala:99)

                        at org.infinispan.server.hotrod.HotRodServer.start(HotRodServer.scala:79)

                        at org.infinispan.server.core.Main$.boot(Main.scala:140)

                        at org.infinispan.server.core.Main$$anon$1.call(Main.scala:94)

                        at org.infinispan.server.core.Main$$anon$1.call(Main.scala:91)

                        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)

                        at java.util.concurrent.FutureTask.run(FutureTask.java:138)

                        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)

                        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)

                        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

                        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

                        at java.lang.Thread.run(Thread.java:662)

                        2011-10-25 16:45:20,447 DEBUG (InfinispanServer-Main) [org.infinispan.server.hotrod.HotRodServer] Added TopologyAddress(10.81.0.54,11223,Map(),ip-10-81-0-54-64618) to topology, new view is TopologyView(2,List(TopologyAddress(10.81.0.54,11222,Map(),ip-10-81-0-54-56511), TopologyAddress(10.81.0.54,11223,Map(),ip-10-81-0-54-64618)))

                        2011-10-25 16:45:21,195 DEBUG (Timer-2,testcluster,ip-10-81-0-54-64618) [org.jgroups.protocols.FD] sending are-you-alive msg to ip-10-81-0-54-56511 (own address=ip-10-81-0-54-64618)

                        2011-10-25 16:45:24,195 DEBUG (Timer-3,testcluster,ip-10-81-0-54-64618) [org.jgroups.protocols.FD] sending are-you-alive msg to ip-10-81-0-54-56511 (own address=ip-10-81-0-54-64618)

                        2011-10-25 16:45:27,195 DEBUG (Timer-4,testcluster,ip-10-81-0-54-64618) [org.jgroups.protocols.FD] sending are-you-alive msg to ip-10-81-0-54-56511 (own address=ip-10-81-0-54-64618)

                        2011-10-25 16:45:30,196 DEBUG (Timer-5,testcluster,ip-10-81-0-54-64618) [org.jgroups.protocols.FD] sending are-you-alive msg to ip-10-81-0-54-56511 (own address=ip-10-81-0-54-64618)

                        2011-10-25 16:45:33,196 DEBUG (Timer-2,testcluster,ip-10-81-0-54-64618) [org.jgroups.protocols.FD] sending are-you-alive msg to ip-10-81-0-54-56511 (own address=ip-10-81-0-54-64618)

                        2011-10-25 16:45:36,197 DEBUG (Timer-3,testcluster,ip-10-81-0-54-64618) [org.jgroups.protocols.FD] sending are-you-alive msg to ip-10-81-0-54-56511 (own address=ip-10-81-0-54-64618)

                         

                         

                        Could you please let me know what is going on at this point?

                         

                         

                        Thank you in advance for your help.

                         

                         

                        PS: I am attaching the config files I used, once again, just in case...

                        • 9. Re: Cannot initialize or sync Infinispan/hotrod cluster
                          fealves78

                          I am also having problems connecting to the HotRod servers using Infinispan 5.1.0.BETA2. I am not sure why, but the code works with previous versions of Infinispan and not with this one.

                           

                          After starting the servers with the latest configuration files I sent you, I tried to run the following code:

                           

                          import org.infinispan.Cache;
                          import org.infinispan.client.hotrod.RemoteCacheManager;
                          import org.infinispan.manager.CacheContainer;

                          public class Main {

                               public static void main(String[] args) {
                                    CacheContainer cacheContainer = new RemoteCacheManager();
                                    Cache<String, String> cache = cacheContainer.getCache("test");

                                    cache.put("user01", "first user");
                                    cache.put("user02", "second user");

                                    cache.remove("user02");

                                    assert !cache.containsKey("user02") : "user02 was removed";
                                    assert cache.containsKey("user01") : "user01 is still in the cache";
                               }
                          }

                           

                          Then I get the following error:

                           

                           

                          at org.infinispan.util.Util.loadClass(Util.java:89)

                          at org.infinispan.util.Util.getInstance(Util.java:207)

                          at org.infinispan.client.hotrod.RemoteCacheManager.start(RemoteCacheManager.java:450)

                          at org.infinispan.client.hotrod.RemoteCacheManager.<init>(RemoteCacheManager.java:284)

                          at org.infinispan.client.hotrod.RemoteCacheManager.<init>(RemoteCacheManager.java:291)

                          at Main.main(Main.java:13)

                          Caused by: java.lang.ClassNotFoundException: org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory

                          at org.infinispan.util.Util.loadClassStrict(Util.java:137)

                          at org.infinispan.util.Util.loadClass(Util.java:87)

                          ... 5 more

                          Caused by: java.lang.NoClassDefFoundError: org/apache/commons/pool/KeyedPoolableObjectFactory

                          at java.lang.Class.forName0(Native Method)

                          at java.lang.Class.forName(Class.java:247)

                          at org.infinispan.util.Util.loadClassStrict(Util.java:126)

                          ... 6 more

                          Caused by: java.lang.ClassNotFoundException: org.apache.commons.pool.KeyedPoolableObjectFactory

                          at java.net.URLClassLoader$1.run(URLClassLoader.java:202)

                          at java.security.AccessController.doPrivileged(Native Method)

                          at java.net.URLClassLoader.findClass(URLClassLoader.java:190)

                          at java.lang.ClassLoader.loadClass(ClassLoader.java:306)

                          at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)

                          at java.lang.ClassLoader.loadClass(ClassLoader.java:247)

                          ... 9 more

                           

                           

                           

                          Please help!

                           

                          • 10. Re: Cannot initialize or sync Infinispan/hotrod cluster
                            galder.zamarreno

                            Seems like you have the commons-pool library missing, that is: commons-pool:commons-pool:1.5.4

                             

                            If you're using Maven and depending on the Hot Rod client, you should get it transitively. Otherwise, you need to add the jar to the classpath yourself.
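
                            For example, something along these lines in the pom should pull it in (just a sketch - adjust the client version to whichever 5.1.0 build you are actually running):

                            <dependency>
                               <groupId>org.infinispan</groupId>
                               <artifactId>infinispan-client-hotrod</artifactId>
                               <version>5.1.0.BETA2</version>
                            </dependency>

                            <!-- or, if you manage jars by hand, add the pool explicitly -->
                            <dependency>
                               <groupId>commons-pool</groupId>
                               <artifactId>commons-pool</artifactId>
                               <version>1.5.4</version>
                            </dependency>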

                            • 11. Re: Cannot initialize or sync Infinispan/hotrod cluster
                              fealves78

                              Galder,

                               

                              First of all, thank you for your help.

                               

                               I have tried your suggestion. I added commons-pool:commons-pool:1.5.4 and that error disappeared, but then a different error was reported: "Could not fetch transport".

                               

                               So I decided to update to Infinispan 5.1.0.BETA3. With the new version (and using the same configXXX.xml and tcpXXX.xml files as before), I was able to connect successfully to a HotRod server using the following code:

                               

                               

                              import java.util.Properties;

                              import org.infinispan.Cache;
                              import org.infinispan.client.hotrod.RemoteCacheManager;
                              import org.infinispan.client.hotrod.impl.ConfigurationProperties;
                              import org.infinispan.client.hotrod.impl.transport.tcp.RoundRobinBalancingStrategy;
                              import org.infinispan.manager.CacheContainer;

                              public class Main {

                                   public static void main(String[] args) {
                                        Properties props = new Properties();
                                        props.put(ConfigurationProperties.SERVER_LIST, "10.81.0.54:11222");
                                        props.put(ConfigurationProperties.REQUEST_BALANCING_STRATEGY, RoundRobinBalancingStrategy.class.getName());
                                        props.put("maxActive", 10);

                                        // pass the properties to the manager so the server list and pool settings take effect
                                        CacheContainer cacheContainer = new RemoteCacheManager(props);
                                        Cache<String, String> cache = cacheContainer.getCache("test");

                                        cache.put("user01", "first user");
                                        cache.put("user02", "second user");

                                        cache.remove("user02");

                                        assert !cache.containsKey("user02") : "user02 was removed";
                                        assert cache.containsKey("user01") : "user01 is still in the cache";
                                   }
                              }

                               

                               

                               

                              Now, if I have 2 or more HotRod servers up and running (as I had before in my previous replies on this thread) and I try to connect to any of them, I get an error reported by the respective server. Following is the error I got when I tried to connect to Server #1:

                               

                              ubuntu@ip-10-81-0-54:~/dev/infinispan-5.1.0.BETA3/bin$ ./startServer.sh -r hotrod -c configa.xml -Dlog4j.configuration=../etc/log4j.xml -l 10.81.0.54 -p 11222

                              -------------------------------------------------------------------
                              GMS: address=ip-10-81-0-54-29816, cluster=testcluster, physical address=10.81.0.54:7900
                              -------------------------------------------------------------------
                              2011-10-27 17:13:42,369 WARN  [DefaultChannelPipeline] (HotRodServerWorker-1-2) An exception was thrown by a user handler while handling an exception event ([id: 0x047ed081, /10.81.0.54:45009 :> /10.81.0.54:11222] EXCEPTION: java.nio.channels.ClosedChannelException)
                              java.lang.NullPointerException
                                      at org.jboss.netty.handler.codec.replay.CustomReplayingDecoder.slimDownBuffer(CustomReplayingDecoder.java:178)
                                      at org.infinispan.server.core.AbstractProtocolDecoder.resetParams(AbstractProtocolDecoder.scala:180)
                                      at org.infinispan.server.core.AbstractProtocolDecoder.exceptionCaught(AbstractProtocolDecoder.scala:265)
                                      at org.jboss.netty.channel.Channels.fireExceptionCaught(Channels.java:432)
                                      at org.jboss.netty.channel.socket.nio.NioWorker.cleanUpWriteBuffer(NioWorker.java:652)
                                      at org.jboss.netty.channel.socket.nio.NioWorker.close(NioWorker.java:592)
                                      at org.jboss.netty.channel.socket.nio.NioWorker.write0(NioWorker.java:512)
                                      at org.jboss.netty.channel.socket.nio.NioWorker.writeFromUserCode(NioWorker.java:387)
                                      at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.handleAcceptedSocket(NioServerSocketPipelineSink.java:137)
                                      at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:76)
                                      at org.jboss.netty.channel.Channels.write(Channels.java:632)
                                      at org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:70)
                                      at org.jboss.netty.channel.Channels.write(Channels.java:611)
                                      at org.jboss.netty.channel.Channels.write(Channels.java:578)
                                      at org.jboss.netty.channel.AbstractChannel.write(AbstractChannel.java:259)
                                      at org.infinispan.server.core.AbstractProtocolDecoder.exceptionCaught(AbstractProtocolDecoder.scala:261)
                                      at org.jboss.netty.channel.Channels.fireExceptionCaught(Channels.java:432)
                                      at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:331)
                                      at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:280)
                                      at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:200)
                                      at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
                                      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
                                      at java.lang.Thread.run(Thread.java:662)
                              2011-10-27 17:13:42,371 WARN  [DefaultChannelPipeline] (HotRodServerWorker-1-2) An exception was thrown by a user handler while handling an exception event ([id: 0x047ed081, /10.81.0.54:45009 :> /10.81.0.54:11222] EXCEPTION: java.io.IOException: Connection reset by peer)
                              java.lang.NullPointerException
                                      at org.jboss.netty.handler.codec.replay.CustomReplayingDecoder.slimDownBuffer(CustomReplayingDecoder.java:178)
                                      at org.infinispan.server.core.AbstractProtocolDecoder.resetParams(AbstractProtocolDecoder.scala:180)
                                      at org.infinispan.server.core.AbstractProtocolDecoder.exceptionCaught(AbstractProtocolDecoder.scala:265)
                                      at org.jboss.netty.channel.Channels.fireExceptionCaught(Channels.java:432)
                                      at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:331)
                                      at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:280)
                                      at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:200)
                                      at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
                                      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
                                      at java.lang.Thread.run(Thread.java:662)

                               

                              Another error message I got under the same circumstances (same application and configuration files, while trying to connect to a cluster of 2 HotRod servers) was:

                               

                              log4j:WARN No appenders could be found for logger (org.jboss.logging).
                              log4j:WARN Please initialize the log4j system properly.
                              log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
                              Exception in thread "main" org.infinispan.client.hotrod.exceptions.InvalidResponseException:: Invalid magic number. Expected 0xa1 and received 0x85
                                  at org.infinispan.client.hotrod.impl.protocol.Codec10.readHeader(Codec10.java:91)
                                  at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:82)
                                  at org.infinispan.client.hotrod.impl.operations.AbstractKeyValueOperation.sendPutOperation(AbstractKeyValueOperation.java:72)
                                  at org.infinispan.client.hotrod.impl.operations.PutOperation.executeOperation(PutOperation.java:51)
                                  at org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation.execute(RetryOnFailureOperation.java:66)
                                  at org.infinispan.client.hotrod.impl.RemoteCacheImpl.put(RemoteCacheImpl.java:203)
                                  at org.infinispan.CacheSupport.put(CacheSupport.java:51)
                                  at com.pgi.cache.RemoteCache.<init>(RemoteCache.java:34)
                                  at com.pgi.cache.Main.main(Main.java:10)

                               

                               

                               

                              I am puzzled by these connection and synchronization problems. At this point I am not sure whether there is anything wrong in the config files or in my code. I know for sure I am not using any firewall, and all the ports are wide open on the test equipment. Once again I need to ask for your help to solve this problem.

                               

                              If you could provide me with a sample (including the respective configuration files for each HotRod node) where I could test 2 or more HotRod servers connected and synchronizing in replication mode, with remote clients adding and removing data from the servers, it would be very helpful. If you cannot provide such a sample, I would appreciate it if you could at least point me in the right direction.
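
                              For reference, this is roughly the client I am trying to end up with, listing both of my servers (just a sketch on my side: the addresses are the ones from my setup above, and I am assuming the server list is semicolon-separated):

                              import java.util.Properties;

                              import org.infinispan.Cache;
                              import org.infinispan.client.hotrod.RemoteCacheManager;
                              import org.infinispan.client.hotrod.impl.ConfigurationProperties;
                              import org.infinispan.manager.CacheContainer;

                              public class TwoServerClient {

                                   public static void main(String[] args) {
                                        Properties props = new Properties();
                                        // both Hot Rod endpoints from my setup, so the client can balance/fail over between them
                                        props.put(ConfigurationProperties.SERVER_LIST, "10.81.0.54:11222;10.81.0.54:11223");

                                        CacheContainer cacheContainer = new RemoteCacheManager(props);
                                        Cache<String, String> cache = cacheContainer.getCache("test");

                                        cache.put("user01", "first user");
                                        System.out.println(cache.get("user01"));

                                        cacheContainer.stop();
                                   }
                              }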

                               

                              Thank you in advance for your help.

                              • 12. Re: Cannot initialize or sync Infinispan/hotrod cluster
                                pete.haidinyak

                                I'm following this too; I would also be interested in the sample.

                                 

                                Thanks

                                • 13. Re: Cannot initialize or sync Infinispan/hotrod cluster
                                  galder.zamarreno

                                  That NPE is definitely a bug. Can you please create a JIRA in https://issues.jboss.org/browse/ISPN ?

                                   

                                  Would it be possible to provide a test case for it?

                                   

                                  This NPE comes from code introduced in 5.1.0.BETA2 to deal with https://issues.jboss.org/browse/ISPN-1383

                                  • 14. Re: Cannot initialize or sync Infinispan/hotrod cluster
                                    galder.zamarreno

                                    Btw, I think the NullPointerException is what is causing the invalid magic exception on the server side, so let's address the root cause.
