7 Replies Latest reply on Aug 26, 2015 5:31 AM by auth.gabor

    Infinispan rebalance issue (Wildfly 9.0.1 and TCPPING)

    auth.gabor

      I've installed Wildfly 9.0.1.Final on my four VPS instances and configured a TCPPING-based TCP stack:

       

                          <stack name="tcpping">
                              <transport type="TCP" socket-binding="jgroups-tcp"/>
                              <protocol type="TCPPING">
                                  <property name="port_range">0</property>
                                  <property name="initial_hosts">...[7600],...[7600],...[7600],...[7600]</property>
                              </protocol>
                              <protocol type="MERGE2"/>
                              <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
                              <protocol type="FD"/>
                              <protocol type="VERIFY_SUSPECT"/>
                              <protocol type="BARRIER"/>
                              <protocol type="pbcast.NAKACK"/>
                              <protocol type="UNICAST2"/>
                              <protocol type="pbcast.STABLE"/>
                              <protocol type="pbcast.GMS">
                                  <property name="join_timeout">3000</property>
                              </protocol>
                              <protocol type="MFC"/>
                              <protocol type="FRAG2"/>
                              <protocol type="RSVP"/>
                          </stack>
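Since TCPPING relies on the static initial_hosts list rather than multicast discovery, a quick sanity check for this kind of setup is to verify that every listed host actually accepts TCP connections on its JGroups port. A minimal sketch (the hostnames are placeholders, since the real addresses are elided above):

```python
import socket

def parse_initial_hosts(s):
    """Parse a JGroups TCPPING initial_hosts string like 'hostA[7600],hostB[7600]'
    into a list of (host, port) tuples."""
    hosts = []
    for entry in s.split(","):
        host, _, rest = entry.strip().partition("[")
        hosts.append((host, int(rest.rstrip("]"))))
    return hosts

def check_host(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Hypothetical hostnames; substitute the real addresses from initial_hosts.
    for host, port in parse_initial_hosts("dc01-wild01[7600],dc01-wild02[7600]"):
        state = "reachable" if check_host(host, port) else "UNREACHABLE"
        print(f"{host}:{port} {state}")
```

Running this from each node before starting the cluster rules out firewall or binding problems on port 7600 as a cause of discovery trouble.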

       

       

      It works (HornetQ is configured separately):

       

       

      2015-08-21 10:15:40,580 INFO  [org.hornetq.core.server] (Thread-22 (HornetQ-server-HornetQServerImpl::serverUUID=721d266c-471b-11e5-804f-2d915de25069-102213587)) HQ221027: Bridge ClusterConnectionBridge@520c2b01 [name=sf.my-cluster.2390a8bf-471b-11e5-9bf2-1f8985d0deeb, queue=QueueImpl[name=sf.my-cluster.2390a8bf-471b-11e5-9bf2-1f8985d0deeb, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=721d266c-471b-11e5-804f-2d915de25069]]@3299b324 targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@520c2b01 [name=sf.my-cluster.2390a8bf-471b-11e5-9bf2-1f8985d0deeb, queue=QueueImpl[name=sf.my-cluster.2390a8bf-471b-11e5-9bf2-1f8985d0deeb, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=721d266c-471b-11e5-804f-2d915de25069]]@3299b324 targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory) ?port=8080&host=x-x-x-x&http-upgrade-enabled=true&http-upgrade-endpoint=http-acceptor], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@2123237246[nodeUUID=721d266c-471b-11e5-804f-2d915de25069, connector=TransportConfiguration(name=http-connector, factory=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory) ?port=8080&host=x-x-x-x&http-upgrade-enabled=true&http-upgrade-endpoint=http-acceptor, address=jms, server=HornetQServerImpl::serverUUID=721d266c-471b-11e5-804f-2d915de25069])) [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory) ?port=8080&host=x-x-x-x&http-upgrade-enabled=true&http-upgrade-endpoint=http-acceptor], discoveryGroupConfiguration=null]] is connected

      2015-08-21 10:15:42,331 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-11,ee,dc02-wild02) ISPN000094: Received new cluster view for channel web: [dc01-wild01|17] (4) [dc01-wild01, dc01-wild02, dc02-wild02, dc02-wild01]

      2015-08-21 10:15:42,334 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-11,ee,dc02-wild02) ISPN000094: Received new cluster view for channel ejb: [dc01-wild01|17] (4) [dc01-wild01, dc01-wild02, dc02-wild02, dc02-wild01]

       

       

      The topology consists of two DCs with two nodes in each DC: dc01-wild01, dc01-wild02, dc02-wild01 and dc02-wild02.



      I've made a short network split between the two DCs:


      2015-08-21 00:27:39,801 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-13,ee,dc01-wild01) ISPN000094: Received new cluster view for channel web: [dc01-wild01|5] (2) [dc01-wild01, dc01-wild02]

      2015-08-21 00:27:39,801 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-13,ee,dc01-wild01) ISPN000094: Received new cluster view for channel ejb: [dc01-wild01|5] (2) [dc01-wild01, dc01-wild02]

      2015-08-21 00:28:01,636 WARN  [org.hornetq.core.client] (hornetq-failure-check-thread) HQ212037: Connection failure has been detected: HQ119014: Did not receive data from /dc02-wild01:50755. It is likely the client has exited or crashed without closing its connection, or the network between the server and client has failed. You also might have configured connection-ttl and client-failure-check-period incorrectly. Please check user manual for more information. The connection will now be closed. [code=CONNECTION_TIMEDOUT]

      2015-08-21 00:28:01,641 WARN  [org.hornetq.core.server] (hornetq-failure-check-thread) HQ222061: Client connection failed, clearing up resources for session f61066c0-476f-11e5-937d-51211e930346

      2015-08-21 00:28:01,652 WARN  [org.hornetq.core.server] (hornetq-failure-check-thread) HQ222107: Cleared up resources for session f61066c0-476f-11e5-937d-51211e930346

      2015-08-21 00:28:01,676 INFO  [org.hornetq.core.server] (hornetq-failure-check-thread) HQ221021: failed to remove connection

      2015-08-21 00:28:01,674 WARN  [org.hornetq.core.client] (hornetq-failure-check-thread) HQ212037: Connection failure has been detected: HQ119014: Did not receive data from /dc02-wild02:50753. It is likely the client has exited or crashed without closing its connection, or the network between the server and client has failed. You also might have configured connection-ttl and client-failure-check-period incorrectly. Please check user manual for more information. The connection will now be closed. [code=CONNECTION_TIMEDOUT]

      2015-08-21 00:28:05,678 WARN  [org.hornetq.core.server] (hornetq-failure-check-thread) HQ222061: Client connection failed, clearing up resources for session e6b45da5-476f-11e5-bf0e-21b2d764fb60

      2015-08-21 00:28:05,679 WARN  [org.hornetq.core.server] (hornetq-failure-check-thread) HQ222107: Cleared up resources for session e6b45da5-476f-11e5-bf0e-21b2d764fb60

      2015-08-21 00:28:05,683 INFO  [org.hornetq.core.server] (hornetq-failure-check-thread) HQ221021: failed to remove connection

      [...]

       

      2015-08-21 00:28:39,769 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-15,ee,dc02-wild01) ISPN000093: Received new, MERGED cluster view for channel web: MergeView::[dc02-wild02|7] (2) [dc02-wild02, dc02-wild01], 1 subgroups: [dc02-wild02|6] (1) [dc02-wild02]

      2015-08-21 00:28:39,769 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-15,ee,dc02-wild01) ISPN000093: Received new, MERGED cluster view for channel ejb: MergeView::[dc02-wild02|7] (2) [dc02-wild02, dc02-wild01], 1 subgroups: [dc02-wild02|6] (1) [dc02-wild02]

      2015-08-21 00:28:35,774 WARN  [org.hornetq.core.client] (hornetq-failure-check-thread) HQ212037: Connection failure has been detected: HQ119014: Did not receive data from /dc01-wild01:39101. It is likely the client has exited or crashed without closing its connection, or the network between the server and client has failed. You also might have configured connection-ttl and client-failure-check-period incorrectly. Please check user manual for more information. The connection will now be closed. [code=CONNECTION_TIMEDOUT]

      2015-08-21 00:28:35,776 WARN  [org.hornetq.core.server] (hornetq-failure-check-thread) HQ222061: Client connection failed, clearing up resources for session e73bb549-476f-11e5-80eb-ab455268dbbc

      2015-08-21 00:28:35,779 WARN  [org.hornetq.core.server] (hornetq-failure-check-thread) HQ222107: Cleared up resources for session e73bb549-476f-11e5-80eb-ab455268dbbc

      2015-08-21 00:28:35,803 INFO  [org.hornetq.core.server] (hornetq-failure-check-thread) HQ221021: failed to remove connection

      2015-08-21 00:28:35,803 WARN  [org.hornetq.core.client] (hornetq-failure-check-thread) HQ212037: Connection failure has been detected: HQ119014: Did not receive data from /dc01-wild02:44217. It is likely the client has exited or crashed without closing its connection, or the network between the server and client has failed. You also might have configured connection-ttl and client-failure-check-period incorrectly. Please check user manual for more information. The connection will now be closed. [code=CONNECTION_TIMEDOUT]

      2015-08-21 00:28:35,804 WARN  [org.hornetq.core.server] (hornetq-failure-check-thread) HQ222061: Client connection failed, clearing up resources for session e741cfed-476f-11e5-ba8e-d150e8aecec4

      2015-08-21 00:28:35,804 WARN  [org.hornetq.core.server] (hornetq-failure-check-thread) HQ222107: Cleared up resources for session e741cfed-476f-11e5-ba8e-d150e8aecec4

      2015-08-21 00:28:35,807 INFO  [org.hornetq.core.server] (hornetq-failure-check-thread) HQ221021: failed to remove connection
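For anyone wanting to reproduce this, the split between the DCs can be simulated by dropping traffic between their subnets. The method I used isn't shown here, so below is a hypothetical dry-run sketch that only prints the iptables commands (the subnet is a placeholder for the other DC's address range):

```python
# Hypothetical dry-run generator for simulating the DC-to-DC split with iptables.
# Run the printed commands as root on each dc01 node (and mirror them on the
# dc02 nodes with dc01's subnet), then rerun with action='-D' to heal the split.

DC02_SUBNET = "10.0.2.0/24"  # placeholder for the other DC's address range

def split_commands(subnet, action="-A"):
    """Build iptables commands that drop all traffic to and from the subnet.
    Use action='-A' to append the rules (create the split) and '-D' to delete
    them again (heal the split)."""
    return [
        f"iptables {action} INPUT -s {subnet} -j DROP",
        f"iptables {action} OUTPUT -d {subnet} -j DROP",
    ]

if __name__ == "__main__":
    for cmd in split_commands(DC02_SUBNET):
        print(cmd)
```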

       

       

       

      ...and then I restored the network between the two DCs:

       

      2015-08-21 00:30:08,666 INFO  [org.hornetq.core.server] (Thread-9 (HornetQ-server-HornetQServerImpl::serverUUID=6380e9fd-471b-11e5-96ec-31025eb08071-1856560111)) HQ221027: Bridge ClusterConnectionBridge@6cdd7c57 [name=sf.my-cluster.2390a8bf-471b-11e5-9bf2-1f8985d0deeb, queue=QueueImpl[name=sf.my-cluster.2390a8bf-471b-11e5-9bf2-1f8985d0deeb, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=6380e9fd-471b-11e5-96ec-31025eb08071]]@621a8fbc targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@6cdd7c57 [name=sf.my-cluster.2390a8bf-471b-11e5-9bf2-1f8985d0deeb, queue=QueueImpl[name=sf.my-cluster.2390a8bf-471b-11e5-9bf2-1f8985d0deeb, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=6380e9fd-471b-11e5-96ec-31025eb08071]]@621a8fbc targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory) ?port=8080&host=x-x-x-x&http-upgrade-enabled=true&http-upgrade-endpoint=http-acceptor], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@222674417[nodeUUID=6380e9fd-471b-11e5-96ec-31025eb08071, connector=TransportConfiguration(name=http-connector, factory=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory) ?port=8080&host=x-x-x-x&http-upgrade-enabled=true&http-upgrade-endpoint=http-acceptor, address=jms, server=HornetQServerImpl::serverUUID=6380e9fd-471b-11e5-96ec-31025eb08071])) [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory) ?port=8080&host=x-x-x-x&http-upgrade-enabled=true&http-upgrade-endpoint=http-acceptor], discoveryGroupConfiguration=null]] is connected

      2015-08-21 00:30:09,720 INFO  [org.hornetq.core.server] (Thread-25 (HornetQ-server-HornetQServerImpl::serverUUID=6380e9fd-471b-11e5-96ec-31025eb08071-1856560111)) HQ221027: Bridge ClusterConnectionBridge@328f547d [name=sf.my-cluster.721d266c-471b-11e5-804f-2d915de25069, queue=QueueImpl[name=sf.my-cluster.721d266c-471b-11e5-804f-2d915de25069, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=6380e9fd-471b-11e5-96ec-31025eb08071]]@3c48ac5c targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@328f547d [name=sf.my-cluster.721d266c-471b-11e5-804f-2d915de25069, queue=QueueImpl[name=sf.my-cluster.721d266c-471b-11e5-804f-2d915de25069, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=6380e9fd-471b-11e5-96ec-31025eb08071]]@3c48ac5c targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory) ?port=8080&host=x-x-x-x&http-upgrade-enabled=true&http-upgrade-endpoint=http-acceptor], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@222674417[nodeUUID=6380e9fd-471b-11e5-96ec-31025eb08071, connector=TransportConfiguration(name=http-connector, factory=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory) ?port=8080&host=x-x-x-x&http-upgrade-enabled=true&http-upgrade-endpoint=http-acceptor, address=jms, server=HornetQServerImpl::serverUUID=6380e9fd-471b-11e5-96ec-31025eb08071])) [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory) ?port=8080&host=x-x-x-x&http-upgrade-enabled=true&http-upgrade-endpoint=http-acceptor], discoveryGroupConfiguration=null]] is connected

      2015-08-21 00:30:27,181 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-8,ee,dc01-wild01) ISPN000093: Received new, MERGED cluster view for channel web: MergeView::[dc01-wild01|8] (4) [dc01-wild01, dc02-wild02, dc02-wild01, dc01-wild02], 2 subgroups: [dc02-wild02|7] (2) [dc02-wild02, dc02-wild01], [dc01-wild01|5] (2) [dc01-wild01, dc01-wild02]

      2015-08-21 00:30:27,182 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-8,ee,dc01-wild01) ISPN000093: Received new, MERGED cluster view for channel ejb: MergeView::[dc01-wild01|8] (4) [dc01-wild01, dc02-wild02, dc02-wild01, dc01-wild02], 2 subgroups: [dc02-wild02|7] (2) [dc02-wild02, dc02-wild01], [dc01-wild01|5] (2) [dc01-wild01, dc01-wild02]

       

      [...]

       

      2015-08-21 00:30:07,957 INFO  [org.hornetq.core.server] (Thread-18 (HornetQ-server-HornetQServerImpl::serverUUID=2390a8bf-471b-11e5-9bf2-1f8985d0deeb-53245283)) HQ221027: Bridge ClusterConnectionBridge@14c7f868 [name=sf.my-cluster.46d0dc0a-471b-11e5-a0d3-e33b3b454637, queue=QueueImpl[name=sf.my-cluster.46d0dc0a-471b-11e5-a0d3-e33b3b454637, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=2390a8bf-471b-11e5-9bf2-1f8985d0deeb]]@7f1602b targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@14c7f868 [name=sf.my-cluster.46d0dc0a-471b-11e5-a0d3-e33b3b454637, queue=QueueImpl[name=sf.my-cluster.46d0dc0a-471b-11e5-a0d3-e33b3b454637, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=2390a8bf-471b-11e5-9bf2-1f8985d0deeb]]@7f1602b targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory) ?port=8080&host=x-x-x-x&http-upgrade-enabled=true&http-upgrade-endpoint=http-acceptor], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@521648698[nodeUUID=2390a8bf-471b-11e5-9bf2-1f8985d0deeb, connector=TransportConfiguration(name=http-connector, factory=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory) ?port=8080&host=x-x-x-x&http-upgrade-enabled=true&http-upgrade-endpoint=http-acceptor, address=jms, server=HornetQServerImpl::serverUUID=2390a8bf-471b-11e5-9bf2-1f8985d0deeb])) [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory) ?port=8080&host=x-x-x-x&http-upgrade-enabled=true&http-upgrade-endpoint=http-acceptor], discoveryGroupConfiguration=null]] is connected

      2015-08-21 00:30:08,945 INFO  [org.hornetq.core.server] (Thread-15 (HornetQ-server-HornetQServerImpl::serverUUID=2390a8bf-471b-11e5-9bf2-1f8985d0deeb-53245283)) HQ221027: Bridge ClusterConnectionBridge@3bf1b6b [name=sf.my-cluster.6380e9fd-471b-11e5-96ec-31025eb08071, queue=QueueImpl[name=sf.my-cluster.6380e9fd-471b-11e5-96ec-31025eb08071, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=2390a8bf-471b-11e5-9bf2-1f8985d0deeb]]@1f981c97 targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@3bf1b6b [name=sf.my-cluster.6380e9fd-471b-11e5-96ec-31025eb08071, queue=QueueImpl[name=sf.my-cluster.6380e9fd-471b-11e5-96ec-31025eb08071, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=2390a8bf-471b-11e5-9bf2-1f8985d0deeb]]@1f981c97 targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory) ?port=8080&host=x-x-x-x&http-upgrade-enabled=true&http-upgrade-endpoint=http-acceptor], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@521648698[nodeUUID=2390a8bf-471b-11e5-9bf2-1f8985d0deeb, connector=TransportConfiguration(name=http-connector, factory=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory) ?port=8080&host=x-x-x-x&http-upgrade-enabled=true&http-upgrade-endpoint=http-acceptor, address=jms, server=HornetQServerImpl::serverUUID=2390a8bf-471b-11e5-9bf2-1f8985d0deeb])) [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory) ?port=8080&host=x-x-x-x&http-upgrade-enabled=true&http-upgrade-endpoint=http-acceptor], discoveryGroupConfiguration=null]] is connected

      2015-08-21 00:30:27,315 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-2,ee,dc02-wild01) ISPN000093: Received new, MERGED cluster view for channel web: MergeView::[dc01-wild01|8] (4) [dc01-wild01, dc02-wild02, dc02-wild01, dc01-wild02], 2 subgroups: [dc02-wild02|7] (2) [dc02-wild02, dc02-wild01], [dc01-wild01|5] (2) [dc01-wild01, dc01-wild02]

      2015-08-21 00:30:27,315 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-2,ee,dc02-wild01) ISPN000093: Received new, MERGED cluster view for channel ejb: MergeView::[dc01-wild01|8] (4) [dc01-wild01, dc02-wild02, dc02-wild01, dc01-wild02], 2 subgroups: [dc02-wild02|7] (2) [dc02-wild02, dc02-wild01], [dc01-wild01|5] (2) [dc01-wild01, dc01-wild02]
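For readers not fluent in the JGroups log format: each view string above reads "[coordinator|view-id] (member-count) [members]", and a MergeView lists the merged view first, followed by the subgroups being merged. A small sketch that decodes those strings:

```python
import re

# Minimal decoder for the JGroups view strings in the ISPN000093/ISPN000094
# log lines: '[coordinator|view-id] (member-count) [member, member, ...]'.
VIEW_RE = re.compile(r"\[([^|\]]+)\|(\d+)\]\s*\((\d+)\)\s*\[([^\]]*)\]")

def parse_views(s):
    """Return a list of (coordinator, view_id, members) for every view in s.
    For a MergeView line the first entry is the merged view and the remaining
    entries are the subgroups that were merged."""
    views = []
    for coord, view_id, count, members in VIEW_RE.findall(s):
        members = [m.strip() for m in members.split(",")]
        assert len(members) == int(count)  # the '(n)' is the member count
        views.append((coord, int(view_id), members))
    return views
```

Applied to the MergeView line above, it yields the 4-node merged view with id 8 plus the two 2-node subgroups (ids 7 and 5) that existed during the split.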

       

       

      As you can see, the HornetQ and Wildfly clusters merged back together...

       

      ...but Infinispan did not; here are some of the exceptions from the server.log of the nodes:

       

      2015-08-21 00:27:39,802 WARN  [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread--p3-t25) ISPN000197: Error updating cluster member list: org.infinispan.remoting.transport.jgroups.SuspectException: Suspected member: dc02-wild02

              at org.infinispan.remoting.transport.AbstractTransport.parseResponseAndAddToResponseList(AbstractTransport.java:78)

              at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:586)

              at org.infinispan.topology.ClusterTopologyManagerImpl.confirmMembersAvailable(ClusterTopologyManagerImpl.java:402)

              at org.infinispan.topology.ClusterTopologyManagerImpl.updateCacheMembers(ClusterTopologyManagerImpl.java:393)

              at org.infinispan.topology.ClusterTopologyManagerImpl.handleClusterView(ClusterTopologyManagerImpl.java:309)

              at org.infinispan.topology.ClusterTopologyManagerImpl$ClusterViewListener$1.run(ClusterTopologyManagerImpl.java:590)

              at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)

              at java.util.concurrent.FutureTask.run(FutureTask.java:266)

              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

              at java.lang.Thread.run(Thread.java:745)

       

      [...]

       

      2015-08-21 00:27:59,371 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (transport-thread--p3-t3) ISPN000136: Execution error: org.infinispan.util.concurrent.TimeoutException: Timed out waiting for topology 8

              at org.infinispan.statetransfer.StateTransferLockImpl.waitForTransactionData(StateTransferLockImpl.java:92)

              at org.infinispan.interceptors.base.BaseStateTransferInterceptor.waitForTransactionData(BaseStateTransferInterceptor.java:96)

              at org.infinispan.statetransfer.StateTransferInterceptor.handleTxWriteCommand(StateTransferInterceptor.java:278)

              at org.infinispan.statetransfer.StateTransferInterceptor.handleWriteCommand(StateTransferInterceptor.java:247)

              at org.infinispan.statetransfer.StateTransferInterceptor.visitRemoveCommand(StateTransferInterceptor.java:123)

              at org.infinispan.commands.write.RemoveCommand.acceptVisitor(RemoveCommand.java:58)

              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)

              at org.infinispan.interceptors.CacheMgmtInterceptor.visitRemoveCommand(CacheMgmtInterceptor.java:209)

              at org.infinispan.commands.write.RemoveCommand.acceptVisitor(RemoveCommand.java:58)

              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)

              at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:102)

              at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:71)

              at org.infinispan.commands.AbstractVisitor.visitRemoveCommand(AbstractVisitor.java:49)

              at org.infinispan.commands.write.RemoveCommand.acceptVisitor(RemoveCommand.java:58)

              at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:336)

              at org.infinispan.cache.impl.CacheImpl.executeCommandAndCommitIfNeeded(CacheImpl.java:1617)

              at org.infinispan.cache.impl.CacheImpl.removeInternal(CacheImpl.java:579)

              at org.infinispan.cache.impl.CacheImpl.remove(CacheImpl.java:572)

              at org.infinispan.cache.impl.DecoratedCache.remove(DecoratedCache.java:442)

              at org.infinispan.cache.impl.AbstractDelegatingCache.remove(AbstractDelegatingCache.java:297)

              at org.wildfly.clustering.server.registry.CacheRegistry.topologyChanged(CacheRegistry.java:152)

              at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)

              at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

              at java.lang.reflect.Method.invoke(Method.java:497)

              at org.infinispan.notifications.impl.AbstractListenerImpl$ListenerInvocationImpl$1.run(AbstractListenerImpl.java:286)

              at org.infinispan.util.concurrent.WithinThreadExecutor.execute(WithinThreadExecutor.java:22)

              at org.infinispan.notifications.impl.AbstractListenerImpl$ListenerInvocationImpl.invoke(AbstractListenerImpl.java:309)

              at org.infinispan.notifications.cachelistener.CacheNotifierImpl$BaseCacheEntryListenerInvocation.doRealInvocation(CacheNotifierImpl.java:1212)

              at org.infinispan.notifications.cachelistener.CacheNotifierImpl$BaseCacheEntryListenerInvocation.invoke(CacheNotifierImpl.java:1170)

              at org.infinispan.notifications.cachelistener.CacheNotifierImpl$BaseCacheEntryListenerInvocation.invoke(CacheNotifierImpl.java:1135)

              at org.infinispan.notifications.cachelistener.CacheNotifierImpl.notifyTopologyChanged(CacheNotifierImpl.java:590)

              at org.infinispan.statetransfer.StateTransferManagerImpl.doTopologyUpdate(StateTransferManagerImpl.java:201)

              at org.infinispan.statetransfer.StateTransferManagerImpl.access$000(StateTransferManagerImpl.java:45)

              at org.infinispan.statetransfer.StateTransferManagerImpl$1.updateConsistentHash(StateTransferManagerImpl.java:113)

              at org.infinispan.topology.LocalTopologyManagerImpl.resetLocalTopologyBeforeRebalance(LocalTopologyManagerImpl.java:333)

              at org.infinispan.topology.LocalTopologyManagerImpl.doHandleRebalance(LocalTopologyManagerImpl.java:413)

              at org.infinispan.topology.LocalTopologyManagerImpl$3.run(LocalTopologyManagerImpl.java:382)

              at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)

              at java.util.concurrent.FutureTask.run(FutureTask.java:266)

              at org.infinispan.executors.SemaphoreCompletionService$QueueingTask.runInternal(SemaphoreCompletionService.java:173)

              at org.infinispan.executors.SemaphoreCompletionService$QueueingTask.run(SemaphoreCompletionService.java:151)

              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

              at java.lang.Thread.run(Thread.java:745)

       

      [...]

       

      2015-08-21 00:27:59,446 ERROR [org.infinispan.topology.LocalTopologyManagerImpl] (transport-thread--p8-t13) ISPN000367: There was an issue with topology update for topology: 9: org.infinispan.commons.CacheListenerException: ISPN000280: Caught exception [org.infinispan.util.concurrent.TimeoutException] while invoking method [public void org.wildfly.clustering.server.registry.CacheRegistry.topologyChanged(org.infinispan.notifications.cachelistener.event.TopologyChangedEvent)] on listener instance: org.wildfly.clustering.server.registry.CacheRegistryFactory$1@7014c85

              at org.infinispan.notifications.impl.AbstractListenerImpl$ListenerInvocationImpl$1.run(AbstractListenerImpl.java:291)

              at org.infinispan.util.concurrent.WithinThreadExecutor.execute(WithinThreadExecutor.java:22)

              at org.infinispan.notifications.impl.AbstractListenerImpl$ListenerInvocationImpl.invoke(AbstractListenerImpl.java:309)

              at org.infinispan.notifications.cachelistener.CacheNotifierImpl$BaseCacheEntryListenerInvocation.doRealInvocation(CacheNotifierImpl.java:1212)

              at org.infinispan.notifications.cachelistener.CacheNotifierImpl$BaseCacheEntryListenerInvocation.invoke(CacheNotifierImpl.java:1170)

              at org.infinispan.notifications.cachelistener.CacheNotifierImpl$BaseCacheEntryListenerInvocation.invoke(CacheNotifierImpl.java:1135)

              at org.infinispan.notifications.cachelistener.CacheNotifierImpl.notifyTopologyChanged(CacheNotifierImpl.java:590)

              at org.infinispan.statetransfer.StateTransferManagerImpl.doTopologyUpdate(StateTransferManagerImpl.java:201)

              at org.infinispan.statetransfer.StateTransferManagerImpl.access$000(StateTransferManagerImpl.java:45)

              at org.infinispan.statetransfer.StateTransferManagerImpl$1.updateConsistentHash(StateTransferManagerImpl.java:113)

              at org.infinispan.topology.LocalTopologyManagerImpl.doHandleTopologyUpdate(LocalTopologyManagerImpl.java:285)

              at org.infinispan.topology.LocalTopologyManagerImpl$1.run(LocalTopologyManagerImpl.java:218)

              at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)

              at java.util.concurrent.FutureTask.run(FutureTask.java:266)

              at org.infinispan.executors.SemaphoreCompletionService$QueueingTask.runInternal(SemaphoreCompletionService.java:173)

              at org.infinispan.executors.SemaphoreCompletionService$QueueingTask.run(SemaphoreCompletionService.java:151)

              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

              at java.lang.Thread.run(Thread.java:745)

      Caused by: org.infinispan.util.concurrent.TimeoutException: Timed out waiting for topology 8

              at org.infinispan.statetransfer.StateTransferLockImpl.waitForTransactionData(StateTransferLockImpl.java:92)

              at org.infinispan.interceptors.base.BaseStateTransferInterceptor.waitForTransactionData(BaseStateTransferInterceptor.java:96)

              at org.infinispan.statetransfer.StateTransferInterceptor.handleTxWriteCommand(StateTransferInterceptor.java:278)

              at org.infinispan.statetransfer.StateTransferInterceptor.handleWriteCommand(StateTransferInterceptor.java:247)

              at org.infinispan.statetransfer.StateTransferInterceptor.visitRemoveCommand(StateTransferInterceptor.java:123)

              at org.infinispan.commands.write.RemoveCommand.acceptVisitor(RemoveCommand.java:58)

              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)

              at org.infinispan.interceptors.CacheMgmtInterceptor.visitRemoveCommand(CacheMgmtInterceptor.java:209)

              at org.infinispan.commands.write.RemoveCommand.acceptVisitor(RemoveCommand.java:58)

              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)

              at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:102)

              at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:71)

              at org.infinispan.commands.AbstractVisitor.visitRemoveCommand(AbstractVisitor.java:49)

              at org.infinispan.commands.write.RemoveCommand.acceptVisitor(RemoveCommand.java:58)

              at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:336)

              at org.infinispan.cache.impl.CacheImpl.executeCommandAndCommitIfNeeded(CacheImpl.java:1617)

              at org.infinispan.cache.impl.CacheImpl.removeInternal(CacheImpl.java:579)

              at org.infinispan.cache.impl.CacheImpl.remove(CacheImpl.java:572)

              at org.infinispan.cache.impl.DecoratedCache.remove(DecoratedCache.java:442)

              at org.infinispan.cache.impl.AbstractDelegatingCache.remove(AbstractDelegatingCache.java:297)

              at org.wildfly.clustering.server.registry.CacheRegistry.topologyChanged(CacheRegistry.java:152)

              at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)

              at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

              at java.lang.reflect.Method.invoke(Method.java:497)

              at org.infinispan.notifications.impl.AbstractListenerImpl$ListenerInvocationImpl$1.run(AbstractListenerImpl.java:286)

              ... 18 more

              Suppressed: org.infinispan.commons.CacheException: javax.transaction.RollbackException: Transaction marked as rollback only.

                      at org.wildfly.clustering.ee.infinispan.ActiveTransactionBatch.close(ActiveTransactionBatch.java:50)

                      at org.wildfly.clustering.server.registry.CacheRegistry.topologyChanged(CacheRegistry.java:157)

                      ... 22 more

              Caused by: javax.transaction.RollbackException: Transaction marked as rollback only.

                      at org.infinispan.transaction.tm.DummyTransaction.setRollbackOnly(DummyTransaction.java:148)

                      at org.infinispan.interceptors.InvocationContextInterceptor.markTxForRollbackAndRethrow(InvocationContextInterceptor.java:163)

                      at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:128)

                      at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:71)

                      at org.infinispan.commands.AbstractVisitor.visitRemoveCommand(AbstractVisitor.java:49)

                      at org.infinispan.commands.write.RemoveCommand.acceptVisitor(RemoveCommand.java:58)

                      at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:336)

                      at org.infinispan.cache.impl.CacheImpl.executeCommandAndCommitIfNeeded(CacheImpl.java:1617)

                      at org.infinispan.cache.impl.CacheImpl.removeInternal(CacheImpl.java:579)

                      at org.infinispan.cache.impl.CacheImpl.remove(CacheImpl.java:572)

                      at org.infinispan.cache.impl.DecoratedCache.remove(DecoratedCache.java:442)

                      at org.infinispan.cache.impl.AbstractDelegatingCache.remove(AbstractDelegatingCache.java:297)

                      at org.wildfly.clustering.server.registry.CacheRegistry.topologyChanged(CacheRegistry.java:152)

                      ... 22 more

      [...]

       

      2015-08-21 00:30:33,189 ERROR [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread--p4-t7) ISPN000196: Failed to recover cluster state after the current node became the coordinator: java.util.concurrent.TimeoutException

              at java.util.concurrent.FutureTask.get(FutureTask.java:205)

              at org.infinispan.topology.ClusterTopologyManagerImpl.executeOnClusterSync(ClusterTopologyManagerImpl.java:473)

              at org.infinispan.topology.ClusterTopologyManagerImpl.recoverClusterStatus(ClusterTopologyManagerImpl.java:350)

              at org.infinispan.topology.ClusterTopologyManagerImpl.handleClusterView(ClusterTopologyManagerImpl.java:286)

              at org.infinispan.topology.ClusterTopologyManagerImpl$ClusterViewListener$1.run(ClusterTopologyManagerImpl.java:590)

              at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)

              at java.util.concurrent.FutureTask.run(FutureTask.java:266)

              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

              at java.lang.Thread.run(Thread.java:745)

       

       

      2015-08-21 00:30:42,262 WARN  [org.infinispan.partitionhandling.impl.PreferAvailabilityStrategy] (transport-thread--p8-t20) ISPN000313: Cache dist lost data because of abrupt leavers [dc01-wild01, dc01-wild02]

       

      [...]

       

      2015-08-21 00:30:42,511 WARN  [org.infinispan.remoting.inboundhandler.NonTotalOrderPerCacheInboundInvocationHandler] (remote-thread--p6-t26) ISPN000071: Caught exception when handling command StateRequestCommand{cache=dist, origin=dc01-wild02, type=GET_TRANSACTIONS, topologyId=8, segments=[32, 67, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 21, 23, 25, 28, 30]}: java.lang.IllegalArgumentException: Node dc01-wild01 is not a member

              at org.infinispan.distribution.ch.impl.DefaultConsistentHash.getSegmentsForOwner(DefaultConsistentHash.java:115)

              at org.infinispan.distribution.group.GroupingConsistentHash.getSegmentsForOwner(GroupingConsistentHash.java:67)

              at org.infinispan.statetransfer.StateProviderImpl.getTransactionsForSegments(StateProviderImpl.java:163)

              at org.infinispan.statetransfer.StateRequestCommand.perform(StateRequestCommand.java:67)

              at org.infinispan.remoting.inboundhandler.BasePerCacheInboundInvocationHandler.invokePerform(BasePerCacheInboundInvocationHandler.java:85)

              at org.infinispan.remoting.inboundhandler.BaseBlockingRunnable.run(BaseBlockingRunnable.java:32)

              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

              at java.lang.Thread.run(Thread.java:745)

       

      [...]

       

      2015-08-21 00:28:41,861 WARN  [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread--p5-t2) ISPN000209: Failed to retrieve transactions for segments [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 21, 23, 25, 28, 30, 32, 67] of cache dist from node dc01-wild01: org.infinispan.util.concurrent.TimeoutException: Node dc01-wild01 timed out

              at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:248)

              at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:561)

              at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:287)

              at org.infinispan.statetransfer.StateConsumerImpl.getTransactions(StateConsumerImpl.java:848)

              at org.infinispan.statetransfer.StateConsumerImpl.requestTransactions(StateConsumerImpl.java:767)

              at org.infinispan.statetransfer.StateConsumerImpl.addTransfers(StateConsumerImpl.java:711)

              at org.infinispan.statetransfer.StateConsumerImpl.onTopologyUpdate(StateConsumerImpl.java:369)

              at org.infinispan.statetransfer.StateTransferManagerImpl.doTopologyUpdate(StateTransferManagerImpl.java:198)

              at org.infinispan.statetransfer.StateTransferManagerImpl.access$000(StateTransferManagerImpl.java:45)

              at org.infinispan.statetransfer.StateTransferManagerImpl$1.rebalance(StateTransferManagerImpl.java:118)

              at org.infinispan.topology.LocalTopologyManagerImpl.doHandleRebalance(LocalTopologyManagerImpl.java:420)

              at org.infinispan.topology.LocalTopologyManagerImpl$3.run(LocalTopologyManagerImpl.java:382)

              at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)

              at java.util.concurrent.FutureTask.run(FutureTask.java:266)

              at org.infinispan.executors.SemaphoreCompletionService$QueueingTask.runInternal(SemaphoreCompletionService.java:173)

              at org.infinispan.executors.SemaphoreCompletionService$QueueingTask.run(SemaphoreCompletionService.java:151)

              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

              at java.lang.Thread.run(Thread.java:745)

      Caused by: org.jgroups.TimeoutException: timeout waiting for response from dc01-wild01, request: org.jgroups.blocks.UnicastRequest@ce4af61, req_id=8688, mode=GET_ALL, target=dc01-wild01

              at org.jgroups.blocks.MessageDispatcher.sendMessage(MessageDispatcher.java:427)

              at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:433)

              at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:241)

              ... 18 more

       

      [...]

       

      2015-08-21 00:28:58,324 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (SessionExpirationScheduler - 1) ISPN000136: Execution error: java.lang.StackOverflowError

              at java.lang.Class.forName0(Native Method)

              at java.lang.Class.forName(Class.java:348)

              at org.jboss.logging.Logger$1.run(Logger.java:2544)

              at java.security.AccessController.doPrivileged(Native Method)

              at org.jboss.logging.Logger.getMessageLogger(Logger.java:2529)

              at org.jboss.logging.Logger.getMessageLogger(Logger.java:2516)

              at org.infinispan.util.logging.LogFactory.getLog(LogFactory.java:17)

              at org.infinispan.commands.remote.recovery.TxCompletionNotificationCommand.<clinit>(TxCompletionNotificationCommand.java:26)

              at org.infinispan.commands.CommandsFactoryImpl.buildTxCompletionNotificationCommand(CommandsFactoryImpl.java:542)

              at org.infinispan.interceptors.locking.PessimisticLockingInterceptor.releaseLocksOnFailureBeforePrepare(PessimisticLockingInterceptor.java:252)

              at org.infinispan.interceptors.locking.PessimisticLockingInterceptor.visitDataWriteCommand(PessimisticLockingInterceptor.java:138)

              at org.infinispan.interceptors.locking.AbstractLockingInterceptor.visitRemoveCommand(AbstractLockingInterceptor.java:65)

              at org.infinispan.commands.write.RemoveCommand.acceptVisitor(RemoveCommand.java:58)

              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)

              at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:111)

              at org.infinispan.commands.AbstractVisitor.visitRemoveCommand(AbstractVisitor.java:49)

              at org.infinispan.commands.write.RemoveCommand.acceptVisitor(RemoveCommand.java:58)

       

      [...]

       

      2015-08-21 00:28:58,426 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (SessionExpirationScheduler - 1) ISPN000136: Execution error: java.lang.NoClassDefFoundError: Could not initialize class org.infinispan.commands.remote.recovery.TxCompletionNotificationCommand

              at org.infinispan.commands.CommandsFactoryImpl.buildTxCompletionNotificationCommand(CommandsFactoryImpl.java:542)

              at org.infinispan.interceptors.locking.PessimisticLockingInterceptor.releaseLocksOnFailureBeforePrepare(PessimisticLockingInterceptor.java:252)

              at org.infinispan.interceptors.locking.PessimisticLockingInterceptor.visitDataWriteCommand(PessimisticLockingInterceptor.java:138)

              at org.infinispan.interceptors.locking.AbstractLockingInterceptor.visitRemoveCommand(AbstractLockingInterceptor.java:65)

              at org.infinispan.commands.write.RemoveCommand.acceptVisitor(RemoveCommand.java:58)

              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)

              at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:111)

              at org.infinispan.commands.AbstractVisitor.visitRemoveCommand(AbstractVisitor.java:49)

              at org.infinispan.commands.write.RemoveCommand.acceptVisitor(RemoveCommand.java:58)

              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)

              at org.infinispan.interceptors.TxInterceptor.enlistWriteAndInvokeNext(TxInterceptor.java:367)

              at org.infinispan.interceptors.TxInterceptor.visitRemoveCommand(TxInterceptor.java:273)

              at org.infinispan.commands.write.RemoveCommand.acceptVisitor(RemoveCommand.java:58)

              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)

              at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:111)

       

      [...]

       

      2015-08-21 00:28:39,773 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (default task-16) ISPN000136: Execution error: org.infinispan.remoting.RemoteException: ISPN000217: Received exception from dc02-wild02,

      see cause for remote stack trace

              at org.infinispan.remoting.transport.AbstractTransport.checkResponse(AbstractTransport.java:46)

              at org.infinispan.remoting.transport.AbstractTransport.parseResponseAndAddToResponseList(AbstractTransport.java:71)

              at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:586)

              at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:287)

              at org.infinispan.interceptors.distribution.TxDistributionInterceptor.visitLockControlCommand(TxDistributionInterceptor.java:185)

              at org.infinispan.commands.control.LockControlCommand.acceptVisitor(LockControlCommand.java:110)

              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)

              at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:111)

              at org.infinispan.commands.AbstractVisitor.visitLockControlCommand(AbstractVisitor.java:174)

              at org.infinispan.commands.control.LockControlCommand.acceptVisitor(LockControlCommand.java:110)

              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)

              at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:111)

              at org.infinispan.commands.AbstractVisitor.visitLockControlCommand(AbstractVisitor.java:174)

              at org.infinispan.commands.control.LockControlCommand.acceptVisitor(LockControlCommand.java:110)

              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)

              at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:111)

              at org.infinispan.commands.AbstractVisitor.visitLockControlCommand(AbstractVisitor.java:174)

              at org.infinispan.commands.control.LockControlCommand.acceptVisitor(LockControlCommand.java:110)

              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)

              at org.infinispan.interceptors.locking.PessimisticLockingInterceptor.acquireRemoteIfNeeded(PessimisticLockingInterceptor.java:238)

              at org.infinispan.interceptors.locking.PessimisticLockingInterceptor.visitDataWriteCommand(PessimisticLockingInterceptor.java:128)

              at org.infinispan.interceptors.locking.AbstractTxLockingInterceptor.visitPutKeyValueCommand(AbstractTxLockingInterceptor.java:68)

       

      [...]

       

      2015-08-21 00:28:42,482 WARN  [org.infinispan.transaction.impl.TransactionTable] (TxCleanupService,gacivs-frontend-war-0.0.22-SNAPSHOT.war,dc02-wild01) ISPN000326: Remote transaction GlobalTransaction:<dc02-wild02>:6506:remote timed out. Rolling back after 67032 ms

      2015-08-21 00:28:42,483 WARN  [org.infinispan.transaction.impl.TransactionTable] (TxCleanupService,gacivs-frontend-war-0.0.22-SNAPSHOT.war,dc02-wild01) ISPN000326: Remote transaction GlobalTransaction:<dc01-wild01>:10945:remote timed out. Rolling back after 66123 ms

       

      [...]

       

      2015-08-21 00:29:41,823 WARN  [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread--p8-t21) ISPN000209: Failed to retrieve transactions for segments [13, 14, 15, 16, 27, 28, 29, 30, 31, 32, 33, 54, 55, 67, 68, 70, 72] of cache dist from node dc02-wild01: org.infinispan.util.concurrent.TimeoutException: Node dc02-wild01 timed out

              at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:248)

              at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:561)

              at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:287)

              at org.infinispan.statetransfer.StateConsumerImpl.getTransactions(StateConsumerImpl.java:848)

              at org.infinispan.statetransfer.StateConsumerImpl.requestTransactions(StateConsumerImpl.java:767)

              at org.infinispan.statetransfer.StateConsumerImpl.addTransfers(StateConsumerImpl.java:711)

              at org.infinispan.statetransfer.StateConsumerImpl.onTopologyUpdate(StateConsumerImpl.java:369)

              at org.infinispan.statetransfer.StateTransferManagerImpl.doTopologyUpdate(StateTransferManagerImpl.java:198)

              at org.infinispan.statetransfer.StateTransferManagerImpl.access$000(StateTransferManagerImpl.java:45)

              at org.infinispan.statetransfer.StateTransferManagerImpl$1.rebalance(StateTransferManagerImpl.java:118)

              at org.infinispan.topology.LocalTopologyManagerImpl.doHandleRebalance(LocalTopologyManagerImpl.java:420)

              at org.infinispan.topology.LocalTopologyManagerImpl$3.run(LocalTopologyManagerImpl.java:382)

              at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)

              at java.util.concurrent.FutureTask.run(FutureTask.java:266)

              at org.infinispan.executors.SemaphoreCompletionService$QueueingTask.runInternal(SemaphoreCompletionService.java:173)

              at org.infinispan.executors.SemaphoreCompletionService$QueueingTask.run(SemaphoreCompletionService.java:151)

              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

              at java.lang.Thread.run(Thread.java:745)

      Caused by: org.jgroups.TimeoutException: timeout waiting for response from dc02-wild01, request: org.jgroups.blocks.UnicastRequest@37ef53c9, req_id=5727, mode=GET_ALL, target=dc02-wild01

              at org.jgroups.blocks.MessageDispatcher.sendMessage(MessageDispatcher.java:427)

              at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:433)

              at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:241)

              ... 18 more

       

       

       

      Every node logged these exceptions at a rate of roughly 2 GB/hour... :/

       


      Any ideas?

       

       

      P.S.: Are the cache coordinator node and the cluster-wide cache rebalance a new feature? Does it work for you?

      2015-08-21 11:45:42,426 INFO  [org.infinispan.CLUSTER] (remote-thread--p9-t12) ISPN000310: Starting cluster-wide rebalance for cache gacivs-webgl-war-0.0.22-SNAPSHOT.war, topology CacheTopology{id=6, rebalanceId=3, currentCH=DefaultConsistentHash{ns=80, owners = (3)[dc01-wild01: 27+27, dc01-wild02: 27+26, dc02-wild01: 26+27]}, pendingCH=DefaultConsistentHash{ns=80, owners = (4)[dc01-wild01: 20+20, dc01-wild02: 20+20, dc02-wild01: 20+20, dc02-wild02: 20+20]}, unionCH=null, actualMembers=[dc01-wild01, dc01-wild02, dc02-wild01, dc02-wild02]}

      2015-08-21 11:45:42,430 INFO  [org.infinispan.CLUSTER] (remote-thread--p4-t1) ISPN000310: Starting cluster-wide rebalance for cache dist, topology CacheTopology{id=6, rebalanceId=3, currentCH=DefaultConsistentHash{ns=80, owners = (3)[dc01-wild01: 27+27, dc01-wild02: 27+26, dc02-wild01: 26+27]}, pendingCH=DefaultConsistentHash{ns=80, owners = (4)[dc01-wild01: 20+20, dc01-wild02: 20+20, dc02-wild01: 20+20, dc02-wild02: 20+20]}, unionCH=null, actualMembers=[dc01-wild01, dc01-wild02, dc02-wild01, dc02-wild02]}

      2015-08-21 11:45:42,445 INFO  [org.infinispan.CLUSTER] (remote-thread--p9-t17) ISPN000310: Starting cluster-wide rebalance for cache dist, topology CacheTopology{id=6, rebalanceId=3, currentCH=DefaultConsistentHash{ns=80, owners = (3)[dc01-wild01: 27+27, dc01-wild02: 27+26, dc02-wild01: 26+27]}, pendingCH=DefaultConsistentHash{ns=80, owners = (4)[dc01-wild01: 20+20, dc01-wild02: 20+20, dc02-wild01: 20+20, dc02-wild02: 20+20]}, unionCH=null, actualMembers=[dc01-wild01, dc01-wild02, dc02-wild01, dc02-wild02]}

      2015-08-21 11:45:42,449 INFO  [org.infinispan.CLUSTER] (remote-thread--p9-t16) ISPN000310: Starting cluster-wide rebalance for cache gacivs-frontend-war-0.0.22-SNAPSHOT.war, topology CacheTopology{id=6, rebalanceId=3, currentCH=DefaultConsistentHash{ns=80, owners = (3)[dc01-wild01: 27+27, dc01-wild02: 27+26, dc02-wild01: 26+27]}, pendingCH=DefaultConsistentHash{ns=80, owners = (4)[dc01-wild01: 20+20, dc01-wild02: 20+20, dc02-wild01: 20+20, dc02-wild02: 20+20]}, unionCH=null, actualMembers=[dc01-wild01, dc01-wild02, dc02-wild01, dc02-wild02]}

      2015-08-21 11:45:42,453 INFO  [org.infinispan.CLUSTER] (remote-thread--p9-t12) ISPN000310: Starting cluster-wide rebalance for cache gacivs-backend-services-0.0.22-SNAPSHOT.war, topology CacheTopology{id=6, rebalanceId=3, currentCH=DefaultConsistentHash{ns=80, owners = (3)[dc01-wild01: 27+27, dc01-wild02: 27+26, dc02-wild01: 26+27]}, pendingCH=DefaultConsistentHash{ns=80, owners = (4)[dc01-wild01: 20+20, dc01-wild02: 20+20, dc02-wild01: 20+20, dc02-wild02: 20+20]}, unionCH=null, actualMembers=[dc01-wild01, dc01-wild02, dc02-wild01, dc02-wild02]}

      2015-08-21 11:45:42,471 INFO  [org.infinispan.CLUSTER] (remote-thread--p9-t19) ISPN000310: Starting cluster-wide rebalance for cache gacivs-backend-dao-services-0.0.22-SNAPSHOT.war, topology CacheTopology{id=6, rebalanceId=3, currentCH=DefaultConsistentHash{ns=80, owners = (3)[dc01-wild01: 27+27, dc01-wild02: 27+26, dc02-wild01: 26+27]}, pendingCH=DefaultConsistentHash{ns=80, owners = (4)[dc01-wild01: 20+20, dc01-wild02: 20+20, dc02-wild01: 20+20, dc02-wild02: 20+20]}, unionCH=null, actualMembers=[dc01-wild01, dc01-wild02, dc02-wild01, dc02-wild02]}

      2015-08-21 11:45:43,320 INFO  [org.infinispan.CLUSTER] (remote-thread--p9-t19) ISPN000336: Finished cluster-wide rebalance for cache gacivs-webgl-war-0.0.22-SNAPSHOT.war, topology id = 6

      2015-08-21 11:45:43,375 INFO  [org.infinispan.CLUSTER] (remote-thread--p9-t19) ISPN000336: Finished cluster-wide rebalance for cache gacivs-backend-services-0.0.22-SNAPSHOT.war, topology id = 6

      2015-08-21 11:45:43,378 INFO  [org.infinispan.CLUSTER] (remote-thread--p9-t18) ISPN000336: Finished cluster-wide rebalance for cache gacivs-frontend-war-0.0.22-SNAPSHOT.war, topology id = 6

      2015-08-21 11:45:43,437 INFO  [org.infinispan.CLUSTER] (remote-thread--p9-t18) ISPN000336: Finished cluster-wide rebalance for cache dist, topology id = 6

      2015-08-21 11:45:43,453 INFO  [org.infinispan.CLUSTER] (remote-thread--p4-t5) ISPN000336: Finished cluster-wide rebalance for cache dist, topology id = 6

      2015-08-21 11:45:43,464 INFO  [org.infinispan.CLUSTER] (remote-thread--p9-t18) ISPN000336: Finished cluster-wide rebalance for cache gacivs-backend-dao-services-0.0.22-SNAPSHOT.war, topology id = 6

       

       

      Sorry for the TL;DR post...
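
      As a stopgap while this is being investigated, I'm thinking about muting the noisiest categories so the disks survive. A jboss-cli sketch (the category names are taken from the log lines above; the choice of categories and levels is my guess, and ERROR-level messages would still get through):

```
/subsystem=logging/logger=org.infinispan.statetransfer:add(level=ERROR)
/subsystem=logging/logger=org.infinispan.topology:add(level=ERROR)
```

      This obviously only hides the symptom, not the rebalance problem itself.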

        • 1. Re: Infinispan rebalance issue (9.0.1 and TCPPING)
          pferraro

          A similar issue was filed here: https://issues.jboss.org/browse/WFLY-5140

           

          According to the user, upgrading the jdk to version 8u60 solved the issue.

          • 2. Re: Infinispan rebalance issue (9.0.1 and TCPPING)
            kmicic

            Actually, it didn't. The StackOverflowError disappeared, but the NoClassDefFoundErrors are still there.
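
            That is expected JVM behaviour, unfortunately: once a class's static initializer fails (here, the StackOverflowError thrown inside TxCompletionNotificationCommand.&lt;clinit&gt; in the logs above), the JVM marks the class as erroneous, and every later reference to it throws "NoClassDefFoundError: Could not initialize class ..." until the JVM (or the deployment's class loader) is restarted. A minimal standalone demo (the Unlucky class and the simulated error are mine, not from WildFly):

```java
// Demonstrates why "Could not initialize class" follows an earlier
// static-initializer failure: the JVM marks the class as erroneous
// and never retries initialization.
public class ClinitDemo {
    static class Unlucky {
        static {
            // Simulate the StackOverflowError thrown from
            // TxCompletionNotificationCommand.<clinit> in the logs.
            // (The "if (true)" keeps javac from rejecting an
            // initializer that cannot complete normally.)
            if (true) throw new StackOverflowError("simulated");
        }
        static void use() {}
    }

    public static void main(String[] args) {
        try { Unlucky.use(); } catch (Throwable t) {
            // An Error from <clinit> propagates as-is on first use.
            System.out.println(t);
        }
        try { Unlucky.use(); } catch (Throwable t) {
            // Every subsequent use fails with
            // NoClassDefFoundError: Could not initialize class ...
            System.out.println(t);
        }
    }
}
```

            So seeing NoClassDefFoundError after the StackOverflowError is fixed suggests the original failure still happened once on that node before the fix took effect.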

            • 3. Re: Infinispan rebalance issue (9.0.1 and TCPPING)
              pferraro

              FYI: I found the underlying issue and submitted a fix to both 9.x and master:

              https://github.com/wildfly/wildfly/pull/7983

              https://github.com/wildfly/wildfly/pull/7984

              • 4. Re: Infinispan rebalance issue (9.0.1 and TCPPING)
                auth.gabor

                Thank you, I'll patch my 9.0.1.Final with the [WFLY-5140] NoClassDefFoundError in ha configuration - JBoss Issue Tracker (it is already patched with the [WFLY-5123] Execution error: org.infinispan.util.concurrent.TimeoutException: Timed out waiting for topology - JBoss Iss…) and test it again soon! That solves half of the cited exceptions...

                • 5. Re: Infinispan rebalance issue (9.0.1 and TCPPING)
                  auth.gabor

                  I've patched my 9.0.1.Final with the above-mentioned changes, and most of the exceptions have disappeared; one ERROR-level exception remains...

                  2015-08-25 09:18:43,523 ERROR [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread--p5-t13) ISPN000196: Failed to recover cluster state after the current node became the coordinator: org.infinispan.commons.CacheException: Unsuccessful response received from node dc01-wild01: CacheNotFoundResponse
                          at org.infinispan.topology.ClusterTopologyManagerImpl.executeOnClusterSync(ClusterTopologyManagerImpl.java:482)
                          at org.infinispan.topology.ClusterTopologyManagerImpl.recoverClusterStatus(ClusterTopologyManagerImpl.java:350)
                          at org.infinispan.topology.ClusterTopologyManagerImpl.handleClusterView(ClusterTopologyManagerImpl.java:286)
                          at org.infinispan.topology.ClusterTopologyManagerImpl$ClusterViewListener$1.run(ClusterTopologyManagerImpl.java:590)
                          at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
                          at java.util.concurrent.FutureTask.run(FutureTask.java:266)
                          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
                          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
                          at java.lang.Thread.run(Thread.java:745)

                  ...and one WARN-level exception:

                  2015-08-25 09:30:46,756 WARN  [org.infinispan.topology.CacheTopologyControlCommand] (remote-thread--p4-t26) ISPN000071: Caught exception when handling command CacheTopologyControlCommand{cache=dist, type=REBALANCE_CONFIRM, sender=dc01-wild01, joinInfo=null, topologyId=55, rebalanceId=23, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=21}: org.infinispan.commons.CacheException: Received invalid rebalance confirmation from dc01-wild01 for cache dist, expecting topology id 57 but got 55
                          at org.infinispan.topology.RebalanceConfirmationCollector.confirmRebalance(RebalanceConfirmationCollector.java:39)
                          at org.infinispan.topology.ClusterCacheStatus.doConfirmRebalance(ClusterCacheStatus.java:292)
                          at org.infinispan.topology.ClusterTopologyManagerImpl.handleRebalanceCompleted(ClusterTopologyManagerImpl.java:204)
                          at org.infinispan.topology.CacheTopologyControlCommand.doPerform(CacheTopologyControlCommand.java:167)
                          at org.infinispan.topology.CacheTopologyControlCommand.perform(CacheTopologyControlCommand.java:144)
                          at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler$2.run(GlobalInboundInvocationHandler.java:158)
                          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
                          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
                          at java.lang.Thread.run(Thread.java:745)

                  I'll continue my tests...

                  • 6. Re: Infinispan rebalance issue (9.0.1 and TCPPING)
                    pferraro

                    That particular error was fixed in upstream Infinispan - though I'm not sure if the fix was ported to Infinispan's 7.2 branch (used in WF9).
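
                    If you want to check which Infinispan version your installation actually ships, the module jars carry it in their file names (assuming the default WildFly module layout; adjust $JBOSS_HOME to your install):

```shell
ls $JBOSS_HOME/modules/system/layers/base/org/infinispan/main/*.jar
```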

                    • 7. Re: Infinispan rebalance issue (9.0.1 and TCPPING)
                      auth.gabor

                      Hm... a network split, some WARN-level exceptions, and a notification (it's OK, I think):

                      2015-08-26 00:50:02,186 WARN  [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread--p17-t7) ISPN000197: Error updating cluster member list: org.infinispan.remoting.transport.jgroups.SuspectException: Suspected member: dc02-wild02
                              at org.infinispan.remoting.transport.AbstractTransport.parseResponseAndAddToResponseList(AbstractTransport.java:78)
                              at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:586)
                              at org.infinispan.topology.ClusterTopologyManagerImpl.confirmMembersAvailable(ClusterTopologyManagerImpl.java:402)
                              at org.infinispan.topology.ClusterTopologyManagerImpl.updateCacheMembers(ClusterTopologyManagerImpl.java:393)
                              at org.infinispan.topology.ClusterTopologyManagerImpl.handleClusterView(ClusterTopologyManagerImpl.java:309)
                              at org.infinispan.topology.ClusterTopologyManagerImpl$ClusterViewListener$1.run(ClusterTopologyManagerImpl.java:590)
                              at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
                              at java.util.concurrent.FutureTask.run(FutureTask.java:266)
                              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
                              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
                              at java.lang.Thread.run(Thread.java:745)
                      2015-08-26 00:50:04,219 WARN  [org.infinispan.partitionhandling.impl.PreferAvailabilityStrategy] (transport-thread--p17-t8) ISPN000313: Cache dist lost data because of abrupt leavers [dc02-wild01, dc02-wild02]

                      ...and some WARN notifications after the network between the two DCs was restored:

                      2015-08-26 00:51:29,386 WARN  [org.infinispan.statetransfer.StateConsumerImpl] (stateTransferExecutor-thread--p29-t12) Received unsolicited state from node dc02-wild01 for segment 1 of cache gacivs-backend-services-0.0.23-SNAPSHOT.war
                      2015-08-26 00:51:29,424 WARN  [org.infinispan.statetransfer.StateConsumerImpl] (stateTransferExecutor-thread--p29-t46) Received unsolicited state from node dc02-wild02 for segment 2 of cache gacivs-backend-dao-services-0.0.23-SNAPSHOT.war

                       

                      Everything looks good and works well, but one suspicious exception remains:

                      2015-08-26 08:10:21,426 ERROR [org.infinispan.topology.LocalTopologyManagerImpl] (remote-thread--p6-t43) ISPN000367: There was an issue with topology update for topology: 21: org.infinispan.commons.CacheListenerException: ISPN000280: Caught exception [org.infinispan.commons.CacheException] while invoking method [public void org.wildfly.clustering.web.infinispan.session.InfinispanSessionManager.dataRehashed(org.infinispan.notifications.cachelistener.event.DataRehashedEvent)] on listener instance: org.wildfly.clustering.web.infinispan.session.InfinispanSessionManager@5f452058
                              at org.infinispan.notifications.impl.AbstractListenerImpl$ListenerInvocationImpl$1.run(AbstractListenerImpl.java:291)
                              at org.infinispan.util.concurrent.WithinThreadExecutor.execute(WithinThreadExecutor.java:22)
                              at org.infinispan.notifications.impl.AbstractListenerImpl$ListenerInvocationImpl.invoke(AbstractListenerImpl.java:309)
                              at org.infinispan.notifications.cachelistener.CacheNotifierImpl$BaseCacheEntryListenerInvocation.doRealInvocation(CacheNotifierImpl.java:1212)
                              at org.infinispan.notifications.cachelistener.CacheNotifierImpl$BaseCacheEntryListenerInvocation.invoke(CacheNotifierImpl.java:1170)
                              at org.infinispan.notifications.cachelistener.CacheNotifierImpl$BaseCacheEntryListenerInvocation.invoke(CacheNotifierImpl.java:1135)
                              at org.infinispan.notifications.cachelistener.CacheNotifierImpl.notifyDataRehashed(CacheNotifierImpl.java:576)
                              at org.infinispan.statetransfer.StateConsumerImpl.onTopologyUpdate(StateConsumerImpl.java:386)
                              at org.infinispan.statetransfer.StateTransferManagerImpl.doTopologyUpdate(StateTransferManagerImpl.java:198)
                              at org.infinispan.statetransfer.StateTransferManagerImpl.access$000(StateTransferManagerImpl.java:45)
                              at org.infinispan.statetransfer.StateTransferManagerImpl$1.updateConsistentHash(StateTransferManagerImpl.java:113)
                              at org.infinispan.topology.LocalTopologyManagerImpl.doHandleTopologyUpdate(LocalTopologyManagerImpl.java:285)
                              at org.infinispan.topology.LocalTopologyManagerImpl$1.run(LocalTopologyManagerImpl.java:218)
                              at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
                              at java.util.concurrent.FutureTask.run(FutureTask.java:266)
                              at org.infinispan.executors.SemaphoreCompletionService$QueueingTask.runInternal(SemaphoreCompletionService.java:173)
                              at org.infinispan.executors.SemaphoreCompletionService$QueueingTask.run(SemaphoreCompletionService.java:151)
                              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
                              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
                              at java.lang.Thread.run(Thread.java:745)
                      Caused by: org.infinispan.commons.CacheException: javax.transaction.SystemException: Unable to rollback transaction
                              at org.wildfly.clustering.ee.infinispan.ActiveTransactionBatch.discard(ActiveTransactionBatch.java:59)
                              at org.wildfly.clustering.web.infinispan.session.InfinispanSessionManager.schedule(InfinispanSessionManager.java:369)
                              at org.wildfly.clustering.web.infinispan.session.InfinispanSessionManager.dataRehashed(InfinispanSessionManager.java:348)
                              at sun.reflect.GeneratedMethodAccessor158.invoke(Unknown Source)
                              at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                              at java.lang.reflect.Method.invoke(Method.java:497)
                              at org.infinispan.notifications.impl.AbstractListenerImpl$ListenerInvocationImpl$1.run(AbstractListenerImpl.java:286)
                              ... 19 more
                      Caused by: javax.transaction.SystemException: Unable to rollback transaction
                              at org.infinispan.transaction.tm.DummyTransaction.rollback(DummyTransaction.java:130)
                              at org.infinispan.transaction.tm.DummyBaseTransactionManager.rollback(DummyBaseTransactionManager.java:95)
                              at org.wildfly.clustering.ee.infinispan.ActiveTransactionBatch.discard(ActiveTransactionBatch.java:57)
                              ... 25 more
                      Caused by: javax.transaction.HeuristicMixedException
                              at org.infinispan.transaction.tm.DummyTransaction.finishResource(DummyTransaction.java:404)
                              at org.infinispan.transaction.tm.DummyTransaction.rollbackResources(DummyTransaction.java:424)
                              at org.infinispan.transaction.tm.DummyTransaction.runCommit(DummyTransaction.java:300)
                              at org.infinispan.transaction.tm.DummyTransaction.rollback(DummyTransaction.java:127)
                              ... 27 more
                      Caused by: javax.transaction.xa.XAException
                              at org.infinispan.transaction.impl.TransactionCoordinator.rollback(TransactionCoordinator.java:182)
                              at org.infinispan.transaction.xa.TransactionXaAdapter.rollback(TransactionXaAdapter.java:125)
                              at org.infinispan.transaction.tm.DummyTransaction.finishResource(DummyTransaction.java:372)
                              ... 30 more
                      Caused by: org.infinispan.util.concurrent.TimeoutException: Timed out after 17.5 seconds waiting for a response from dc01-wild01
                              at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processCalls(CommandAwareRpcDispatcher.java:509)
                              at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommands(CommandAwareRpcDispatcher.java:152)
                              at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:564)
                              at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:287)
                              at org.infinispan.interceptors.distribution.TxDistributionInterceptor.visitRollbackCommand(TxDistributionInterceptor.java:236)
                              at org.infinispan.commands.tx.RollbackCommand.acceptVisitor(RollbackCommand.java:40)
                              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)
                              at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:111)
                              at org.infinispan.commands.AbstractVisitor.visitRollbackCommand(AbstractVisitor.java:128)
                              at org.infinispan.commands.tx.RollbackCommand.acceptVisitor(RollbackCommand.java:40)
                              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)
                              at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:111)
                              at org.infinispan.commands.AbstractVisitor.visitRollbackCommand(AbstractVisitor.java:128)
                              at org.infinispan.commands.tx.RollbackCommand.acceptVisitor(RollbackCommand.java:40)
                              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)
                              at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:111)
                              at org.infinispan.commands.AbstractVisitor.visitRollbackCommand(AbstractVisitor.java:128)
                              at org.infinispan.commands.tx.RollbackCommand.acceptVisitor(RollbackCommand.java:40)
                              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)
                              at org.infinispan.interceptors.locking.AbstractTxLockingInterceptor.visitRollbackCommand(AbstractTxLockingInterceptor.java:56)
                              at org.infinispan.commands.tx.RollbackCommand.acceptVisitor(RollbackCommand.java:40)
                              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)
                              at org.infinispan.interceptors.NotificationInterceptor.visitRollbackCommand(NotificationInterceptor.java:50)
                              at org.infinispan.commands.tx.RollbackCommand.acceptVisitor(RollbackCommand.java:40)
                              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)
                              at org.infinispan.interceptors.TxInterceptor.visitRollbackCommand(TxInterceptor.java:239)
                              at org.infinispan.commands.tx.RollbackCommand.acceptVisitor(RollbackCommand.java:40)
                              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)
                              at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:111)
                              at org.infinispan.commands.AbstractVisitor.visitRollbackCommand(AbstractVisitor.java:128)
                              at org.infinispan.statetransfer.TransactionSynchronizerInterceptor.visitRollbackCommand(TransactionSynchronizerInterceptor.java:66)
                              at org.infinispan.commands.tx.RollbackCommand.acceptVisitor(RollbackCommand.java:40)
                              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)
                              at org.infinispan.statetransfer.StateTransferInterceptor.handleTxCommand(StateTransferInterceptor.java:200)
                              at org.infinispan.statetransfer.StateTransferInterceptor.visitRollbackCommand(StateTransferInterceptor.java:98)
                              at org.infinispan.commands.tx.RollbackCommand.acceptVisitor(RollbackCommand.java:40)
                              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)
                              at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:111)
                              at org.infinispan.commands.AbstractVisitor.visitRollbackCommand(AbstractVisitor.java:128)
                              at org.infinispan.commands.tx.RollbackCommand.acceptVisitor(RollbackCommand.java:40)
                              at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97)
                              at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:102)
                              at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:71)
                              at org.infinispan.commands.AbstractVisitor.visitRollbackCommand(AbstractVisitor.java:128)
                              at org.infinispan.commands.tx.RollbackCommand.acceptVisitor(RollbackCommand.java:40)
                              at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:336)
                              at org.infinispan.transaction.impl.TransactionCoordinator.rollbackInternal(TransactionCoordinator.java:231)
                              at org.infinispan.transaction.impl.TransactionCoordinator.rollback(TransactionCoordinator.java:170)
                              ... 32 more
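
The root cause in the trace is the `TimeoutException: Timed out after 17.5 seconds waiting for a response from dc01-wild01` — 17.5 s is exactly the default `remote-timeout` (17500 ms) on WildFly's clustered Infinispan caches, so the rollback RPC during rehash is hitting that limit. One thing worth trying while the node is slow to respond is raising that timeout on the `web` cache container. This is only a sketch (the cache name `dist` and the value 30000 are assumptions; adjust to your actual `standalone-ha.xml`/`domain.xml`):

```xml
<cache-container name="web" default-cache="dist" module="org.wildfly.clustering.web.infinispan">
    <transport lock-timeout="60000"/>
    <!-- remote-timeout defaults to 17500 ms; raised here as an experiment -->
    <distributed-cache name="dist" mode="ASYNC" owners="2" l1-lifespan="0" remote-timeout="30000"/>
</cache-container>
```

or the equivalent (hypothetical cache name again) via the CLI:

```
/subsystem=infinispan/cache-container=web/distributed-cache=dist:write-attribute(name=remote-timeout,value=30000)
```

This would only paper over whatever is making dc01-wild01 slow during the topology update, but it can tell you whether the rebalance failure is purely a timing issue.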