6 Replies Latest reply on Jul 30, 2014 5:08 PM by Rasmi Sahu

    Infinispan replication is not happening between two nodes

    Rasmi Sahu Newbie

      I tried to hit account number 07762471982 on node1 and got the response. When I hit the same account on node1 again, it fetched the result from the cache on node1. I verified the log; it seems the cache is working on the individual node.

       

      But when I hit the same account (07762471982) on node2 (which is cache-clustered with node1), it did not get the response from the cache; it fetched the result from the downstream (external data source). It seems like cache replication is not happening properly between the two nodes.

       

      I captured the log at the Infinispan level and saw an exception in the node1 server log. Below is the exception:

      ----------------------------------------------------------------------------

      17:02:59,066 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-2,preengsdmbl03-20024) ISPN000093: Received new, MERGED cluster view: MergeView::[preengsdmbl03-20024|4] [preengsdmbl03-20024, preengsdmbl01-36260, preengsdmbl02-4901], subgroups=[preengsdmbl01-36260|2] [preengsdmbl03-20024], [preengsdmbl01-36260|3] [preengsdmbl01-36260, preengsdmbl02-4901]

      17:03:20,346 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,preengsdmbl03-20024) ISPN000094: Received new cluster view: [preengsdmbl03-20024|6] [preengsdmbl03-20024, preengsdmbl02-4901]

      17:03:20,348 ERROR [org.infinispan.cacheviews.CacheViewsManagerImpl] (CacheViewInstaller-1,preengsdmbl03-20024) ISPN000172: Failed to prepare view CacheView{viewId=8, members=[preengsdmbl03-20024, preengsdmbl01-36260, preengsdmbl02-4901]} for cache SENET, rolling back to view CacheView{viewId=3, members=[preengsdmbl01-36260, preengsdmbl02-4901, preengsdmbl03-20024]}: java.util.concurrent.ExecutionException: org.infinispan.remoting.transport.jgroups.SuspectException: Suspected member: preengsdmbl01-36260

      at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252) [rt.jar:1.6.0_24]

      at java.util.concurrent.FutureTask.get(FutureTask.java:111) [rt.jar:1.6.0_24]

      at org.infinispan.cacheviews.CacheViewsManagerImpl.clusterPrepareView(CacheViewsManagerImpl.java:320) [infinispan-core-5.1.8.Final-redhat-1.jar:5.1.8.Final-redhat-1]

      at org.infinispan.cacheviews.CacheViewsManagerImpl.clusterInstallView(CacheViewsManagerImpl.java:250) [infinispan-core-5.1.8.Final-redhat-1.jar:5.1.8.Final-redhat-1]

      at org.infinispan.cacheviews.CacheViewsManagerImpl$ViewInstallationTask.call(CacheViewsManagerImpl.java:894) [infinispan-core-5.1.8.Final-redhat-1.jar:5.1.8.Final-redhat-1]

      at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) [rt.jar:1.6.0_24]

      at java.util.concurrent.FutureTask.run(FutureTask.java:166) [rt.jar:1.6.0_24]

      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) [rt.jar:1.6.0_24]

      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) [rt.jar:1.6.0_24]

      at java.lang.Thread.run(Thread.java:679) [rt.jar:1.6.0_24]

      Caused by: org.infinispan.remoting.transport.jgroups.SuspectException: Suspected member: preengsdmbl01-36260

      at org.infinispan.remoting.transport.AbstractTransport.parseResponseAndAddToResponseList(AbstractTransport.java:101) [infinispan-core-5.1.8.Final-redhat-1.jar:5.1.8.Final-redhat-1]

      at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:515) [infinispan-core-5.1.8.Final-redhat-1.jar:5.1.8.Final-redhat-1]

      at org.infinispan.cacheviews.CacheViewsManagerImpl$2.call(CacheViewsManagerImpl.java:304) [infinispan-core-5.1.8.Final-redhat-1.jar:5.1.8.Final-redhat-1]

      at org.infinispan.cacheviews.CacheViewsManagerImpl$2.call(CacheViewsManagerImpl.java:301) [infinispan-core-5.1.8.Final-redhat-1.jar:5.1.8.Final-redhat-1]

      ... 5 more

       

       

       

       

      Can anyone help with this issue?

        • 1. Re: Infinispan replication is not happening between two nodes
          Wolf-Dieter Fink Master

          Looks like your cluster is split because of network issues.

          If you see such messages, you have issues with the cluster. If that happens without an explanation (e.g. a network/power failure), you should review your network.

           

          JGroups uses multicast by default, so you need to have it enabled on your network hardware (if there is a router/firewall in between).

          If this is not possible, refer to the documentation and change the JGroups communication to "TCP" instead of "UDP".

           

          Maybe you are using a simple test with both nodes on the same machine; in this case the cluster/replication should work correctly.
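          [Editor's note] For JBoss AS 7/EAP (which ships this Infinispan 5.1.x), the stack is selected in the jgroups subsystem of standalone-ha.xml. A minimal sketch only: the namespace version depends on your release, the host names and ports are placeholder assumptions, and the remaining protocols in the stack are elided.

```xml
<!-- standalone-ha.xml: select TCP instead of the default UDP stack -->
<subsystem xmlns="urn:jboss:domain:jgroups:1.1" default-stack="tcp">
    <stack name="tcp">
        <transport type="TCP" socket-binding="jgroups-tcp"/>
        <!-- Note: the stock tcp stack discovers peers with MPING, which
             still uses multicast. To avoid multicast entirely, discover
             over TCP with an explicit member list (hosts/ports below are
             placeholder assumptions): -->
        <protocol type="TCPPING">
            <property name="initial_hosts">node1[7600],node2[7600]</property>
            <property name="port_range">0</property>
        </protocol>
        <!-- ... remaining protocols (MERGE2, FD_SOCK, FD, pbcast.*) ... -->
    </stack>
</subsystem>
```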

          • 2. Re: Infinispan replication is not happening between two nodes
            Rasmi Sahu Newbie

            Wolf,

             

             

            Thanks for your response. I observe that both nodes are on the same subnet, but multicast was not enabled between the nodes. And there is no firewall between the nodes.

             

             

            Is it still required to enable multicast traffic between the nodes to support data replication? I ran tcpdump on both nodes; the UDP traffic uses 228.6.7.8 as the destination.

             

             

            Please reply.

             

            Thanks,

            Rasmi

            • 3. Re: Infinispan replication is not happening between two nodes
              Wolf-Dieter Fink Master

              For a cluster you need either multicast (if you use the default configuration) or you have to switch JGroups to the tcp stack.

              In any case you need to check that no cluster problems (suspected members, merges) are shown in the logfiles; all nodes should show the correct number of members after start.

               

              If this is not the case, you have a multicast or network issue which you need to solve.
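              [Editor's note] One way to check multicast itself, independent of the application, is with the small test programs that ship in the JGroups jar. A sketch: the jar path and port are assumptions; the address is the 228.6.7.8 seen in tcpdump above.

```shell
# Terminal on node2: listen on the cluster's multicast address/port
java -cp jgroups.jar org.jgroups.tests.McastReceiverTest -mcast_addr 228.6.7.8 -port 45688

# Terminal on node1: type lines of text; each line should appear on node2
java -cp jgroups.jar org.jgroups.tests.McastSenderTest -mcast_addr 228.6.7.8 -port 45688
```

              If the receiver never prints the sender's lines, multicast is blocked between the hosts.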

              • 4. Re: Infinispan replication is not happening between two nodes
                Rasmi Sahu Newbie

                Wolf,

                  I enabled multicast in our network and all nodes are multicast-enabled.

                 

                To verify this, I executed the route command and got the same details on all three nodes:

                 

                Destination     Gateway         Genmask         Flags Metric Ref    Use Iface

                224.0.0.0       *               240.0.0.0       U     0      0        0 eth1

                 

                 

                After that I checked the Infinispan log on each node and found:

                #################

                Activating infinispan subsystem

                starting Jgroups channel

                Unable to use JGroups configuration mechanism provided in the properties {}. Using default JGroups configuration!

                received a new cluster view [prenode1sdmbl01-60084|0] [Prenode1sdmbl01-60084]

                ################

                 

                But in the node1 log I observe that the other nodes' physical addresses are not found. But there is no exception like suspect or merge. Is this fine? Are all three nodes in the cluster?

                 

                 

                I just opened JConsole and verified the MBean for org.infinispan; the numberOfEntries values are different on all three nodes.

                 

                 

                But I am not sure whether the data is replicating or not. Can you please tell how we can make sure that all three nodes are in the cluster and that data replication happens properly between all three nodes?

                 

                 

                Is there any way we can see through JConsole that replication is happening on all three nodes?

                 

                 

                Please suggest.

                 

                Thanks,

                Rasmi

                • 5. Re: Infinispan replication is not happening between two nodes
                  Wolf-Dieter Fink Master

                  If the nodes are correctly clustered, you should see a message like "cluster members: 3" for each node in the logfiles.
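                  [Editor's note] A quick way to check the same thing from a shell is to grep the latest cluster view, and any suspect/merge events, out of the server log. A sketch against an illustrative sample standing in for standalone/log/server.log (the path and exact message text vary by version):

```shell
# Illustrative sample standing in for standalone/log/server.log
cat > server.log <<'EOF'
10:00:01 INFO  ISPN000094: Received new cluster view: [node01|1] [node01, node02, node03]
10:03:05 INFO  ISPN000093: Received new, MERGED cluster view: MergeView::...
EOF

# The member list in the newest ISPN000094 line should name all three nodes
grep 'ISPN000094' server.log | tail -1

# Any suspect/merge events after startup indicate an unstable cluster
grep -cEi 'ISPN000093|suspect' server.log
```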

                  • 6. Re: Infinispan replication is not happening between two nodes
                    Rasmi Sahu Newbie


                    Wolf,

                     

                    After doing the multicast configuration I deployed the cache application on the JBoss server. During server startup it received a cluster view containing all three nodes. Below are the logs:

                     

                    00:30,691 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,preengsdmbl01-9019) ISPN000094: Received new cluster view: [Node01-9019|1] [Node01-9019, Node02-26250,Node03-64128]

                     

                     

                    After 3 minutes all nodes get merge requests, and finally each ends up isolated as a single node, out of the cluster.

                     

                    07:14:02,214 INFO  [com.hp.vipb.jmx.VIPBMBeanFacade] (Timer-4) End invokeMBean

                    07:14:05,211 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,preengsdmbl03-64128) ISPN000093: Received new, MERGED cluster view: MergeView::[Node02-26250|5] [Node02-26250, Node01-2748, Node03-64128], subgroups=[Node02-26250|3] [Node02-26250], [Node01-2748|4] [Node01-9019, Node03-64128]

                     

                    At the end the log message shows the cluster view as a single node:

                    17:03:51,076 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,Node01-9019) ISPN000094: Received new cluster view: [Node01-9019|2] [Node01-9019]

                     

                     

                    It seems like the cluster is not stable. Can you please help fix this issue?

                     

                    Thanks,

                    Rasmi
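
                    [Editor's note] Views that form correctly and then split a few minutes later, as above, often mean JGroups failure detection is firing during long pauses (GC, network saturation). One common mitigation is to relax the FD protocol's timing in the jgroups subsystem; a sketch only, where the values are illustrative assumptions, not recommendations, and the other protocols in the stack are elided:

```xml
<!-- standalone-ha.xml, jgroups subsystem: give suspected members more
     time to answer heartbeats before they are evicted from the view -->
<stack name="udp">
    <transport type="UDP" socket-binding="jgroups-udp"/>
    <!-- ... discovery and merge protocols ... -->
    <protocol type="FD">
        <property name="timeout">12000</property>  <!-- ms between heartbeats; illustrative -->
        <property name="max_tries">5</property>    <!-- misses tolerated before suspecting -->
    </protocol>
    <!-- ... remaining protocols ... -->
</stack>
```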