
    Infinispan cluster formation not working with Wildfly 8.1

    networker

      Hello,

       

      I have a strange problem with Wildfly 8.1 regarding clustered Infinispan caches. Regardless of which JGroups protocol stack (UDP multicast or TCP) I use, Infinispan cluster formation does not work. Each node always forms a cluster on its own. I have tried several custom JGroups protocol configurations, as well as the standalone HA configuration (standalone-ha.xml) shipped with Wildfly. The result is always the same.

       

      In my cluster test application, I am using the Infinispan and JGroups implementations provided by Wildfly 8.1 (i.e., Infinispan 6.0.2-Final and JGroups 3.4.3-Final). The cluster separation has been verified by a test app using an invalidation cache, where keys do not get invalidated (although they should), and also by checking the coordinator-address of the cluster in the Wildfly "Management Model View". Each node believes it is the coordinator of the cluster.

       

      In a cluster setup with two nodes, one node complains in its server log that it is dropping a unicast message to a wrong destination:

      2014-10-17 09:58:30,594 WARN  [org.jgroups.protocols.TP$ProtocolAdapter] (INT-1,shared=udp) JGRP000031: <host2>/test: dropping unicast message to wrong destination <host1>/test

       

      Important to note:

      - For testing, no firewall is running on either cluster node.

      - When using the JGroups UDP multicast protocol stack, each node can see the JGroups messages from all other nodes. This has been verified with the McastReceiverTest class of the JGroups implementation.

      - All JGroups stack configurations I tried with Wildfly 8.1 work with JBoss AS 7.1.1!

       

      Has anybody experienced similar problems with Wildfly 8.1, or does anybody have an idea what the reason may be?

       

      Thanks in advance,

      Roland

        • 1. Re: Infinispan cluster formation not working with Wildfly 8.1
          networker

          Minor correction: this issue would better be titled "JGroups cluster formation not working with Wildfly 8.1". It's more a problem with JGroups than with Infinispan.

          • 2. Re: Infinispan cluster formation not working with Wildfly 8.1
            pferraro

            I know there were fixes to 3.4.x regarding dropped unicasts.  Try updating the jar in the org.jgroups module to 3.4.6.Final.

            • 3. Re: Infinispan cluster formation not working with Wildfly 8.1
              networker

              I tried it, but the problem still persists.

              • 4. Re: Infinispan cluster formation not working with Wildfly 8.1
                pferraro

                The JGroups discovery protocol is responsible for initial cluster member discovery (e.g. PING, MPING, etc.). Presumably, you're using multicast for discovery. Are you certain that the interface to which the relevant sockets are bound has a route to the multicast address in use?
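
                One quick way to check: list the interfaces that are up and multicast-capable, and make sure the address JGroups binds to belongs to one of them. This doesn't verify the multicast route itself, but it quickly rules out binding to an interface that cannot do multicast at all. A minimal sketch using only the plain JDK API (nothing JGroups-specific; the class name is just an example):

                import java.net.InetAddress;
                import java.net.NetworkInterface;
                import java.util.Collections;

                // Prints every interface that is up and multicast-capable, together with its addresses.
                // Run it on each cluster node; jgroups.bind_addr should match one of the printed addresses.
                public class ListMulticastInterfaces {
                    public static void main(String[] args) throws Exception {
                        for (NetworkInterface nic : Collections.list(NetworkInterface.getNetworkInterfaces())) {
                            if (nic.isUp() && nic.supportsMulticast()) {
                                System.out.println(nic.getName() + ":");
                                for (InetAddress addr : Collections.list(nic.getInetAddresses())) {
                                    System.out.println("  " + addr.getHostAddress());
                                }
                            }
                        }
                    }
                }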

                • 5. Re: Infinispan cluster formation not working with Wildfly 8.1
                  belaban

                  Why don't you try a standalone JGroups demo program (e.g. Draw or Chat) with your configuration to see if the JGroups bits work? (A minimal standalone test is sketched after the check list below.) Going through the following check list might also help:

                  • What's your environment? Virtualized?
                  • Is bind_addr set correctly? What does the output show when a cluster node is started? E.g. 127.0.0.1 won't form a cluster across hosts.
                  • Firewall, switch? You mentioned you turned the FW off; does iptables -L show nothing?
                  • Is ip_ttl set to > 0?
                  • What's your JGroups configuration?
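
                  If you don't have the demo classes at hand, a bare-bones test along these lines should also show whether a two-member view ever forms. This is only a sketch against the public JGroups 3.x API; "udp.xml" and "test-cluster" are placeholders for whatever configuration file and cluster name you want to verify:

                  import org.jgroups.JChannel;
                  import org.jgroups.ReceiverAdapter;
                  import org.jgroups.View;

                  // Connects to a cluster with the given JGroups configuration and prints every view change.
                  public class ViewTest {
                      public static void main(String[] args) throws Exception {
                          JChannel channel = new JChannel("udp.xml");   // placeholder: path to the JGroups config to test
                          channel.setReceiver(new ReceiverAdapter() {
                              @Override
                              public void viewAccepted(View view) {
                                  System.out.println("New view: " + view);
                              }
                          });
                          channel.connect("test-cluster");              // placeholder cluster name
                          System.out.println("Connected as " + channel.getAddress());
                          Thread.sleep(60_000);                         // keep the member alive for a minute
                          channel.close();
                      }
                  }

                  If each node only ever prints a view containing itself, the problem is at the JGroups/network level and has nothing to do with Infinispan or Wildfly.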
                  • 6. Re: Infinispan cluster formation not working with Wildfly 8.1
                    networker

                    Hello Paul, Bela,

                     

                    thanks for your responses, and sorry for the delay in replying. I will answer both of your questions at once:

                     

                    - Yes, Paul, I am sure. As I wrote initially, I can see all multicast messages from node 2 on node 1 and vice versa when using JGroups' McastReceiverTest implementation to log the messages. Thus, the datagram packets from the other node do arrive.

                    - Environment: yes, it is virtualized. 2 VMs on one physical node.

                    - jgroups.bind_addr is set to the IP address of the VM's eth0; jboss.bind.address is set to 0.0.0.0.

                    - The VMs are running on the same physical machine, with no switch in between. Firewalls on the VMs are turned off.

                    - I do not set ip_ttl, i.e., I am using the default.

                    - As JGroups configuration, I am using the default configuration from the standalone-ha.xml file shipped with Wildfly 8.1:

                     

                        <subsystem xmlns="urn:jboss:domain:jgroups:2.0" default-stack="udp">
                            <stack name="udp">
                                <transport type="UDP" socket-binding="jgroups-udp"/>
                                <protocol type="PING"/>
                                <protocol type="MERGE3"/>
                                <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
                                <protocol type="FD_ALL"/>
                                <protocol type="VERIFY_SUSPECT"/>
                                <protocol type="pbcast.NAKACK2"/>
                                <protocol type="UNICAST3"/>
                                <protocol type="pbcast.STABLE"/>
                                <protocol type="pbcast.GMS"/>
                                <protocol type="UFC"/>
                                <protocol type="MFC"/>
                                <protocol type="FRAG2"/>
                                <protocol type="RSVP"/>
                            </stack>
                            <stack name="tcp">
                                <transport type="TCP" socket-binding="jgroups-tcp"/>
                                <protocol type="TCPPING">
                                    <property name="initial_hosts">
                                        ${jgroups.tcpping.initial_hosts}
                                    </property>
                                    <property name="port_range">0</property>
                                </protocol>
                                <protocol type="MERGE2"/>
                                <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
                                <protocol type="FD"/>
                                <protocol type="VERIFY_SUSPECT"/>
                                <protocol type="pbcast.NAKACK2"/>
                                <protocol type="UNICAST3"/>
                                <protocol type="pbcast.STABLE"/>
                                <protocol type="pbcast.GMS"/>
                                <protocol type="MFC"/>
                                <protocol type="FRAG2"/>
                                <protocol type="RSVP"/>
                            </stack>
                        </subsystem>

                        ...

                        <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
                            ...
                            <socket-binding name="jgroups-mping" port="0" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45700"/>
                            <socket-binding name="jgroups-tcp" port="7600"/>
                            <socket-binding name="jgroups-tcp-fd" port="57600"/>
                            <socket-binding name="jgroups-udp" port="55200" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45688"/>
                            <socket-binding name="jgroups-udp-fd" port="54200"/>
                            ...
                        </socket-binding-group>

                     

                    Basically, I am using the same JGroups configuration as with JBoss AS 7.1.1, on the same VMs. With JBoss AS 7.1.1, these configurations work fine.

                     

                    I also tried the TCP stack, setting the jgroups.tcpping.initial_hosts property on both hosts. It also fails with Wildfly 8.1 but works with JBoss AS 7.1.1.

                    • 7. Re: Infinispan cluster formation not working with Wildfly 8.1
                      rhusar

                      BTW, the dropping issue is still open for WildFly ([WFLY-2632] JGroups drops unicast messages after shutdown/restart (dropping unicast message to wrong destination) - JBos…), but it was only seen after restarts.

                      • 8. Re: Infinispan cluster formation not working with Wildfly 8.1
                        pferraro

                        Roland Tusch wrote:

                        - jgroups.bind_addr is set to the IP address of the VM's eth0; jboss.bind.address is set to 0.0.0.0.

                        That's your problem.  JGroups won't bind to a wildcard address.

                        See [JGRP-1885] bind_addr of 0.0.0.0 should throw an exception - JBoss Issue Tracker
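
                        As an illustration: a wildcard address is anything for which java.net.InetAddress.isAnyLocalAddress() returns true (0.0.0.0 or ::). A quick sanity check for a candidate bind address could look like the sketch below; this is just an example, not the actual validation JGroups performs:

                        import java.net.InetAddress;

                        // Flags wildcard addresses such as 0.0.0.0 or ::, which JGroups will not bind to.
                        public class BindAddressCheck {
                            public static void main(String[] args) throws Exception {
                                String candidate = args.length > 0 ? args[0] : "0.0.0.0";
                                InetAddress addr = InetAddress.getByName(candidate);
                                if (addr.isAnyLocalAddress()) {
                                    System.out.println(candidate + " is a wildcard address - use a concrete interface address instead");
                                } else {
                                    System.out.println(candidate + " looks usable as a bind address");
                                }
                            }
                        }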

                        • 9. Re: Infinispan cluster formation not working with Wildfly 8.1
                          networker

                          Hi Paul,

                           

                          you saved my day! :-) Thanks a lot!! That really was the problem. Setting the jboss.bind.address property to a wildcard address worked with JBoss AS 7.1.1, but it no longer works with Wildfly 8.1. When I set it to eth0's IP address, cluster formation works fine. There must have been a change in the JGroups implementation between 3.0.6 and 3.4.3 regarding this binding.

                           

                          Thanks all for helping on this issue!

                           

                          Have a nice day.