
    Cluster-nodes don't find each other

    apatispelikan

      Hello,

       

      I use JBoss AS7 in domain mode on two physical nodes. We run an application for several customers and each customer has its own server-group (with its own two server processes, one per physical node). We started a cluster for our first customer and it works. Now we wanted to start the next server-group, but those server processes do not form a cluster; each works only on its own.

       

      First of all, UDP seems to work: I ran several JGroups test applications and they all worked fine. Examples:

      Node 2: /opt/jdk1.7/bin/java -cp /appl/domaincontroller/jboss-as-7.1.2.Final/modules/org/jgroups/main/jgroups-3.0.9.Final.jar -Djava.net.preferIPv4Stack=true org.jgroups.tests.McastSenderTest -bind_addr 10.9.0.12 -mcast_addr 228.1.2.23 -port 4711 -ttl 9

      Node 1: /opt/jdk1.7/bin/java -cp /appl/domaincontroller/jboss-as-7.1.2.Final/modules/org/jgroups/main/jgroups-3.0.9.Final.jar -Djava.net.preferIPv4Stack=true org.jgroups.tests.McastReceiverTest -bind_addr 10.9.0.11 -mcast_addr 228.1.2.23 -port 4711

      --> works

      Node 2: /opt/jdk1.7/bin/java -cp /appl/domaincontroller/jboss-as-7.1.2.Final/modules/org/jgroups/main/jgroups-3.0.9.Final.jar -Djava.net.preferIPv4Stack=true org.jgroups.tests.McastReceiverTest -bind_addr 10.9.0.12 -mcast_addr 228.1.2.23 -port 4711

      Node 1: /opt/jdk1.7/bin/java -cp /appl/domaincontroller/jboss-as-7.1.2.Final/modules/org/jgroups/main/jgroups-3.0.9.Final.jar -Djava.net.preferIPv4Stack=true org.jgroups.tests.McastSenderTest -bind_addr 10.9.0.11 -mcast_addr 228.1.2.23 -port 4711 -ttl 9

      --> works

      Node 1: /opt/jdk1.7/bin/java -cp  /appl/domaincontroller/jboss-as-7.1.2.Final/modules/org/jgroups/main/jgroups-3.0.9.Final.jar -Djava.net.preferIPv4Stack=true org.jgroups.demos.Chat

      Node 2: /opt/jdk1.7/bin/java -cp  /appl/domaincontroller/jboss-as-7.1.2.Final/modules/org/jgroups/main/jgroups-3.0.9.Final.jar -Djava.net.preferIPv4Stack=true org.jgroups.demos.Chat

      --> works

       

      I separate the customers' cluster traffic by using different UDP and TCP ports. I also separate business traffic from cluster traffic by using two physical network interfaces per node. This is the relevant section of domain.xml:

          <subsystem xmlns="urn:jboss:domain:jgroups:1.1" default-stack="tcp">

           ...

          <socket-binding-groups>

              <socket-binding-group name="full-ha-sockets" default-interface="public">

                  <socket-binding name="ajp" port="9000"/>

                  <socket-binding name="http" port="9001"/>

                  <socket-binding name="https" port="9002"/>

                  <socket-binding name="jacorb" interface="unsecure" port="9003"/>

                  <socket-binding name="jacorb-ssl" interface="unsecure" port="9004"/>

                  <socket-binding name="jgroups-diagnostics" interface="cluster" port="0" multicast-address="228.1.2.24" multicast-port="9005"/>

                  <socket-binding name="jgroups-mping" interface="cluster" port="0" multicast-address="228.1.2.23" multicast-port="${jboss.clustergroup.port}"/>

                  <socket-binding name="jgroups-tcp" interface="cluster" port="9007"/>

                  <socket-binding name="jgroups-tcp-fd" interface="cluster" port="9008"/>

                  <socket-binding name="messaging" interface="management" port="9012"/>

                  <socket-binding name="messaging-group" port="0" multicast-address="228.1.2.24" multicast-port="9013"/>

                  <socket-binding name="messaging-throughput" interface="management" port="9014"/>

                  <socket-binding name="osgi-http" interface="management" port="9015"/>

                  <socket-binding name="remoting" interface="management" port="9016"/>

                  <socket-binding name="txn-recovery-environment" interface="management" port="9017"/>

                  <socket-binding name="txn-status-manager" interface="management" port="9018"/>

                  <outbound-socket-binding name="mail-smtp">

                      <remote-destination host="smtp.apa.at" port="25"/>

                  </outbound-socket-binding>

              </socket-binding-group>

          </socket-binding-groups>

      (I use TCP because with UDP I was not able to bind to my addresses. TCP is fine since there are only two nodes, but I want to avoid having to configure the initial hosts statically, so MPING should handle discovery.)

       

      All servers of a server-group have the same system property

      customer A (port-offset=50): <property name="jboss.clustergroup.port" value="9060"/>

      customer B (port-offset=100): <property name="jboss.clustergroup.port" value="9160"/>

      to simulate port-offsets for multicast-ports.
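
      As a reference, here is a minimal sketch of how such a server-group definition with its system property might look in domain.xml (the group and profile names are placeholders, and the exact child-element order may have to follow the domain schema):

          <server-groups>
              <server-group name="customer-a-group" profile="full-ha">
                  <!-- shared by all servers of this group; referenced by the jgroups-mping socket-binding -->
                  <system-properties>
                      <property name="jboss.clustergroup.port" value="9060"/>
                  </system-properties>
                  <socket-binding-group ref="full-ha-sockets"/>
              </server-group>
              <server-group name="customer-b-group" profile="full-ha">
                  <system-properties>
                      <property name="jboss.clustergroup.port" value="9160"/>
                  </system-properties>
                  <socket-binding-group ref="full-ha-sockets"/>
              </server-group>
          </server-groups>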

       

      After starting a server of customer A, "lsof" gives me

      java      32112       nobody  354u  IPv4 122475705      0t0  UDP 228.1.2.23:9060

      java      32112       nobody  357u  IPv4 122475706      0t0  TCP 10.9.0.11:9057 (LISTEN)

      java      32112       nobody  358u  IPv4 122475707      0t0  TCP 10.9.0.11:9058 (LISTEN)

      No matter which customer it is, the output always looks the same (only with different ports). And for the very first server-group, clustering works.

       

      Starting any server for the second customer logs the following:

      16:06:21,850 FINE  [org.jgroups.protocols.FRAG2] (ChannelService lifecycle - 1) received CONFIG event: {bind_addr=/10.9.0.12}

      16:06:21,852 FINE  [org.jgroups.protocols.MPING] (ChannelService lifecycle - 1) bind_addr=/10.9.0.12 mcast_addr=/228.1.2.23, mcast_port=9260

      16:06:21,969 INFO  [stdout] (ChannelService lifecycle - 1)

      16:06:21,969 INFO  [stdout] (ChannelService lifecycle - 1) -------------------------------------------------------------------

      16:06:21,970 INFO  [stdout] (ChannelService lifecycle - 1) GMS: address=slave:customerB02/hibernate, cluster=hibernate, physical address=10.9.0.12:9257

      16:06:21,970 INFO  [stdout] (ChannelService lifecycle - 1) -------------------------------------------------------------------

      ...

      16:06:24,980 FINE  [org.jgroups.protocols.pbcast.NAKACK] (ChannelService lifecycle - 1)

      [setDigest()]

      existing digest:  []

      new digest:       slave:customerB02/hibernate: [0 (0)]

      resulting digest: slave:customerB02/hibernate: [0 (0)]

      16:06:24,980 FINE  [org.jgroups.protocols.pbcast.GMS] (ChannelService lifecycle - 1) slave:customerB02/hibernate: view is [slave:customerB02/hibernate|0] [slave:customerB02/hibernate]

      16:06:24,981 FINE  [org.jgroups.protocols.pbcast.STABLE] (ChannelService lifecycle - 1) resuming message garbage collection

      16:06:24,982 FINE  [org.jgroups.protocols.FD_SOCK] (ChannelService lifecycle - 1) VIEW_CHANGE received: [slave:customerB02/hibernate]

      16:06:24,984 FINE  [org.jgroups.protocols.pbcast.STABLE] (ChannelService lifecycle - 1) [ergonomics] setting max_bytes to 400KB (1 members)

      16:06:24,984 FINE  [org.jgroups.protocols.pbcast.STABLE] (ChannelService lifecycle - 1) resuming message garbage collection

      16:06:24,985 FINE  [org.jgroups.protocols.pbcast.GMS] (ChannelService lifecycle - 1) created group (first member). My view is [slave:customerB02/hibernate|0], impl is org.jgroups.protocols.pbcast.CoordGmsImpl

      ...

      16:06:26,289 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-19) ISPN000078: Starting JGroups Channel

      16:06:26,294 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-19) ISPN000094: Received new cluster view: [slave:customerB02/hibernate|0] [slave:customerB02/hibernate]

      16:06:26,295 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-19) ISPN000079: Cache local address is slave:customerB02/hibernate, physical addresses are [10.9.0.12:9257]

      16:06:26,299 INFO  [org.infinispan.factories.GlobalComponentRegistry] (MSC service thread 1-19) ISPN000128: Infinispan version: Infinispan 'Brahma' 5.1.4.FINAL

      After the three-second timeout MPING gives up and assumes that the process is the first member of the cluster.

       

      During my experiments I noticed that sometimes even the first server-group does not work: if I shut down the server on the second node and start it again, the cluster does not re-form. Only after restarting the server on the first node is the cluster available again. This is the log of the first customer during that scenario.

       

      Node 1 sees the server process on node 2 shutting down:

      16:31:04,688 FINE  [org.jgroups.protocols.FD] (Timer-3,<ADDR>) sending are-you-alive msg to slave:customerA02/hibernate (own address=master:customerA01/hibernate)

      16:31:08,140 FINE  [org.jgroups.protocols.pbcast.GMS] (Incoming-16,null) master:customerA01/hibernate: view is [master:customerA01/hibernate|2] [master:customerA01/hibernate]

      16:31:08,141 FINE  [org.jgroups.protocols.pbcast.STABLE] (Incoming-16,null) resuming message garbage collection

      16:31:08,141 FINE  [org.jgroups.protocols.pbcast.NAKACK] (Incoming-16,null) removed slave:customerA02/hibernate from xmit_table (not member anymore)

      16:31:08,142 FINE  [org.jgroups.protocols.FD_SOCK] (Incoming-16,null) VIEW_CHANGE received: [master:customerA01/hibernate]

      16:31:08,142 FINE  [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,hibernate,master:customerA01/hibernate) peer slave:customerA02/hibernate closed socket gracefully

      16:31:08,445 FINE  [org.jgroups.protocols.pbcast.STABLE] (Incoming-16,null) [ergonomics] setting max_bytes to 400KB (1 members)

      16:31:08,446 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-16,null) ISPN000094: Received new cluster view: [master:customerA01/hibernate|2] [master:customerA01/hibernate]

       

      Starting the server process on node 2 logs this:

      16:40:39,511 FINE  [org.jgroups.protocols.FRAG2] (ChannelService lifecycle - 1) received CONFIG event: {bind_addr=/10.9.0.12}

      16:40:39,513 FINE  [org.jgroups.protocols.MPING] (ChannelService lifecycle - 1) bind_addr=/10.9.0.12 mcast_addr=/228.1.2.23, mcast_port=9060

      16:40:39,560 INFO  [stdout] (ChannelService lifecycle - 1)

      16:40:39,561 INFO  [stdout] (ChannelService lifecycle - 1) -------------------------------------------------------------------

      16:40:39,561 INFO  [stdout] (ChannelService lifecycle - 1) GMS: address=slave:customerA02/hibernate, cluster=hibernate, physical address=10.9.0.12:9057

      16:40:39,561 INFO  [stdout] (ChannelService lifecycle - 1) -------------------------------------------------------------------

      ...

      16:40:42,572 FINE  [org.jgroups.protocols.pbcast.NAKACK] (ChannelService lifecycle - 1)

      [setDigest()]

      existing digest:  []

      new digest:       slave:customerA02/hibernate: [0 (0)]

      resulting digest: slave:customerA02/hibernate: [0 (0)]

      16:40:42,572 FINE  [org.jgroups.protocols.pbcast.GMS] (ChannelService lifecycle - 1) slave:customerA02/hibernate: view is [slave:customerA02/hibernate|0] [slave:customerA02/hibernate]

      16:40:42,573 FINE  [org.jgroups.protocols.pbcast.STABLE] (ChannelService lifecycle - 1) resuming message garbage collection

      16:40:42,574 FINE  [org.jgroups.protocols.FD_SOCK] (ChannelService lifecycle - 1) VIEW_CHANGE received: [slave:customerA02/hibernate]

      16:40:42,575 FINE  [org.jgroups.protocols.pbcast.STABLE] (ChannelService lifecycle - 1) [ergonomics] setting max_bytes to 400KB (1 members)

      16:40:42,576 FINE  [org.jgroups.protocols.pbcast.STABLE] (ChannelService lifecycle - 1) resuming message garbage collection

      16:40:42,577 FINE  [org.jgroups.protocols.pbcast.GMS] (ChannelService lifecycle - 1) created group (first member). My view is [slave:customerA02/hibernate|0], impl is org.jgroups.protocols.pbcast.CoordGmsImpl

      ...

      16:40:43,914 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-8) ISPN000078: Starting JGroups Channel

      16:40:43,918 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-8) ISPN000094: Received new cluster view: [slave:customerA02/hibernate|0] [slave:customerA02/hibernate]

      16:40:43,918 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-8) ISPN000079: Cache local address is slave:customerA02/hibernate, physical addresses are [10.9.0.12:9057]

      ...

      16:41:03,862 WARNING [org.jgroups.protocols.TCP] (ReceiverThread) null: no physical address for d9235711-056b-197b-ce36-0c79c9426483, dropping message

      16:42:45,613 WARNING [org.jgroups.protocols.TCP] (ReceiverThread) null: no physical address for d9235711-056b-197b-ce36-0c79c9426483, dropping message

      16:43:41,405 WARNING [org.jgroups.protocols.TCP] (ReceiverThread) null: no physical address for d9235711-056b-197b-ce36-0c79c9426483, dropping message

      16:44:24,430 WARNING [org.jgroups.protocols.TCP] (ReceiverThread) null: no physical address for d9235711-056b-197b-ce36-0c79c9426483, dropping message

      The server thinks it is alone, although the server on node 1, which was previously part of the cluster, is still running - it is simply not found by the new process.

       

      Now I stop and start the server-process on node 1 and it finds the cluster:

      16:45:07,433 FINE  [org.jgroups.protocols.FRAG2] (ChannelService lifecycle - 1) received CONFIG event: {bind_addr=/10.9.0.11}

      16:45:07,434 FINE  [org.jgroups.protocols.MPING] (ChannelService lifecycle - 1) bind_addr=/10.9.0.11 mcast_addr=/228.1.2.23, mcast_port=9060

      ...

      16:45:07,527 INFO  [stdout] (ChannelService lifecycle - 1)

      16:45:07,527 INFO  [stdout] (ChannelService lifecycle - 1) -------------------------------------------------------------------

      16:45:07,527 INFO  [stdout] (ChannelService lifecycle - 1) GMS: address=master:customerA01/hibernate, cluster=hibernate, physical address=10.9.0.11:9057

      16:45:07,527 INFO  [stdout] (ChannelService lifecycle - 1) -------------------------------------------------------------------

      16:45:07,540 FINE  [org.jgroups.protocols.pbcast.GMS] (ChannelService lifecycle - 1) election results: {slave:customerA02/hibernate=1}

      16:45:07,541 FINE  [org.jgroups.protocols.pbcast.GMS] (ChannelService lifecycle - 1) sending JOIN(master:customerA01/hibernate) to slave:customerA02/hibernate

      16:45:07,619 FINE  [org.jgroups.protocols.pbcast.NAKACK] (ChannelService lifecycle - 1)

      [setDigest()]

      existing digest:  []

      new digest:       slave:customerA02/hibernate: [15 (15)], master:customerA01/hibernate: [0 (0)]

      resulting digest: master:customerA01/hibernate: [0 (0)], slave:customerA02/hibernate: [15 (15)]

      16:45:07,619 FINE  [org.jgroups.protocols.pbcast.GMS] (ChannelService lifecycle - 1) master:customerA01/hibernate: view is [slave:customerA02/hibernate|1] [slave:customerA02/hibernate, master:customerA01/hibernate]

      16:45:07,620 FINE  [org.jgroups.protocols.FD_SOCK] (ChannelService lifecycle - 1) VIEW_CHANGE received: [slave:customerA02/hibernate, master:customerA01/hibernate]

      16:45:07,622 FINE  [org.jgroups.protocols.pbcast.STABLE] (ChannelService lifecycle - 1) [ergonomics] setting max_bytes to 800KB (2 members)

      16:45:07,624 FINE  [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,hibernate,master:customerA01/hibernate) ping_dest is slave:customerA02/hibernate, pingable_mbrs=[slave:customerA02/hibernate, master:customerA01/hibernate]

      ...

      16:45:10,058 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-19) ISPN000078: Starting JGroups Channel

      16:45:10,062 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-19) ISPN000094: Received new cluster view: [slave:customerA02/hibernate|1] [slave:customerA02/hibernate, master:customerA01/hibernate]

      16:45:10,062 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-19) ISPN000079: Cache local address is master:customerA01/hibernate, physical addresses are [10.9.0.11:9057]

      16:45:10,065 INFO  [org.infinispan.factories.GlobalComponentRegistry] (MSC service thread 1-19) ISPN000128: Infinispan version: Infinispan 'Brahma' 5.1.4.FINAL

      ...

      16:45:11,409 INFO  [org.jboss.as] (Controller Boot Thread) JBAS015874: JBoss AS 7.1.2.Final "Steropes" started in 9031ms - Started 1021 of 1192 services (155 services are passive or on-demand)

      16:45:13,622 FINE  [org.jgroups.protocols.FD] (Timer-2,<ADDR>) sending are-you-alive msg to slave:customerA02/hibernate (own address=master:customerA01/hibernate)

       

      This is what is logged on node 2 while the server process on node 1 is starting:

      16:45:07,598 FINE  [org.jgroups.protocols.pbcast.GMS] (ViewHandler,hibernate,slave:customerA02/hibernate) new=[master:customerA01/hibernate], suspected=[], leaving=[], new view: [slave:customerA02/hibernate|1] [slave:customerA02/hibernate, master:customerA01/hibernate]

      16:45:07,599 FINE  [org.jgroups.protocols.pbcast.STABLE] (ViewHandler,hibernate,slave:customerA02/hibernate) suspending message garbage collection

      16:45:07,600 FINE  [org.jgroups.protocols.pbcast.STABLE] (ViewHandler,hibernate,slave:customerA02/hibernate) resume task started, max_suspend_time=33000

      16:45:07,602 FINE  [org.jgroups.protocols.pbcast.NAKACK] (Incoming-1,null)

      [setDigest()]

      existing digest:  slave:customerA02/hibernate: [15 (15)]

      new digest:       slave:customerA02/hibernate: [14 (14)], master:customerA01/hibernate: [0 (0)]

      resulting digest: master:customerA01/hibernate: [0 (0)], slave:customerA02/hibernate: [15 (15)]

      16:45:07,603 FINE  [org.jgroups.protocols.pbcast.GMS] (Incoming-1,null) slave:customerA02/hibernate: view is [slave:customerA02/hibernate|1] [slave:customerA02/hibernate, master:customerA01/hibernate]

      16:45:07,604 FINE  [org.jgroups.protocols.FD_SOCK] (Incoming-1,null) VIEW_CHANGE received: [slave:customerA02/hibernate, master:customerA01/hibernate]

      16:45:07,607 FINE  [org.jgroups.protocols.pbcast.STABLE] (Incoming-1,null) [ergonomics] setting max_bytes to 800KB (2 members)

      16:45:07,606 FINE  [org.jgroups.protocols.FD_SOCK] (FD_SOCK pinger,hibernate,slave:customerA02/hibernate) ping_dest is master:customerA01/hibernate, pingable_mbrs=[slave:customerA02/hibernate, master:customerA01/hibernate]

      16:45:07,607 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,null) ISPN000094: Received new cluster view: [slave:customerA02/hibernate|1] [slave:customerA02/hibernate, master:customerA01/hibernate]

      16:45:07,625 FINE  [org.jgroups.protocols.pbcast.STABLE] (ViewHandler,hibernate,slave:customerA02/hibernate) resuming message garbage collection

      16:45:13,607 FINE  [org.jgroups.protocols.FD] (Timer-5,<ADDR>) sending are-you-alive msg to master:customerA01/hibernate (own address=slave:customerA02/hibernate)

       

      I do not understand the reason for this. I tried to find a solution on the web, but no one seems to experience the same problem. I don't know whether this is a network issue, but I don't think so because the JGroups tests work. Is there any reason why there can be only one cluster at a time? Or what else could cause the problem I see?

       

      Regards,

      Stephan

        • 1. Re: Cluster-nodes don't find each other
          apatispelikan

          I did another UDP multicast test and it worked, so this is another hint that it is not a networking issue.

           

          Node 1 multicast-listener:

          $ iperf -s -u -B 228.1.2.23 -i 1 -p 5260

          ------------------------------------------------------------

          Server listening on UDP port 5260

          Binding to local address 228.1.2.23

          Joining multicast group  228.1.2.23

          Receiving 1470 byte datagrams

          UDP buffer size: 32.0 MByte (default)

          ------------------------------------------------------------

          [  3] local 228.1.2.23 port 5260 connected with 194.158.132.184 port 34529

          [ ID] Interval       Transfer     Bandwidth        Jitter   Lost/Total Datagrams

          [  3]  0.0- 1.0 sec   128 KBytes  1.05 Mbits/sec   0.003 ms    0/   89 (0%)

          [  3]  1.0- 2.0 sec   128 KBytes  1.05 Mbits/sec   0.003 ms    0/   89 (0%)

          [  3]  2.0- 3.0 sec   128 KBytes  1.05 Mbits/sec   0.004 ms    0/   89 (0%)

          [  3]  0.0- 3.0 sec   386 KBytes  1.05 Mbits/sec   0.004 ms    0/  269 (0%)

          [  4] local 228.1.2.23 port 5260 connected with 194.158.132.185 port 56747

          [  4]  0.0- 1.0 sec   128 KBytes  1.05 Mbits/sec   0.034 ms    0/   89 (0%)

          [  4]  1.0- 2.0 sec   128 KBytes  1.05 Mbits/sec   0.071 ms    0/   89 (0%)

          [  4]  2.0- 3.0 sec   128 KBytes  1.05 Mbits/sec   0.032 ms    0/   89 (0%)

          [  4]  0.0- 3.0 sec   386 KBytes  1.05 Mbits/sec   0.033 ms    0/  269 (0%)

           

          Node 2 multicast-listener:

          $ iperf -s -u -B 228.1.2.23 -i 1 -p 5260

          ------------------------------------------------------------

          Server listening on UDP port 5260

          Binding to local address 228.1.2.23

          Joining multicast group  228.1.2.23

          Receiving 1470 byte datagrams

          UDP buffer size: 32.0 MByte (default)

          ------------------------------------------------------------

          [  3] local 228.1.2.23 port 5260 connected with 194.158.132.184 port 34529

          [ ID] Interval       Transfer     Bandwidth        Jitter   Lost/Total Datagrams

          [  3]  0.0- 1.0 sec   128 KBytes  1.05 Mbits/sec   0.051 ms    0/   89 (0%)

          [  3]  1.0- 2.0 sec   128 KBytes  1.05 Mbits/sec   0.054 ms    0/   89 (0%)

          [  3]  2.0- 3.0 sec   128 KBytes  1.05 Mbits/sec   0.033 ms    0/   89 (0%)

          [  3]  0.0- 3.0 sec   386 KBytes  1.05 Mbits/sec   0.034 ms    0/  269 (0%)

          [  4] local 228.1.2.23 port 5260 connected with 194.158.132.185 port 56747

          [  4]  0.0- 1.0 sec   128 KBytes  1.05 Mbits/sec   0.002 ms    0/   89 (0%)

          [  4]  1.0- 2.0 sec   128 KBytes  1.05 Mbits/sec   0.019 ms    0/   89 (0%)

          [  4]  2.0- 3.0 sec   128 KBytes  1.05 Mbits/sec   0.008 ms    0/   89 (0%)

          [  4]  0.0- 3.0 sec   386 KBytes  1.05 Mbits/sec   0.008 ms    0/  269 (0%)

           

          Node 1 sending-test:

          $ iperf -c 228.1.2.23 -u -T 32 -t 3 -i 1 -p 5260

          ------------------------------------------------------------

          Client connecting to 228.1.2.23, UDP port 5260

          Sending 1470 byte datagrams

          Setting multicast TTL to 32

          UDP buffer size: 1.00 MByte (default)

          ------------------------------------------------------------

          [  3] local 194.158.132.184 port 34529 connected with 228.1.2.23 port 5260

          [ ID] Interval       Transfer     Bandwidth

          [  3]  0.0- 1.0 sec   129 KBytes  1.06 Mbits/sec

          [  3]  1.0- 2.0 sec   128 KBytes  1.05 Mbits/sec

          [  3]  2.0- 3.0 sec   128 KBytes  1.05 Mbits/sec

          [  3]  0.0- 3.0 sec   386 KBytes  1.05 Mbits/sec

          [  3] Sent 269 datagrams

           

          Node 2 sending-test:

          $ iperf -c 228.1.2.23 -u -T 32 -t 3 -i 1 -p 5260

          ------------------------------------------------------------

          Client connecting to 228.1.2.23, UDP port 5260

          Sending 1470 byte datagrams

          Setting multicast TTL to 32

          UDP buffer size: 1.00 MByte (default)

          ------------------------------------------------------------

          [  3] local 194.158.132.185 port 56747 connected with 228.1.2.23 port 5260

          [ ID] Interval       Transfer     Bandwidth

          [  3]  0.0- 1.0 sec   129 KBytes  1.06 Mbits/sec

          [  3]  1.0- 2.0 sec   128 KBytes  1.05 Mbits/sec

          [  3]  2.0- 3.0 sec   128 KBytes  1.05 Mbits/sec

          [  3]  0.0- 3.0 sec   386 KBytes  1.05 Mbits/sec

          [  3] Sent 269 datagrams

          • 2. Re: Cluster-nodes don't find each other
            belaban

             You'll need to separate your clusters cleanly; for instance, jgroups-tcp uses the same port (9007) for both clusters. I suggest making this and jgroups-tcp-fd use a system property (different per cluster), too.
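
             For example, something along these lines (just a sketch - the property names are arbitrary and the defaults after the colon are optional):

             <socket-binding name="jgroups-tcp" interface="cluster" port="${jboss.cluster.tcp.port:7600}"/>
             <socket-binding name="jgroups-tcp-fd" interface="cluster" port="${jboss.cluster.tcp.fd.port:57600}"/>

             Each server-group would then define its own values for these properties, the same way you already do for jboss.clustergroup.port.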

            • 3. Re: Cluster-nodes don't find each other
              belaban

               If this doesn't work, I suggest coming up with a JGroups-only test and, once you get it working, translating it back to JBoss AS 7.

              • 4. Re: Cluster-nodes don't find each other
                apatispelikan

                Hello Bela,

                 

                 the port 9007 is increased for each cluster by the JBoss domain controller via the port-offset. In my example, cluster A uses port-offset "50", which results in "9057", and cluster B uses port-offset "250", which results in "9257". This works - I verified it using "lsof":

                 

                $ lsof -i -n -P |grep 9057

                java      32112       ipadsrv  357u  IPv4 122475706      0t0  TCP 10.9.0.11:9057 (LISTEN)

                $ lsof -i -n -P |grep 9257

                java      1607       ipadsrv  357u  IPv4 124464833      0t0  TCP 10.9.0.11:9257 (LISTEN)

                 

                 So this should not be the problem. I used lsof to ensure that every process binds as configured, and that works as expected.
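
                 For reference, the offsets come from the server definitions in host.xml, roughly like this (a sketch with shortened names, not my exact file):

                 <servers>
                     <server name="customerA01" group="customer-a-group" auto-start="true">
                         <socket-bindings port-offset="50"/>
                     </server>
                     <server name="customerB01" group="customer-b-group" auto-start="true">
                         <socket-bindings port-offset="250"/>
                     </server>
                 </servers>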

                 

                Regards,

                Stephan

                • 5. Re: Cluster-nodes don't find each other
                  apatispelikan

                  Hello Bela,

                   

                 >If this doesn't work, I suggest coming up with a JGroups-only test and, once you get it working, translating it back to JBoss AS 7.

                   

                   I can try this, but I do not know the current JGroups settings because JBoss AS7 is a zero-config system - I cannot see the default configuration! So I would have to start from scratch, which is a problem because I don't know much about those details :-(. But I will do my very best - results will be posted.

                   

                  Regards,

                  Stephan

                  • 6. Re: Cluster-nodes don't find each other
                    rhusar

                     I can try this, but I do not know the current JGroups settings because JBoss AS7 is a zero-config system - I cannot see the default configuration!

                     

                     Well, you can use the CLI to navigate to the subsystem and list the current values as well as the defaults.
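
                     For example (a sketch - output omitted here, and the exact parameters may vary by version):

                     [standalone@localhost:9999 /] cd /subsystem=jgroups
                     [standalone@localhost:9999 subsystem=jgroups] ls
                     [standalone@localhost:9999 subsystem=jgroups] /subsystem=jgroups/stack=tcp:read-resource(recursive=true,include-defaults=true)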

                     

                     So I would have to start from scratch, which is a problem because I don't know much about those details :-(. But I will do my very best - results will be posted.

                     We have implemented an operation that exports the AS7 configuration to native JGroups configuration, so it is no problem for you to try plain JGroups.

                     

                    [rhusar@rhusar jboss-as-7.1.1.Final]$ ./bin/jboss-cli.sh --connect
                    [standalone@localhost:9999 /] /subsystem=jgroups/stack=udp:export-native-configuration
                    {
                        "outcome" => "success",
                        "result" => "<config>
                        <UDP oob_thread_pool.max_threads=\"200\" bind_addr=\"127.0.0.1\" oob_thread_pool.keep_alive_time=\"1000\" max_bundle_size=\"64000\" mcast_send_buf_size=\"640000\" diagnostics_addr=\"224.0.75.75\" mcast_recv_buf_size=\"25000000\" bind_port=\"55200\" mcast_port=\"45688\" thread_pool.min_threads=\"20\" oob_thread_pool.rejection_policy=\"discard\" thread_pool.max_threads=\"200\" enable_diagnostics=\"true\" ucast_send_buf_size=\"640000\" ucast_recv_buf_size=\"20000000\" thread_pool.enabled=\"true\" oob_thread_pool.enabled=\"true\" ip_ttl=\"2\" enable_bundling=\"false\" thread_pool.rejection_policy=\"discard\" discard_incompatible_packets=\"true\" thread_pool.keep_alive_time=\"5000\" diagnostics_port=\"7500\" thread_pool.queue_enabled=\"true\" mcast_addr=\"230.0.0.4\" singleton_name=\"udp\" max_bundle_timeout=\"30\" oob_thread_pool.queue_enabled=\"false\" oob_thread_pool.min_threads=\"20\" thread_pool.queue_max_size=\"1000\"/>
                        <PING num_initial_members=\"3\" timeout=\"2000\"/>
                        <MERGE2 min_interval=\"20000\" max_interval=\"100000\"/>
                        <FD_SOCK bind_addr=\"127.0.0.1\" start_port=\"54200\"/>
                        <FD max_tries=\"5\" timeout=\"6000\"/>
                        <VERIFY_SUSPECT bind_addr=\"127.0.0.1\" timeout=\"1500\"/>
                        <BARRIER />
                        <pbcast.NAKACK use_mcast_xmit=\"true\" retransmit_timeout=\"300,600,1200,2400,4800\" discard_delivered_msgs=\"true\"/>
                        <UNICAST2 timeout=\"300,600,1200,2400,3600\"/>
                        <pbcast.STABLE desired_avg_gossip=\"50000\" max_bytes=\"400000\" stability_delay=\"1000\"/>
                        <pbcast.GMS print_local_addr=\"true\" view_bundling=\"true\" join_timeout=\"3000\" view_ack_collection_timeout=\"5000\" resume_task_timeout=\"7500\"/>
                        <UFC max_credits=\"2000000\" ignore_synchronous_response=\"true\"/>
                        <MFC max_credits=\"2000000\" ignore_synchronous_response=\"true\"/>
                        <FRAG2 frag_size=\"60000\"/>
                    </config>"
                    }
                    [standalone@localhost:9999 /] 
                    

                     

                    Implemented as part of https://issues.jboss.org/browse/AS7-1908

                     

                    HTH,

                    Rado

                    • 7. Re: Cluster-nodes don't find each other
                      apatispelikan

                      This is the config exported:

                       

                      <config>
                                <TCP
                                          oob_thread_pool.max_threads="200"
                                          bind_addr="10.9.0.11"
                                          oob_thread_pool.keep_alive_time="1000"
                                          max_bundle_size="64000"
                                          bind_port="9257"
                                          thread_pool.min_threads="20"
                                          oob_thread_pool.rejection_policy="discard"
                                          thread_pool.max_threads="200"
                                          enable_diagnostics="false"
                                          thread_pool.enabled="true"
                                          oob_thread_pool.enabled="true"
                                          send_buf_size="640000"
                                          use_send_queues="false"
                                          enable_bundling="false"
                                          thread_pool.rejection_policy="discard"
                                          discard_incompatible_packets="true"
                                          thread_pool.keep_alive_time="5000"
                                          thread_pool.queue_enabled="true"
                                          singleton_name="tcp"
                                          max_bundle_timeout="30"
                                          oob_thread_pool.queue_enabled="false"
                                          sock_conn_timeout="300"
                                          oob_thread_pool.min_threads="20"
                                          recv_buf_size="20000000"
                                          thread_pool.queue_max_size="1000"/>
                                <MPING
                                          bind_addr="10.9.0.11"
                                          num_initial_members="3"
                                          mcast_port="9260"
                                          mcast_addr="228.1.2.23"
                                          timeout="3000"
                                          ip_ttl="2"/>
                                <MERGE2
                                          min_interval="20000"
                                          max_interval="100000"/>
                                <FD_SOCK
                                          bind_addr="10.9.0.11"
                                          start_port="9258"/>
                                <FD
                                          max_tries="5"
                                          timeout="6000"/>
                                <VERIFY_SUSPECT
                                          bind_addr="10.9.0.11"
                                          timeout="1500"/>
                                <BARRIER/>
                                <pbcast.NAKACK
                                          use_mcast_xmit="false"
                                          retransmit_timeout="300,600,1200,2400,4800"
                                          discard_delivered_msgs="true"/>
                                <UNICAST2
                                          max_bytes="1m"
                                          timeout="300,600,1200,2400,3600"
                                          stable_interval="5000"/>
                                <pbcast.STABLE
                                          desired_avg_gossip="50000"
                                          max_bytes="400000"
                                          stability_delay="1000"/>
                                <pbcast.GMS
                                          print_local_addr="true"
                                          view_bundling="true"
                                          join_timeout="3000"
                                          view_ack_collection_timeout="5000"
                                          resume_task_timeout="7500"/>
                                <UFC
                                          max_credits="2000000"
                                          ignore_synchronous_response="true"/>
                                <MFC
                                          max_credits="2000000"
                                          ignore_synchronous_response="true"/>
                                <FRAG2
                                          frag_size="60000"/>
                                <RSVP
                                          resend_interval="500"
                                          ack_on_delivery="false"
                                          timeout="60000"/>
                      </config>
                      
                      

                       

                       The first thing I noticed is that my JBoss configuration adaptations are ignored. My domain.xml contains

                       

                                      <stack name="tcp">
                                          <transport type="TCP" socket-binding="jgroups-tcp"/>
                                          <protocol type="MPING" socket-binding="jgroups-mping">
                                              <property name="num_initial_members">
                                                  2
                                              </property>
                                              <property name="ip_ttl">
                                                  32
                                              </property>
                                              <property name="timeout">
                                                  6000
                                              </property>
                                          </protocol>
                                          <protocol type="MERGE2"/>
                                          ....
                      

                       but my configuration was not applied! The syntax of my adaptations seems to be correct because JBoss starts without any error, and the spelling of the property names is correct. Is there something wrong with my configuration? I'm afraid that even if I find a working configuration, I will not be able to apply it in JBoss because of this problem.

                       

                       Nevertheless, JGroups does not work, with or without my adaptations. So I will proceed...

                      • 8. Re: Cluster-nodes don't find each other
                        rachmato

                        Hi Stephan

                         

                        I tried to do the following using a master (host 192.168.0.102) and a slave (host 192.168.0.103):

                         - define a server group called other-server-group which contains the servers master/server-three and slave/server-three

                        - define a server group called another-server-group which contains master/server-four and slave/server-four

                         

                        To separate processes on the same host and server groups in the cluster, I used:

                        - port offsets of 250 for master/server-three and slave/server-three

                        - port offsets of 350 for master/server-four and slave/server-four

                        - a system property jboss.default.multicast.address of 239.11.12.13 for other-server-group

                        - a system property jboss.default.multicast.address of 239.11.12.14 for another-server-group

                         

                        Using the admin console attached to master, I started the servers on each host and once they started successfully I deployed a sample clustered app to each server-group.

                         

                         I saw the servers in the individual server groups forming clusters, both with UDP and with TCP (by changing the default stack setting).

                         

                        For example:

                         

                         

                        [snip]

                         

                        [Server:server-four] 21:54:25,748 INFO  [stdout] (ServerService Thread Pool -- 53)

                        [Server:server-four] 21:54:25,749 INFO  [stdout] (ServerService Thread Pool -- 53) -------------------------------------------------------------------

                        [Server:server-four] 21:54:25,749 INFO  [stdout] (ServerService Thread Pool -- 53) GMS: address=master:server-four/web, cluster=web, physical address=192.168.0.102:7950

                        [Server:server-four] 21:54:25,750 INFO  [stdout] (ServerService Thread Pool -- 53) -------------------------------------------------------------------

                        [Server:server-three] 21:54:25,761 INFO  [stdout] (ServerService Thread Pool -- 53)

                        [Server:server-three] 21:54:25,761 INFO  [stdout] (ServerService Thread Pool -- 53) -------------------------------------------------------------------

                        [Server:server-three] 21:54:25,762 INFO  [stdout] (ServerService Thread Pool -- 53) GMS: address=master:server-three/web, cluster=web, physical address=192.168.0.102:7850

                        [Server:server-three] 21:54:25,762 INFO  [stdout] (ServerService Thread Pool -- 53) -------------------------------------------------------------------

                        [Server:server-four] 21:54:31,843 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ServerService Thread Pool -- 54) ISPN000078: Starting JGroups Channel

                        [Server:server-four] 21:54:31,850 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ServerService Thread Pool -- 54) ISPN000094: Received new cluster view: [master:server-four/web|0] [master:server-four/web]

                        [Server:server-four] 21:54:31,851 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ServerService Thread Pool -- 54) ISPN000079: Cache local address is master:server-four/web, physical addresses are [192.168.0.102:7950]

                        [Server:server-four] 21:54:31,857 INFO  [org.infinispan.factories.GlobalComponentRegistry] (ServerService Thread Pool -- 54) ISPN000128: Infinispan version: Infinispan 'Brahma' 5.1.6.FINAL-redhat-1

                        [Server:server-four] 21:54:31,858 INFO  [org.infinispan.config.ConfigurationValidatingVisitor] (ServerService Thread Pool -- 54) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated.

                        [Server:server-four] 21:54:31,986 INFO  [org.infinispan.config.ConfigurationValidatingVisitor] (ServerService Thread Pool -- 58) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated.

                        [Server:server-four] 21:54:32,031 INFO  [org.infinispan.jmx.CacheJmxRegistration] (ServerService Thread Pool -- 54) ISPN000031: MBeans were successfully registered to the platform mbean server.

                        [Server:server-four] 21:54:32,031 INFO  [org.infinispan.jmx.CacheJmxRegistration] (ServerService Thread Pool -- 58) ISPN000031: MBeans were successfully registered to the platform mbean server.

                        [Server:server-four] 21:54:32,056 INFO  [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 58) JBAS010281: Started repl cache from web container

                        [Server:server-four] 21:54:32,058 INFO  [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 54) JBAS010281: Started default-host/cluster-demo cache from web container

                        [Server:server-four] 21:54:32,071 INFO  [org.jboss.as.clustering] (MSC service thread 1-8) JBAS010238: Number of cluster members: 1

                        [Server:server-four] 21:54:32,113 INFO  [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (MSC service thread 1-8) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated.

                        [Server:server-four] 21:54:32,124 INFO  [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (MSC service thread 1-8) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated.

                        [Server:server-four] 21:54:32,125 INFO  [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (MSC service thread 1-8) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated.

                        [Server:server-four] 21:54:32,125 INFO  [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (MSC service thread 1-8) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated.

                        [Server:server-four] 21:54:32,211 INFO  [org.jboss.web] (MSC service thread 1-8) JBAS018210: Registering web context: /cluster-demo

                        [Server:server-four] 21:54:32,250 INFO  [org.jboss.as.clustering] (Incoming-1,null) JBAS010225: New cluster view for partition web (id: 1, delta: 1, merge: false) : [master:server-four/web, slave:server-four/web]

                        [Server:server-four] 21:54:32,251 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-1,null) ISPN000094: Received new cluster view: [master:server-four/web|1] [master:server-four/web, slave:server-four/web]

                        [Server:server-three] 21:54:32,449 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ServerService Thread Pool -- 57) ISPN000078: Starting JGroups Channel

                        [Server:server-three] 21:54:32,455 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ServerService Thread Pool -- 57) ISPN000094: Received new cluster view: [slave:server-three/web|1] [slave:server-three/web, master:server-three/web]

                        [Server:server-three] 21:54:32,456 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (ServerService Thread Pool -- 57) ISPN000079: Cache local address is master:server-three/web, physical addresses are [192.168.0.102:7850]

                        [Server:server-three] 21:54:32,459 INFO  [org.infinispan.factories.GlobalComponentRegistry] (ServerService Thread Pool -- 57) ISPN000128: Infinispan version: Infinispan 'Brahma' 5.1.6.FINAL-redhat-1

                        [Server:server-three] 21:54:32,460 INFO  [org.infinispan.config.ConfigurationValidatingVisitor] (ServerService Thread Pool -- 57) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated.

                        [Server:server-three] 21:54:32,585 INFO  [org.infinispan.config.ConfigurationValidatingVisitor] (ServerService Thread Pool -- 56) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated.

                        [Server:server-three] 21:54:32,635 INFO  [org.infinispan.jmx.CacheJmxRegistration] (ServerService Thread Pool -- 56) ISPN000031: MBeans were successfully registered to the platform mbean server.

                        [Server:server-three] 21:54:32,651 INFO  [org.infinispan.jmx.CacheJmxRegistration] (ServerService Thread Pool -- 57) ISPN000031: MBeans were successfully registered to the platform mbean server.

                        [Server:server-three] 21:54:32,792 INFO  [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 57) JBAS010281: Started repl cache from web container

                        [Server:server-three] 21:54:32,805 INFO  [org.jboss.as.clustering] (MSC service thread 1-1) JBAS010238: Number of cluster members: 2

                        [Server:server-three] 21:54:32,815 INFO  [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 56) JBAS010281: Started default-host/cluster-demo cache from web container

                        [Server:server-three] 21:54:32,854 INFO  [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (MSC service thread 1-5) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated.

                        [Server:server-three] 21:54:32,864 INFO  [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (MSC service thread 1-5) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated.

                        [Server:server-three] 21:54:32,865 INFO  [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (MSC service thread 1-5) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated.

                        [Server:server-three] 21:54:32,865 INFO  [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (MSC service thread 1-5) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated.

                        [Server:server-three] 21:54:32,987 INFO  [org.jboss.web] (MSC service thread 1-5) JBAS018210: Registering web context: /cluster-demo

                        [Server:server-four] 21:54:33,152 INFO  [org.jboss.as.server] (host-controller-connection-threads - 2) JBAS018559: Deployed "cluster-demo.war"

                        [Server:server-three] 21:54:33,154 INFO  [org.jboss.as.server] (host-controller-connection-threads - 2) JBAS018559: Deployed "cluster-demo.war"

                         

                         

                        I assume this is the sort of configuration you are trying to set up: two isolated server-groups. Is that correct? I'll attach the domain.xml and host.xml in any case.
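
                         Something along these lines is what I mean by isolating the server groups via the multicast address (a sketch, not the exact attachment - profile name and element order are assumptions):

                         <server-groups>
                             <server-group name="other-server-group" profile="full-ha">
                                 <system-properties>
                                     <property name="jboss.default.multicast.address" value="239.11.12.13"/>
                                 </system-properties>
                                 <socket-binding-group ref="full-ha-sockets"/>
                             </server-group>
                             <server-group name="another-server-group" profile="full-ha">
                                 <system-properties>
                                     <property name="jboss.default.multicast.address" value="239.11.12.14"/>
                                 </system-properties>
                                 <socket-binding-group ref="full-ha-sockets"/>
                             </server-group>
                         </server-groups>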

                         

                         There is a problem with checking the JGroups configuration using the operation Rado mentioned above: it doesn't seem to be available for me in domain mode, and I'm looking into why this is the case. But from what I can tell, JGroups protocol properties are getting processed correctly - certainly in standalone mode, where we can verify them with the export operation.

                         

                        This was done with the community AS version 7.1.3.Final-SNAPSHOT; I haven't yet had a chance to try it with the version of AS7 that you were using.

                         

                        Richard

                        • 9. Re: Cluster-nodes don't find each other
                          apatispelikan

                          Hello Richard,

                           

                           thank you for your response. In the meantime I have found out what the problem is (see my next post) - thank you for testing!

                           

                           There is a problem with checking the JGroups configuration using the operation Rado mentioned above: it doesn't seem to be available for me in domain mode, and I'm looking into why this is the case. But from what I can tell, JGroups protocol properties are getting processed correctly - certainly in standalone mode, where we can verify them with the export operation.

                           

                          This was done with the community AS version 7.1.3.Final-SNAPSHOT; I haven't yet had a chance to try it with the version of AS7 that you were using.

                           

                           In 7.1.2.Final I do get the config in domain mode. But as I mentioned, it does not reflect the properties I set. Is there another way to check whether my properties have been applied?

                           

                          Stephan

                          • 10. Re: Cluster-nodes don't find each other
                            apatispelikan

                            Hello,

                             

                             it seems I have found the reason! MPING is bound as logged:

                             

                            FINE  [org.jgroups.protocols.MPING] (ChannelService lifecycle - 1) bind_addr=/10.9.0.12 mcast_addr=/228.1.2.23, mcast_port=9260

                             

                             It works if I set the bind_addr to the main address of the machine. The 10.9.0.x address is bound to a second network card to separate cluster traffic from business traffic. It seems my colleagues who manage the operating system (I only implement the Java software and manage the JBoss servers) did not configure multicast routing to use this second network card. I will check this...
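
                             If it really is a routing issue, something like the following should show it (a sketch, assuming a Linux box with iproute2 and eth1 standing in for the cluster interface):

                             # which interface carries the multicast route?
                             $ ip route show 224.0.0.0/4
                             # which multicast groups are joined on the cluster interface?
                             $ ip maddr show dev eth1
                             # route multicast traffic via the cluster interface (as root)
                             $ ip route add 224.0.0.0/4 dev eth1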

                             

                            Stephan

                            • 11. Re: Cluster-nodes don't find each other
                              rachmato

                              Hi Stephan

                               

                              Some updates.

                               

                               Firstly, we have found out why the export-native-configuration operation was not working in domain mode in AS 7.1.2, and this will be fixed in AS 7.1.3. The operation can be called as follows:

                              In standalone mode:

                              /subsystem=jgroups/stack=X:export-native-configuration()

                              In domain mode:

                              /host=X/server=Y/subsystem=jgroups/stack=Z:export-native-configuration()

                               

                               Secondly, concerning the application of JGroups subsystem protocol properties to the actual JGroups protocol instances created: the visibility of these properties needs to be improved and will be (https://issues.jboss.org/browse/AS7-4083). However, I did verify with a debugger that any protocol properties specified in the JGroups subsystem do get applied correctly. The exceptions to this are properties which try to override socket binding values; these are ignored - the values specified in the socket binding take precedence. So, for example, trying to override the value of mcast_port via an MPING property will have no effect. We plan to issue a warning message for such cases in AS 7.1.3.
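
                               So if you need a different mcast_port, change it on the socket binding rather than on the protocol, for example (a sketch using the binding names from this thread):

                               /socket-binding-group=full-ha-sockets/socket-binding=jgroups-mping:write-attribute(name=multicast-port,value=9260)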

                               

                               

                              Richard

                              • 12. Re: Cluster-nodes don't find each other
                                akostadinov

                                 Why am I getting this with EAP 6? UPDATE: OK, I see it is fixed in AS 7.1.3, which is not yet in EAP.

                                 

                                [standalone@localhost:9999 /] /subsystem=jgroups/stack=udp:export-native-configuration
                                {
                                    "outcome" => "failed",
                                    "failure-description" => "JBAS014739: No handler for export-native-configuration at address [
                                    (\"subsystem\" => \"jgroups\"),
                                    (\"stack\" => \"udp\")
                                ]",
                                    "rolled-back" => true,
                                    "response-headers" => {"process-state" => "reload-required"}
                                }
                                [standalone@localhost:9999 /] /subsystem=jgroups/stack=tcp:export-native-configuration
                                {
                                    "outcome" => "failed",
                                    "failure-description" => "JBAS014739: No handler for export-native-configuration at address [
                                    (\"subsystem\" => \"jgroups\"),
                                    (\"stack\" => \"tcp\")
                                ]",
                                    "rolled-back" => true,
                                    "response-headers" => {"process-state" => "reload-required"}
                                }
                                

                                 

                                 I am actually trying to set "ip_mcast=false" but don't know where to do it.

                                • 13. Re: Cluster-nodes don't find each other
                                  rachmato

                                  In a native JGroups configuration, ip_mcast is an attribute of the UDP transport protocol.

                                   

                                  In an AS7 JGroups subsystem configuration, this is set as a property of the UDP transport layer as follows:

                                   

                                   

                                   

                                  [standalone@localhost:9999 /] /subsystem=jgroups/stack=udp/transport=TRANSPORT/property=ip_mcast:read-attribute(name=value)
                                  {
                                      "outcome" => "success",
                                      "result" => false,
                                      "response-headers" => {"process-state" => "reload-required"}
                                  }
                                  [standalone@localhost:9999 /] /subsystem=jgroups/stack=udp/transport=TRANSPORT/property=ip_mcast:write-attribute(name=value,value=true)
                                  {
                                      "outcome" => "success",
                                      "response-headers" => {
                                          "operation-requires-reload" => true,
                                          "process-state" => "reload-required"
                                      }
                                  }
                                  [standalone@localhost:9999 /] /subsystem=jgroups/stack=udp/transport=TRANSPORT/property=ip_mcast:read-attribute(name=value)            
                                  {
                                      "outcome" => "success",
                                      "result" => true,
                                      "response-headers" => {"process-state" => "reload-required"}
                                  }
                                  

                                   

                                   In other words, a property of a protocol layer is an addressable resource whose last address component is property=<property_name>, and it first needs to be added to the management API. This resource has an attribute named "value" which can be read and written as shown above. It can also be added and removed.
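
                                   For example (a sketch - the exact add parameters may differ slightly between versions):

                                   [standalone@localhost:9999 /] /subsystem=jgroups/stack=udp/transport=TRANSPORT/property=ip_mcast:add(value=false)
                                   [standalone@localhost:9999 /] /subsystem=jgroups/stack=udp/transport=TRANSPORT/property=ip_mcast:remove()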

                                  • 14. Re: Cluster-nodes don't find each other
                                    akostadinov

                                    Thanks Richard, that's certainly useful but I was looking for a static configuration (which Rado provided me with):

                                                    <transport type="UDP" socket-binding="jgroups-udp" diagnostics-socket-binding="jgroups-diagnostics">
                                                        <property name="ip_mcast">
                                                            false
                                                        </property>
                                                    </transport>
                                    