
    multi-host jms cluster on wildfly10.0.0?

    emitani

      Hello,

       

I'm currently trying to set up a JMS cluster with the full-ha profile.

I got to the point where the JMS cluster works fine if I have two nodes on the same machine. When I send a topic message, both instances receive it, which is what I want.

      But when I try to set up a cluster across two machines, it seems to me like each machine has its own cluster, and they don't know about each other.

Currently I use the default settings for the messaging-activemq subsystem in domain.xml, other than changing the cluster user and password. Both hosts have the exact same copy of domain.xml.
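
      For reference, the cluster-related part of my configuration is essentially the full-ha default, roughly like the snippet below (paraphrased from memory, so element and attribute names may not match the real domain.xml exactly; only the cluster credentials are changed on my side):

      <server name="default">
          <cluster user="my-cluster-user" password="my-cluster-password"/>
          ...
          <http-connector name="http-connector" socket-binding="http" endpoint="http-acceptor"/>
          ...
          <broadcast-group name="bg-group1" connectors="http-connector" jgroups-channel="activemq-cluster"/>
          <discovery-group name="dg-group1" jgroups-channel="activemq-cluster"/>
          <cluster-connection name="my-cluster" address="jms" connector-name="http-connector" discovery-group="dg-group1"/>
          ...
      </server>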

      How can I configure a cluster across multiple hosts? Am I missing some additional configuration?

       

Below is the output of a jboss-cli command. Each machine runs two active server instances. The master's IP is 10.10.35.59 and the slave's IP is 10.10.35.38:

       

      [domain@w7-0476:9990 cluster-connection=my-cluster] /host=master/server=server-one/subsystem=messaging-activemq/server=default/cluster-connection=my-cluster:read-attribute(name=topology)

      {

          "outcome" => "success",

          "result" => "topology on Topology@6d5d458e[owner=ClusterConnectionImpl@1727559813[nodeUUID=1b568e94-051c-11e7-9631-57d6194264c7, connector=TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEnabled=true&httpPpgradeEndpoint=http-acceptor&port=8080&host=10-10-35-59, address=jms, server=ActiveMQServerImpl::serverUUID=1b568e94-051c-11e7-9631-57d6194264c7]]:

              1d6b651a-051c-11e7-a85c-217e75985f77 => TopologyMember[id = 1d6b651a-051c-11e7-a85c-217e75985f77, connector=Pair[a=TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEnabled=true&httpPpgradeEndpoint=http-acceptor&port=8230&host=10-10-35-59, b=null], backupGroupName=null, scaleDownGroupName=null]

              1b568e94-051c-11e7-9631-57d6194264c7 => TopologyMember[id = 1b568e94-051c-11e7-9631-57d6194264c7, connector=Pair[a=TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEnabled=true&httpPpgradeEndpoint=http-acceptor&port=8080&host=10-10-35-59, b=null], backupGroupName=null, scaleDownGroupName=null]

              nodes=2 members=2"

      }

       

      [domain@w7-0476:9990 cluster-connection=my-cluster] /host=slave/server=server-one/subsystem=messaging-activemq/server=default/cluster-connection=my-cluster:read-attribute(name=topology)

      {

          "outcome" => "success",

          "result" => "topology on Topology@49c21d[owner=ClusterConnectionImpl@17675183[nodeUUID=292110a7-051e-11e7-b52d-a7a7ab3e2455, connector=TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEnabled=true&httpPpgradeEndpoint=http-acceptor&port=8080&host=10-10-35-85, address=jms, server=ActiveMQServerImpl::serverUUID=292110a7-051e-11e7-b52d-a7a7ab3e2455]]:

              292110a7-051e-11e7-b52d-a7a7ab3e2455 => TopologyMember[id = 292110a7-051e-11e7-b52d-a7a7ab3e2455, connector=Pair[a=TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEnabled=true&httpPpgradeEndpoint=http-acceptor&port=8080&host=10-10-35-85, b=null], backupGroupName=null, scaleDownGroupName=null]

              2a2c669c-051e-11e7-afb9-7fdbf16f1ab2 => TopologyMember[id = 2a2c669c-051e-11e7-afb9-7fdbf16f1ab2, connector=Pair[a=TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEnabled=true&httpPpgradeEndpoint=http-acceptor&port=8230&host=10-10-35-85, b=null], backupGroupName=null, scaleDownGroupName=null]

              nodes=2 members=2"

      }


      Thank you very much for your help.

        • 1. Re: multi-host jms cluster on wildfly10.0.0?
          mnovak

          I think the problem could be in socket-bindings for your profile.

           

By default, Artemis uses the "udp" JGroups stack to discover nodes in the cluster. It uses the "jgroups-udp" socket binding:

          <socket-binding name="jgroups-udp" interface="private" port="55200" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45688"/>
          

           

which is by default bound to the "private" interface:

          <interface name="private">
              <inet-address value="${jboss.bind.address.private:127.0.0.1}"/>
          </interface>
          

           

As you can see, it binds to 127.0.0.1, so UDP multicast cannot reach the other machine.
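
          An alternative, just as a sketch: since the inet-address above is the expression ${jboss.bind.address.private:127.0.0.1}, you could also leave the socket binding on "private" and point that property at each machine's real address when you start the host controller, e.g. on the master (and the slave's own address on the slave):

          ./bin/domain.sh -Djboss.bind.address.private=10.10.35.59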

           

Could you modify the configuration of the "jgroups-udp" socket binding to use the "public" interface:

          <socket-binding name="jgroups-udp" interface="public" port="55200" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45688"/>
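
          If you'd rather change it through the CLI than edit domain.xml by hand, something like this should work (a sketch, assuming the default "full-ha-sockets" socket binding group; the affected servers will need a reload afterwards):

          /socket-binding-group=full-ha-sockets/socket-binding=jgroups-udp:write-attribute(name=interface,value=public)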
          

           

This is a wild shot :-) I'll need to see your configuration to know better.

           

          Thanks,

          Mirek

          • 2. Re: multi-host jms cluster on wildfly10.0.0?
            emitani

            Hi Mirek, that resolved the problem! My jms cluster is now working as expected. Thank you very much!

            • 3. Re: multi-host jms cluster on wildfly10.0.0?
              mnovak

              You're welcome! I'm glad it helped.