6 Replies Latest reply on Mar 14, 2016 5:42 AM by Miroslav Novak

    A problem with Artemis (ActiveMQ) messaging in Wildfly 10 cluster (domain)

    Tomas S. Newbie

Hello, could someone provide an example of a messaging application working under a WildFly 10 cluster (domain)? We are struggling with it, and since this is a new technology, there is a terrible lack of resources.

       

      Currently, we have the following:

       

      A domain consisting of two hosts (nodes) and three groups on each, i.e. six separate servers in the domain.

       

      A relevant part of server configuration (in domain.xml):

       

            <subsystem xmlns="urn:jboss:domain:messaging-activemq:1.0">
                <server name="default">
                    <security enabled="false"/>
                    <cluster password="${jboss.messaging.cluster.password}"/>
                    <security-setting name="#">
                        <role name="guest" delete-non-durable-queue="true" create-non-durable-queue="true" consume="true" send="true"/>
                    </security-setting>
                    <address-setting name="#" redistribution-delay="1000" message-counter-history-day-limit="10" page-size-bytes="2097152" max-size-bytes="10485760"/> <!-- truncated in the original post; max-size-bytes completed with the WildFly default -->
                    <http-connector name="http-connector" endpoint="http-acceptor" socket-binding="http"/>
                    <http-connector name="http-connector-throughput" endpoint="http-acceptor-throughput" socket-binding="http">
                        <param name="batch-delay" value="50"/>
                    </http-connector>
                    <in-vm-connector name="in-vm" server-id="0"/>
                    <http-acceptor name="http-acceptor" http-listener="default"/>
                    <http-acceptor name="http-acceptor-throughput" http-listener="default">
                        <param name="batch-delay" value="50"/>
                        <param name="direct-deliver" value="false"/>
                    </http-acceptor>
                    <in-vm-acceptor name="in-vm" server-id="0"/>
                    <broadcast-group name="bg-group1" connectors="http-connector" jgroups-channel="activemq-cluster" jgroups-stack="tcphq"/>
                    <discovery-group name="dg-group1" jgroups-channel="activemq-cluster" jgroups-stack="tcphq"/>
                    <cluster-connection name="my-cluster" discovery-group="dg-group1" connector-name="http-connector" address="jms"/>
                    <jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>
                    <jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>
                    <jms-queue name="TestQ" entries="java:jboss/exported/jms/queue/testq"/>
                    <connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/>
                    <connection-factory name="RemoteConnectionFactory" reconnect-attempts="-1" block-on-acknowledge="true" ha="true" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector"/> <!-- truncated in the original post; completed with the WildFly defaults -->
                    <pooled-connection-factory name="activemq-ra" transaction="xa" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="in-vm"/> <!-- truncated in the original post; completed with the WildFly defaults -->
                </server>
            </subsystem>

       

The configuration is more or less the default, except for the added *TestQ* queue.

       

The *tcphq* stack is defined in the JGroups configuration as follows:

       

                <stack name="tcphq">
                    <transport type="TCP" socket-binding="jgroups-tcp-hq"/>
                    <protocol type="TCPPING">
                        <property name="initial_hosts">
                            dev1[7660],dev1[7810],dev1[7960],dev2[7660],dev2[7810],dev2[7960]
                        </property>
                        <property name="port_range">
                            0
                        </property>
                    </protocol>
                    <protocol type="MERGE3"/>
                    <protocol type="FD_SOCK" socket-binding="jgroups-tcp-hq-fd"/>
                    <protocol type="FD"/>
                    <protocol type="VERIFY_SUSPECT"/>
                    <protocol type="pbcast.NAKACK2"/>
                    <protocol type="UNICAST3"/>
                    <protocol type="pbcast.STABLE"/>
                    <protocol type="pbcast.GMS"/>
                    <protocol type="MFC"/>
                    <protocol type="FRAG2"/>
                </stack>
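For reference, the *tcphq* stack above refers to two extra socket bindings (`jgroups-tcp-hq` and `jgroups-tcp-hq-fd`) that must exist in the server's socket-binding group. A minimal sketch of what those definitions could look like (the port numbers here are illustrative assumptions, not taken from the original post):

```xml
<!-- Hypothetical socket bindings for the tcphq stack; ports are example values.
     The per-server port-offset is added to these base ports at startup. -->
<socket-binding name="jgroups-tcp-hq" port="7600"/>
<socket-binding name="jgroups-tcp-hq-fd" port="7601"/>
```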

       

I have written a test application consisting of a simple "server" (an MDB) and a client, as follows:

       

      Server (MDB):

       

    import java.util.logging.Level;
    import java.util.logging.Logger;
    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.ObjectMessage;

    @MessageDriven(mappedName = "test", activationConfig = {
        @ActivationConfigProperty(propertyName = "subscriptionDurability", propertyValue = "Durable"),
        @ActivationConfigProperty(propertyName = "destination", propertyValue = "java:jboss/exported/jms/queue/testq"),
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")
    })
    public class MessageServer implements MessageListener {

        @Override
        public void onMessage(Message message) {
            try {
                // Only handle ObjectMessage deliveries; this also avoids a
                // NullPointerException when another message type arrives.
                if (message instanceof ObjectMessage) {
                    ObjectMessage msg = (ObjectMessage) message;
                    System.out.println("The number in the message: " + msg.getIntProperty("count"));
                }
            } catch (JMSException ex) {
                Logger.getLogger(MessageServer.class.getName()).log(Level.SEVERE, null, ex);
            }
        }
    }

       

      Client:

       

    import java.util.logging.Level;
    import java.util.logging.Logger;
    import javax.annotation.PostConstruct;
    import javax.annotation.Resource;
    import javax.ejb.Singleton;
    import javax.ejb.Startup;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;

    @Singleton
    @Startup
    public class ClientBean implements ClientBeanLocal {

        @Resource(mappedName = "java:jboss/exported/jms/RemoteConnectionFactory")
        private ConnectionFactory factory;

        @Resource(mappedName = "java:jboss/exported/jms/queue/testq")
        private Queue queue;

        @PostConstruct
        public void sendMessage() {
            Connection connection = null;
            try {
                connection = factory.createConnection();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);

                Message message = session.createObjectMessage();
                message.setIntProperty("count", 1);

                producer.send(message);
                System.out.println("Message sent.");
            } catch (JMSException ex) {
                Logger.getLogger(ClientBean.class.getName()).log(Level.SEVERE, null, ex);
            } finally {
                try {
                    if (connection != null) connection.close();
                } catch (JMSException ex) {
                    Logger.getLogger(ClientBean.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        }
    }

       

It actually works well if both the client and the server reside in the same group; in that case it even seems to communicate between hosts (nodes). However, if the server and the client are in different groups, the MDB is not invoked. Moreover, the MDB seems to be invoked only if it resides in the group with a zero port offset: when I moved the server MDB into a different group, it stopped responding even when the client was in the same group.

       

I am a bit confused about JMS in WildFly 10. There are a lot of examples and materials for older versions with HornetQ, but very few for Artemis. Could someone help? Many thanks.

        • 1. Re: A problem with Artemis (ActiveMQ) messaging in Wildfly 10 cluster (domain)
          Justin Bertram Master

          The bottom line here is that if you have a producer on one machine and a consumer (e.g. an MDB) on another machine, and they are both configured to use their respective local broker, then those brokers have to be clustered if you want the consumer to be able to get the messages that the producer sends.  I don't know the ins and outs of WildFly domain configuration, but my guess is that the configuration that works for you ultimately results in an Artemis cluster, and the configurations that don't work for you don't.

           

          One other note: your client is using an anti-pattern, since it creates a JMS connection every time it sends a message and it is not using a pooled-connection-factory.  You can fix this by using the "JmsXA" connection factory instead of "RemoteConnectionFactory".  The "RemoteConnectionFactory" is really only for remote clients.  Local clients should use either the non-pooled "ConnectionFactory" or the pooled "JmsXA".
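A minimal sketch of what the suggested change could look like for the client bean from the question (the bean and queue names are taken from the original post; this is an illustration under those assumptions, not a tested drop-in):

```java
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.annotation.PostConstruct;
import javax.annotation.Resource;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.Session;

@Singleton
@Startup
public class ClientBean implements ClientBeanLocal {

    // The pooled-connection-factory from the subsystem config above; the
    // container pools the underlying connections, so creating one per send
    // is cheap, unlike with RemoteConnectionFactory.
    @Resource(mappedName = "java:/JmsXA")
    private ConnectionFactory factory;

    @Resource(mappedName = "java:jboss/exported/jms/queue/testq")
    private Queue queue;

    @PostConstruct
    public void sendMessage() {
        // JMS 2.0 (WildFly 10): Connection and Session are AutoCloseable
        try (Connection connection = factory.createConnection();
             Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE)) {
            Message message = session.createObjectMessage();
            message.setIntProperty("count", 1);
            session.createProducer(queue).send(message);
            System.out.println("Message sent.");
        } catch (JMSException ex) {
            Logger.getLogger(ClientBean.class.getName()).log(Level.SEVERE, null, ex);
        }
    }
}
```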

          • 2. Re: A problem with Artemis (ActiveMQ) messaging in Wildfly 10 cluster (domain)
            Tomas S. Newbie

            Hi Justin. Regarding the pooled connection factory, that's a good point, thank you.

             

            Regarding the cluster, I am not sure if I understand. Our cluster is working normally. The nodes are connected, bean state transfer works, Infinispan caches are shared correctly, etc. Just JMS works only within a single server. Unfortunately, I haven't enough experience with JMS and that's why I would appreciate any help. Here is our domain.xml http://pastebin.com/muGbj3wP and host.xml http://pastebin.com/16itGKgp

             

            If you see an error or if you could provide an example of configuration, it would really help.

             

            Many thanks.

            Tomas

            • 3. Re: A problem with Artemis (ActiveMQ) messaging in Wildfly 10 cluster (domain)
              Justin Bertram Master

              Regarding the cluster, I am not sure I understand. Our cluster is working normally: the nodes are connected, bean state transfer works, Infinispan caches are shared correctly, etc. It is just JMS that works only within a single server.

              Each component within the application server (e.g. Artemis, Infinispan, etc.) really does its own clustering, although there may be some configuration overlap in the JGroups stack.  Just because one component is clustering properly doesn't mean the others are.  If you aren't seeing messages in your log about Artemis cluster bridges being connected, then the Artemis cluster isn't being formed as expected, which I think would lead to the behavior you've described.

               

              In general, I would encourage you to get the server/broker configuration you want working in standalone mode and then transition it to a domain configuration.  That way you can be sure the server/broker configuration works as expected in your environment before you have to deal with the complexities of domain configuration.

              • 4. Re: A problem with Artemis (ActiveMQ) messaging in Wildfly 10 cluster (domain)
                Tomas S. Newbie

                Ok, is there any example of a working standalone configuration?

                • 5. Re: A problem with Artemis (ActiveMQ) messaging in Wildfly 10 cluster (domain)
                  Miroslav Novak Master

                  Hi Tomas,

                   

                  this is working for me in standalone mode. The config is the same on both of the servers.

                   

                  The TCP JGroups stack looks like this:

                  <stack name="tcp">
                      <transport type="TCP" socket-binding="jgroups-tcp"/>
                      <protocol type="TCPPING">
                          <property name="initial_hosts">
                              127.0.0.1[7600],127.0.0.1[9600]
                          </property>
                          <property name="port_range">
                              0
                          </property>
                      </protocol>
                      <protocol type="MERGE3"/>
                      <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
                      <protocol type="FD"/>
                      <protocol type="VERIFY_SUSPECT"/>
                      <protocol type="pbcast.NAKACK2"/>
                      <protocol type="UNICAST3"/>
                      <protocol type="pbcast.STABLE"/>
                      <protocol type="pbcast.GMS"/>
                      <protocol type="MFC"/>
                      <protocol type="FRAG2"/>
                  </stack>

                   

                  and messaging subsystem:

                   

                  <subsystem xmlns="urn:jboss:domain:messaging-activemq:1.0">
                      <server name="default">
                          <security enabled="false"/>
                          <cluster password="${jboss.messaging.cluster.password:CHANGE ME!!}"/>
                          <journal compact-min-files="0" min-files="10"/>
                          <security-setting name="#">
                              <role name="guest" delete-non-durable-queue="true" create-non-durable-queue="true" consume="true" send="true"/>
                          </security-setting>
                          <address-setting name="#" redistribution-delay="0" page-size-bytes="1048576" max-size-bytes="52428800" expiry-address="jms.queue.ExpiryQueue" dead-letter-address="jms.queue.DLQ"/>
                          <http-connector name="http-connector" endpoint="http-acceptor" socket-binding="http"/>
                          <http-connector name="http-connector-throughput" endpoint="http-acceptor-throughput" socket-binding="http">
                              <param name="batch-delay" value="50"/>
                          </http-connector>
                          <in-vm-connector name="in-vm" server-id="0"/>
                          <http-acceptor name="http-acceptor" http-listener="default"/>
                          <http-acceptor name="http-acceptor-throughput" http-listener="default">
                              <param name="batch-delay" value="50"/>
                              <param name="direct-deliver" value="false"/>
                          </http-acceptor>
                          <in-vm-acceptor name="in-vm" server-id="0"/>
                          <broadcast-group name="bg-group1" connectors="http-connector" jgroups-channel="tcp" jgroups-stack="tcp"/>
                          <discovery-group name="dg-group1" refresh-timeout="10000" jgroups-channel="tcp" jgroups-stack="tcp"/>
                          <cluster-connection name="my-cluster" discovery-group="dg-group1" retry-interval="1000" connector-name="http-connector" address="jms"/>
                          <jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>
                          <jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>
                          <jms-queue name="testQueue0" entries="jms/queue/testQueue0 java:jboss/exported/jms/queue/testQueue0"/>
                          <jms-queue name="testQueue1" entries="jms/queue/testQueue1 java:jboss/exported/jms/queue/testQueue1"/>
                          <jms-queue name="InQueue" entries="jms/queue/InQueue java:jboss/exported/jms/queue/InQueue"/>
                          <jms-queue name="OutQueue" entries="jms/queue/OutQueue java:jboss/exported/jms/queue/OutQueue"/>
                          <jms-topic name="InTopic" entries="jms/topic/InTopic java:jboss/exported/jms/topic/InTopic"/>
                          <connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/>
                          <connection-factory name="RemoteConnectionFactory" reconnect-attempts="-1" block-on-acknowledge="true" ha="true" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector"/>
                          <pooled-connection-factory name="activemq-ra" transaction="xa" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="in-vm"/>
                      </server>
                  </subsystem>

                   

                   

                  The same config is used on both servers. The second server must be started with a port offset of 2000 (the first is started without an offset), like this:

                  sh standalone.sh -c standalone-full-ha.xml -Djboss.socket.binding.port-offset=2000

                   

                  You should see a log message like the following on both servers once the cluster is connected:

                  09:08:05,145 INFO  [org.apache.activemq.artemis.core.server] (Thread-12 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$2@5eaf372c-573693146)) AMQ221027: Bridge ClusterConnectionBridge@4e949450 [name=sf.my-cluster.ab723330-e692-11e5-994e-d5e6561c8017, queue=QueueImpl[name=sf.my-cluster.ab723330-e692-11e5-994e-d5e6561c8017, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=bbb14a65-e692-11e5-8d06-e31ae15b1485]]@7049296d targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@4e949450 [name=sf.my-cluster.ab723330-e692-11e5-994e-d5e6561c8017, queue=QueueImpl[name=sf.my-cluster.ab723330-e692-11e5-994e-d5e6561c8017, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=bbb14a65-e692-11e5-8d06-e31ae15b1485]]@7049296d targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEnabled=true&httpUpgradeEndpoint=http-acceptor&port=8080&host=localhost], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@2029966400[nodeUUID=bbb14a65-e692-11e5-8d06-e31ae15b1485, connector=TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEnabled=true&httpUpgradeEndpoint=http-acceptor&port=10080&host=localhost, address=jms, server=ActiveMQServerImpl::serverUUID=bbb14a65-e692-11e5-8d06-e31ae15b1485])) [initialConnectors=[TransportConfiguration(name=http-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?httpUpgradeEnabled=true&httpUpgradeEndpoint=http-acceptor&port=8080&host=localhost], discoveryGroupConfiguration=null]] is connected

                  • 6. Re: A problem with Artemis (ActiveMQ) messaging in Wildfly 10 cluster (domain)
                    Miroslav Novak Master

                    Could you double-check that the port offsets for the servers in your server groups match what is described in your *tcphq* JGroups stack:

                    dev1[7660],dev1[7810],dev1[7960],dev2[7660],dev2[7810],dev2[7960]


                    Each server in a domain has its own port offset so that there are no port collisions. It is set in the configuration for the given server in host.xml. The "jgroups-tcp-hq" socket binding will then have this offset added when the server is started.


                    For example, the server "dev1[7660]" should have the "jgroups-tcp-hq" socket binding on base port 7600 and a port offset of 60 (7600 + 60 = 7660). The server "dev1[7810]" should have a port offset of 210.
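The arithmetic above can be sketched as a tiny check. This only illustrates how the effective JGroups port is derived from the socket binding's base port plus the per-server offset; the base port 7600 is the assumed value of the "jgroups-tcp-hq" binding:

```java
// Effective port of a socket binding = base port + the server's port-offset.
public class PortOffsetCheck {

    static int effectivePort(int basePort, int portOffset) {
        return basePort + portOffset;
    }

    public static void main(String[] args) {
        int base = 7600; // assumed base port of the jgroups-tcp-hq socket binding
        // Offsets 60, 210 and 360 yield the ports listed in initial_hosts:
        System.out.println(effectivePort(base, 60));  // 7660 -> dev1[7660]
        System.out.println(effectivePort(base, 210)); // 7810 -> dev1[7810]
        System.out.println(effectivePort(base, 360)); // 7960 -> dev1[7960]
    }
}
```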