4 Replies Latest reply on Mar 28, 2018 9:48 AM by pablorocha

    Load Balancing via Client Side doesn’t work

    pablorocha

      Good morning, I'm trying to set up client-side load balancing with HornetQ.

       

      The client is ActiveMQ. Below are the settings for the Connection Factory configured in ActiveMQ.

       

       <!-- HornetQ -->
         <bean name="hornetq" class="org.apache.camel.component.jms.JmsComponent">
            <property name="connectionFactory" ref="connectionFactory" />
         </bean>
         <!-- ConnectionFactory Definition -->
         <bean id="connectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
            <constructor-arg ref="hornetConnectionFactory" />
         </bean>
      
       <bean id="hornetConnectionFactory" class="org.hornetq.jms.client.HornetQJMSConnectionFactory">
            <property name="connectionLoadBalancingPolicyClassName" value="org.hornetq.api.core.client.loadbalance.RoundRobinConnectionLoadBalancingPolicy"/>
            <property name="failoverOnInitialConnection" value="true"/>
            <property name="clientFailureCheckPeriod" value="10000"/>
            <constructor-arg name="ha" value="true" />
            <constructor-arg>
              <list>
                   <bean class="org.hornetq.api.core.TransportConfiguration">
                      <constructor-arg value="org.hornetq.core.remoting.impl.netty.NettyConnectorFactory" />
                      <constructor-arg>
                         <map key-type="java.lang.String" value-type="java.lang.Object">
                            <!-- HornetQ standalone instance details -->
                            <entry key="host" value="xyx.xy.xy.xy" />
                            <entry key="port" value="5445" />
                         </map>
                      </constructor-arg>
                   </bean>
                    <bean class="org.hornetq.api.core.TransportConfiguration">
                      <constructor-arg value="org.hornetq.core.remoting.impl.netty.NettyConnectorFactory" />
                      <constructor-arg>
                         <map key-type="java.lang.String" value-type="java.lang.Object">
                            <!-- HornetQ standalone instance details -->
                            <entry key="host" value="localhost" />
                            <entry key="port" value="5445" />
                         </map>
                      </constructor-arg>
                   </bean>
               </list>
            </constructor-arg>
         </bean>
      

       

      I have an instance of HornetQ and ActiveMQ running on my local machine, and another HornetQ instance running on another machine on the network, which is the IP xyx.xy.xy.xy shown in the TransportConfiguration above.

       

      I can communicate with the HornetQ on the other machine, and failover also works.

       

      According to the HornetQ documentation, configuring client-side load balancing should only require the property below, which is already the default:

       

      <property name="connectionLoadBalancingPolicyClassName" value="org.hornetq.api.core.client.loadbalance.RoundRobinConnectionLoadBalancingPolicy"/>
      

       

      However, client-side load balancing doesn't work. When sending messages through ActiveMQ, the messages are forwarded only to the HornetQ instance on the local machine or only to the one on the other machine on the network; the balancing never occurs.

       

      The HornetQ servers I'm using are non-clustered. Below is the configuration from hornetq-configuration.xml:

       

      <configuration xmlns="urn:hornetq"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xsi:schemaLocation="urn:hornetq /schema/hornetq-configuration.xsd">
      
         <paging-directory>${data.dir:../data}/paging</paging-directory>
         
         <bindings-directory>${data.dir:../data}/bindings</bindings-directory>
         
         <journal-directory>${data.dir:../data}/journal</journal-directory>
         
         <journal-min-files>10</journal-min-files>
         
         <large-messages-directory>${data.dir:../data}/large-messages</large-messages-directory>
         
         <connectors>
            <connector name="netty">
               <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
               <param key="host"  value="${hornetq.remoting.netty.host:localhost}"/>
               <param key="port"  value="${hornetq.remoting.netty.port:5445}"/>
            </connector>
            
            <connector name="netty-throughput">
               <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
               <param key="host"  value="${hornetq.remoting.netty.host:localhost}"/>
               <param key="port"  value="${hornetq.remoting.netty.batch.port:5455}"/>
               <param key="batch-delay" value="50"/>
            </connector>
         </connectors>
      
         <acceptors>
            <acceptor name="netty">
               <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
               <param key="host"  value="${hornetq.remoting.netty.host:0.0.0.0}"/>
               <param key="port"  value="${hornetq.remoting.netty.port:5445}"/>
            </acceptor>
            
            <acceptor name="netty-throughput">
               <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
               <param key="host"  value="${hornetq.remoting.netty.host:0.0.0.0}"/>
               <param key="port"  value="${hornetq.remoting.netty.batch.port:5455}"/>
               <param key="batch-delay" value="50"/>
               <param key="direct-deliver" value="false"/>
            </acceptor>
         </acceptors>
      
         <security-settings>
            <security-setting match="#">
               <permission type="createNonDurableQueue" roles="guest"/>
               <permission type="deleteNonDurableQueue" roles="guest"/>
               <permission type="consume" roles="guest"/>
               <permission type="send" roles="guest"/>
            </security-setting>
         </security-settings>
      
         <address-settings>
            <!--default for catch all-->
            <address-setting match="#">
               <dead-letter-address>jms.queue.DLQ</dead-letter-address>
               <expiry-address>jms.queue.ExpiryQueue</expiry-address>
               <redelivery-delay>0</redelivery-delay>
               <max-size-bytes>10485760</max-size-bytes>       
               <message-counter-history-day-limit>10</message-counter-history-day-limit>
               <address-full-policy>BLOCK</address-full-policy>
            </address-setting>
         </address-settings>
      
      </configuration>
      
      

       

      Can client-side load balancing be configured with non-clustered servers, with one instance on each machine?

       

      Would it be necessary to make any changes to the connectors and acceptors in hornetq-configuration.xml?

       

      What I want is to configure load balancing and failover only on the client side, which in this case is ActiveMQ.

       

      Could someone help me?

       

      Thank you very much in advance.

        • 1. Re: Load Balancing via Client Side doesn’t work
          jbertram

          The first thing to note here is that the HornetQ code-base was donated to the Apache ActiveMQ community several years ago and has continued life as the ActiveMQ Artemis broker. When the donation was made, development on the HornetQ code-base essentially stopped. Therefore, I would encourage you to migrate to ActiveMQ Artemis as soon as possible, as it has several years' worth of bug fixes and improvements which you do not have.

           

          To answer your specific question... the connection load-balancing feature is meant to be used with a cluster of servers. However, I don't see any cluster-related configuration in the XML you pasted. How can you expect connections and/or messages to be load-balanced between two servers if they are not in a cluster?
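
          For reference, here is a minimal sketch of the kind of cluster-related configuration the stock clustered HornetQ profile adds to hornetq-configuration.xml. It is not tailored to your environment: it uses UDP multicast discovery, the group address, port, and names are illustrative, and the exact element syntax varies slightly across HornetQ versions. Note also that for clustering the netty connector's host would need to be an address reachable from the other machine, not localhost.

          <!-- Broadcast this server's connector so other nodes can find it -->
          <broadcast-groups>
             <broadcast-group name="bg-group1">
                <group-address>231.7.7.7</group-address>
                <group-port>9876</group-port>
                <broadcast-period>5000</broadcast-period>
                <connector-ref>netty</connector-ref>
             </broadcast-group>
          </broadcast-groups>

          <!-- Listen for other nodes' broadcasts -->
          <discovery-groups>
             <discovery-group name="dg-group1">
                <group-address>231.7.7.7</group-address>
                <group-port>9876</group-port>
                <refresh-timeout>10000</refresh-timeout>
             </discovery-group>
          </discovery-groups>

          <!-- Form the cluster over the discovered nodes -->
          <cluster-connections>
             <cluster-connection name="my-cluster">
                <address>jms</address>
                <connector-ref>netty</connector-ref>
                <discovery-group-ref discovery-group-name="dg-group1"/>
             </cluster-connection>
          </cluster-connections>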

          • 2. Re: Load Balancing via Client Side doesn’t work
            pablorocha

            Thanks for the answer.

             

            Since I have two HornetQ instances on different machines, I thought that by putting the host and port of both machines in the connection factory, I could have two active HornetQ instances managed by the connection factory below.

             

            <bean id="hornetConnectionFactory" class="org.hornetq.jms.client.HornetQJMSConnectionFactory">
                  <property name="connectionLoadBalancingPolicyClassName" value="org.hornetq.api.core.client.loadbalance.RoundRobinConnectionLoadBalancingPolicy"/>
                  <property name="failoverOnInitialConnection" value="true"/>
                  <property name="clientFailureCheckPeriod" value="10000"/>
                  <constructor-arg name="ha" value="true" />
                  <constructor-arg>
                    <list>
                         <bean class="org.hornetq.api.core.TransportConfiguration">
                            <constructor-arg value="org.hornetq.core.remoting.impl.netty.NettyConnectorFactory" />
                            <constructor-arg>
                               <map key-type="java.lang.String" value-type="java.lang.Object">
                                  <!-- HornetQ standalone instance details -->
                                  <entry key="host" value="xyx.xy.xy.xy" />
                                  <entry key="port" value="5445" />
                               </map>
                            </constructor-arg>
                         </bean>
                          <bean class="org.hornetq.api.core.TransportConfiguration">
                            <constructor-arg value="org.hornetq.core.remoting.impl.netty.NettyConnectorFactory" />
                            <constructor-arg>
                               <map key-type="java.lang.String" value-type="java.lang.Object">
                                  <!-- HornetQ standalone instance details -->
                                  <entry key="host" value="localhost" />
                                  <entry key="port" value="5445" />
                               </map>
                            </constructor-arg>
                         </bean>
                     </list>
                  </constructor-arg>
               </bean>
            
            • 3. Re: Load Balancing via Client Side doesn’t work
              jbertram

              I believe your understanding of this functionality is incorrect.

               

              As I indicated previously, the connection load-balancing functionality is meant to be used with a cluster. The transport configuration data supplied to the HornetQJMSConnectionFactory instance is used only for the initial connection to the cluster. On the initial connection the RoundRobinConnectionLoadBalancingPolicy will select one of the transport configurations at random. Once it has connected to a server, it will download the cluster topology information from that server and use that information for additional connections from that connection factory. Since your servers are not part of a cluster, there will be no connection load-balancing for subsequent connections from that connection factory instance.
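
              To make that behaviour concrete, here is a small stand-alone Java sketch (not from the thread; the class name, loop, and values are illustrative) that builds the same factory programmatically, using the two connectors from your Spring bean, and opens several connections:

              import java.util.HashMap;
              import java.util.Map;

              import javax.jms.Connection;

              import org.hornetq.api.core.TransportConfiguration;
              import org.hornetq.api.core.client.loadbalance.RoundRobinConnectionLoadBalancingPolicy;
              import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;
              import org.hornetq.jms.client.HornetQJMSConnectionFactory;

              public class ConnectionTopologyDemo {
                 public static void main(String[] args) throws Exception {
                    Map<String, Object> remote = new HashMap<String, Object>();
                    remote.put("host", "xyx.xy.xy.xy");   // remote broker from the question
                    remote.put("port", 5445);

                    Map<String, Object> local = new HashMap<String, Object>();
                    local.put("host", "localhost");        // local broker from the question
                    local.put("port", 5445);

                    // Same arguments as the Spring bean above: ha=true plus the static
                    // list of initial connectors and the round-robin policy.
                    HornetQJMSConnectionFactory cf = new HornetQJMSConnectionFactory(
                          true,
                          new TransportConfiguration(NettyConnectorFactory.class.getName(), remote),
                          new TransportConfiguration(NettyConnectorFactory.class.getName(), local));
                    cf.setConnectionLoadBalancingPolicyClassName(
                          RoundRobinConnectionLoadBalancingPolicy.class.getName());

                    // As described above, the static connector list is only used to
                    // bootstrap the first connection; after that the factory works from
                    // the topology reported by the broker it reached. With non-clustered
                    // brokers that topology contains a single server, so these
                    // connections are not balanced across the two machines.
                    for (int i = 0; i < 5; i++) {
                       Connection connection = cf.createConnection();
                       connection.close();
                    }

                    cf.close();
                 }
              }

              Against a cluster, the same loop would spread the connections across the topology the first broker reports.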

               

              Even if it did work as you expect, you would be connecting to independent, non-clustered servers, which means messages sent to one server would not be available to consumers on the other server. This is typically not the kind of functionality that users want.

               

              I may be able to help more if you actually explain your use-case and your goals.

              • 4. Re: Load Balancing via Client Side doesn’t work
                pablorocha

                I understand now. You're right.

                I will configure HornetQ as a cluster.

                 

                Thank you Justin