19 Replies · Latest reply on Sep 30, 2012 7:29 PM by harikrishnan_pillai

    Round Robin Client Load Balancing not working?

    m.miklas

      Hi,

       

      Default configuration should use Round Robin on the client for load balancing.

       

      I have two instances (A and B) in one cluster. The JNDI configuration contains only server A. After starting the client I can disable server A and the client will send messages to server B - this means that the cluster configuration is correct.

       

      Now my problem: the client sends messages only to node A or only to node B, and the load balancing takes place on the server. I would like to have it on the client in the first place as well.

       

      I've implemented my own ConnectionLoadBalancingPolicy for testing - the select method is being called only once or twice. It should be called before each message is sent, so the policy can provide the right node for load balancing.
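
      A minimal test policy of that kind could look roughly like the sketch below (assuming the ConnectionLoadBalancingPolicy interface from org.hornetq.api.core.client.loadbalance with its single select(int) method; the class name and logging are illustrative only):

      import org.hornetq.api.core.client.loadbalance.ConnectionLoadBalancingPolicy;

      // Sketch of a test policy: plain round robin, with logging to show how often
      // select() is actually consulted by the client.
      public class LoggingRoundRobinPolicy implements ConnectionLoadBalancingPolicy {

          private int pos = -1;

          public int select(final int max) {
              pos = (pos + 1) % max;  // cycle over the connectors currently known to the client
              System.out.println("select() called: picking connector " + pos + " of " + max);
              return pos;
          }
      }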

       

      I am using Spring JmsTemplate and a stand-alone client deployed in Tomcat. Spring is caching JMS resources - but still, this should not impact load balancing. I am not using transactions, and I use DUPS_OK_ACKNOWLEDGE.
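
      For context, the sending side is essentially a plain JmsTemplate, roughly like this sketch (the queue name and lookup code are placeholders, not my real configuration):

      // Sketch of the sending side: Spring JmsTemplate, no transactions, DUPS_OK_ACKNOWLEDGE.
      ConnectionFactory cf = (ConnectionFactory) new InitialContext().lookup("/TestConnectionFactory");
      JmsTemplate jmsTemplate = new JmsTemplate(cf);
      jmsTemplate.setSessionTransacted(false);
      jmsTemplate.setSessionAcknowledgeMode(Session.DUPS_OK_ACKNOWLEDGE);
      jmsTemplate.convertAndSend("TestQueue", "payload");  // queue name is illustrative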

       

      Regards,

      Maciej

        • 1. Re: Round Robin Client Load Balancing not working?
          m.miklas

          Looks like I need to wait a bit after creating the first session... if I wait, the next session is redirected to the next server in the cluster.

          The cluster has already been running for a long time - the client should get information about the cluster structure right at the beginning....

           

          The method org.hornetq.core.client.impl.ClientSessionFactoryImpl#connectorsChanged obtains the cluster structure from #discoveryGroup - sometimes it contains one node, sometimes all nodes.

           

          I would like to enable client load balancing right after start (the discoveryGroup should contain all cluster nodes right after the client starts) - is that possible?

          • 2. Re: Round Robin Client Load Balancing not working?
            m.miklas

            Let me continue my discussion

             

            discovery-initial-wait-timeout is set to 2000, but DiscoveryGroupImpl#waitForBroadcast(long) never waits! Something is calling notify on the lock before the cluster topology is fully propagated. When I block this call for about one second, everything is fine.

             

             

            Is it possible that DiscoveryGroupImpl receives a broadcast from the first server and releases the lock in #waitForBroadcast? I must be missing something - this broadcast should still contain both server nodes, because they are in the same cluster....
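
            (For comparison, when the factory is built programmatically rather than looked up over JNDI, the same wait can apparently be set on the DiscoveryGroupConfiguration - a rough sketch only, assuming the 2.x client API; the group address and port are placeholders:)

            // Sketch: give the client time to collect broadcasts from all nodes
            // before the first session is created (values are placeholders).
            DiscoveryGroupConfiguration groupConfiguration =
                    new DiscoveryGroupConfiguration("231.7.7.7", 9876);
            groupConfiguration.setDiscoveryInitialWaitTimeout(3000);  // ms, like discovery-initial-wait-timeout in hornetq-jms.xml
            // ...then pass groupConfiguration to HornetQJMSClient.createConnectionFactoryWithHA(...)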

            • 3. Re: Round Robin Client Load Balancing not working?
              clebert.suconic

              We didn't see your message because you posted it on the wrong forum: the Dev forum instead of the users' forum.

               

               

              I will move your message to the correct forum.

              • 4. Re: Round Robin Client Load Balancing not working?
                clebert.suconic

                Load balancing on the client is to balance Session creation.

                 

                I believe you should not use load balancing on the server and only use message redistribution.

                 

                Let me take a look at how to configure this option and I will get back to you. This is an area that Andy and Tim were dealing with more... I will do some quick research on how to do it.

                • 5. Re: Round Robin Client Load Balancing not working?
                  m.miklas

                  Thank you for the answer, and sorry for my English... anyway,

                  This is what I have discovered after some tests:

                   

                  Create the first session and wait a few seconds - subsequent sessions will be load balanced correctly across all nodes (A and B) in the cluster.

                  But if I create xx sessions right after starting the JMS client, without the waiting described above, all sessions will be directed to one node in the cluster. This also happens if the cluster crashes - all nodes are gone and the sessions recover automatically.

                   

                  One more thing: starting the JMS client means, for example, starting 30 threads, each one creating its own session - we have 30 sessions, and all go to one node in the cluster.

                   

                  This problem will only occur if your system is under heavy load from the beginning - well, mine is: I am expecting up to 1000 messages/second.
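
                  In code, the warm-up described above looks roughly like this (a sketch only; exception handling omitted, and the sleep length is just a guess at a couple of broadcast periods):

                  // Sketch of the warm-up: create the connection and one throwaway session,
                  // pause so the client can learn the full cluster topology, then start
                  // the worker threads.
                  ConnectionFactory connectionFactory =
                          (ConnectionFactory) new InitialContext().lookup("/TestConnectionFactory");
                  Connection connection = connectionFactory.createConnection();
                  connection.start();

                  Session warmUp = connection.createSession(false, Session.DUPS_OK_ACKNOWLEDGE);
                  warmUp.close();

                  Thread.sleep(2000);  // crude: wait roughly broadcast-period * 2 for full discovery

                  // now start the 30 worker threads, each creating its own Session from this Connection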

                  • 6. Re: Round Robin Client Load Balancing not working?
                    clebert.suconic

                    Are you talking about HornetQ Core sessions (our API) or JMS Sessions?

                     

                    If you're talking about JMS Sessions it's a different thing: the main Connection will be holding a session, and all the JMS Sessions will go to the same node, respecting the connection held by the JMS Connection.

                    • 7. Re: Round Robin Client Load Balancing not working?
                      m.miklas

                      I am using Spring JMS - it obtains JMS resources through JNDI - this is the HornetQ implementation. Spring is using a PooledConnectionFactory - it pools on the Connection level. In my case it should create a different Connection and a corresponding Session for each thread (I will validate it tomorrow, it could be only a different Session....)

                       

                      My understanding was that one JMS Connection should create Sessions that are connected to different cluster nodes based on the client's load balancing policy. This also makes sense - the Connection is therefore thread safe.

                      • 8. Re: Round Robin Client Load Balancing not working?
                        clebert.suconic

                        If you were using the Core API, yes: different core Sessions.

                         

                        If you are using the JMS API... once the connection is stuck to a node, all the JMS Sessions on that JMS Connection will respect the expected semantics (i.e. they are all connected to the same server).

                        • 9. Re: Round Robin Client Load Balancing not working?
                          m.miklas

                          Spring is using a single javax.jms.Connection - it is thread safe, and I would expect it to load balance sessions - are you sure that the JMS API will not do that?

                           

                          I can even see that it works - it only fails when I create xxx sessions right after starting the client - the cluster topology is not propagated to the client fast enough (my guess).

                           

                          My Spring client uses a single Connection - there are two test cases:

                           

                          Test 1:

                          1) start xxx threads

                          2) each thread creates one Session - we have xx Sessions from a single Connection

                          3) all Sessions go to the same single node in the cluster

                           

                          Test 2:

                          1) start xxx threads

                          1a) create one Session and send one message to the queue

                          2) each thread creates one Session - we have xx Sessions from a single Connection

                          3) each Session goes to a different node in the cluster - load balancing is working

                           

                          I do not understand the difference between the Core API and the JMS API - the Spring code does a JNDI lookup and basically works with the HornetQ implementations of Connection, Session and Destination.

                           

                          http://hornetq.sourceforge.net/docs/hornetq-2.1.2.Final/user-manual/en/html_single/index.html

                          With HornetQ client-side load balancing, subsequent sessions created using a single session factory can be connected to different nodes of the cluster. This allows sessions to spread smoothly across the nodes of a cluster and not be "clumped" on any particular node.

                          • 10. Re: Round Robin Client Load Balancing not working?
                            m.miklas

                            One more thing: Spring obtains through JNDI the connection factory defined in hornetq-jms.xml:

                             

                            <connection-factory name="NettyConnectionFactory">
                               <connectors>
                                  <connector-ref connector-name="netty" />
                               </connectors>
                               <entries>
                                  <entry name="/TestConnectionFactory" />
                               </entries>
                               <call-timeout>500</call-timeout>
                               <discovery-initial-wait-timeout>2200</discovery-initial-wait-timeout>
                               <client-failure-check-period>500</client-failure-check-period>
                               <block-on-acknowledge>false</block-on-acknowledge>
                               <discovery-group-ref discovery-group-name="test-jms-discovery-group" />
                               <reconnect-attempts>0</reconnect-attempts>
                               <connection-load-balancing-policy-class-name>
                                  org.hornetq.api.core.client.loadbalance.RoundRobinConnectionLoadBalancingPolicy
                               </connection-load-balancing-policy-class-name>
                            </connection-factory>


                            • 11. Re: Round Robin Client Load Balancing not working?
                              ataylor

                              JMS Sessions are load balanced between all the nodes. When the connection is made it will wait for discovery of the first node before allowing sessions to be created. However, since this is time sensitive, there will be a window where one node may have been discovered but the others haven't. Full discovery can take up to broadcast-period * 2.

                               

                              Again, with consumers: once they are created locally they are propagated around the cluster, and any messages sent before this propagation has happened will not be distributed evenly.

                              • 12. Re: Round Robin Client Load Balancing not working?
                                m.miklas

                                Thank you! This is exactly what I've observed.

                                 

                                This could also be improved: the cluster has been running for a long time. The client connects to the broadcast group and needs to receive a broadcast from all nodes in the cluster to provide proper load balancing. But again - the server has been running for a long time, so already the first broadcast to the client could contain all cluster nodes.

                                 

                                My application runs under heavy load from the very first second - in this case I am running into the problem that most of the load is directed to one node.

                                • 13. Re: Round Robin Client Load Balancing not working?
                                  ataylor

                                  You could always use static connectors instead.

                                   

                                  HA is changing quite a bit in the next version and this won't happen.
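
                                  As an illustration, a connection factory built from a static list of connectors might look roughly like this (host and port values are placeholders; the QUEUE_CF factory type and Netty connector follow the API usage shown later in this thread):

                                  // Sketch: point the client at both cluster nodes explicitly instead of relying
                                  // on UDP discovery, so the full topology is known from the first connection.
                                  Map<String, Object> nodeA = new HashMap<String, Object>();
                                  nodeA.put("host", "hornetq-a.example.com");  // placeholder
                                  nodeA.put("port", 5445);

                                  Map<String, Object> nodeB = new HashMap<String, Object>();
                                  nodeB.put("host", "hornetq-b.example.com");  // placeholder
                                  nodeB.put("port", 5445);

                                  TransportConfiguration connectorA =
                                          new TransportConfiguration(NettyConnectorFactory.class.getName(), nodeA);
                                  TransportConfiguration connectorB =
                                          new TransportConfiguration(NettyConnectorFactory.class.getName(), nodeB);

                                  ConnectionFactory cf = (ConnectionFactory) HornetQJMSClient
                                          .createConnectionFactoryWithHA(JMSFactoryType.QUEUE_CF, connectorA, connectorB);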

                                  • 14. Re: Round Robin Client Load Balancing not working?
                                    harikrishnan_pillai

                                    Hi, I am facing a problem with Spring HornetQ integration. HornetQ is running in a two-node cluster. My requirement is JMS session load balancing across the HornetQ nodes. I configured a Spring Integration consumer (message-driven channel adapter) to connect to HornetQ through a Spring caching connection factory. So if I configure 10 concurrent consumers, 5 consumers should connect to node 1 and 5 consumers should connect to node 2. But the behaviour is that all 10 consumers get attached to one HornetQ node only.

                                    If we remove the Spring caching connection factory from the Spring Integration message-driven channel adapter and attach the HornetQ connection factory directly, the load balancing works fine. I also tried different cache levels like 0, 1, 2, 3 - the load balancing is not working at all with a Spring caching connection factory. How can we solve this problem? If I remove the Spring caching connection factory, the performance seems drastically reduced.

                                     

                                    My configuration is as below.

                                     

                                    @Configuration
                                    public class JMSConfig {

                                        @Value("${session.cacheSize}")
                                        private Integer sessioncacheSize;

                                        @Value("${cache.producers}")
                                        private Boolean isCacheProducers;

                                        @Value("${ems.server}")
                                        private String emsServerUrls;

                                        @Value("${ems.port}")
                                        private String port;

                                        @Value("${ems.high.available}")
                                        private Boolean isHighAvailable;

                                        @Value("${ems.username}")
                                        private String emsUserName;

                                        @Value("${ems.password}")
                                        private String emsPassword;

                                        @Value("${ems.transacted}")
                                        private Boolean isTransacted;

                                        @Value("${ems.cache.level.name}")
                                        private String cacheLevelName;

                                        @Value("${ems.maxconsumers}")
                                        private Integer maxConsumers;

                                        @Value("${ems.acknowledgment.mode}")
                                        private String consumerAcknowledgementMode;

                                        @Value("${ems.concurrent.consumers}")
                                        private Integer concurrentConsumers;

                                        @Value("${ems.broadcast.group}")
                                        private String broadCastGroup;

                                        @Value("${ems.broadcast.port}")
                                        private Integer broadcastPort;

                                        private List<String> transportServers = Lists.newArrayList();

                                        @PostConstruct
                                        public void initilizeTransportServers() {
                                            Splitter splitter = Splitter.on(DELEMITER_COMMA).omitEmptyStrings().trimResults();
                                            for (String url : splitter.split(emsServerUrls)) {
                                                transportServers.add(url);
                                            }
                                        }

                                        @Bean
                                        public UserCredentialsConnectionFactoryAdapter userCredentialsConnectionFactory() {
                                            UserCredentialsConnectionFactoryAdapter userCredentialsConnectionFactory = new UserCredentialsConnectionFactoryAdapter();
                                            userCredentialsConnectionFactory.setTargetConnectionFactory(connectionFactory());
                                            userCredentialsConnectionFactory.setUsername(emsUserName);
                                            userCredentialsConnectionFactory.setPassword(emsPassword);
                                            return userCredentialsConnectionFactory;
                                        }

                                        @Bean
                                        public CachingConnectionFactory connectionFactory() {
                                            CachingConnectionFactory cachingConnectionFactory = new CachingConnectionFactory();
                                            cachingConnectionFactory.setSessionCacheSize(sessioncacheSize);
                                            cachingConnectionFactory.setCacheProducers(false);
                                            cachingConnectionFactory.setCacheConsumers(false);
                                            cachingConnectionFactory.setTargetConnectionFactory(hornetQConnectionFactory());
                                            return cachingConnectionFactory;
                                        }

                                        /*
                                         * Direct connection factory use is recommended for client-side load balancing, as HA JNDI is complex and
                                         * requires configuration in the messaging system. The JNDI server can be completely avoided with direct
                                         * connection factory usage. It also provides additional flexibility to add interceptors for logging and
                                         * monitoring, and performs better than HA JNDI.
                                         */
                                        @Bean
                                        public ConnectionFactory hornetQConnectionFactory() {
                                            TransportConfiguration[] transportConfigurations = transportConfiguration();
                                            //DiscoveryGroupConfiguration groupConfiguration = new DiscoveryGroupConfiguration(broadCastGroup, broadcastPort);
                                            //groupConfiguration.setDiscoveryInitialWaitTimeout(3000);

                                            //ConnectionFactory connectionFactory = (ConnectionFactory) HornetQJMSClient.createConnectionFactoryWithHA(JMSFactoryType.QUEUE_CF, transportConfigurations);

                                            HornetQJMSConnectionFactory connectionFactory = new HornetQJMSConnectionFactory(Boolean.FALSE, transportConfigurations);
                                            return connectionFactory;
                                        }

                                        public TransportConfiguration[] transportConfiguration() {
                                            List<TransportConfiguration> transportConfigurations = Lists.newArrayList();
                                            for (String url : transportServers) {
                                                Map<String, Object> map = Maps.newHashMap();
                                                map.put(EMS_HOST, url);
                                                map.put(EMS_PORT, port);
                                                TransportConfiguration server = new TransportConfiguration(NettyConnectorFactory.class.getName(), map);
                                                transportConfigurations.add(server);
                                            }
                                            return transportConfigurations.toArray(new TransportConfiguration[transportConfigurations.size()]);
                                        }
                                    }

                                     

                                     

                                    And the Spring Integration (SI) configuration:

                                     

                                    Load balancing working:

                                    <jms:message-driven-channel-adapter
                                        id="limelightCatalogAdapterInboundAck"
                                        channel="messageReceiverChannel"
                                        cache-level="0"
                                        destination-name="DLQ"
                                        concurrent-consumers="${ems.concurrent.consumers}"
                                        connection-factory="hornetQConnectionFactory"
                                        max-concurrent-consumers="${ems.maxconsumers}" />


                                    Load balancing not working:

                                    <jms:message-driven-channel-adapter
                                        id="limelightCatalogAdapterInboundAck"
                                        channel="messageReceiverChannel"
                                        cache-level="0"
                                        destination-name="DLQ"
                                        concurrent-consumers="${ems.concurrent.consumers}"
                                        connection-factory="userCredentialsConnectionFactory"
                                        max-concurrent-consumers="${ems.maxconsumers}" />
