
    Distribute consumers between hornetq nodes in a symmetric cluster with discovery

    lifeonatrip

      Hi all,

       

      I have 2 consumers (clientA & clientB) connecting to 2 HornetQ servers (nodeA & nodeB) via a JNDI lookup against a discovery group. The cluster is load balanced (i.e. both nodes are active) and both nodes are running on JBoss AS 7.2.

      My question is: using the RoundRobin or Random load balancing policy, there is a high chance that the 2 consumers end up connecting to the same node (let's assume it's nodeA).

       

      This means that every message sent to nodeB has to be redistributed to nodeA, hurting performance and the overall message flow design.

      How can I configure my consumers, via Camel JMS, to connect to the server with fewer consumers on that specific queue?

       

       

      PS. I inspected the UDP broadcast and there is no information on the load or the number of consumers/producers per queue; there is just a generic topology view of the servers (IP address, port).

       

      What I want to achieve: [diagram]

      What actually happens: [diagram]

        • 1. Re: Distribute consumers between hornetq nodes in a symmetric cluster with discovery
          evidence01

          Yes, it is possible via cluster config, doing server-side JMS load balancing and/or client-side load balancing. It can be done with queues or topics. What do you want to know specifically?

          -J

          • 2. Re: Distribute consumers between hornetq nodes in a symmetric cluster with discovery
            lifeonatrip

            Hi Jacek,

             

            I just want a real load-based balancing policy.

            I tried various configurations, but the consumers always connect to the 2 servers at startup in a random fashion. This basically means there is a high chance that all the consumers end up connected to one queue server rather than being spread across both.
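
            For reference, this is the kind of configuration I mean (a minimal sketch; the HornetQ 2.x setter and the JNDI name are assumptions from my setup):

                import javax.naming.InitialContext;

                import org.hornetq.api.core.client.loadbalance.RoundRobinConnectionLoadBalancingPolicy;
                import org.hornetq.jms.client.HornetQConnectionFactory;

                public class PolicySetup {
                    public static void main(String[] args) throws Exception {
                        // Factory bound in JNDI, backed by the discovery group
                        // (the JNDI name here is just an example).
                        InitialContext ctx = new InitialContext();
                        HornetQConnectionFactory cf =
                                (HornetQConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory");

                        // Both shipped policies (RoundRobin, Random) pick the initial
                        // node with no knowledge of queue depth or consumer counts.
                        cf.setConnectionLoadBalancingPolicyClassName(
                                RoundRobinConnectionLoadBalancingPolicy.class.getName());
                    }
                }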

             

            I would like to know if there is a way to load balance these consumers based on the actual queue server load instead of randomly. All the HornetQ load balancing classes are basically random. I tried to implement my own, but a policy cannot read the topology UDP broadcast; I had to modify the client to read the broadcast information, and there is no load-related information in it that I could use to balance my consumers, just a bunch of IP addresses.
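
            To illustrate the problem: a custom policy implements HornetQ's ConnectionLoadBalancingPolicy interface, and all it ever receives is the number of known connectors (this sketch is essentially what the shipped Random policy does):

                import java.util.Random;

                import org.hornetq.api.core.client.loadbalance.ConnectionLoadBalancingPolicy;

                // The only callback is select(max): the policy gets the number of
                // known connectors and must return an index. There is no hook for
                // queue depth or consumer counts, so a load-based decision is
                // impossible at this level.
                public class MyPolicy implements ConnectionLoadBalancingPolicy {

                    private final Random random = new Random();

                    public int select(final int max) {
                        return random.nextInt(max);
                    }
                }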

             

            While a semi-random policy is fine for a producer in most cases, a consumer should be able to connect where there is more load. Redistributing the messages across servers should not be necessary in that case.

             

            Is there a way to do it?

             

            If not, as far as you know, is there a plan to implement this feature?

             

            Thanks for the answer.

             

            -- lifeonatrip

            • 3. Re: Distribute consumers between hornetq nodes in a symmetric cluster with discovery
              evidence01

              What you seek is outside the messaging layer. The client-side LB strategies were designed with publishers in mind. Typically, to control consumers like that, you would not use a discovery group on the consumer; you would instead define a static binding with a primary and a secondary IP that the client always connects to. More specifically, LB on the consumer side is a function of the application/consumer design. In your example, consumer A would get queue servers A, B and consumer B the opposite order, as in the sketch below.
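
              Something along these lines (a sketch, assuming the HornetQ 2.x core JMS client and camel-jms; the host names and port are examples):

                  import java.util.HashMap;
                  import java.util.Map;

                  import org.apache.camel.component.jms.JmsComponent;
                  import org.hornetq.api.core.TransportConfiguration;
                  import org.hornetq.api.jms.HornetQJMSClient;
                  import org.hornetq.api.jms.JMSFactoryType;
                  import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;
                  import org.hornetq.jms.client.HornetQConnectionFactory;

                  public class StaticConsumerSetup {

                      static TransportConfiguration connector(String host, int port) {
                          Map<String, Object> params = new HashMap<String, Object>();
                          params.put("host", host);
                          params.put("port", port);
                          return new TransportConfiguration(NettyConnectorFactory.class.getName(), params);
                      }

                      public static void main(String[] args) {
                          // Consumer A lists nodeA first (primary) and nodeB second
                          // (secondary); consumer B would list them the other way round.
                          HornetQConnectionFactory cf = HornetQJMSClient.createConnectionFactoryWithoutHA(
                                  JMSFactoryType.CF,
                                  connector("nodeA.example.com", 5445),
                                  connector("nodeB.example.com", 5445));

                          // Hand the pinned factory to Camel instead of the
                          // discovery-based one from JNDI.
                          JmsComponent jms = JmsComponent.jmsComponent(cf);
                          // camelContext.addComponent("jms", jms);
                      }
                  }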

               

              To complement that, use JMSXGroupID to stick "application" data to one consumer versus another in a symmetric cluster. That way you can always ensure that a single consumer processes the data tagged with that group ID (while it is available, of course). For example, if you have an Event/Event Collection with a unique ID, each Event/Collection would go to and stick with a random consumer; in your case N/2 would go to A versus B.
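
              For example, on the producer side (plain JMS; JMSXGroupID is a standard property name, the event ID is illustrative):

                  import javax.jms.JMSException;
                  import javax.jms.Message;
                  import javax.jms.MessageProducer;
                  import javax.jms.Session;

                  public class GroupedSend {

                      static void sendEvent(Session session, MessageProducer producer,
                                            String eventId, String payload) throws JMSException {
                          Message msg = session.createTextMessage(payload);
                          // All messages carrying the same group ID are pinned to one
                          // consumer; distinct IDs spread across the available consumers.
                          msg.setStringProperty("JMSXGroupID", eventId);
                          producer.send(msg);
                      }
                  }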

               

              This is the architecture we have in our stack.
