2 Replies Latest reply on Aug 19, 2010 12:02 PM by Joe Schmoe

    Message Redistribution Limitations

    Joe Schmoe Newbie

      Perhaps there is another way of handling this that I'm not seeing, so I would be very grateful for any suggestions that could be thrown my way.


      Here is the problem.  We are running two individual HornetQ application servers in our production environment, configured as a cluster.  We have a few producers publishing messages onto the clustered queue.


      Here is where it gets interesting for me:


      • On each application server, we also have a pool of cached consumers.
      • A single consumer is retrieved from the pool for each incoming HTTP web request to the appserver (similar to the way a database connection pool would work).
      • The consumer is then used to consume exactly one message from the queue.
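      For concreteness, here is a minimal, self-contained sketch of that checkout/receive/check-in cycle in plain Java. `PooledConsumer` and `ConsumerPool` are hypothetical stand-ins for a pooled JMS `MessageConsumer`, and a `BlockingQueue<String>` stands in for the HornetQ queue; a real implementation would hold live sessions/consumers instead:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical stand-in for a cached JMS MessageConsumer.
class PooledConsumer {
    private final BlockingQueue<String> queue; // stand-in for the HornetQ queue

    PooledConsumer(BlockingQueue<String> queue) { this.queue = queue; }

    // Synchronously consume exactly one message (or null on timeout).
    String receiveOne() throws InterruptedException {
        return queue.poll(1, TimeUnit.SECONDS);
    }
}

// Checkout/check-in pool, analogous to a database connection pool.
class ConsumerPool {
    private final BlockingQueue<PooledConsumer> pool;

    ConsumerPool(int size, BlockingQueue<String> queue) {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) pool.add(new PooledConsumer(queue));
    }

    PooledConsumer checkOut() throws InterruptedException { return pool.take(); }

    void checkIn(PooledConsumer c) { pool.add(c); }
}

public class PoolDemo {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> tasks = new ArrayBlockingQueue<>(10);
        tasks.add("task-1");
        tasks.add("task-2");

        ConsumerPool pool = new ConsumerPool(2, tasks);
        PooledConsumer c = pool.checkOut(); // one checkout per HTTP request
        String task = c.receiveOne();       // consume exactly one message
        pool.checkIn(c);                    // return the consumer to the pool
        System.out.println(task);
    }
}
```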


      Now as you can imagine, we may have 50 users on appserver 1 and 10 users on appserver 2.  The overall effect is that appserver 1 ends up getting starved of messages: because appserver 2 has fewer consumers, its share of the queue backs up while appserver 1's consumers sit idle.


      What would be ideal is a way to configure HornetQ to redistribute messages to another node based on how many messages are left on each node, at some configurable time interval.  In other words, a way to tell HornetQ that I always want the same consumer-to-message ratio on each cluster node.


      Unless I'm missing something, the only way messages ever get redistributed is if ALL the consumers on a given node are closed, which is not a viable option in this scenario.


      I'm also open to other suggestions on how to accomplish something like this, short of writing my own queue browser to pull messages out of the queue and redistribute them manually.

        • 1. Re: Message Redistribution Limitations
          Tim Fox Master

          You can set route-when-no-consumers to true; messages will then be round-robined across the nodes irrespective of the number of consumers on each node.
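          For reference, this is configured on the cluster connection in hornetq-configuration.xml. In HornetQ 2.x releases the element is spelled forward-when-no-consumers; this is a sketch with placeholder connector and discovery-group names, so check the element name against your version's schema:

```xml
<cluster-connections>
   <cluster-connection name="my-cluster">
      <address>jms</address>
      <connector-ref>netty-connector</connector-ref>
      <!-- true: round-robin messages to every node in the cluster,
           even when the local node already has consumers attached -->
      <forward-when-no-consumers>true</forward-when-no-consumers>
      <max-hops>1</max-hops>
      <discovery-group-ref discovery-group-name="my-discovery-group"/>
   </cluster-connection>
</cluster-connections>
```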


          But I'd say the real issue here is that your architecture is broken. Maintaining a pool of consumers and doing synchronous receives on them is ugly.


          This will be slow, and it will also require you to turn off consumer buffering. Sounds like the kind of thing Spring does.
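          For anyone following along: turning off consumer buffering means setting consumer-window-size to 0 on the connection factory, so messages stay on the server until a consumer actually calls receive(). A sketch of the hornetq-jms.xml fragment, with placeholder connector and JNDI entry names:

```xml
<connection-factory name="ConnectionFactory">
   <connectors>
      <connector-ref connector-name="netty-connector"/>
   </connectors>
   <entries>
      <entry name="/ConnectionFactory"/>
   </entries>
   <!-- 0 disables client-side message buffering; each message is only
        delivered to a consumer when it issues a synchronous receive -->
   <consumer-window-size>0</consumer-window-size>
</connection-factory>
```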

          • 2. Re: Message Redistribution Limitations
            Joe Schmoe Newbie

            Hi Tim, and thanks for the response. 


            Maybe I didn't explain my use case well enough.  The whole purpose of this is not to complete tasks as quickly as possible: these are actual human tasks being handed out via HornetQ.  Tasks are loaded into the queue, and as users come into the system they grab a task off the queue and complete it.


            Like I said, I'm open to other suggestions, even if the suggestion is that HornetQ is not a good fit for this scenario.  We like using HornetQ, though, because it scales well and performs really well.  We also can't have these tasks getting lost, which is why we want to use HornetQ's durable messages.


            The advice to set route-when-no-consumers to true is not a viable solution, since I'm not looking for round robin.  If Appserver1 has three times the number of active users as Appserver2, then round robin will not keep Appserver2's queue from backing up and starving Appserver1.


            Again, what I am looking for is a way to redistribute the messages based on queue size to ensure there are always tasks on each node.  We have no way of controlling how fast people work either, so we may have a bunch of fast workers on one node and slow workers on another.


            I can't imagine I'm the only one out there with this type of usage scenario (one node completing faster than another), even if it does not involve human task completion.  It seems that HornetQ would benefit from more robust message redistribution options than simply "if there are no consumers left on a node, then redistribute."  That is especially true when the best practice, for performance reasons, is not to open and close consumers for each message but rather to keep them connected permanently.