6 Replies Latest reply on Jun 3, 2012 1:07 PM by stefanzier

    Looking for ideas: Message stream fairness

    stefanzier

      We want to use HornetQ to implement a basic level of fairness between message flows. I'd like to hear if anybody on the forum has a good idea on implementing this.

       

      Setup

      • The application, a multi-tenant system, processes flows of messages, one per tenant.
      • Cluster A produces messages into HornetQ.
      • Cluster B consumes messages from HornetQ.
      • The number of message streams from A to B is dynamic, since the number of active tenants varies. At any time, there are between 100 and 10,000 active message flows.
      • Message volume varies by 1-2 orders of magnitude between tenants.
      • Message volume within a tenant spikes occasionally.

       

      Scenario 1: Load spike in one message stream

      Occasionally, one of our tenants misbehaves or sends us a large amount of data. The node(s) in cluster B consuming the tenant's queue will not be able to keep up. The desired behavior in this case is for the particular tenant to receive back pressure, and for their messages to build up in HornetQ. Once the memory allocated to the tenant is full, HornetQ should BLOCK. Other tenants should be unaffected by this.

       

      Scenario 2: Cluster B stopped

      When Cluster B is unavailable to consume any messages, HornetQ should store messages until Cluster B comes back online. It should do so fairly, allocating comparable amounts of space to all tenants.

       

      How would one best set up a set of HornetQ addresses, diverts, queues, groups, etc. (using the Core API), such that:

      • An overload condition (more messages arriving than being consumed) in one stream does not affect the other streams.
      • Memory in HornetQ is distributed fairly between the message streams, without wasting any memory in a global overload/back pressure situation.

       

      My (not-so-great) ideas:

      1. Use one address/queue per stream. As I understand it, this requires me to manually manage memory across the addresses, which is tricky. (For example, with 10GB of memory and 10 streams, I'd like to assign 1GB to each stream, but as we grow to 100 streams, I'd need to shrink all the addresses to 100MB each to leave room.)
      2. Use rate-limited flow control. Monitor address utilization and adjust rate limits on an ongoing basis. This seems very indirect and clumsy.
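      The arithmetic behind idea 1 can be sketched as follows. This is a minimal illustration, not a HornetQ API; the function name, the 10% reserve, and the numbers are mine:

```python
def per_address_size(total_bytes, active_streams, reserve_fraction=0.1):
    """Split a fixed memory budget evenly across the currently active
    streams, holding back a small reserve for streams that appear later."""
    if active_streams <= 0:
        raise ValueError("need at least one active stream")
    usable = int(total_bytes * (1 - reserve_fraction))
    return usable // active_streams

# With a 10GB budget: roughly 1GB per address at 10 streams,
# shrinking toward 100MB as we approach 100 streams.
ten_streams = per_address_size(10 * 1024**3, 10)
hundred_streams = per_address_size(10 * 1024**3, 100)
```

      The awkward part is the shrinking step: every time a tenant is added, all existing addresses would need to be resized, which is the manual memory management described above.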

       

      Is there any obvious approach I'm missing? Any other good ideas on implementing fairness?

        • 1. Re: Looking for ideas: Message stream fairness
          ataylor

          This is the developer forum, for HornetQ development topics; the user forum is the appropriate place for this topic. However, see my comments:

          Scenario 1: Load spike in one message stream

          Occasionally, one of our tenants misbehaves or sends us a large amount of data. The node(s) in cluster B consuming the tenant's queue will not be able to keep up. The desired behavior in this case is for the particular tenant to receive back pressure, and for their messages to build up in HornetQ. Once the memory allocated to the tenant is full, HornetQ should BLOCK. Other tenants should be unaffected by this.

          I'm not sure what you mean by this and your comment about messages building up in HornetQ; that is exactly the point of a queue: messages are held in it until a consumer is available. If you mean you want to stop producers from sending messages, then you can set the max size of an address and set the policy to BLOCK. Alternatively, you can use paging to make sure the server doesn't run out of memory.
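          For reference, the max-size/BLOCK combination is configured per address in hornetq-configuration.xml. A rough sketch, where the match pattern and byte size are placeholders rather than values from this thread:

```xml
<address-settings>
   <!-- One setting matching all tenant addresses; size is illustrative -->
   <address-setting match="tenant.#">
      <max-size-bytes>104857600</max-size-bytes>          <!-- 100MB cap per address -->
      <address-full-policy>BLOCK</address-full-policy>    <!-- block producers when full -->
   </address-setting>
</address-settings>
```

          With BLOCK, producers sending to a full address stall until a consumer drains it; with PAGE instead, messages spill to disk via the paging store.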

           

          Scenario 2: Cluster B stopped

          When Cluster B is unavailable to consume any messages, HornetQ should store messages until Cluster B comes back online. It should do so fairly, allocating comparable amounts of space to all tenants.

          Again, this is the purpose of a queue.

           

          How would one best setup a set of HornetQ addresses, diverts, queues, groups, etc (using Core API), such that:

          • An overload condition (more messages arriving than being consumed) in one stream does not affect the other streams.
          • Memory in HornetQ is distributed fairly between the message streams, without wasting any memory in a global overload/back pressure situation.

          As I mentioned earlier, you can block once the address is full; alternatively, use producer flow control (see the user manual). As for memory, HornetQ has no control over that; it is managed by the JVM.

           

          My (not-so-great) ideas:

          1. Use one address/queue per stream. As I understand it, this requires me to manually manage memory across the addresses, which is tricky. (For example, with 10GB of memory and 10 streams, I'd like to assign 1GB to each stream, but as we grow to 100 streams, I'd need to shrink all the addresses to 100MB each to leave room.)

          HornetQ has no control over memory, only the size of an address, and this can't be changed dynamically, as it's set on the page store at queue creation.

           

          2. Use rate-limited flow control. Monitor address utilization and adjust rate limits on an ongoing basis. This seems very indirect and clumsy.

          You're probably correct.

           

          Is there any obvious approach I'm missing? Any other good ideas on implementing fairness?

          HornetQ has various functionality (flow control, paging, etc.) to allow you to do this. How your application is designed is not something we can really help you with, although we can help with any specific questions you have.

          • 2. Re: Looking for ideas: Message stream fairness
            stefanzier

            Andy, thanks for the reply. I've moved the thread out of the developer forum. Seems I didn't do a very good job explaining my question. I do understand the point of queues and am familiar with address sizing and address full policies. I was hoping others on the forums had a good idea on how to accomplish fairness and isolation between message flows. Let me try to clarify my question.

             

            Presently, all tenants share a single address, which leads to undesirable behaviors when back pressure occurs: one tenant can misbehave and fill up the entire address. The most plausible alternative is to have an address per tenant, to isolate them from one another. This would be perfect in a scenario where the number of tenants is fixed, or close to it. In our scenario, though, the number of tenants varies. A conservative approach would be to divide the heap by the maximum number of expected tenants and size addresses such that this number of addresses fits. This is, however, not ideal, since there will often be times when there are fewer active tenants. At those times, we'd like to have larger addresses, to be able to store more messages before they fill up and block senders.

             

            Any suggestions on solving this would be appreciated.

            • 3. Re: Looking for ideas: Message stream fairness
              ataylor

              OK, I get it now. Firstly, I would definitely have one address/queue per tenant to isolate them from each other. The difficulty is how to apportion the available memory between the queues. Maybe you could have ever-decreasing address sizes, something like:

               

              1. The first 100 addresses have n bytes of space each; these could be named queues.one.*

              2. The second 100 have n/2 bytes each, say queues.two.*, and so on.
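              A sketch of this halving scheme (the tier width of 100 comes from the example above; the function names and byte values are mine). One nice property: total space stays bounded no matter how many tiers are used, since 100·n + 100·n/2 + 100·n/4 + … < 200·n.

```python
def tier_size(address_index, base_bytes, tier_width=100):
    """Address size under the halving scheme: addresses 0-99 get
    base_bytes, 100-199 get base_bytes/2, 200-299 get base_bytes/4, ..."""
    tier = address_index // tier_width
    return base_bytes >> tier  # integer base_bytes / 2**tier

def tier_name(address_index, tier_width=100):
    """Illustrative naming following the post: queues.one.*, queues.two.*, ..."""
    words = ["one", "two", "three", "four", "five"]
    return "queues.%s.%d" % (words[address_index // tier_width], address_index)
```

              The drawback, as noted below, is that a long-lived tenant stuck in a late tier is permanently under-allocated.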

               

              Just an idea; whether this would work would depend on how long tenants live. It probably wouldn't work well with long-lived tenants.

               

              Alternatively, you could maintain a cluster and write something to bring cluster nodes up and down depending on load.

               

              Lastly, you could add an abstraction between the delivery of the messages and the work itself, maybe using some sort of scheduling system to do the actual work.

               

              Or, when you create the producer, use your own flow control dependent on the current state of the server.

               

              Just a few ideas.

              • 4. Re: Looking for ideas: Message stream fairness
                stefanzier

                 Andy, that's a really good idea. Basically, it allows you to allocate new addresses without much risk of ever running out of space.

                 

                 This sparked another, similar idea; do you think this is a good approach? Allocate the max number of addresses (say, 10,000) ahead of time. Dynamically manage the assignment of addresses to tenants on the producer side. When the load is 100 tenants, each tenant is assigned 100 addresses from the pool (address.0 - address.99 for tenant 0, address.100 - address.199 for tenant 1, etc.). When the load is 1,000 tenants, each is assigned 10 addresses. When sending a message, the producer round-robins over the addresses for a tenant.
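                 A sketch of that pooled scheme, on the producer side (the pool size and address names follow the description above; the function names are mine):

```python
import itertools

def addresses_for_tenant(tenant_index, tenant_count, pool_size=10000):
    """Contiguous slice of the pre-created pool for one tenant: with
    100 tenants, tenant 0 gets address.0-address.99, tenant 1 gets
    address.100-address.199, and so on."""
    per_tenant = pool_size // tenant_count
    start = tenant_index * per_tenant
    return ["address.%d" % i for i in range(start, start + per_tenant)]

def tenant_producer(tenant_index, tenant_count):
    """Endless round-robin over the tenant's slice of the pool."""
    return itertools.cycle(addresses_for_tenant(tenant_index, tenant_count))
```

                 Since the pool is fixed, rebalancing when the tenant count changes only moves assignments around on the producer side; the addresses themselves never need resizing.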

                 

                Thinking about the numbers here, is it reasonable to want to have 10,000 addresses in a single server, each connected to 100 producers and 100 consumers? That'd be about 1 million producer/address and 1 million consumer/queue pairs. Can HornetQ handle that much without much degradation? If not, what's a recommended ceiling for the number of addresses/queues per server?

                • 5. Re: Looking for ideas: Message stream fairness
                  ataylor

                   This sparked another, similar idea; do you think this is a good approach? Allocate the max number of addresses (say, 10,000) ahead of time. Dynamically manage the assignment of addresses to tenants on the producer side. When the load is 100 tenants, each tenant is assigned 100 addresses from the pool (address.0 - address.99 for tenant 0, address.100 - address.199 for tenant 1, etc.). When the load is 1,000 tenants, each is assigned 10 addresses. When sending a message, the producer round-robins over the addresses for a tenant.

                   Yeah, that's sort of my idea.

                   

                  Thinking about the numbers here, is it reasonable to want to have 10,000 addresses in a single server, each connected to 100 producers and 100 consumers? That'd be about 1 million producer/address and 1 million consumer/queue pairs. Can HornetQ handle that much without much degradation? If not, what's a recommended ceiling for the number of addresses/queues per server?

                   Yeah, that's fine. Addresses and queues shouldn't take up too much memory; it's the messages themselves that cause OOMs.

                   

                   

                   Another thing you could do is handle it in two steps.

                   

                   Have an MDB that monitors the load, etc., and is used by each client to assign resources, so:

                   

                   1) The client sends a message to the MDB asking for resources.

                   2) The MDB replies, saying "use queue A for n bits of work".

                   3) The client uses the queue for that work.

                   4) Repeat.

                   

                   This way you don't get blocked tenants, as you hand out resources dependent on the current load. So, say a client asks for 1,000 bits of work; the MDB replies, "I am busy, but you can send 10"; the client sends 10 and then goes back and asks for more.
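                   A minimal sketch of that credit loop. The Broker class stands in for the MDB, and the fixed grant cap is an invented stand-in for a real load measurement:

```python
class Broker:
    """Stand-in for the MDB: grants work credits based on current load."""
    def __init__(self, max_grant=10):
        self.max_grant = max_grant  # a real MDB would derive this from queue depth

    def request_credits(self, wanted):
        # Never grant more than the broker is willing to absorb right now.
        return min(wanted, self.max_grant)

def send_all(broker, total_work):
    """Client side: ask for credits, send that much, repeat until done."""
    sent = rounds = 0
    while sent < total_work:
        granted = broker.request_credits(total_work - sent)
        sent += granted
        rounds += 1
    return sent, rounds
```

                   Because a busy broker hands out smaller grants, a spiking tenant slows itself down in many small rounds instead of filling an address and blocking outright.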

                  • 6. Re: Looking for ideas: Message stream fairness
                    stefanzier

                     Have an MDB that monitors the load, etc., and is used by each client to assign resources, so:

                     

                     1) The client sends a message to the MDB asking for resources.

                     2) The MDB replies, saying "use queue A for n bits of work".

                     3) The client uses the queue for that work.

                     4) Repeat.

                     

                     This way you don't get blocked tenants, as you hand out resources dependent on the current load. So, say a client asks for 1,000 bits of work; the MDB replies, "I am busy, but you can send 10"; the client sends 10 and then goes back and asks for more.

                     

                    Thanks, Andy! That's another great idea... I believe this gives us a few options to work with! Will update this thread once we decide and report on how things worked out!