10 Replies Latest reply on Dec 28, 2006 12:33 PM by timfox

    Streamlined delivery of messages to consumer

    timfox

      I have refactored the way we deliver messages to consumers.

      This has resulted in a major simplification of the code, aiding its clarity and maintainability, as well as hopefully being more performant.

      The way it is designed to work is as follows:

      The client side consumer has a local buffer with a max size of maxBufferSize - it always consumes messages locally from that buffer.

      The server will send messages to the consumer as fast as it can, as they arrive. It writes the messages onto the wire and doesn't wait for a response (i.e. it's not RPC - so we are not held to ransom by networks with high bandwidth but high latency).

      On the other end of the wire, messages are read off and added to the buffer.

      If the buffer becomes full, because the consumer consumption rate cannot keep up with the rate the server is sending, then the client side consumer sends a flow rate change message to the server with an argument of zero.

      (The flow rate change message is sent asynch from client to server and has a parameter "newRate" which means "please try and send messages at this new rate". Currently the rate is only binary, i.e. value = 0 means stop sending and value > 0 means send as fast as you can. But this is a placeholder for the future where specific rates can be specified.)

      The server receives the flow rate change (0) and stops sending messages. The client consumes the backlog and when the number remaining reaches bufferSize / 2 it sends another flow rate change message to the server with a +ve value.

      The server receives this and starts sending again.
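      The server-side half of this handshake can be sketched roughly as follows. This is a minimal illustration of the behaviour described above, not the actual JBoss Messaging code: the names (ServerConsumerSketch, deliver, changeRate) are hypothetical, and the "wire" list stands in for the network connection.

      ```java
      import java.util.ArrayDeque;
      import java.util.ArrayList;
      import java.util.List;
      import java.util.Queue;

      // Illustrative sketch only: the server side pushes messages out
      // fire-and-forget while the rate is positive, and a changeRate(0)
      // pauses delivery until a positive rate arrives.
      public class ServerConsumerSketch {

          private boolean sending = true;
          private final Queue<String> queue = new ArrayDeque<>(); // undelivered messages
          private final List<String> wire = new ArrayList<>();    // stands in for the socket

          // Called when the queue hands this consumer a message.
          public synchronized void deliver(String msg) {
              queue.add(msg);
              drain();
          }

          // Asynch flow rate change from the client: 0 = stop, > 0 = send full speed.
          public synchronized void changeRate(int newRate) {
              sending = newRate > 0;
              drain();
          }

          private void drain() {
              while (sending && !queue.isEmpty()) {
                  wire.add(queue.poll()); // fire-and-forget write; no RPC round trip
              }
          }

          public synchronized List<String> sentSoFar() {
              return new ArrayList<>(wire);
          }
      }
      ```

      In the real system the un-sent messages of course stay in the server-side queue rather than in a local list; the point of the sketch is just the binary stop/go switch.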

      Note that we do not wait until the buffer is empty before sending the asynch flow rate change (+ve) message, since that would introduce extra latency: the consumer wouldn't be consuming while it was waiting for new messages to arrive.
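      The client-side buffer logic described above can be sketched like this. Again this is an illustrative sketch, not the real JBoss Messaging classes: ClientBuffer and RateSender are hypothetical names, and the buffer always accepts incoming messages (the max size is a soft limit, since in-flight messages can still arrive after the stop request).

      ```java
      import java.util.ArrayDeque;
      import java.util.Queue;

      // Hypothetical sketch of the client-side consumer buffer with the two
      // flow-control thresholds: stop at maxBufferSize, resume at half.
      public class ClientBuffer {

          public interface RateSender { void sendChangeRate(int newRate); }

          private final Queue<Object> buffer = new ArrayDeque<>();
          private final int maxBufferSize;
          private final RateSender rateSender;   // sends the asynch flow rate change
          private boolean serverSending = true;  // our view of the server's state

          public ClientBuffer(int maxBufferSize, RateSender rateSender) {
              this.maxBufferSize = maxBufferSize;
              this.rateSender = rateSender;
          }

          // Called when a message is read off the wire.
          public synchronized void onMessage(Object msg) {
              buffer.add(msg); // always accepted: max size is a soft limit
              if (serverSending && buffer.size() >= maxBufferSize) {
                  rateSender.sendChangeRate(0); // ask the server to stop sending
                  serverSending = false;
              }
          }

          // Called by the consumer; returns null if the buffer is empty.
          public synchronized Object consume() {
              Object msg = buffer.poll();
              // Resume at half-full rather than empty, so new messages are
              // already in flight before the backlog is completely drained.
              if (!serverSending && buffer.size() <= maxBufferSize / 2) {
                  rateSender.sendChangeRate(1); // any value > 0 means full speed
                  serverSending = true;
              }
              return msg;
          }
      }
      ```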

      Currently we are waiting on remoting to release a version with true asynchronous invokes (which are in remoting HEAD but not in a release), but once this is done we should see the benefits.

        • 1. Re: Streamlined delivery of messages to consumer
          timfox

          You may have noticed there are parallels between this and how TCP windowing works.

          • 2. Re: Streamlined delivery of messages to consumer
            ovidiu.feodorov


            Tim wrote:
            If the buffer becomes full, because the consumer consumption rate cannot keep up with the rate the server is sending, then the client side consumer sends a flow rate change message to the server with an argument of zero.


            What happens in the case the client buffer becomes full AND there are in-flight messages still arriving? The client may send its "slow down" asynchronous message back to the server, and after a while, the server may react to it, but what happens with the messages that have been sent during this interval?


            • 3. Re: Streamlined delivery of messages to consumer
              timfox


              "ovidiu.feodorov@jboss.com" wrote:

              What happens in the case the client buffer becomes full AND there are in-flight messages still arriving? The client may send its "slow down" asynchronous message back to the server, and after a while, the server may react to it, but what happens with the messages that have been sent during this interval?


              Any messages received from the server side when the consumer is not closed will be accepted. I.e. the max buffer size is not a hard limit.

              • 4. Re: Streamlined delivery of messages to consumer
                jeffdelong

                How does this work when there are multiple consumers for the same queue?

                • 5. Re: Streamlined delivery of messages to consumer
                  timfox


                  "jeffdelong" wrote:
                  How does this work when there are multiple consumers for the same queue?


                  It should work the same for any number of consumers.

                  • 6. Re: Streamlined delivery of messages to consumer
                    jeffdelong

                    Does this imply that all messages on the queue are sent to all consumers of the queue?

                    If so, how does the JMS server enforce that the message only gets consumed by a single consumer?

                    • 7. Re: Streamlined delivery of messages to consumer
                      timfox


                      "jeffdelong" wrote:
                      Does this imply that all messages on the queue are sent to all consumers of the queue?


                      No. JMS queue semantics demand that a particular message is delivered to only one consumer.


                      If so, how does the JMS server enforce that the message only gets consumed by a single consumer?


                      Currently, the default policy is to round robin between all available consumers, but this has nothing to do with how we deliver messages to the client side.

                      • 8. Re: Streamlined delivery of messages to consumer
                        timfox

                        So, basically what happens is this:

                        The queue may have many consumers. When delivery occurs, the queue will select one of those consumers based on the implementation of Router that is being used with the queue. The default implementation round robins between the consumers.

                        Once the server side representation of the consumer has the message, it then needs to send it to the client side. This works as described in the first post on this thread.
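                        The default round-robin selection could be sketched like this. Note this is a simplified illustration, not the real Router implementation: consumers are represented here as plain strings rather than server-side consumer endpoints.

                        ```java
                        import java.util.List;
                        import java.util.concurrent.CopyOnWriteArrayList;

                        // Illustrative round-robin router sketch: each message is
                        // handed to exactly one consumer (JMS queue semantics),
                        // cycling through the registered consumers in turn.
                        public class RoundRobinRouter {

                            private final List<String> consumers = new CopyOnWriteArrayList<>();
                            private int next = 0;

                            public void add(String consumer) { consumers.add(consumer); }

                            public void remove(String consumer) { consumers.remove(consumer); }

                            // Select the consumer for the next message, or null if
                            // there are no consumers registered.
                            public synchronized String select() {
                                if (consumers.isEmpty()) return null;
                                String chosen = consumers.get(next % consumers.size());
                                next++;
                                return chosen;
                            }
                        }
                        ```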

                        • 9. Re: Streamlined delivery of messages to consumer
                          jeffdelong


                          The server receives the flow rate change (0) and stops sending messages. The client consumes the backlog and when the number remaining reaches bufferSize / 2 it sends another flow rate change message to the server with a +ve value.


                          By "server" in the previous quote, is this the router or the server side representation of the consumer?

                          For example - if I had two consumers, and one is "backed up" and has sent a flow rate change (0). Will the round robin router continue to select the server side representation of this consumer? Or conversely, will the router be told this consumer is unavailable and temporarily stop selecting it?

                          Is the "server side representation" of the consumer the same as the "channel" in the JBoss Messaging architecture?

                          • 10. Re: Streamlined delivery of messages to consumer
                            timfox


                            "jeffdelong" wrote:


                            By "server" in the previous quote, is this the router or the server side representation of the consumer?



                            The latter. The router is not really pertinent to this discussion, which is about how messages get from a ServerConsumerEndpoint to the client side consumer. The router is just the thing that routes the message from the queue to the ServerConsumerEndpoint.



                            Is the "server side representation" of the consumer the same as the "channel" in the JBoss Messaging architecture?


                            No, the channel is the queue. The "server side representation" is the ServerConsumerEndpoint.