15 Replies Latest reply on Oct 31, 2005 12:55 PM by ovidiu.feodorov

    JBoss Messaging - Issues with client side message buffering.

    timfox

      In the current mode of operation, we asynchronously shift messages from a destination to the client side buffer of the consumer, so that when receive() is called on the consumer, a message may already be available in the client side buffer, saving us a server side call (i.e. this is an optimisation).

      Unfortunately this also brings undesirable side effects - a client can open a consumer and not actually receive() the message (or take a long time between receives), resulting in the message(s) sitting in the client side buffer, potentially forever.

      While messages are in the client side buffer the server considers them as being in the process of delivery and will not deliver them to another consumer if it attaches to the destination.

      This can result in messages never being delivered, even for well behaved JMS clients.

      (Aside - Currently we do not actively NACK a message. Messages are only implicitly NACKed when a consumer is closed, at which point any messages in the "being delivered" state revert to the "available to be delivered" state.)
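
      To make this concrete, here is a minimal sketch of the kind of client-side buffer being described (the class and method names are illustrative, not the actual JBoss Messaging ones): receive() is satisfied from the local buffer, which is exactly why pushed messages can sit there unconsumed indefinitely.

      import java.util.concurrent.BlockingQueue;
      import java.util.concurrent.LinkedBlockingQueue;
      import java.util.concurrent.TimeUnit;

      // Sketch only: a client-side consumer buffer. The server pushes messages in
      // asynchronously; receive() is satisfied locally and never calls the server.
      public class BufferedConsumer
      {
         private final BlockingQueue<Object> buffer = new LinkedBlockingQueue<Object>();

         // Called by the remoting layer when the server pushes a message.
         public void handleMessage(Object message)
         {
            buffer.offer(message);
         }

         // If the buffer is empty this simply blocks until the server pushes
         // something - so a message can sit here "in delivery" for as long as the
         // application neglects to call receive().
         public Object receive(long timeoutMillis) throws InterruptedException
         {
            return buffer.poll(timeoutMillis, TimeUnit.MILLISECONDS);
         }
      }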

      Possible solutions to this are as follows:

      1. Make every call to receive() result in a synchronous (possibly remote) call to the server to receive the "next" message.
      This would be a drastic solution, involving more network overhead, plus scaling problems with the number of threads on the server performing concurrent receive() operations.
      We could make this the "default" mode of operation to give spec compliant behaviour, and keep the current behaviour as a JBoss specific option.
      I'm not sure how desirable that would be.

      2. Put an internal "jboss-time-to-live" (not to be confused with the JMS time-to-live) on the message in the client side consumer buffer. When receive() is called on the consumer, the local buffer is inspected for messages; any messages with an expired jboss-time-to-live are discarded and not delivered. (A rough sketch of the client-side expiry check follows this list.)
      The return value of calls from the server to the consumer client to asynchronously deliver messages could be piggy-backed with the ids of messages for which delivery has started, to prevent the server from timing them out and making them eligible for redelivery simply because the client was taking a long time to process them.
      (We would also have to ping consumers from the server, even if there were no messages to send them, in order to find out about any other messages being processed.)
      Similarly on the server side, messages that have been sent to the client side consumer buffers for delivery are marked with a similar jboss-time-to-live, preventing them from being re-delivered while that jboss-time-to-live has not expired.
      The difficulty with this method is dealing with clock differences between machines when synchronising time-to-live values, which could result in messages being delivered more than once. This could be compensated for by making sure the client side "time-to-live" is always less than the server side "time-to-live" by a safety factor, to allow for network latency and clock drift. The complexity of this could make it difficult to implement while still retaining once and only once semantics.

      3. A better solution.....
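
      As a rough illustration of option 2 (all names below are made up for the sketch; the "internal TTL" is just a per-entry deadline and has nothing to do with JMS expiration), the client-side expiry check could look something like this:

      import java.util.Iterator;
      import java.util.LinkedList;

      // Sketch of option 2: each buffered entry carries an internal deadline;
      // expired entries are silently dropped on receive() so the server can safely
      // make them available for redelivery.
      public class ExpiringClientBuffer
      {
         private static class Entry
         {
            final Object message;
            final long expiresAt;   // internal deadline, not the JMS expiration

            Entry(Object message, long expiresAt)
            {
               this.message = message;
               this.expiresAt = expiresAt;
            }
         }

         private final LinkedList<Entry> buffer = new LinkedList<Entry>();
         private final long internalTtlMillis;

         public ExpiringClientBuffer(long internalTtlMillis)
         {
            this.internalTtlMillis = internalTtlMillis;
         }

         public synchronized void handleMessage(Object message)
         {
            buffer.add(new Entry(message, System.currentTimeMillis() + internalTtlMillis));
         }

         // Returns the next unexpired message, or null if none is buffered.
         public synchronized Object receiveNoWait()
         {
            long now = System.currentTimeMillis();
            for (Iterator<Entry> i = buffer.iterator(); i.hasNext();)
            {
               Entry e = i.next();
               i.remove();
               if (e.expiresAt > now)
               {
                  return e.message;   // still within its internal time-to-live
               }
               // expired: discard it; the server's own timeout makes it redeliverable
            }
            return null;
         }
      }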

      Comments please.

        • 1. Re: JBoss Messaging - Issues with client side message buffer
          timfox

           

          "timfox" wrote:

          The return value of calls from the server to the consumer client to asynchronously deliver messages could be piggy-backed with the ids of messages for which delivery has started, to prevent the server from timing them out and making them eligible for redelivery simply because the client was taking a long time to process them.
          (We would also have to ping consumers from the server, even if there were no messages to send them, in order to find out about any other messages being processed.)

          This isn't quite right. When a consumer starts processing message(s) from its buffer it would need to notify the server that it had done so, so that the server doesn't time out those messages. This could be piggy-backed on the return of a call to deposit messages, or a call could be initiated from the consumer to do this. We wouldn't need a ping to do this.
          In any case, this avoids lots of threads blocking on receive calls on the server.

          • 2. Re: JBoss Messaging - Issues with client side message buffer
            starksm64

            First, why should a client side buffer have any messages delivered to it without at least one receive() or a message listener associated with the connection? In fact, messages should not be pushed unless there is a message listener, or an optimization flag set by the client indicating that its interactions with a set of destinations are going to use the pull receive call. Pushing messages that are going to be discarded is not going to be an optimization.

            Second, what is the current relationship between the thread pooling and the client connection processing? We should have some shared thread pool abstraction on the server that could make use of the multiplexing capability being developed as part of the jbossweb/apr work, in case massive connection counts are needed.

            • 3. Re: JBoss Messaging - Issues with client side message buffer
              timfox

               

              "scott.stark@jboss.org" wrote:
              First, why should a client side buffer have any messages delivered to it without at least one receive() or a message listener associated with the connection? In fact, messages should not be pushed unless there is a message listener, or an optimization flag set by the client indicating that its interactions with a set of destinations are going to use the pull receive call. Pushing messages that are going to be discarded is not going to be an optimization.


              My understanding of why it was done this way is to prevent many threads blocking on calls to receive() on the server. Having said that, there may well be some other reason I am missing here, since I wasn't around for the original design discussions.

              Also, I guess, for certain types of consumers (ones that don't discard messages) it might be an advantage to push messages rather than wait for them to call receive() - we might even be able to take advantage of reliable multicast in the case of topic subscribers (we're not doing this right now) and get good performance for very large numbers of subscribers.

              But I agree, in general pushing messages won't necessarily be an optimisation, and it shouldn't be the default mode of operation.

              "scott.stark@jboss.org" wrote:

              Second, what is the current relationship between the thread pooling and the client connection processing? We should have some shared thread pool abstraction on the server that could make use of the multi-plexing cabability being development as part of the jbossweb/apr work in the case that massive connection counts were needed.


              Currently we're using JBoss remoting to handle the connections and provide the thread pooling. I believe this is currently using a standard blocking approach.

              I'm not familiar with jbossweb/apr, but I'm guessing this is a non-blocking IO approach to handling large numbers of connections(?)

              A non-blocking approach would certainly be an alternative way to tackle the scaling problem I guess.

              I wonder if JBoss remoting can plug in a non-blocking connection management strategy?

              I'm going to find out more about jbossweb/apr....





              • 4. Re: JBoss Messaging - Issues with client side message buffer
                timfox

                As an observation, I believe Weblogic JMS, for example, almost always uses pull semantics for receive() operations.

                I believe messages are only pushed for asynch. consumption, i.e. onMessage(). I believe you can specify unreliable multicast for the push if it makes sense for you and you can deal with the reduced QoS.

                (Actually I think there is one exception: a special optimisation for non-durable topic subscribers where you can specify a special acknowledge mode (no_acknowledge), which basically means messages are acknowledged before they are delivered. This means they can use (non-reliable) multicast to deliver the messages in that case, and buffer them on the client side.)

                • 5. Re: JBoss Messaging - Issues with client side message buffer
                  ovidiu.feodorov

                   


                  In the current mode of operation, we asynchronously shift messages from a destination to the client side buffer of the consumer, so that when a receive() is called on the consumer, a message is possibly already available in the client side buffer, preventing us having to do a server side call - (i.e. this is an optimization.)


                  The main idea is that the core pushes messages towards clients, using its own threads to deliver messages to the server-side JMS facade, and doing it in such a manner that the core threads' interaction with the facade is as short as possible. Of course, the facade must accommodate this. The server-side JMS facade, in this case ServerConsumerDelegate instances, uses its own thread pool to submit the messages to remoting.

                  Remoting drops the message into a client-side facade bounded buffer, making it available either for direct receive() calls or, if there is a MessageListener installed, for delivery to the client on the session's listener thread.

                  In order to receive a message, the client doesn't have to send an invocation to the server and tie up a server thread for that.
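
                  For readers less familiar with the code, a much simplified sketch of that push path (the names below are illustrative stand-ins, not the real ServerConsumerDelegate/remoting classes):

                  import java.util.concurrent.ArrayBlockingQueue;
                  import java.util.concurrent.BlockingQueue;
                  import java.util.concurrent.ExecutorService;
                  import java.util.concurrent.Executors;

                  // Simplified sketch of the push path described above.
                  public class PushPathSketch
                  {
                     // Server side: the facade takes the message off the core thread quickly
                     // and hands the remoting call to its own thread pool.
                     static class ServerConsumerFacade
                     {
                        private final ExecutorService pool = Executors.newFixedThreadPool(4);
                        private final ClientBuffer client;

                        ServerConsumerFacade(ClientBuffer client) { this.client = client; }

                        void handle(final Object message)
                        {
                           // the core thread returns immediately; delivery to the client
                           // happens on a facade thread
                           pool.execute(new Runnable()
                           {
                              public void run() { client.push(message); }
                           });
                        }
                     }

                     // Client side: a bounded buffer fed by remoting, drained by receive()
                     // or by the session's listener thread.
                     static class ClientBuffer
                     {
                        private final BlockingQueue<Object> buffer = new ArrayBlockingQueue<Object>(100);

                        void push(Object message)
                        {
                           try
                           {
                              buffer.put(message);   // blocks (back-pressure) when full
                           }
                           catch (InterruptedException e)
                           {
                              Thread.currentThread().interrupt();
                           }
                        }

                        Object receive() throws InterruptedException
                        {
                           return buffer.take();     // no server invocation needed
                        }
                     }
                  }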


                  Unfortunately this also brings undesirable side effects - a client can open a consumer and not actually receive() the message (or take a long time between receives), resulting in the message(s) sitting in the client side buffer, potentially forever.


                  I don't think this is a problem. Once the core hands over a message to a ServerConsumerDelegate, the message is considered delivered (even if not acknowledged), and the core imposes no restrictions on the client as to how long it will keep an unacknowledged message on its behalf. As long as there is an active ServerConsumerDelegate, the client it represents is considered valid and can spend as much time as it wants before acknowledging the message (message expiration rules notwithstanding).

                  From the core's point of view, if there is a ServerConsumerDelegate and it accepts a message, the message is delivered (received), regardless of whether the client code actually gets the message or not. A positive acknowledgment from the client tells the core that it can forget about the message; a negative acknowledgment tells the core to "un-deliver" the message.

                  While messages are in the client side buffer the server considers them as being in the process of delivery and will not deliver them to another consumer if it attaches to the destination.


                  Yes. Why should this be a problem?

                  This can result in messages never being delivered, even for well behaved JMS clients.


                  If the client is well behaved, it will either call receive() or install a listener. Of course one can write a client that registers a consumer on a queue, gets a message and keeps it in its buffers (actually the facade's buffers) until the message expires, without receiving it. This is entirely possible and not a spec violation, as far as I know.

                  If this is the way the client "consumes" the message, considering that both the server and the client live infinite lives, so be it. In reality, either the client will disconnect/crash, in which case the server will get its NACK and return the message to the queue, or the server will shut down/crash, in which case, hopefully, our recovery mechanism will kick in (and work correctly) and we'll find the message in the queue after recovery.

                  What would be cool is to have some sort of acknowledgment timeout: if the message is not acknowledged in a certain amount of time, then the core decides the client is not interested in the message (or is misbehaving) and forcibly "undelivers" it. This is equivalent, I think, to the "jboss-time-to-live" from your second solution. But this somewhat violates the queue semantics: this way, a message could possibly be delivered to two consumers: a "rogue" one and a well behaved one.

                  Fortunately, the issue is not whether the client is well behaved or not (the core doesn't actually care), but whether our ServerConsumerDelegate is smart enough to correctly detect closing/misbehaving/crashing clients and NACK (cancel()) deliveries.
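
                  The acknowledgment-timeout idea could be as simple as the following sketch (hypothetical names; as noted above, it trades queue semantics for liveness, since a slow-but-alive client could cause double delivery):

                  import java.util.Map;
                  import java.util.Timer;
                  import java.util.TimerTask;
                  import java.util.concurrent.ConcurrentHashMap;

                  // Sketch of a server-side acknowledgment timeout: if a delivery is not
                  // acknowledged within the timeout, it is cancelled so the message becomes
                  // available again.
                  public class AckTimeoutWatchdog
                  {
                     interface Delivery { void acknowledge(); void cancel(); }

                     private final Map<String, Delivery> outstanding = new ConcurrentHashMap<String, Delivery>();
                     private final Timer timer = new Timer(true);
                     private final long timeoutMillis;

                     public AckTimeoutWatchdog(long timeoutMillis) { this.timeoutMillis = timeoutMillis; }

                     public void delivered(final String messageId, Delivery delivery)
                     {
                        outstanding.put(messageId, delivery);
                        timer.schedule(new TimerTask()
                        {
                           public void run()
                           {
                              Delivery d = outstanding.remove(messageId);
                              if (d != null)
                              {
                                 d.cancel();   // forcibly "un-deliver" the message
                              }
                           }
                        }, timeoutMillis);
                     }

                     public void acknowledged(String messageId)
                     {
                        outstanding.remove(messageId);   // stop the timeout from firing
                     }
                  }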


                  (Aside - Currently we do not actively NACK a message. Messages are only implicitly NACKed when a consumer is closed, at which point any messages in the "being delivered" state revert back into the "available to be delivered" state.)


                  Delivery has a cancel() method. It is true that the current code doesn't call it every time it is needed - this is a bug - but we're going to change that. Various tests we were struggling with (MessageConsumerTest.testRedel0 and so on) fail for various other reasons: because ServerConsumerDelegate doesn't correctly detect a closed consumer, or because the PointToPointRouter implementation is broken. In my opinion it has nothing to do with the way messages are buffered by the facade.

                  The way the buffering is done could be a performance issue, I agree, and we're going to measure it so we can make informed decisions.
                  If buffering messages on the client is such a troublesome thought, I can think of a scheme where we keep them on the server, in queues plugged between the router and the ServerConsumerDelegate, the same way the topic is implemented now.




                  • 6. Re: JBoss Messaging - Issues with client side message buffer
                    ovidiu.feodorov

                     

                    First, why should a client side buffer have any messages delivered to it without at least one receive or a message listener associated with the connection? In fact, messages should not be pushed unless there is a message listener, or an optimization flag set by the client indicating that its interactions with a set of destinations is going to use the pull receive call. Pushing messages that are going to be discarded is not going to be an optimization.


                    The fact that a client creates a consumer is all that matters. Creating a consumer expresses the client's intention to receive messages from a destination. Whether it calls receive() or not is not so relevant. Think about a topic, for example: what makes a client a subscriber? The fact that it has a listener registered / is invoking receive(), or the fact that it created a subscription?

                    If it is the first, then subscribers that are not actively calling receive() and don't have listeners will surely miss messages. The specification says: "They (the TopicSubscribers) only receive messages that are published while they are active". What does "active" mean? My interpretation is that a subscriber is active if it has been created and not yet closed.

                    Now, the messages received by an "active" subscription have to be stored somewhere. Whether it is on the server or on the client is up for discussion, and you're probably right when you say the client is not the place, but I would like to measure that to prove it.


                    • 7. Re: JBoss Messaging - Issues with client side message buffer
                      timfox

                       

                      "ovidiu.feodorov@jboss.com" wrote:

                      I don't think this is a problem. Once the core hands over a message to a ServerConsumerDelegate, the message is considered delivered (even if not acknowledged), and the core imposes no restrictions on the client as to how long it will keep an unacknowledged message on its behalf. As long as there is an active ServerConsumerDelegate, the client it represents is considered valid and can spend as much time as it wants before acknowledging the message (message expiration rules notwithstanding).

                      From the core's point of view, if there is a ServerConsumerDelegate and it accepts a message, the message is delivered (received), regardless of whether the client code actually gets the message or not. A positive acknowledgment from the client tells the core that it can forget about the message; a negative acknowledgment tells the core to "un-deliver" the message.

                      "tim.fox@jboss.com" wrote:

                      Problem is, we are considering messages as delivered (but not acked) even before they have reached the client's code. I think we should be considering them as delivered (but not acked) *only* after the call to receive() has returned or the onMessage() call is being executed.


                      If the client is well behaved, it will either call receive() or install a listener. Of course one can write a client that registers a consumer on a queue, gets a message and keeps it in its buffers (actually the facade's buffers) until the message expires, without receiving it. This is entirely possible and not a spec violation, as far as I know.

                      "tim.fox@jboss.com" wrote:


                      Consider the following code example. Do you consider the current behaviour correct?

                      
                      Session sess = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                      
                      MessageProducer prod = sess.createProducer(queue1);
                      
                      //Send 3 messages
                      
                      prod.send(m1);
                      prod.send(m2);
                      prod.send(m3);
                      
                      conn.start();
                      
                      MessageConsumer cons1 = sess.createConsumer(queue1);
                      cons1.close();
                      
                      MessageConsumer cons2 = sess.createConsumer(queue1);
                      
                      Message r1 = cons2.receive();
                      Message r2 = cons2.receive();
                      Message r3 = cons2.receive();
                      
                      //Messages should be received?
                      
                      


                      The current behaviour is that the messages are *not* received by cons2. They are still sitting in the client side buffer of cons1 (until the session closes).
                      AFAICT the above is perfectly fine client side JMS code and the messages should be received.

                      How could we fix the above case while still buffering messages on the client side?

                      (BTW Just cancelling messages when cons1.close() is called won't work)

                      The fact that we buffer messages on the client side is an implementation detail of our system. The jms application developer shouldn't have to worry about that and write their code around it.

                      Again I think the key point here is that we are considering messages "delivered" too early. IMHO messages should only be considered "delivered" after receive() has returned or as onMessage() is called. I think this is the root of our problems. We are considering messages delivered when they are still in the guts of our system.

                      So, without resorting to complex timeouts or other such things, I think it will be very difficult to provide "normal" JMS semantics while buffering messages on the client side, or introducing other states for messages on the client side and shifting them around when consumers are closed. (I.e. complex and possibly non-performant.)

                      I suspect this is why Weblogic, for instance, only allows buffering on the client side with a special vendor specific acknowledgement mode (NO_ACKNOWLEDGE), which basically means messages are acked *before* they are delivered to the client, since then the messages don't have to be worried about any more. I'm not sure if other vendors do something similar but I wouldn't be surprised.


                      If buffering messages on the client is such a troublesome thought, I can think of a scheme where we keep them on the server, in queues plugged between the router and the ServerConsumerDelegate, the same way the topic is implemented now.

                      "tim.fox@jboss.com" wrote:


                      My vote would be for "normal" operation, to:

                      1. Only push messages to the client for message listeners (onMessage)
                      2. Receive() calls result in calls to the server. (No buffering on client side).

                      On top of that we could provide some jboss specific optimisations:

                      1. Introduce a special ack mode PRE_ACKNOWLEDGE, which means messages are acked first, then pushed to the client. We could then buffer on the client side even for non message listeners (since the server doesn't care about the message any more). (This could also be coupled with multicast in the case of topic subscribers and we could get blinding performance, since there's no need for the consumers to ack back to the server - propagation of stock ticker data to trading client applications would be a good use case for this - but that's a different story.)

                      Otherwise I'm happy to stay with the current way of doing things, as long as I can see a way of solving the problem we currently have, as explained above. So far I haven't seen a solution that works with the current way of doing things. (And is not horrendously complex!)



                      • 8. Re: JBoss Messaging - Issues with client side message buffer
                        ovidiu.feodorov

                         

                        Problem is, we are considering messages as delivered (but not acked) even before they have reached the client's code. I think we should be considering them as delivered (but not acked) *only* after the call to receive() has returned or the onMessage() call is being executed.


                        Why? That means a core thread has to wait until the client code decides to receive. Otherwise, how would the core know the message is delivered?

                        From the queue's point of view, the server-side consumer, the remoting mechanism, the client-side facade buffers AND the client code are seen as one big, black box, logical client, sitting behind a Receiver interface. A message should be considered delivered as soon as it is handed over to this Receiver. Whether the client code receive()s it or not has nothing to do with it.




                        Consider the following code example. Do you consider the current behaviour correct?

                        
                        Session sess = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                        
                        MessageProducer prod = sess.createProducer(queue1);
                        
                        //Send 3 messages
                        
                        prod.send(m1);
                        prod.send(m2);
                        prod.send(m3);
                        
                        conn.start();
                        
                        MessageConsumer cons1 = sess.createConsumer(queue1);
                        cons1.close();
                        
                        MessageConsumer cons2 = sess.createConsumer(queue1);
                        
                        Message r1 = cons2.receive();
                        Message r2 = cons2.receive();
                        Message r3 = cons2.receive();
                        
                        //Messages should be received?
                        
                        
                        


                        The current behaviour is that the messages are *not* received by cons2. They are still sitting in the client side buffer of cons1 (until the session closes).

                        How could we fix the above case while still buffering messages on the client side?

                        (BTW Just cancelling messages when cons1.close() is called won't work)


                        The current implementation is broken. It should cancel deliveries when the consumer is closed. The idea is that during close(), the ServerConsumerDelegate finds out there are messages waiting in buffers that will never be received, because the JMS consumer is closing. This way, the ServerConsumerDelegate can cancel the deliveries corresponding to those messages, and keep those of messages already handed to the client code, so they can be acknowledged later.
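
                        A minimal sketch of that close() behaviour, assuming a Delivery with a cancel() method and some way of knowing which buffered messages the application has already seen (all names illustrative):

                        import java.util.ArrayList;
                        import java.util.List;

                        // Sketch: on close(), cancel deliveries whose messages are still in the
                        // buffer; keep deliveries already handed to application code so they can
                        // still be acknowledged.
                        public class ConsumerCloseSketch
                        {
                           interface Delivery { void cancel(); }

                           static class BufferedDelivery
                           {
                              final Delivery delivery;
                              volatile boolean seenByApplication;   // set when receive()/onMessage() hands it out

                              BufferedDelivery(Delivery delivery) { this.delivery = delivery; }
                           }

                           private final List<BufferedDelivery> deliveries = new ArrayList<BufferedDelivery>();

                           public synchronized void register(BufferedDelivery bd)
                           {
                              deliveries.add(bd);
                           }

                           public synchronized void close()
                           {
                              for (BufferedDelivery bd : deliveries)
                              {
                                 if (!bd.seenByApplication)
                                 {
                                    bd.delivery.cancel();   // message goes back to the queue
                                 }
                                 // deliveries already seen are left alone, awaiting acknowledge()
                              }
                              deliveries.clear();
                           }
                        }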


                        The fact that we buffer messages on the client side is an implementation detail of our system. The jms application developer shouldn't have to worry about that and write their code around it.


                        Correct. JMS facade worries about that.


                        Again I think the key point here is that we are considering messages "delivered" too early. IMHO messages should only be considered "delivered" after receive() has returned or as onMessage() is called. I think this is the root of our problems. We are considering messages delivered when they are still in the guts of our system.


                        delivered != acknowledged

                        A queue delivers to a receiver. From the queue's point of view receiver = (ServerConsumerDelegate + remoting + MessageCallbackHandler + JBossMessageConsumer + client code). From the queue's point of view, the message is delivered to the receiver when it gets back a Delivery instance. ServerConsumerDelegate creates this Delivery instance, returns it to the queue and holds on to it, so it can cancel it if necessary.
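
                        Stripped of detail, the contract being described looks roughly like this (an illustrative reduction; the real interfaces in org.jboss.messaging.core differ in their exact signatures):

                        // Illustrative reduction: the channel hands a message to a Receiver and
                        // gets back a Delivery; the consumer endpoint holds on to the Delivery so
                        // it can later acknowledge or cancel it.
                        interface Delivery
                        {
                           void acknowledge();   // the core may now forget the message
                           void cancel();        // the message becomes deliverable again
                        }

                        interface Receiver
                        {
                           // A non-null Delivery means "delivered" from the channel's point of
                           // view, even though client code may not have seen the message yet.
                           Delivery handle(Object messageReference);
                        }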

                        I am almost done implementing the mechanism that cancels deliveries for messages sitting in buffers. It's in my work area. Please allow me one more day to finish it. These are the files I am working on:

                        M src/main/org/jboss/jms/client/container/ConsumerInterceptor.java
                        M src/main/org/jboss/jms/client/container/ReceiverInterceptor.java
                        M src/main/org/jboss/jms/client/remoting/MessageCallbackHandler.java
                        M src/main/org/jboss/jms/server/container/InstanceInterceptor.java
                        M src/main/org/jboss/jms/server/container/JMSAdvisor.java
                        M src/main/org/jboss/jms/server/endpoint/ServerConsumerDelegate.java
                        M src/main/org/jboss/messaging/core/ChannelSupport.java
                        M src/main/org/jboss/messaging/core/Delivery.java
                        M src/main/org/jboss/messaging/core/DeliveryObserver.java
                        M src/main/org/jboss/messaging/core/SimpleDelivery.java

                        If you avoid working on them for a little bit, I won't have to deal with conflicts.


                        • 9. Re: JBoss Messaging - Issues with client side message buffer
                          timfox

                           

                          "ovidiu.feodorov@jboss.com" wrote:
                          Problem is, we are considering messages as delivered (but not acked) even before they have reached the client's code. I think we should be considering them as delivered (but not acked) *only* after the call to receive() has returned or the onMessage() call is being executed.


                          Why? That means a core thread has to wait until the client code decides to receive. Otherwise, how would the core know the message is delivered?

                          "tim.fox@jboss.com" wrote:

                          Well.... that depends on how we manage threads. We could do something like the following:
                          The client calls receive().
                          We send a message to the server saying we're receiving.
                          The client thread blocks on receive.
                          The server registers the client consumer as waiting on receive.
                          When a message arrives, it is delivered to all consumers known to be waiting on receive.
                          The same mechanism as we currently have could be used for that.
                          So you wouldn't have any more server threads than we currently do.
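
                          In other words, something along these lines on the server (illustrative names only): the server keeps a registry of consumers waiting on receive() instead of parking a thread per call.

                          import java.util.Queue;
                          import java.util.concurrent.ConcurrentLinkedQueue;

                          // Sketch: record which consumers are waiting and push the next message
                          // to one of them when it arrives; no server thread blocks per receive().
                          public class WaitingConsumerRegistry
                          {
                             interface ConsumerEndpoint { void push(Object message); }

                             private final Queue<ConsumerEndpoint> waiting = new ConcurrentLinkedQueue<ConsumerEndpoint>();

                             // Invoked when a client-side receive() tells the server it is waiting.
                             public void registerWaiting(ConsumerEndpoint consumer)
                             {
                                waiting.add(consumer);
                             }

                             // Invoked when a message arrives on the destination.
                             public boolean dispatch(Object message)
                             {
                                ConsumerEndpoint consumer = waiting.poll();
                                if (consumer == null)
                                {
                                   return false;          // nobody waiting; message stays in the queue
                                }
                                consumer.push(message);   // asynchronous callback, no thread parked
                                return true;
                             }
                          }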


                          The current implementation is broken. It should cancel deliveries when the consumer is closed.

                          tim.fox@jboss.com wrote:
                          I don't think it's as simple as cancelling deliveries on consumer close - which is why I mentioned that in my previous reply. Here are a couple of examples as to why; I should add these as tests (if they're not there already):



                          
                          Session sess = conn.createSession(false, Session.CLIENT_ACKNOWLEDGE);
                          
                          MessageProducer prod = sess.createProducer(queue1);
                          prod.send(m1);
                          prod.send(m2);
                          prod.send(m3);
                          
                          conn.start();
                          
                          MessageConsumer cons1 = sess.createConsumer(queue1);
                          
                          Message r1 = cons1.receive();
                          
                          cons1.close();
                          
                          MessageConsumer cons2 = sess.createConsumer(queue1);
                          
                          Message r2 = cons2.receive();
                          
                          Message r3 = cons2.receive();
                          
                          r1.acknowledge();
                          r2.acknowledge();
                          r3.acknowledge();
                          
                          


                          In the above example, if deliveries are cancelled on the call to cons1.close(), then the delivery of r1 is cancelled too, since the server currently has no way of distinguishing them.
                          This means the call to r1.acknowledge() would fail.
                          In order to get this to work, we would have to distinguish between messages that have just been sitting in the buffer, which the application client code has never seen, and those which the client has actually seen, and *only* cancel the ones that have just been sitting in the client buffer.
                          This means the client would have to call the server with a list of the messages it wants to cancel (network traffic).
                          Again, going back to my previous point, a message shouldn't be considered delivered unless the client has seen it. We are considering them as delivered when they have not been.
                          AFAIK delivery in the context of JMS means "a message that the application client code has seen, but that has not necessarily been acknowledged yet" (my quotes).
                          Unfortunately I can't find a definition for this in the JMS spec. But from the semantics of redelivery I think we could probably infer that meaning.




                          • 10. Re: JBoss Messaging - Issues with client side message buffer
                            ovidiu.feodorov

                            I am trying to summarize the discussion on this thread so far. The thread is about the fact that in the current implementation the server side consumer delegate unconditionally accepts messages immediately following its creation, until it is closed. The messages are asynchronously forwarded to the client-side facade, where they fill up buffers and are kept "near" the client code.

                            The discussion actually covers two different issues, and so far the separation has not been very explicit. I will try to make it now:

                            1. What exactly "delivery" means for the server-side consumer delegate. Or, put differently, when a server side consumer delegate should start and stop accepting messages.
                            2. Whether using client-side buffers is a flawed concept (this is what the name of the thread implies).

                            The conclusion for 1 is that the current implementation is broken. The server side consumer delegate MUST NOT unconditionally accept messages immediately following its creation. It must start accepting messages following either a MessageConsumer.receive() invocation (one message only) or the registration of a MessageListener on the consumer (as many messages as possible). Its behavior must also obviously be correlated with the started/stopped status of the connection.

                            The conclusion for 2 is not so clear cut. My opinion is that there are NO fundamental issues with client-side buffering; it is just missing a little bit of code. I applied a small patch to the repository that actually makes all the tests listed below (you can find them as testRedel7() and testRedel8() in MessageConsumerTest), plus the outstanding failing tests in MessageConsumerTest, pass. We still have client-side buffering that works pretty much like before, and I still think that client-side bounded buffering is useful. Take a look at the code diffs and analyze the test cases to see how this actually works. Obviously, the patch is rather hackish; I have to review it and tidy it up soon.

                            The only tests that still fail are testRedel4() and testRedel5(), which say in a very convoluted way the same thing the conclusion to point 1 above says. I added a couple of much clearer tests (the testReceiveX() series) that exhibit the same behavior. The problem will be fixed in a matter of hours; it's nothing more fundamental than an additional invocation to the server.

                            Tim: Concerning testRedel(), are you sure you wanted to say Session.AUTO_ACKNOWLEDGE there? If that is the case, in my opinion it will never pass, because with Session.AUTO_ACKNOWLEDGE message rm1 is received, auto-acknowledged and gone forever; recover() will never cause its redelivery. If I use Session.CLIENT_ACKNOWLEDGE instead, testRedel5() passes right away.

                            • 11. Re: JBoss Messaging - Issues with client side message buffer
                              timfox

                              I think we are almost there.

                              As we've discovered, client side buffering causes problems for us in terms of redelivery.

                              However, the same applies to message listeners. Code inside message listeners is also allowed to recover sessions, acknowledge, etc., so exactly the same reasons why it doesn't work for synchronous receive() also apply to message listeners.

                              For normal JMS semantics client side buffering doesn't work (or at least makes things so complicated as to be unworkable); however, I wouldn't say client side buffering has no use.

                              We could provide a specific optimisation with a non-JMS, JBoss specific ack mode, PRE_ACKNOWLEDGE, which means messages are acked at the server before they are sent to the client. Then we could buffer messages, since the server doesn't care about them any more.
                              This would be useful combined with multicast to give high performance delivery to topic subscribers.

                              Weblogic, for example, does this.

                              But that would be a non-standard feature.
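
                              For what it's worth, the PRE_ACKNOWLEDGE flow would be trivially simple on the server side - roughly this (a hypothetical ack mode and hypothetical names, not an existing JBoss constant):

                              // Sketch of a pre-acknowledge delivery: the server acknowledges before
                              // sending, so the client may buffer (or receive over multicast) without
                              // any ack traffic back.
                              public class PreAcknowledgeSketch
                              {
                                 interface Delivery { void acknowledge(); }
                                 interface ClientChannel { void send(Object message); }

                                 public void deliverPreAcknowledged(Delivery delivery, Object message, ClientChannel client)
                                 {
                                    delivery.acknowledge();   // server forgets the message first...
                                    client.send(message);     // ...then pushes it; a lost message stays lost (reduced QoS)
                                 }
                              }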

                              Going back to our original issue.

                              I think we need to do something like the following:

                              When receive() is called:

                              Block the client side thread executing the receive.
                              Call the server and ask "give me any message ready now".
                              If there is a message, return it now.
                              Otherwise set a flag on the ServerConsumerDelegate saying "we are interested in a message".
                              In either case, return (preventing many threads blocking on the server).
                              If we obtained a message, return it to the client code.
                              Otherwise, wait until a message is pushed to the client, or we time out and return null.

                              OnMessage needs to do basically the same thing as above; the only difference is it needs to do it in a loop. A rough client-side sketch follows.
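
                              Here is that sketch (illustrative names; a real implementation would also have to handle the race where the push arrives after receive() has already timed out):

                              import java.util.concurrent.BlockingQueue;
                              import java.util.concurrent.LinkedBlockingQueue;
                              import java.util.concurrent.TimeUnit;

                              // Sketch: ask the server once for a message that is ready now; if none is
                              // ready, register interest and park the *client* thread until a message is
                              // pushed or the timeout expires. No server thread blocks on our behalf.
                              public class PullThenPushConsumer
                              {
                                 interface ServerStub
                                 {
                                    // Returns a message that is ready now, or null after marking the
                                    // consumer as "interested in the next message".
                                    Object getMessageNowOrRegisterInterest();
                                 }

                                 private final ServerStub server;
                                 private final BlockingQueue<Object> handoff = new LinkedBlockingQueue<Object>(1);

                                 public PullThenPushConsumer(ServerStub server) { this.server = server; }

                                 // Called by remoting when the server later pushes the message we asked for.
                                 public void onServerPush(Object message)
                                 {
                                    handoff.offer(message);
                                 }

                                 public Object receive(long timeoutMillis) throws InterruptedException
                                 {
                                    Object message = server.getMessageNowOrRegisterInterest();
                                    if (message != null)
                                    {
                                       return message;   // ready immediately
                                    }
                                    // wait on the client side for the push, or time out and return null
                                    return handoff.poll(timeoutMillis, TimeUnit.MILLISECONDS);
                                 }
                              }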

                              So how do we find out if a message is ready "now"?

                              Currently the channel doesn't offer us any such functionality, so there is no way of currently doing this.

                              This should be a simple matter to add though.

                              One other thing blocking us from adding it, though, is the fact that only unreliable message refs are currently stored in memory. We need to change it so all refs are stored in memory; then working out what the next message is simply means taking the top of the list.

                              I am currently putting together a proof of concept that basically does the above.

                              • 12. Re: JBoss Messaging - Issues with client side message buffer
                                timfox

                                I have put together a rough proof of concept locally.

                                All the tests that were failing because of client side buffering now pass :)

                                However, there is a problem:

                                In the absence of a method on the Channel that gives me the next available message, I have basically tried to get the same result by calling deliver() on the channel, and waiting to be given a message.

                                Whilst this allows the tests to pass, it falls apart when any load is put on the system. This means some of the stress tests fail.

                                This is because deliver() is currently an intensive process. It goes to the database to get a (big) set of undelivered messages, merges this with in memory state, then tries to deliver them *all* to any attached receivers.

                                So basically everything grinds to a halt under any kind of load.

                                It seems to me that for what we want to do, deliver() is fundamentally inappropriate - all we want is the next available message. This could potentially be as simple an operation as picking a message ref from the head of a linked list.

                                Also deliver() is inappropriate for us since it tries to deliver *all* the messages, we are only interested in the first.

                                (For non standard JMS use cases, deliver will still have a use though)

                                So, I think we definitely won't be able to do without a giveMeNextAvailableMessage() method.
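
                                To be clear about how cheap the operation could be, something like this is all that's needed once all refs are kept in memory (giveMeNextAvailableMessage() is my placeholder name, not an existing Channel method; the sketch below is illustrative only):

                                import java.util.LinkedList;

                                // Sketch: with all message references in memory, "next available" is just
                                // the head of a list, instead of a full deliver() pass over the
                                // database-backed state.
                                public class InMemoryChannelSketch
                                {
                                   private final LinkedList<Object> references = new LinkedList<Object>();

                                   public synchronized void add(Object messageReference)
                                   {
                                      references.add(messageReference);
                                   }

                                   // O(1): hand out one message reference, or null if none is available.
                                   public synchronized Object nextAvailableMessage()
                                   {
                                      return references.isEmpty() ? null : references.removeFirst();
                                   }
                                }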

                                • 13. Re: JBoss Messaging - Issues with client side message buffer
                                  ovidiu.feodorov

                                  I added a Channel.deliver(Receiver r) which addresses the problem you were mentioning. I also modified the channel state to keep all references (including the reliable ones) in memory and back them up to the database only when necessary (http://jira.jboss.org/jira/browse/JBMESSAGING-157). I also continued your modifications to ServerConsumerDelegate / MessageCallbackHandler / ReceiverInterceptor so that the server-side delegate is activated on receive()/addMessageListener() and the facade handles only one message at a time.

                                  All tests seem to pass, with two exceptions:

                                  - ConnectionConsumerTest tests time out sometimes. This is caused by a race condition that shows up in the receiveNoWait() implementation, which still requires some work.
                                  - TransactedSessionTest.testRedeliveredFlagLocalTopic() fails because two topic subscriptions share the same MessageReference instance and setting the redelivered flag on a reference maintained by a subscriber automatically changes the reference maintained by the second subscriber.

                                  • 14. Re: JBoss Messaging - Issues with client side message buffer
                                    starksm64

                                    When I think about non-trivial semantics such as the JMS semantics, what would be ideal is for a state machine to drive delivery, so that there is a clear picture of what needs to be done as a function of the message state and conditional optimizations. If this is not explicitly coded into the implementation, we at least need to go through it logically to validate the behavior. We need to consider the behavior in the non-trivial cases of failures, recovery, contended queues with a mixture of message listeners, message pullers, etc.
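
                                    Something as simple as the following sketch would do (the states and transitions here are only an example, not a proposal for the actual set): making the legal transitions explicit gives us something we can validate against the failure and recovery cases.

                                    // Sketch of an explicit delivery state machine; states and transitions
                                    // are illustrative assumptions only.
                                    public class MessageStateMachine
                                    {
                                       public enum State { AVAILABLE, DELIVERING, DELIVERED, ACKNOWLEDGED, CANCELLED }

                                       private State state = State.AVAILABLE;

                                       public synchronized void transition(State next)
                                       {
                                          if (!legal(state, next))
                                          {
                                             throw new IllegalStateException(state + " -> " + next);
                                          }
                                          state = next;
                                       }

                                       private static boolean legal(State from, State to)
                                       {
                                          switch (from)
                                          {
                                             case AVAILABLE:  return to == State.DELIVERING;
                                             case DELIVERING: return to == State.DELIVERED || to == State.CANCELLED;
                                             case DELIVERED:  return to == State.ACKNOWLEDGED || to == State.CANCELLED;
                                             case CANCELLED:  return to == State.AVAILABLE;   // eligible for redelivery
                                             default:         return false;
                                          }
                                       }
                                    }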
