14 Replies Latest reply on Jul 16, 2012 1:08 PM by groovenarula

    Durable messages getting stuck in Queue.

    groovenarula

      Hello all,

       

      We are running 2.2.5.Final (HQ_2_2_5_FINAL_AS7, 121) integrated with JBoss AS 5.1.0-Final. This configuration has been running for about 3-4 weeks now. However, we've run into a problem where we're now seeing several (> 2500) messages getting stuck in the queue. It seems as though some messages are going through while others remain stuck.

       

      Is there a way to determine / debug what's going on and why these messages are 'stuck'? There are other queues that have different consumers, and those messages are being consumed. It's just one of the queues that doesn't seem to be passing messages to its consumer. The consumer is an MDB deployed in the same instance of HornetQ!

       

      A message typically takes a few milliseconds to process (consume). At this point the queue's message count and scheduled message count are both 'fixed' at 2602 messages. The 'MessagesAdded' count, however, seems to keep incrementing slowly, and the logs do show that new messages are being consumed and processed.

       

      Is there a way to 'inspect' the queue's state to see what's causing the messages to stay in the queue ? When I try to inspect the queue using Hermes, it shows that the queue is empty.

       

      Any help will be appreciated.

       

      Thanks

      Groove

        • 1. Re: Durable messages getting stuck in Queue.
          groovenarula

           On invoking 'listScheduledMessagesAsJSON', the first few messages are listed as follows:

          {"timestamp":1341949014203,"userID":"ID:9a7cc9a1-cac6-11e1-8ddc-005056a500c1","messageID":32215946673,"expiration":0,"address":"jms.queue.RetailPriceRequestQueue","priority":4,"_HQ_SCHED_DELIVERY":1341949019203,"pricing_upload_listener_value":1,"durable":true,"type":5},{"timestamp":1341949014186,"userID":"ID:9a7a318d-cac6-11e1-8ddc-005056a500c1","messageID":32215946668,"expiration":0,"address":"jms.queue.RetailPriceRequestQueue","priority":4,"_HQ_SCHED_DELIVERY":1341949019186,"pricing_upload_listener_value":1,"durable":true,"type":5},{"timestamp":1341949014159,"userID":"ID:9a7612d9-cac6-11e1-8ddc-005056a500c1","messageID":32215946663,"expiration":0,"address":"jms.queue.RetailPriceRequestQueue","priority":4,"_HQ_SCHED_DELIVERY":1341949019159,"pricing_upload_listener_value":1,"durable":true,"type":5},{"timestamp":1341949014140,"userID":"ID:9a732ca5-cac6-11e1-8ddc-005056a500c1","messageID":32215946658,"expiration":0,"address":"jms.queue.RetailPriceRequestQueue","priority":4,"_HQ_SCHED_DELIVERY":1341949019140,"pricing_upload_listener_value":1,"durable":true,"type":5},{"timestamp":1341949014107,"userID":"ID:9a6e2391-cac6-11e1-8ddc-005056a500c1","messageID":32215946653,"expiration":0,"address":"jms.queue.RetailPriceRequestQueue","priority":4,"_HQ_SCHED_DELIVERY":1341949019107,"pricing_upload_listener_value":1,"durable":true,"type":5},{"timestamp":1341949014067,"userID":"ID:9a68090d-cac6-11e1-8ddc-005056a500c1","messageID":32215946648,"expiration":0,"address":"jms.queue.RetailPriceRequestQueue","priority":4,"_HQ_SCHED_DELIVERY":1341949019067,"pricing_upload_listener_value":1,"durable":true,"type":5},{"timestamp":1341949014049,"userID":"ID:9a6549e9-cac6-11e1-8ddc-005056a500c1","messageID":32215946643,"expiration":0,"address":"jms.queue.RetailPriceRequestQueue","priority":4,"_HQ_SCHED_DELIVERY":1341949019049,"pricing_upload_listener_value":1,"durable":true,"type":5},{"timestamp":1341949014031,"userID":"ID:9a628ac5-cac6-11e1-8ddc-005056a500c1","messageID":32215946638,"expiration":0,"address":"jms.queue.RetailPriceRequestQueue","priority":4,"_HQ_SCHED_DELIVERY":1341949019031,"pricing_upload_listener_value":1,"durable":true,"type":5},{"timestamp":1341949014013,"userID":"ID:9a5fcba1-cac6-11e1-8ddc-005056a500c1","messageID":32215946633,"expiration":0,"address":"jms.queue.RetailPriceRequestQueue","priority":4,"_HQ_SCHED_DELIVERY":1341949019013,"pricing_upload_listener_value":1,"durable":true,"type":5}

           

           When I try to remove or expire any of these messages using the corresponding messageID, the response I get is 'false' and the messages stay in the queue. Changing the message priority does not help either: the messages stay stuck in the queue and their priority does not change.
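
           (For reference, done programmatically the removal I'm attempting through the jmx-console would look roughly like the hypothetical helper below - a sketch only. As far as I can tell, the core QueueControl's removeMessage takes the numeric messageID shown in the JSON above, while the JMS-level JMSQueueControl expects the 'ID:...' string from the userID field, so the value has to match whichever MBean is actually being invoked. The ObjectName below is an assumption; the exact name can be checked under the org.hornetq domain in the jmx-console.)

           import javax.management.MBeanServer;
           import javax.management.MBeanServerFactory;
           import javax.management.ObjectName;

           // Hypothetical helper: remove one message by its core messageID via the core
           // QueueControl MBean. Assumes it runs inside the server JVM (e.g. from a
           // throwaway servlet or JSP) and that the queue uses the default ObjectName.
           public class RemoveStuckMessage {

               public static void remove(long coreMessageId) throws Exception {
                   // JBoss AS 5 registers MBeans on its own MBeanServer, so look it up.
                   MBeanServer mbeanServer =
                           (MBeanServer) MBeanServerFactory.findMBeanServer(null).get(0);

                   // Assumed default ObjectName for the core queue backing the JMS queue.
                   ObjectName coreQueue = new ObjectName(
                           "org.hornetq:module=Core,type=Queue,"
                           + "address=\"jms.queue.RetailPriceRequestQueue\","
                           + "name=\"jms.queue.RetailPriceRequestQueue\"");

                   Boolean removed = (Boolean) mbeanServer.invoke(
                           coreQueue, "removeMessage",
                           new Object[] { coreMessageId },
                           new String[] { long.class.getName() });

                   System.out.println("removeMessage(" + coreMessageId + ") -> " + removed);
               }
           }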

           

           Are there any other steps that can be taken to try and release or remove these messages?

           

          Any assistance/insight/help will truly be appreciated.

           

          Thanks in advance,

          Gurvinder

          • 2. Re: Durable messages getting stuck in Queue.
            ataylor

             The messages you have shown look like they aren't in the queue yet but are scheduled for delivery. However, they should be removable using the ID. Could you provide a test so we can take a look?
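
             Something along these lines would do as a minimal reproducer (a sketch only - the JNDI names, the provider URL and the 5 second delay are assumptions based on the queue name and properties you posted):

             import java.util.Properties;
             import javax.jms.Connection;
             import javax.jms.ConnectionFactory;
             import javax.jms.DeliveryMode;
             import javax.jms.MessageProducer;
             import javax.jms.Queue;
             import javax.jms.Session;
             import javax.jms.TextMessage;
             import javax.naming.InitialContext;

             // Sends one durable message scheduled for delivery a few seconds in the future,
             // mirroring the _HQ_SCHED_DELIVERY header on the stuck messages. Run it, then
             // watch the queue's MessageCount/ScheduledCount and the MDB's log for the delivery.
             public class ScheduledMessageTest {

                 public static void main(String[] args) throws Exception {
                     Properties env = new Properties();
                     env.put("java.naming.factory.initial", "org.jnp.interfaces.NamingContextFactory");
                     env.put("java.naming.provider.url", "jnp://localhost:1099");
                     InitialContext ctx = new InitialContext(env);

                     // Assumed JNDI bindings - adjust to your environment.
                     ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory");
                     Queue queue = (Queue) ctx.lookup("queue/RetailPriceRequestQueue");

                     Connection conn = cf.createConnection();
                     try {
                         Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                         MessageProducer producer = session.createProducer(queue);
                         producer.setDeliveryMode(DeliveryMode.PERSISTENT);

                         TextMessage msg = session.createTextMessage("scheduled test message");
                         // Schedule delivery 5 seconds from now (same property as on the stuck messages).
                         msg.setLongProperty("_HQ_SCHED_DELIVERY", System.currentTimeMillis() + 5000);
                         producer.send(msg);
                     } finally {
                         conn.close();
                     }
                 }
             }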

            • 3. Re: Durable messages getting stuck in Queue.
              groovenarula

              Andy.

               

               Thank you for the response.

               

               Here's an update - we tried restarting the server earlier this morning. The 'state' of the messages seems to have changed:

               

              {"timestamp":1341949014203,"userID":"ID:9a7cc9a1-cac6-11e1-8ddc-005056a500c1","messageID":32215946673,"expiration":0,"address":"jms.queue.RetailPriceRequestQueue","priority":4,"pricing_upload_listener_value":1,"durable":true,"type":5}

              {"timestamp":1341949014186,"userID":"ID:9a7a318d-cac6-11e1-8ddc-005056a500c1","messageID":32215946668,"expiration":0,"address":"jms.queue.RetailPriceRequestQueue","priority":4,"pricing_upload_listener_value":1,"durable":true,"type":5}

               

               I no longer see the '_HQ_SCHED_DELIVERY' property in the message headers. Also, after we restarted the server, we noticed about 11 messages getting processed (I can't tell whether these were new messages or existing messages that were stuck earlier and finally went through).

               

               I'm not sure how to provide a test! When I said that I tried to remove/expire these messages, I did that through the application server's (JBoss 5.1.0 + HornetQ) jmx-console. If you think that's not the right way to administer these messages and I should try something else, please let me know. Or, if you think zipping up the JBoss folder and uploading it here so that you can take a look at what's going on would help, I can do that as well.

              • 4. Re: Durable messages getting stuck in Queue.
                ataylor

                 That implies that the scheduled messages were put on the queue and consumed; these will be new messages. Why do you think that they are stuck? Are the MDBs still active? (You can check to see if the queue has any consumers in the console.)

                • 5. Re: Durable messages getting stuck in Queue.
                  groovenarula

                   We have 1 MDB configured (the backend system that processes our messages is only capable of processing one dataset at a time via a web service), and it's active:

                   

                   DeliveringCount   R   int       Attribute exposed for management   0
                   ConsumerCount     R   int       Attribute exposed for management   1
                   Durable           R   boolean   Attribute exposed for management   True

                   

                  This is the output from invoking listConsumersAsJSON :

                   

                  [{"sessionID":"3e51326c-cb26-11e1-9bbd-005056a50108","connectionID":"3e4e734b-cb26-11e1-9bbd-005056a50108","creationTime":1341990091300,"browseOnly":false,"consumerID":0}]

                   

                   The reasons why I believe they're stuck:

                   

                       1. Our messages don't take more than about a second to process. Right now there is little to no activity, yet the message counts (MessageCount / ScheduledMessageCount) have not dropped at all in the last hour or so. The count has stayed fixed at 2465 since the server was restarted earlier today.

                   

                       2. Even though the messages are there in the queue, there is no activity (incoming requests) being registered in the backend system.

                   

                   When I do not see the message count or scheduled message count decreasing, I assume that the messages are 'stuck'. I can't tell why they're not being processed at this point. Our MDB logs a lot of status information, and we see these updates in the logs when messages are consumed. At the moment I see very few messages coming from this consumer - I should be seeing a lot more activity in the logs.

                   

                   If the messages were consumed, why is the MessageCount still showing 2465? Our message count never exceeds 5-7 at any given point in time. Yes, our ScheduledMessageCount does rise when our backend service goes down, but when it comes back up we normally see that count drop as well (typically within 2-4 hours). It has now been 2 days and we have not seen these numbers drop.

                  • 6. Re: Durable messages getting stuck in Queue.
                    ataylor

                     What is your MDB pool size? The reason I ask is that the default pool size is 15, so you should see 15 consumers.

                     

                     Also, there may be a bug in the message count when a server is restarted.

                     

                     Also, are you using transactions for the MDB? Check prepared tx's to make sure there are no pending transactions, i.e. messages that have been consumed but not committed.
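
                     To check for prepared transactions you can use the HornetQ server control MBean - a minimal sketch below (hypothetical helper class; it assumes it runs inside the server JVM, e.g. from a throwaway servlet or JSP, and that the control is registered under the default ObjectName, which you can verify in the jmx-console - the listPreparedTransactions operation can also be invoked from the console directly):

                     import javax.management.MBeanServer;
                     import javax.management.MBeanServerFactory;
                     import javax.management.ObjectName;

                     // Lists prepared (pending) XA transactions on the HornetQ server, i.e.
                     // transactions that reached the prepare phase but were never committed.
                     public class PreparedTxCheck {

                         public static void printPreparedTransactions() throws Exception {
                             // JBoss AS 5 registers MBeans on its own MBeanServer, so look it up.
                             MBeanServer mbeanServer =
                                     (MBeanServer) MBeanServerFactory.findMBeanServer(null).get(0);

                             // Assumed default ObjectName for the HornetQ server control.
                             ObjectName server = new ObjectName("org.hornetq:module=Core,type=Server");

                             String[] prepared = (String[]) mbeanServer.invoke(
                                     server, "listPreparedTransactions", new Object[0], new String[0]);

                             System.out.println("Prepared transactions: " + prepared.length);
                             for (String tx : prepared) {
                                 System.out.println(tx);
                             }
                         }
                     }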

                    • 7. Re: Durable messages getting stuck in Queue.
                      groovenarula

                      We have set our pool size to 1 and session size to 1.

                       

                      Here are the annotations we have defined for the MDB.

                       

                       @MessageDriven(
                           activationConfig = {
                               @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
                               @ActivationConfigProperty(propertyName = "destination", propertyValue = "queue/RetailPriceRequestQueue"),
                               @ActivationConfigProperty(propertyName = "maxSession", propertyValue = "1")
                           })
                       @Pool(value = PoolDefaults.POOL_IMPLEMENTATION_STRICTMAX, maxSize = 1)
                       public class ForwardResponse implements MessageListener {

                       

                      Can you please let me know how to check for pending tx's ?

                       

                       I don't think the messages have been consumed, since even our backend service has not registered the data in these messages. In our system, we keep a log of the messages that are 'sent' by the producer, and we also log this data in our backend system. So what we're seeing is that several of the messages sent by our producer have never made it to our backend system. I will still check the pending txs once I've figured out how to. If you can send me some pointers on how to do that, that would be great.

                       

                       Andy, I would like to thank you for your effort in helping me out. Truly appreciate it.

                       

                      Thanks again.

                      • 8. Re: Durable messages getting stuck in Queue.
                        clebert.suconic

                         You are using 2.2.5... there have been a few fixes since then.

                         

                         

                         One of the fixes was around PriorityLinkedList. The queue would lose messages (until you restarted the system), and there was another occurrence where this could happen after a redelivery.

                        • 9. Re: Durable messages getting stuck in Queue.
                          clebert.suconic

                           BTW: I'm not saying you're hitting the bug, just that if you move to a later version, maybe the issue will go away.

                          • 10. Re: Durable messages getting stuck in Queue.
                            groovenarula

                             Thanks for the update, Clebert. We'll work on upgrading to the later release. In the meantime, is there any way to 'clear' out the existing queue? The reason I ask is because when we tried to 'resubmit' these messages for processing, the resubmitted messages simply piled up in the queue again. From our logs we can make out that there are ~750 requests that have not been processed. Yesterday the queue had about 1500 messages that were in the 'stuck' state. When we resubmitted our requests for processing, the 750 requests simply got added to the queue and did not process. So now our queue is sitting at ~2250 unprocessed messages. We need to get these 750 requests processed ASAP. So is there a way we can 'reset' the PriorityLinkedList so that we can resubmit the ~750 requests?

                             

                             Again, thank you and Andy for your help, and I would really appreciate any additional assistance you can provide to resolve this. Unfortunately, upgrading to a new release is going to mean quite a bit of testing, and we can't wait that long to process the 750 pending requests. So if there's a way to clear the current queue (like renaming the existing queue and creating a new one with the same name, etc.), it would be of tremendous value to us.

                            • 11. Re: Durable messages getting stuck in Queue.
                              ataylor

                               If you use the console and delete using the ID, that should work. If it doesn't, then without some sort of test it's hard to really help. I've never seen an issue like this before, though.
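
                               If you do need to drain the whole queue rather than delete message by message, the JMS queue control's removeMessages(filter) with an empty filter should remove everything - a sketch below (hypothetical helper; the ObjectName pattern and running it inside the server JVM are assumptions, so confirm the exact MBean name in the jmx-console first, and bear in mind the messages are gone for good once removed):

                               import javax.management.MBeanServer;
                               import javax.management.MBeanServerFactory;
                               import javax.management.ObjectName;

                               // Removes every message from the JMS queue via its control MBean.
                               public class DrainQueue {

                                   public static void drain() throws Exception {
                                       MBeanServer mbeanServer =
                                               (MBeanServer) MBeanServerFactory.findMBeanServer(null).get(0);

                                       // Assumed default ObjectName for the JMS queue control.
                                       ObjectName queueControl = new ObjectName(
                                               "org.hornetq:module=JMS,type=Queue,name=\"RetailPriceRequestQueue\"");

                                       // An empty filter matches all messages; the call returns how many were removed.
                                       Integer removed = (Integer) mbeanServer.invoke(
                                               queueControl, "removeMessages",
                                               new Object[] { "" },
                                               new String[] { String.class.getName() });

                                       System.out.println("Removed " + removed + " messages");
                                   }
                               }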

                              • 12. Re: Durable messages getting stuck in Queue.
                                groovenarula

                                 This is getting even worse - we're seeing the same behaviour now on a completely different server. We submitted about 700 messages to a test server on Friday (7/13). We normally see these messages being processed in about 45 minutes, but so far the queue has only processed about 200 of them. I'm going to try to delete these messages, etc.

                                 

                                 I'm prepared to provide a test - I'm just not sure how to go about it. Can you give me some guidance on the artifacts needed for the test?

                                • 13. Re: Durable messages getting stuck in Queue.
                                  clebert.suconic

                                  With all the indications so far it seems that you are hitting a bug fixed after 2.2.5. It will be hard to fix a bug that was already fixed...

                                   

                                   

                                   If you can replicate it on the latest version, then we can fix it.

                                  • 14. Re: Durable messages getting stuck in Queue.
                                    groovenarula

                                    Thanks Clebert,

                                     

                                     Is it possible to use 2.2.14 with JBoss 5.1.0.GA? I've tried to deploy 2.2.14 into a clean install of 5.1.0 and have run into this issue:

                                     

                                    https://community.jboss.org/thread/199829

                                     

                                    How do I resolve this ScopeKey issue ?

                                     

                                     I can't move forward to 7.1.1 until the entire application is migrated. We do have a separate initiative going towards that, but that's going to take several weeks, and we can't really wait that long to resolve this issue.