
    HornetQ client producer threads hang in ClientProducerCreditsImpl.acquireCredits

    manu_1185

      Hi,

       

      We are using HornetQ version 2.2.14.Final in our production environment. In the last few days our user count has increased quite a bit, and every day the server hangs because all of the producer threads are blocked. Below is the stack trace of one of the blocked threads:

       

      "pool-5-thread-100" prio=10 tid=0x00007fc694112000 nid=0x7599 waiting on condition [0x00007fc62cb4a000]

         java.lang.Thread.State: WAITING (parking)

              at sun.misc.Unsafe.park(Native Method)

              - parking to wait for  <0x000000050098fb68> (a java.util.concurrent.Semaphore$NonfairSync)

              at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)

              at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:838)

              at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)

              at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)

              at java.util.concurrent.Semaphore.acquire(Semaphore.java:468)

              at org.hornetq.core.client.impl.ClientProducerCreditsImpl.acquireCredits(ClientProducerCreditsImpl.java:74)

              at org.hornetq.core.client.impl.ClientProducerImpl.doSend(ClientProducerImpl.java:305)

              at org.hornetq.core.client.impl.ClientProducerImpl.send(ClientProducerImpl.java:142)

              at org.hornetq.jms.client.HornetQMessageProducer.doSend(HornetQMessageProducer.java:451)

              at org.hornetq.jms.client.HornetQMessageProducer.send(HornetQMessageProducer.java:246)

              at com.bsb.hike.pubsub.jms.JMSProducer.send(JMSProducer.java:129)

              at com.bsb.hike.pubsub.jms.JMSProducer.send(JMSProducer.java:117)

              at com.bsb.hike.pubsub.ProducerPool$MessageSendTask.run(ProducerPool.java:150)

              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)

              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)

              at java.lang.Thread.run(Thread.java:679)

       

      It starts working again after a restart, but hangs again after some time. We are using BLOCK as the address-full-policy, and max-size-bytes is 1 GB for all addresses. Initially we thought we might be producing messages faster than we consume them, so that after a while an address fills up and gets blocked. To check this, we wrote a program that uses the management API to read the messageCount of our queues. In our application there is always a consumer consuming, and the program showed that the queues hold very few (<100) or no messages at all times, even when the producer threads are blocked.
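      For reference, the relevant address settings in our hornetq-configuration.xml look roughly like the sketch below (the match pattern and the exact byte value here are illustrative; the point is the BLOCK policy with a 1 GB limit on every address):

        <address-settings>
           <!-- illustrative match pattern and byte value; the policy is BLOCK
                and the limit is 1 GB for every address -->
           <address-setting match="#">
              <max-size-bytes>1073741824</max-size-bytes>
              <address-full-policy>BLOCK</address-full-policy>
           </address-setting>
        </address-settings>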

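      The checker program follows the management-over-JMS pattern from the HornetQ documentation: send a management message to the hornetq.management address and read back the messageCount attribute of a queue. A trimmed-down sketch is below; the JNDI name and the queue name jms.queue.exampleQueue are placeholders, not our real ones:

        import javax.jms.Message;
        import javax.jms.Queue;
        import javax.jms.QueueConnection;
        import javax.jms.QueueConnectionFactory;
        import javax.jms.QueueRequestor;
        import javax.jms.QueueSession;
        import javax.jms.Session;
        import javax.naming.InitialContext;

        import org.hornetq.api.jms.HornetQJMSClient;
        import org.hornetq.api.jms.management.JMSManagementHelper;

        public class QueueDepthCheck
        {
           public static void main(String[] args) throws Exception
           {
              // Placeholder JNDI lookup; in our setup the factory is obtained differently
              InitialContext ic = new InitialContext();
              QueueConnectionFactory cf = (QueueConnectionFactory) ic.lookup("/ConnectionFactory");

              QueueConnection connection = cf.createQueueConnection();
              try
              {
                 QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                 // Management requests are sent to the special management address
                 Queue managementQueue = HornetQJMSClient.createQueue("hornetq.management");
                 QueueRequestor requestor = new QueueRequestor(session, managementQueue);
                 connection.start();

                 // Ask the queue's control resource for its messageCount attribute
                 Message request = session.createMessage();
                 JMSManagementHelper.putAttribute(request, "jms.queue.exampleQueue", "messageCount");
                 Message reply = requestor.request(request);

                 Number messageCount = (Number) JMSManagementHelper.getResult(reply);
                 System.out.println("messageCount = " + messageCount);
              }
              finally
              {
                 connection.close();
              }
           }
        }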
       

      We are stuck here and are looking for suggestions. Any help would be appreciated.