We're testing HornetQ 2.1.1.Final, getting ready to put it into production. Every so often, however, we hit a disconcerting problem where producers on certain hosts become blocked, and the only thing that fixes it is restarting the HornetQ server. I searched the discussions and found a somewhat similar problem (http://community.jboss.org/message/537666), but there are some key differences.
First off, we have two network segments, call them A and B. The HornetQ server is on network A. Network B is a cluster of web servers behind a load balancer (so connections from apps running on network B all appear to come from the same IP address). There are producer apps on both networks A and B. When the problem happens, every producer on network B blocks in org.hornetq.jms.client.HornetQMessageProducer.send(), while producers on network A continue to publish just fine.
When the blocked-producer problem happens, restarting the HornetQ server is the only fix: the producers on network B then reconnect and continue on their merry way. Restarting the producer applications themselves (running in Tomcat on network B) has no effect; they reconnect and block again.
The message throughput for our application is fairly low, peaking around 40 msgs/sec, and messages are about 250 bytes and non-persistent. We are only using JMS topics, not queues. Given this usage, I would not expect the HornetQ server to block producers under its flow control policies. Am I missing something here?
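For context, this is the sort of address-settings block in hornetq-configuration.xml that governs server-side producer flow control (the values below are illustrative, not our actual settings). If I understand the docs, an address-full-policy of BLOCK parks producers once an address reaches max-size-bytes:

```xml
<!-- hornetq-configuration.xml: illustrative values, not our actual settings -->
<address-settings>
   <!-- match="#" applies to all addresses; with BLOCK, producers are
        parked once undelivered messages for an address exceed
        max-size-bytes, instead of paging or dropping -->
   <address-setting match="#">
      <max-size-bytes>10485760</max-size-bytes>
      <address-full-policy>BLOCK</address-full-policy>
   </address-setting>
</address-settings>
```

Even at 40 msgs/sec of 250-byte non-persistent messages, a topic subscription whose consumer stops consuming could accumulate past a limit like that and trip the BLOCK policy, which would look exactly like this hang.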
A stack dump of the hung producer looks like this:
"AVLParserPublisher" daemon prio=10 tid=0x3e35e400 nid=0x1f47 waiting on condition [0x4678c000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x7ed6f7b8> (a java.util.concurrent.Semaphore$NonfairSync)
- locked <0x7ecd2e08> (a com.mycompany.VehicleReportJMSPublisher)
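If I'm reading the trace right, the parked Semaphore is the client-side producer-credit mechanism: send() acquires credits granted by the server and parks when none are left. Here is a stand-alone sketch of that blocking pattern (plain java.util.concurrent, not HornetQ's actual code):

```java
import java.util.concurrent.Semaphore;

// Sketch of the producer-credit pattern: send() takes a "credit" from a
// semaphore and parks (the WAITING/parking state in the dump above) when
// the server grants no more. Illustration only, not HornetQ internals.
public class CreditBlockDemo {
    public static void main(String[] args) {
        Semaphore credits = new Semaphore(1); // pretend the server granted one credit
        credits.acquireUninterruptibly();     // first send() consumes it

        // A second send() would park inside acquire(); tryAcquire shows
        // there is nothing left without a new grant from the server.
        boolean granted = credits.tryAcquire();
        System.out.println("credit available for next send: " + granted); // prints false
    }
}
```

If that's what is happening, the server has stopped sending producer credits to the network-B connections, which would also explain why restarting Tomcat doesn't help: new connections come up, exhaust their initial window, and park again.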
Any ideas? I was hoping for some clues in the HornetQ logs, but didn't see anything about blocked producers or flow control kicking in.