HornetQ, like most messaging systems, has a concept of producer flow control.
The idea behind producer flow control is that the server slows down or stops producers from sending more messages when memory limits on the server have been exceeded, to avoid running the server out of memory.
There is a chapter on this in the user manual:
In 188.8.131.52 there is a typo; it should read:
This is our setting in hornetq-configuration.xml:

<!-- default for catch all -->
</address-settings>

This is our setting in hornetq-jms.xml:

<connection-factory name="ConnectionFactory">
   <connectors>
      <connector-ref connector-name="netty-connector" />
   </connectors>
   <entries>
      <entry name="ConnectionFactory" />
   </entries>
   <!-- 10mb -->
   <consumer-window-size>10000000</consumer-window-size>
   <producer-window-size>10000000</producer-window-size>
   <block-on-durable-send>false</block-on-durable-send>
</connection-factory>

Which limit do you think we are hitting: max-size-bytes or producer-window-size? We are monitoring the messages in the queue, and it doesn't look like they are backing up.
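For reference, max-size-bytes lives in the address-settings block of hornetq-configuration.xml, which appears truncated above. A sketch of what such a catch-all block typically looks like, using the 100 MiB limit mentioned later in the thread (the values here are illustrative, not the poster's actual configuration):

```xml
<!-- hornetq-configuration.xml: illustrative catch-all address-settings
     block; the match="#" wildcard applies it to every address -->
<address-settings>
   <address-setting match="#">
      <!-- default for catch all -->
      <!-- 100 MiB -->
      <max-size-bytes>104857600</max-size-bytes>
      <!-- what to do once the limit is reached: PAGE, DROP or BLOCK -->
      <address-full-policy>BLOCK</address-full-policy>
   </address-setting>
</address-settings>
```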
Also, we are using Spring's CachingConnectionFactory, which caches the producers. Would this cause a problem?
The producer-window-size is not a limit; it's the number of credits each producer allocates in a single go.
The limit is max-size-bytes, which is what you're hitting.
In 2.0.GA each producer will allocate producer-window-size at startup.
You've probably created a lot of producers, most of which are sitting around doing nothing.
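The credit mechanism described above can be sketched as a small simulation. This is a conceptual model of the 2.0.GA behaviour, not HornetQ's actual implementation; the class and method names are invented. Each address has a pool of max-size-bytes credits, and every producer grabs a full producer-window-size of credits up front, whether or not it ever sends:

```python
class CreditPool:
    """Conceptual model of 2.0.GA producer flow control:
    a per-address pool of byte credits capped by max-size-bytes."""

    def __init__(self, max_size_bytes):
        self.available = max_size_bytes

    def request(self, window_size):
        """A producer asks for its full window of credits up front.
        Returns the credits actually granted (0 means it must block)."""
        granted = min(window_size, self.available)
        self.available -= granted
        return granted


# The poster's original settings: 100 MiB address limit,
# 10 MB window per producer.
pool = CreditPool(100 * 1024 * 1024)
producers = [pool.request(10_000_000) for _ in range(12)]

# The first ten producers each get a full window; later ones starve,
# even though the cached producers may never send a single message.
full_windows = sum(1 for credits in producers if credits == 10_000_000)
print(full_windows)  # 10
```

This is why cached, idle producers matter: the credits are tied up at producer creation, not at send time.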
BTW, this behaviour has changed in TRUNK.
I notice that you have max-size-bytes set to 100 MiB and have increased producer-window-size to about 10 MB. This means that after creating 10 producers, which together allocate the full 100 MiB of credits up front, further ones won't be able to send messages.
(Another reason I hate this whole Spring-inspired caching approach!)
Like I say, this won't occur in TRUNK, since we have a more permissive producer flow control policy now.
We bumped max-size-bytes to 200 MB and lowered producer-window-size and consumer-window-size to 1 MB. We were able to run for about 10 hours with these settings, putting 100 messages a second into the queue, each message being 20-50 bytes. But we eventually hit a state where the producers are blocked again. Any suggestions on how to diagnose this? Do our settings look right for the amount of traffic we are doing?
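A back-of-envelope check of the revised settings, assuming the 2.0.GA up-front credit allocation described earlier in the thread. The headroom is much larger than before, which is consistent with running longer before blocking, and the message volume itself is tiny compared to either limit:

```python
max_size_bytes = 200 * 1024 * 1024   # revised 200 MB address limit
producer_window = 1 * 1024 * 1024    # revised 1 MB window per producer

# Under up-front allocation, roughly this many producers can exist
# before newly created ones are starved of credits.
max_producers = max_size_bytes // producer_window
print(max_producers)  # 200

# The traffic itself is negligible: 100 msg/s at 50 bytes each is
# only 5,000 bytes/s, so blocking points at producer count,
# not message volume.
bytes_per_second = 100 * 50
print(bytes_per_second)  # 5000
```

So with these settings the limit is reached only once roughly 200 producers have been created and cached, which matches the eventual outcome reported below.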
I appreciate the response.
Like I said in my previous comments, this looks like expected behaviour to me. You're hitting the limit you configured, and you're caching producers.
This behaviour isn't in TRUNK anyway - have you tried that?
Thanks Tim. We found out that there were too many producers being started. So far it's working nicely. We'll also check out the trunk.