Is it possible, using JBM (1.4.x), to evenly distribute message processing across the nodes of a cluster (AS 4.2.3)?
If not - and my reading of the docs is that it only starts redistribution when the original node gets overloaded - are there any plans to do so for 2.0?
JBM optimally distributes load, always giving it to the local node if it can cope with it.
Sending it to a remote node when it can be processed by the local node is not a good use of resources.
This behaviour worked fine with MQ ... but I don't want to have to go back to it.
I'm trying to understand why you would want to do that (round-robin distribution of load).
The behaviour I want to see is that, when processing a large queue (100k+ messages), the messages are processed roughly evenly across all machines in the cluster. The reasoning is that if one machine takes roughly 110 minutes of elapsed time, three machines should take around 35 minutes.
This is in the context of other, user-interactive, applications on the same cluster - I don't want to swamp an individual machine.
This behaviour is implemented by JBM, and is configurable as follows:
1) Allocate a given number of MDBs, appropriate to the server, using the MaxSession parameter for the invoker (standardjboss.xml). This doesn't seem to work for me, but that's an AS question, not a JBM one.
2) Modify the prefetchSize to pull an 'optimal' number of messages to each machine for processing.
3) ... Any other tunable elements?
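For point 2, the prefetch is set on the clustered connection factory deployment (connection-factories-service.xml in the JBM 1.4 distribution). The fragment below is a sketch from memory, not a verified config; the MBean name and attribute names should be checked against the descriptor shipped with your install:

```xml
<!-- Hypothetical fragment of connection-factories-service.xml (JBM 1.4).
     PrefetchSize controls how many messages are buffered client-side per
     consumer; lowering it makes a backlog spill to other nodes sooner. -->
<mbean code="org.jboss.jms.server.connectionfactory.ConnectionFactory"
       name="jboss.messaging.connectionfactory:service=ConnectionFactory"
       xmbean-dd="xmdesc/ConnectionFactory-xmbean.xml">
  <depends optional-attribute-name="ServerPeer">jboss.messaging:service=ServerPeer</depends>
  <!-- default is 150; lower it so each node pulls smaller batches -->
  <attribute name="PrefetchSize">10</attribute>
  <attribute name="JNDIBindings">
    <bindings>
      <binding>/ConnectionFactory</binding>
    </bindings>
  </attribute>
</mbean>
```

Trading prefetch size down spreads the work but costs more round trips per message, so it's a tuning knob rather than a free win.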
This causes the other machines to kick in when the queue gets too full for the initial machine to process the messages, and that part happens correctly.
Round-robin is the special case where both are set to 1 (obviously not a good idea). I would like to ensure that all machines process batches of X messages at a time, where X is determined per application/machine etc.
JBM optimally distributes load, always giving it to the local node if it can cope with it
My question then becomes: how does JBM determine when the load is too great for the initial machine? Is this configurable (or, better still, pluggable)?
I can imagine numerous scenarios - CPU load, MDB pool usage, Messages on Queue etc.
Could you point me towards how this is done please (source/wiki/uri all fine).
Thanks for your time.
My question then becomes - How does JBM determine when the load is too great for the initial machine?
Each consumer has a client-side buffer of messages (see prefetchSize); when that buffer is full, the consumer is deemed "busy" and won't be sent any more messages.
So, once the local consumer's buffer is full, and only then, remote consumers are allowed to consume from the queue. The local consumer will always get messages as long as it can consume them fast enough.
If you blindly round-robined to remote machines when the local consumer was keeping up, that would be inefficient, since you'd incur unnecessary extra network traffic.
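To make the "busy consumer" rule concrete, here is a small standalone model of it. This is an illustrative sketch, not JBM internals: the class and method names are invented, and the only assumption carried over from the thread is that a consumer with a full prefetch buffer is skipped in favour of remote consumers.

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

// Hypothetical model of JBM's distribution rule: each consumer has a
// client-side buffer of up to prefetchSize messages; a consumer whose
// buffer is full is "busy" and is skipped, so remote consumers only see
// messages once the local buffer has filled up.
class Consumer {
    final String name;
    final int prefetchSize;
    final Queue<String> buffer = new ArrayDeque<>();

    Consumer(String name, int prefetchSize) {
        this.name = name;
        this.prefetchSize = prefetchSize;
    }

    boolean busy() {
        return buffer.size() >= prefetchSize;
    }
}

class QueueRouter {
    // Prefer the local consumer; fall back to remote consumers only
    // when the local one is busy. Returns null if everyone is busy
    // (the message would stay on the queue).
    static Consumer route(Consumer local, List<Consumer> remote) {
        if (!local.busy()) return local;
        for (Consumer c : remote) {
            if (!c.busy()) return c;
        }
        return null;
    }
}

public class Demo {
    public static void main(String[] args) {
        Consumer local = new Consumer("local", 2);
        List<Consumer> remote = List.of(new Consumer("remote", 2));
        // With prefetchSize 2, msg0 and msg1 go local; msg2 and msg3
        // spill to the remote consumer once the local buffer is full.
        for (int i = 0; i < 4; i++) {
            Consumer target = QueueRouter.route(local, remote);
            target.buffer.add("msg" + i);
            System.out.println("msg" + i + " -> " + target.name);
        }
    }
}
```

This also shows why a small prefetchSize approximates even distribution: the smaller the buffer, the sooner the local consumer counts as busy and the sooner remote nodes get work.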
Thanks Tim. That does it for me. All I wanted was even distribution (not really round-robin), and it seems that prefetchSize is the key.