Maybe this helps (see the "Server Side Loadbalancing" section):
This behaviour can sometimes cause confusion. The classic case is a user who sets up a cluster of N nodes with a clustered queue, sends messages to a single node, and then wonders why consumers on the other nodes aren't processing the messages. This is because the local node still has spare CPU cycles, so there is no point in letting other nodes consume, since that would involve unnecessary network round-tripping. It is actually the optimal use of resources.
My experience with JMS messaging on JBoss 5.0.1 GA matches this: in a cluster of two nodes (A and B), when a JMS client connects to node A and produces messages there, the messages are always consumed first on node A if node A has consumers, and they are never propagated to node B. Only when node A does not have enough consumers do messages start to propagate to node B.
That is, JMS messages are never propagated to another node in the cluster as long as the local node has enough JMS consumers. As far as I know, this behaviour is specific to JBoss Messaging 1.4.x; other JMS implementations I have used (e.g. iPlanet / SunONE, now called GlassFish Message Queue) will always propagate messages to other nodes in the cluster.
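To make the routing rule concrete, here is a minimal sketch (my own toy model, not JBoss or HornetQ code; the `Node` class and `route` method are made up for illustration) of the policy described above: a message produced on a node goes to a local consumer if one exists, and is only propagated to another cluster node when the local node has none.

```java
import java.util.List;

class Node {
    final String name;
    int consumers;      // number of JMS consumers attached to this node
    int delivered = 0;  // messages handed to this node's consumers
    Node(String name, int consumers) { this.name = name; this.consumers = consumers; }
}

public class ClusterRoutingSketch {
    // Route one message produced on 'local': prefer local consumers,
    // propagate to another node only if the local node has none.
    static Node route(Node local, List<Node> others) {
        if (local.consumers > 0) { local.delivered++; return local; }
        for (Node n : others) {
            if (n.consumers > 0) { n.delivered++; return n; }
        }
        return null; // no consumers anywhere; message stays queued
    }

    public static void main(String[] args) {
        Node a = new Node("A", 1), b = new Node("B", 1);
        // Producer connected to node A sends 5 messages: all stay on A.
        for (int i = 0; i < 5; i++) route(a, List.of(b));
        System.out.println("A=" + a.delivered + " B=" + b.delivered); // A=5 B=0
        // With A's consumers gone, messages now propagate to B.
        a.consumers = 0;
        for (int i = 0; i < 5; i++) route(a, List.of(b));
        System.out.println("A=" + a.delivered + " B=" + b.delivered); // A=5 B=5
    }
}
```

This is only the "local first" half of the story; the real broker also redistributes already-queued messages when consumer counts change, which the toy model does not attempt to capture.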
HornetQ, which will ship with JBoss 6, will presumably behave differently again.