This is on by default.
It is configurable in the connection factory: prefetchSize is a parameter that determines how many messages are prefetched.
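As a sketch only (the exact attribute name and MBean layout vary between JBoss Messaging releases, so check your version's connection factory deployment descriptor), the prefetch size is typically set on the connection factory along these lines:

```xml
<!-- connection-factories-service.xml (illustrative; verify the exact
     attribute name against your JBoss Messaging release) -->
<mbean code="org.jboss.jms.server.connectionfactory.ConnectionFactory"
       name="jboss.messaging.connectionfactory:service=ConnectionFactory"
       xmbean-dd="xmdesc/ConnectionFactory-xmbean.xml">
  <!-- number of messages buffered on the client ahead of consumption -->
  <attribute name="PrefetchSize">150</attribute>
</mbean>
```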
According to http://jira.jboss.com/jira/browse/JBMESSAGING-328, JBoss messaging now supports batching of messages from the server to the client.
Can anyone explain how this works, or how it can be configured or turned on? The performance we are seeing on a network with a large round-trip time seems to imply that it isn't being used, and we can't find any documentation that refers to this, apart from the Release Notes for 1.0.1.CR4.
If your network has high latency, you may be falling victim to the fact that JBoss Remoting (which we currently use) is based on an RPC model: even if your network has high bandwidth, throughput is limited because your latency is bad.
We will be fixing this for 1.2, where we should be able to fully benefit from the high bandwidth without the drawbacks.
The performance we are seeing on a network with a large round-trip time
What do you mean by that? Can you give more details? What exactly is the performance problem?
I shall clarify my previous post.
Currently JBoss Messaging (and JBoss MQ) send messages from the server to the client using an RPC mechanism.
This means they send a bunch of messages and then wait for a response (i.e. request/response).
This means that if the network has high bandwidth but also high latency (think high round-trip time), then each send is going to take a minimum of 2 x the latency, *irrespective* of the bandwidth.
In other words it doesn't take advantage of the bandwidth.
We have already seen other customers suffering from this problem.
A better approach (and one we will have implemented in 1.2) is to push messages asynchronously across the network (i.e. not wait for a response).
Then the throughput is not dependent on the latency.
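A rough back-of-the-envelope sketch of the point above (with made-up numbers: 100 ms round trip, 100 Mbit/s link, 1 KB messages) showing why the RPC model caps throughput at roughly 1/RTT regardless of bandwidth, while asynchronous push is limited only by the bandwidth:

```python
# Hypothetical numbers for illustration only.
rtt_s = 0.100          # round-trip time: 100 ms (high-latency link)
bandwidth_bps = 100e6  # 100 Mbit/s (high bandwidth)
msg_bits = 1024 * 8    # one 1 KB message

# RPC model: each send waits a full round trip before the next can start,
# so per-message cost is dominated by the RTT, not the transfer time.
transfer_s = msg_bits / bandwidth_bps
rpc_msgs_per_s = 1 / (rtt_s + transfer_s)

# Asynchronous push: messages are pipelined without waiting for responses,
# so throughput is limited only by the link's bandwidth.
async_msgs_per_s = bandwidth_bps / msg_bits

print(f"RPC:   {rpc_msgs_per_s:.1f} msg/s")
print(f"Async: {async_msgs_per_s:.1f} msg/s")
```

With these numbers the RPC model manages about 10 messages per second no matter how fat the pipe is, while the pipelined model can push thousands.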
"tpaterson", can you give us more information about the hardware? Was there any specific requirement that led you to a high-bandwidth, high-latency LAN?
How high is the latency? Do you have numbers?
It would be great if our QA lab could get hold of similar equipment so we're able to experiment a little with it.