With persistent messages normally the database is the bottleneck.
There's not a lot that can be done about that as long as we use a database for persistence.
Having said that, last time we measured JBM was significantly faster than JBoss MQ even for persistent messages.
In JBM 2.0, we will be moving to fast, journal-based file persistence where each node maintains its own journal.
This should give much better performance and scalability in a cluster.
We will also be moving to a fast NIO transport based on Apache MINA. This will remove our other perceived performance bottleneck - JBoss Remoting.
I wonder why our performance is so much worse for JBM than for JBossMQ with UIL2, compared to your tests. There aren't many tuning parameters I can think of. Can you? The JMS code is the same; we just change the jars.
I wonder if we have a code issue that JBM is exposing but that JBossMQ is fine with.
I switched back to JBossMQ and then, as you said, the persistence is the bottleneck. When the queue grows, the server load increases rapidly and the MySQL process eats the CPU. The JMS client delivers the messages really fast.
With JBM the queue doesn't grow. It seems like the JMS client can't send messages fast enough. The MDB (consumer) can easily deal with the load.
The same problem is happening with our server in production.
We switched to JBoss Messaging and the application seems much slower.
We don't understand why.
Hi, can you give some figures to show how slow it is? Thanks.
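If it helps, one way to get concrete figures is to time the producer loop directly and compare the two providers. Below is a minimal, hedged sketch of such a harness; the no-op lambda is just a stand-in so the timing logic itself is self-contained, and in a real test you would pass a lambda that calls `producer.send(...)` against your actual JMS session (names like `SendProbe` are illustrative, not part of any JBoss API):

```java
import java.util.function.Consumer;

public class SendProbe {
    // Measure messages/sec achieved by `send` over `count` calls.
    static double measure(Consumer<String> send, int count) {
        long start = System.nanoTime();
        for (int i = 0; i < count; i++) {
            send.accept("payload-" + i);
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        return count / seconds;
    }

    public static void main(String[] args) {
        // With a real JMS producer you would pass something like:
        //   msg -> { try { producer.send(session.createTextMessage(msg)); }
        //            catch (JMSException e) { throw new RuntimeException(e); } }
        double rate = measure(msg -> {}, 100_000);
        System.out.println("measured rate is positive: " + (rate > 0));
    }
}
```

Running the same probe against JBossMQ and JBM with identical payloads and delivery mode (persistent vs. non-persistent) would give directly comparable numbers.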
Thanks for your answer,
When we switch to messaging mode, the connection to the web application is very slow, as if Apache has too many connections open. For instance, it can take many attempts to display the connection page. It behaves the same as when Apache is under load. It seems to be a network issue, maybe because of the bisocket protocol, but I can't see much configuration that can be done. We really had high hopes for JBoss Messaging because JBossMQ happens to be very unstable under certain loads.
Help us please.
Any answer, please?