We're running HornetQ on two machines as a cluster with HA replication, hosted on Amazon Web Services (AWS).
When using durable queues, the time taken to send and consume messages increases (as expected), and we'd like to tune this for performance.
We've tried more powerful AWS instances, as well as provisioned-IOPS volumes for the HornetQ journal, but neither improves performance significantly.
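For reference, this is a minimal sketch of the journal settings we are effectively running with. It uses the embedded ConfigurationImpl API purely for brevity; our actual deployment is a standalone broker configured via XML, and the directory path is illustrative.

```java
import org.hornetq.core.config.Configuration;
import org.hornetq.core.config.impl.ConfigurationImpl;
import org.hornetq.core.server.JournalType;

public class JournalSettingsSketch {
    public static Configuration journalConfig() {
        Configuration config = new ConfigurationImpl();
        config.setPersistenceEnabled(true);                  // durable queues -> journal written to disk
        config.setJournalType(JournalType.ASYNCIO);          // libaio-backed journal (falls back to NIO if unavailable)
        config.setJournalDirectory("/data/hornetq/journal"); // illustrative path on the provisioned-IOPS volume
        return config;
    }
}
```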
We're trying to understand where the bottleneck in the system is (i.e. whether we are memory-, CPU-, or I/O-bound). We've been monitoring the AWS instances with sar and iostat, and we don't appear to be hitting any limits (max memory, max CPU, max IOPS).
Are there any recommendations for identifying where the bottleneck could be?
Our setup involves four durable queues all on the same instance of HornetQ:
Message M1 (~50 bytes) is produced to Q1.
Message M1 is consumed from Q1.
Message M2 (~2000 bytes) is produced to Q2.
Message M2 is multicast to queues Q3 and Q4 (using Apache Camel; see the route sketch below).
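To illustrate the last step, this is roughly what the Camel route doing the multicast looks like (a sketch, assuming the camel-jms component and the queue names Q2, Q3, and Q4; the real endpoint URIs differ).

```java
import org.apache.camel.builder.RouteBuilder;

public class MulticastRouteSketch extends RouteBuilder {
    @Override
    public void configure() {
        // Consume M2 from Q2 and fan it out to both Q3 and Q4.
        from("jms:queue:Q2")
            .multicast()
            .to("jms:queue:Q3", "jms:queue:Q4");
    }
}
```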
You're probably not pushing the box hard enough to hit its limits, despite the relative increase in processing time. Try pushing more messages with the same producers, or better yet, add more producers.
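For example, a simple way to increase load is to run several producer threads against the same queue. Here is a minimal JMS sketch along those lines (assuming a configured ConnectionFactory and the queue name Q1; error handling kept deliberately crude):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

public class LoadTestSketch {

    // Starts 'threads' concurrent producers, each sending 'messagesPerThread'
    // persistent messages to Q1 so the journal is actually exercised.
    public static void run(final ConnectionFactory factory, int threads, final int messagesPerThread) {
        for (int t = 0; t < threads; t++) {
            new Thread(new Runnable() {
                public void run() {
                    Connection connection = null;
                    try {
                        connection = factory.createConnection();
                        connection.start();
                        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                        Queue queue = session.createQueue("Q1");
                        MessageProducer producer = session.createProducer(queue);
                        producer.setDeliveryMode(DeliveryMode.PERSISTENT); // durable delivery, as in the question
                        for (int i = 0; i < messagesPerThread; i++) {
                            producer.send(session.createTextMessage("payload-" + i)); // ~50 bytes, like M1
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    } finally {
                        try {
                            if (connection != null) {
                                connection.close();
                            }
                        } catch (Exception ignored) {
                        }
                    }
                }
            }).start();
        }
    }
}
```

Scaling the thread count up while watching sar/iostat should make it clearer whether CPU, memory, or the journal volume saturates first.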