We have run into a problem with non-persistent messaging with high-speed producers and a single slow consumer.
We have ~800 clients, each sending 4 x 16K messages per second to a single topic, with one non-durable consumer attached reading the messages. In production the messages will be sent less often; we are using this rate as a test rig to check scalability.
All communication is via STOMP, using python stomp.py v2.0.1.
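To make the setup concrete, here is a minimal sketch of one producer client. It assumes the stomp.py 2.x API (Connection takes a list of host/port pairs, and send() passes the destination as a keyword header); the broker host, port, and topic name are illustrative placeholders, not our actual settings.

```python
import time

TOPIC = '/topic/grid.test'   # hypothetical topic name
MESSAGE_SIZE = 16 * 1024     # 16K per message
RATE = 4                     # messages per second per client

def make_payload(size=MESSAGE_SIZE):
    """Build a fixed-size dummy payload for load testing."""
    return 'x' * size

def run_producer(host='localhost', port=61613, duration=3600):
    import stomp  # stomp.py 2.x
    conn = stomp.Connection([(host, port)])
    conn.start()
    conn.connect()
    try:
        end = time.time() + duration
        while time.time() < end:
            for _ in range(RATE):
                conn.send(make_payload(), destination=TOPIC)
            time.sleep(1)
    finally:
        conn.disconnect()

# Example invocation (requires a running broker):
#   run_producer(host='gridmsg102.cern.ch', duration=3600)
```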
Broker is FUSE 126.96.36.199/RHEL4/java-1.5.0-sun-188.8.131.52-1. activemq.xml is attached.
Initially we saw that the broker would run out of memory, so we followed the instructions in <http://activemq.apache.org/message-cursors.html> to enable paging to disk.
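For reference, the pending-message cursor configuration described on that page looks roughly like this; the topic pattern and memory limit shown here are illustrative, not necessarily our exact settings (see the attached activemq.xml for those):

```xml
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- Spool pending messages for non-durable topic subscribers to disk -->
      <policyEntry topic=">" producerFlowControl="false" memoryLimit="1mb">
        <pendingSubscriberPolicy>
          <fileCursor/>
        </pendingSubscriberPolicy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>
```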
Now we see messages going correctly to disk, but see two problems:
1) The reported memory usage is corrupt: it often shows values over 100%, e.g. the attached screenshot shows a memory usage of 3367%. We have set the memory limits very large in wrapper.conf:
2) A large number of messages remain on disk even though there are no consumers present on the topic receiving the large messages.
[root@gridmsg102 log]# du -s /var/cache/activemq/data/gridmsg102.cern.ch/tmp_storage/
Here is an example of what we do that illustrates what I think is happening:
We run a test for a defined period of time, e.g. 1 hour. The consumer is slow, so while it is connected it consumes only a fraction of the messages sent by the producers, and the rest are streamed to disk. After one hour we stop the consumer. We see that the unconsumed messages are still left on disk.
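The consumer side of the rig looks roughly like the sketch below. It relies on stomp.py dispatching received frames to on_* methods on a registered listener (the 2.x listener convention); the host, topic, and delay are illustrative, and the sleep simulates the slow consumer.

```python
import time

class SlowListener(object):
    """Deliberately slow consumer: sleep after each message."""
    def __init__(self, delay=0.25):
        self.delay = delay
        self.received = 0

    def on_message(self, headers, message):
        self.received += 1
        time.sleep(self.delay)  # simulate slow processing

def run_consumer(host='localhost', port=61613, topic='/topic/grid.test'):
    import stomp  # stomp.py 2.x
    conn = stomp.Connection([(host, port)])
    conn.set_listener('slow', SlowListener())
    conn.start()
    conn.connect()
    conn.subscribe(destination=topic, ack='auto')
    # ... run for the duration of the test, then stop with:
    # conn.disconnect()
```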
Are there any tools to show what is in those tmp files, and what the status of the messages in them is, so I can check whether our diagnosis is correct?
Is there a way to purge them programmatically (via JMX or the command line?), since they eventually fill up our temp area.
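For what it's worth, one thing we could try is removing the topic over JMX, which might release its spooled messages; this is an untested sketch, assuming the broker exposes the default JMX connector on port 1099 and uses the BrokerName=...,Type=Broker MBean naming. The broker name "localhost" and the topic name are illustrative.

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class RemoveTopic {
    // Build the broker MBean name for a given broker name.
    static String brokerObjectName(String brokerName) {
        return "org.apache.activemq:BrokerName=" + brokerName + ",Type=Broker";
    }

    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        JMXConnector jmxc = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
            ObjectName broker = new ObjectName(brokerObjectName("localhost"));
            // removeTopic drops the destination; whether this also frees
            // the pending messages in tmp_storage is exactly what we'd
            // like to confirm.
            mbsc.invoke(broker, "removeTopic",
                new Object[] { "grid.test" },
                new String[] { String.class.getName() });
        } finally {
            jmxc.close();
        }
    }
}
```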