OK, let me know how it goes and we can go from there.
One thing to do is, as the test is progressing, use the management API to monitor what's going on, i.e. how many messages are in the queue every hour or so, to try and build up a picture. Also, if you have a DLQ, make sure there isn't an issue with messages being sent there after redelivery.
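Something along these lines would do it — a minimal sketch that polls queue depth over JMX. The JMX service URL, the ObjectName pattern and the "MessageCount" attribute name are assumptions based on how HornetQ usually registers its MBeans, so check them against your broker in jconsole first:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Polls a queue's depth (and the DLQ's) over JMX once an hour so you can
// build up a picture over the course of the test. Names below are assumed.
public class QueueDepthMonitor {

    public static void main(String[] args) throws Exception {
        // hypothetical JMX endpoint of the broker host
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://broker-host:3000/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // assumed MBean names; adjust to the queues you actually use
            ObjectName queue = new ObjectName(
                    "org.hornetq:module=JMS,type=Queue,name=\"exampleQueue\"");
            ObjectName dlq = new ObjectName(
                    "org.hornetq:module=JMS,type=Queue,name=\"DLQ\"");

            for (int sample = 0; sample < 24; sample++) {
                System.out.println(System.currentTimeMillis()
                        + " queue depth=" + mbs.getAttribute(queue, "MessageCount")
                        + " dlq depth=" + mbs.getAttribute(dlq, "MessageCount"));
                Thread.sleep(60 * 60 * 1000L); // one sample per hour
            }
        } finally {
            connector.close();
        }
    }
}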
Unfortunately an MDB will not solve our requirement, as we expose our API to other services so they can dynamically listen for messages on any queue with any filters.
We cannot change MDB activation config properties at runtime, and that does not fit our requirement since the filters/queues are not known before deployment.
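For reference, this is the sort of runtime subscription we have to support, which fixed activation-config properties cannot express. A rough sketch only, with placeholder JNDI names:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

// Queue name and filter arrive as plain runtime values from the callers of
// our API, so the consumer has to be created on the fly -- exactly what an
// MDB's deployment-time activation config cannot do.
public class DynamicListenerService {

    private final Connection connection;
    private final Session session;

    public DynamicListenerService() throws Exception {
        InitialContext ctx = new InitialContext();
        // "ConnectionFactory" is a placeholder JNDI name
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        connection = cf.createConnection();
        session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        connection.start();
    }

    // attach a listener to any queue with any selector, decided at runtime
    public MessageConsumer listen(String queueJndiName, String selector,
                                  MessageListener callback) throws Exception {
        Queue queue = (Queue) new InitialContext().lookup(queueJndiName);
        MessageConsumer consumer = session.createConsumer(queue, selector);
        consumer.setMessageListener(callback);
        return consumer;
    }
}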
So it seems like we are completely stuck with this problem.
We finally fixed this issue.
The memory leak in QueueImpl turned out to be the result of a problem on the receiver side.
Checking the heap dump on the receiver, we saw nearly 600k instances of JournalImpl/JournalFileImpl objects consuming 99% of the heap, causing the receiver to choke,
which subsequently caused the sender to also fill up fast, and finally both crashed.
In the hornetq-server configuration, for some reason we had set the journal file size to 100 KB,
i.e. <journal-file-size>102400</journal-file-size>,
and we have now restored it to the default 10 MB, i.e. <journal-file-size>10240000</journal-file-size>.
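(For anyone configuring the broker in code rather than via hornetq-configuration.xml, my understanding is that the same knob is exposed on the core Configuration object — a sketch only, please check the setter name against the HornetQ version you are on:)

import org.hornetq.core.config.Configuration;
import org.hornetq.core.config.impl.ConfigurationImpl;

public class JournalSizeConfig {
    public static void main(String[] args) {
        Configuration config = new ConfigurationImpl();
        // back to the ~10 MB default instead of the 100 KB we had been running with
        config.setJournalFileSize(10240000);
        System.out.println("journal-file-size = " + config.getJournalFileSize());
    }
}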
With that change the receiver works fine, with no memory leaks at all.
We see memory reach a peak of 80%, then GC kicks in and reclaims it all, so the overall footprint stays stable at around 3%,
and it works flawlessly.
But we are not sure why the journal file size makes such a huge difference (could this be a bug?);
we will keep the default 10 MB for now.
//Manoj