Sounds great. If they're not already, can these settings be configurable per queue? SwiftMQ has a nice feature where you can set the number of messages stored in memory per queue. It would be nice to be able to tune the settings in JBoss as well.
For instance, I know I'll have queues which need a larger memory size than others. To utilize the Java heap more effectively, it would be nice to give those queues more memory, while queues that I know will only get a few messages an hour, all consumed immediately, can be set to take up less.
Anyway, just my $0.02.
Right now it's set for all the queues. It could be set up per queue. But would that make good sense?
The heap is shared by all the queues, so that's why this cache I have is shared by all the queues too. No matter what queue the message is in, I think the first message to go should be the message that has been sitting idle on the server the longest. I could be wrong here. Maybe my picking algorithm needs more refinement.
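That idle-longest-first policy can be sketched with an access-ordered map. This is a minimal illustration, not the actual JBossMQ cache; the class and method names here (IdleFirstCache, persistToDisk) are hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: one cache shared by all queues that spills the
// longest-idle message to disk first, regardless of which queue owns it.
public class IdleFirstCache {
    private final int maxInMemory;
    // accessOrder=true: iteration order is least-recently-accessed first
    private final LinkedHashMap<String, byte[]> cache;

    public IdleFirstCache(int maxInMemory) {
        this.maxInMemory = maxInMemory;
        this.cache = new LinkedHashMap<String, byte[]>(16, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
                if (size() > IdleFirstCache.this.maxInMemory) {
                    // Spill, don't drop: the message still exists, just on disk.
                    persistToDisk(eldest.getKey(), eldest.getValue());
                    return true;
                }
                return false;
            }
        };
    }

    public void put(String messageId, byte[] body) { cache.put(messageId, body); }
    public byte[] get(String messageId)          { return cache.get(messageId); }
    public int inMemoryCount()                   { return cache.size(); }

    void persistToDisk(String id, byte[] body) { /* placeholder for real persistence */ }
}
```

Because the map is access-ordered, touching a message (a get) moves it back to the "recently used" end, so only genuinely idle messages get pushed out.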
One thing I do think we should be able to configure at the queue level is something like: if memory gets tight, should we just drop the message instead of persisting it?
Well, let's say for example we're using JBoss like this.
Because we're moving from an expensive provider, we have the following setup.
JMS development is done on a single machine company wide.
This machine has a queue set up for each developer to test JMS stuff on. Also each large project has a queue that is used in dev and production.
So we have the following JBoss setup:
5 production queues - These queues are important queues, they should handle most of their requests in memory if possible.
30 dev user queues - Each developer has their own working queue, and they sprinkle their working queue with messages. Messages sent to these queues aren't all that important and are generally read from the queue as soon as they are posted.
Now, the machine has 500MB of RAM which JBossMQ could use. I would like to be able to set JBoss to only write out to disk when a "production" queue hits 10,000 messages or memory is running out, whichever comes first, and to set this limit to 100 for each of the dev queues.
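The per-queue decision described above could be sketched like this. Everything here is an assumption from the example setup, not an existing JBossMQ API: the class name QueueLimitPolicy, the caps (10,000 for production, 100 for dev), and the low-memory check against the JVM heap:

```java
// Hypothetical per-queue limit check: spill to disk once a queue passes
// its own in-memory message cap, or the JVM heap is running low,
// whichever comes first.
public class QueueLimitPolicy {
    private final int maxInMemoryMessages;   // e.g. 10000 for production, 100 for dev
    private final double lowMemoryFraction;  // spill when free heap drops below this share

    public QueueLimitPolicy(int maxInMemoryMessages, double lowMemoryFraction) {
        this.maxInMemoryMessages = maxInMemoryMessages;
        this.lowMemoryFraction = lowMemoryFraction;
    }

    public boolean shouldSpillToDisk(int messagesInMemory) {
        if (messagesInMemory >= maxInMemoryMessages) {
            return true;  // this queue hit its own cap
        }
        // Free heap = currently free + not-yet-allocated headroom.
        Runtime rt = Runtime.getRuntime();
        long free = rt.freeMemory() + (rt.maxMemory() - rt.totalMemory());
        return free < rt.maxMemory() * lowMemoryFraction;
    }
}
```

Each queue would hold its own policy instance, so a dev queue with a cap of 100 starts spilling long before a production queue with a cap of 10,000 does.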
Now, in this setup user A sends a million messages to his dev queue. Well, after the first 100, they start being written out to disk. No big deal: access is slow for that user, but the production queues aren't really affected, nor does the system crash.
This is just a hypothetical situation, but it reflects a possible need to not share and share alike. There may be some queues which you'll want to have more resources than others. I'm hoping this option will be available.
OK, I see how in that scenario the user's 1 million messages would affect the other production queues.
So we need to set a limit for a queue so that it does not eat up all the memory resources allocated for another queue that has higher Quality of Service (QoS) requirements. Basically, we want to keep messages in memory longer for the queues with a higher QoS so that consumers can get messages from them more quickly.
We also need to keep in mind that we don't want to enforce the queue limits until memory resources start getting tight.
A better way to set queue 'limits' might be by the memory used by the queue rather than by the number of messages in the queue.
I'll look into this stuff some more.
Yeah, you're right. Being able to divide up resources by memory would be nice. Saying that the production queues should get 20% of the resources each and setting the user queues lower would be cool.
The problem I've run into is that it's hard to tell how much memory the stuff in your queue is taking up. If you come up with an elegant solution please let me know. Remember if you use this model you'll have to check on every incoming message (which may not be a good thing either).
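One rough way to measure it is to count serialized bytes. This is only a proxy for actual heap usage (it ignores object headers, shared references, etc.), and as noted above it has to run on every incoming message, which may be too expensive. The MessageSizer name is hypothetical:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Rough estimate: serialized length as a stand-in for in-heap size.
// Inexact, and it costs a serialization pass per incoming message.
public class MessageSizer {
    public static int estimateBytes(Serializable message) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(buf);
        out.writeObject(message);
        out.close();
        return buf.size();
    }
}
```

If the serialization cost per message is too high, a cheaper compromise might be sampling (size every Nth message) rather than measuring all of them.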
Well, I've committed my message cache changes to the CVS HEAD. If you find it buggy (I haven't seen problems), you can get the non-message-cache version by pulling the RelMQ_1_0_0_2 tagged version.
It still does not allow you to set up per-queue message limits. I'm a little busy this week, so if someone else wants to submit a patch for the feature, I would be grateful.