> The current message cache tries to solve
> memory problems by flushing least recently used
> messages to disks.
> Instead, this should be based on which queues are
> active and what throughput they have.
Agreed; the cache needs to reason along the lines of: no receivers on Queue A? Then all messages sent to that queue get passivated immediately.
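A minimal sketch of that rule, assuming a hypothetical policy class (the names and the backlog threshold are illustrative, not an existing broker API):

```java
// Sketch: decide whether a queue's messages should be passivated to disk.
// The class name, method, and threshold are illustrative assumptions.
public class PassivationPolicy {
    static final int BACKLOG_THRESHOLD = 10_000; // assumed tunable per queue

    public static boolean shouldPassivate(int consumerCount, int backlog) {
        // No receivers on the queue? Passivate new messages immediately.
        if (consumerCount == 0) {
            return true;
        }
        // Otherwise passivate only once the backlog grows past a threshold.
        return backlog > BACKLOG_THRESHOLD;
    }

    public static void main(String[] args) {
        System.out.println(shouldPassivate(0, 5));      // no consumers -> true
        System.out.println(shouldPassivate(3, 100));    // active queue -> false
        System.out.println(shouldPassivate(3, 20_000)); // deep backlog -> true
    }
}
```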
> Similarly, no account is taken of the speed at
> which senders add messages. Senders to queues with
> slow/no receivers should be throttled back,
> allowing senders to more active queues to get their
> work done.
I think that should be configurable per destination. I see the following options:
(A) Allow senders to go full speed. (As it is today)
(B) Throttle senders so that they send at the same speed as the consumers. (Could be hard to determine when to slow them down if the receiver is using a message selector.)
(C) Allow non-persistent messages to get dropped if the receivers are too slow.
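To make the options concrete, here is a sketch of what a per-destination flow-control setting could look like. Everything here (the enum, the rate heuristics, the return convention) is an illustrative assumption, not an existing API:

```java
// Sketch of per-destination flow-control modes (A)-(C) above.
public class FlowControl {
    enum Mode { FULL_SPEED, THROTTLE_TO_CONSUMERS, DROP_NON_PERSISTENT }

    /**
     * Returns a delay (ms) to impose on the sender before accepting the
     * message, or -1 to signal the message should be dropped.
     */
    public static long onSend(Mode mode, boolean persistent,
                              double producerRate, double consumerRate) {
        switch (mode) {
            case FULL_SPEED:
                return 0; // option (A): never slow a sender down
            case THROTTLE_TO_CONSUMERS:
                // Option (B): back senders off when they outpace consumers.
                if (consumerRate > 0 && producerRate > consumerRate) {
                    return (long) (1000 * (producerRate / consumerRate - 1));
                }
                return 0;
            case DROP_NON_PERSISTENT:
                // Option (C): shed non-persistent load when consumers lag badly.
                if (!persistent && producerRate > 2 * consumerRate) {
                    return -1;
                }
                return 0;
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(onSend(Mode.FULL_SPEED, true, 500, 10));             // 0
        System.out.println(onSend(Mode.THROTTLE_TO_CONSUMERS, true, 200, 100)); // 1000
        System.out.println(onSend(Mode.DROP_NON_PERSISTENT, false, 500, 10));   // -1
    }
}
```

The delay-based scheme sidesteps the selector problem somewhat: it only looks at observed consume rates rather than trying to predict which messages a selector will match.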
Where is the persisted message going to be ... hopefully on local disk, right? Are you guys thinking of a local indexed entity to store/persist messages in (queue/topic), like MQSeries? Each queue in MQSeries, for instance, is a mini-database on the local disk.
If you are just caching the headers, even that becomes quite a burden in a high-load situation. I think if you stick with an indexed file-system object per queue that is intelligently searchable, and make use of nio, you will actually cut out the complexity of caching just headers, and performance will be just as good as if you cached headers, because in that case you are still going to the file system to retrieve the message body. I don't believe this is where any MOM product spends the bulk of its time, unless the indexing and search scheme is not very well thought out.
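A bare-bones version of that idea might look like the sketch below: an append-only file per queue accessed through `java.nio.channels.FileChannel`, with an in-memory index mapping message id to (offset, length) so retrieval is a single positional read rather than a scan. The class and layout are purely illustrative:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.HashMap;
import java.util.Map;

// Sketch: one append-only file per queue, indexed in memory by message id.
public class FileQueue {
    private final FileChannel channel;
    private final Map<Long, long[]> index = new HashMap<>(); // id -> {offset, length}

    public FileQueue(Path file) throws IOException {
        channel = FileChannel.open(file, StandardOpenOption.CREATE,
                StandardOpenOption.READ, StandardOpenOption.WRITE);
    }

    /** Append the full message (headers and body) and index its position. */
    public void put(long id, byte[] message) throws IOException {
        long offset = channel.size();
        channel.write(ByteBuffer.wrap(message), offset);
        index.put(id, new long[] { offset, message.length });
    }

    /** Random access by id via the index -- no scan of the file. */
    public byte[] get(long id) throws IOException {
        long[] pos = index.get(id);
        if (pos == null) return null;
        ByteBuffer buf = ByteBuffer.allocate((int) pos[1]);
        channel.read(buf, pos[0]);
        return buf.array();
    }

    public void close() throws IOException {
        channel.close();
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("queue", ".dat");
        FileQueue q = new FileQueue(tmp);
        q.put(1, "hello".getBytes(StandardCharsets.UTF_8));
        q.put(2, "world".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(q.get(2), StandardCharsets.UTF_8)); // world
        q.close();
        Files.delete(tmp);
    }
}
```

A real store would also need compaction after acks, crash recovery of the index, and fsync policy, but the point stands: with positional reads, fetching the body is cheap enough that caching headers separately buys little.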
I would suggest writing all durable messages to file-system queue objects (headers and all) and keeping non-durable messages on any active queue in memory as completely as possible, including the body. This is because if a queue is active, most likely the message will be consumed shortly without using too much memory. This scheme avoids unwarranted complexity to save a bit of memory. Out in the market, all industrial-strength MOM products are memory-heavy, and many are not even implemented in Java. That is the price you pay for being scalable and fast.
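The placement rule suggested above can be written down in a few lines. This is a sketch under assumed names; the memory-pressure check is a simplification of whatever eviction policy the broker would really use:

```java
// Sketch of the suggested placement rule: durable messages always go to the
// file-system queue; non-durable messages on an active queue stay in memory,
// spilling to disk only under memory pressure.
public class PlacementPolicy {
    enum Store { DISK, MEMORY }

    public static Store placeMessage(boolean durable, boolean queueActive,
                                     long usedHeapBytes, long heapLimitBytes) {
        if (durable) {
            return Store.DISK; // headers and body persisted together
        }
        if (queueActive && usedHeapBytes < heapLimitBytes) {
            return Store.MEMORY; // likely consumed shortly; keep the body hot
        }
        return Store.DISK;
    }

    public static void main(String[] args) {
        System.out.println(placeMessage(true, true, 0, 100));    // DISK
        System.out.println(placeMessage(false, true, 50, 100));  // MEMORY
        System.out.println(placeMessage(false, false, 50, 100)); // DISK
    }
}
```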