0 Replies Latest reply on Sep 7, 2005 11:45 AM by Tim Fox

    Asynchronous delivery of messages, thread pools, queues and

    Tim Fox Master

      I've been making the first few performance runs against the new JMS server and I've come across some issues with thread pooling, queueing and locking, and I thought I'd share my experiences.

      The PooledExecutor that manages the threads delivering messages to the client-side buffers, ready for consumption, previously had no maximum pool size and no queue.

      However, the client-side BoundedBuffer into which the messages are placed, ready for consumption, was bounded.

      This resulted in exhaustion of available sockets on the server machine when delivery of many messages was attempted, since new threads were being spawned to deliver messages while other deliveries were blocked on the client-side bounded buffer.

      For now, I've remedied this by putting an upper limit m on the pool size and using an unbounded queue (LinkedQueue) as the work queue.

      This means there are only ever a maximum of m threads delivering messages to client-side consumer buffers, but any number of messages can queue up waiting to be delivered.
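      A minimal sketch of this configuration, using java.util.concurrent in place of the Doug Lea concurrent classes the server actually uses (the pool size m, the task body, and the class name are illustrative):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DeliveryPool {
    // Submit `tasks` dummy deliveries to a pool capped at m threads and
    // report the largest number of threads that ever ran concurrently.
    static int run(int m, int tasks) throws InterruptedException {
        // At most m delivery threads; LinkedBlockingQueue with no capacity
        // argument is unbounded, so any number of pending deliveries can
        // wait in the queue (the LinkedQueue analogue).
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                m, m, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>());

        for (int i = 0; i < tasks; i++) {
            pool.execute(() -> {
                try {
                    Thread.sleep(1); // stand-in for delivering one message
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        return pool.getLargestPoolSize();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("largest pool size = " + run(4, 100));
    }
}
```

      However many deliveries are submitted, the thread count never exceeds m; the excess simply accumulates in the queue.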

      There is a danger with this. Imagine the following scenario:

      There are 100000 messages on a queue to be delivered - there are no consumers currently on the queue.

      A new consumer is created for the queue and the consumer is started.

      This causes deliver() to be called for the channel corresponding to the queue.

      The call to deliver() currently does not return until all the waiting messages are placed onto the delivery queue ready for delivery.

      If the delivery queue is not bounded, as is currently the case, then this operation will complete as long as there is sufficient memory to add all the messages to the queue; if there is not, it will exhaust memory.

      In order to prevent memory exhaustion, a BoundedBuffer could be used instead of a LinkedQueue.

      This would prevent more than a certain number of messages from being in the queue at any one time.

      Any other attempts to place a message on the queue would block until a place on the queue becomes available.
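      The blocking behaviour can be sketched with java.util.concurrent's ArrayBlockingQueue standing in for BoundedBuffer (the class name and capacity are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;

public class BoundedBufferDemo {
    // Fill a buffer of the given capacity, then try to add one more message.
    static boolean acceptsOneMore(int capacity) throws InterruptedException {
        ArrayBlockingQueue<String> buffer = new ArrayBlockingQueue<>(capacity);
        for (int i = 0; i < capacity; i++) {
            buffer.put("message-" + i); // put() blocks once the buffer is full
        }
        // Another put() here would block until a consumer take()s a message.
        // offer() demonstrates the same condition without blocking: it
        // returns false when the buffer has no free slot.
        return buffer.offer("one-too-many");
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("extra message accepted = " + acceptsOneMore(2));
    }
}
```

      It is exactly this blocking put() that causes the deadlock in the scenario above: the producer blocks, and the consumer that would free a slot never starts.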

      However, in our scenario this would never happen, since messages aren't consumed on the client side until the call to start the consumer returns, which it never does. Hence we end up deadlocked. Not good.

      Another solution, IMHO, would be to use a bounded buffer to guard against memory exhaustion, but to make the delivery operation asynchronous rather than synchronous as it currently is.

      In this way, consumer.start() would trigger delivery but return immediately, allowing consumption of messages to begin and thus preventing the deadlock.

      This would also prevent a large pause on start() before messages are consumed as they are being placed on the queue.

      Is there any particular reason why deliver() is currently a synchronous operation?