Currently JBM can't be configured to do this; typically you would have at least one backup server to fail over to.
Feel free to add a JIRA for this functionality if you think it's worthwhile.
P.S. Thanks for all the feedback.
I am trying to get my head around what you're asking here.
If you're referring to command buffering:
Then JBM will block by default on send() if it hasn't received any acks back from the server. As mentioned in the docs, you can turn this off by setting the ProducerWindowSize param to -1, but do so at your peril! Flow control is there to stop the server from getting overwhelmed.
BTW JBM won't create any threads here - there will only be as many threads blocking as the number of threads from your application that have called send(), so I'm not sure how that could bring your application down.
You could also set an ExceptionListener which would be called when a problem with the connection is detected.
The threads are coming from my client application - there is a thread per HTTP request, and at the end of the request it calls send(). Hundreds of these threads are created every minute, so if the call to send() blocks it doesn't take long for the blocking to overwhelm the VM. I guess I could re-architect my application to put the messages in a queue and have one thread pull the messages off and send them, but this feels a bit like I'm implementing something which could (should?) be in the messaging client.
As for the ExceptionListener solution, I thought about this too. I have JBM set up to auto-reconnect, so I don't actually get an exception when the server goes down (I just see some warning messages in the logs). From what I can tell the sender queues up the messages in the background. Only when the server comes back does my ExceptionListener callback get called, at which point I redo the JNDI lookup and recreate the connections and sessions etc. This works well. I think if I set JBM not to auto-reconnect then the ExceptionListener callback is called right away when the server goes down, but then I have to do my own reconnect logic. I'm happy with this ExceptionListener logic; it would just be nice to be able to throw messages away instead of blocking, but maybe I should code this up myself.
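For anyone following along, the wiring described above could be sketched like this. Note this is a self-contained illustration using stand-in types (the real ones are javax.jms.Connection and javax.jms.ExceptionListener, which deliver a JMSException); the recovery body is just a placeholder for the JNDI-lookup-and-recreate logic:

```java
// Stand-in for javax.jms.ExceptionListener, so the sketch is self-contained.
interface ExceptionListener {
    void onException(Exception e);
}

// Stand-in for the connection's listener registration; in real JMS the
// provider invokes the listener when it detects a connection problem.
class FakeConnection {
    private ExceptionListener listener;

    void setExceptionListener(ExceptionListener l) {
        this.listener = l;
    }

    void simulateConnectionFailure(Exception e) {
        if (listener != null) listener.onException(e);
    }
}

// Recovery logic as described in the post: on callback, redo the JNDI
// lookup and recreate the connection/sessions (represented here by a flag).
class RecoveringClient {
    volatile boolean reconnected = false;

    void attach(FakeConnection c) {
        c.setExceptionListener(e -> {
            // real code would redo the JNDI lookup and recreate
            // the connection + sessions here
            reconnected = true;
        });
    }
}
```

The point of the sketch is only the shape of the callback: registration happens once, and all recovery logic lives inside onException().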
Are your client threads calling send() on the same ClientProducer (core) or MessageProducer (JMS) instance? If so, you would have to synchronize access in your client application, since neither JMS MessageProducer nor core ClientProducer instances are designed to be used by more than one thread at a time.
Alternatively if you're creating a new MessageProducer for each message sent - that would also be frowned upon as an anti-pattern. Sessions/Producers/Consumers should always be reused between messages sent/consumed (unless you're not worried about performance).
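If you did want to share one producer, the serialization could be as simple as the following sketch. Producer here is a hypothetical stand-in interface, not the real JMS MessageProducer, so the example is self-contained:

```java
// Stand-in for a non-thread-safe producer (e.g. a JMS MessageProducer).
// Hypothetical interface, for illustration only.
interface Producer {
    void send(String message);
}

// Wrapper that serializes access so many request threads can safely
// share a single underlying producer instance.
class SynchronizedProducer implements Producer {
    private final Producer delegate;

    SynchronizedProducer(Producer delegate) {
        this.delegate = delegate;
    }

    @Override
    public synchronized void send(String message) {
        delegate.send(message);
    }
}
```

The trade-off is that all sends now contend on one lock, which is another reason the queue-plus-single-sender-thread approach discussed in this thread is usually preferred.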
If you enable auto reconnect you do so on the presumption that your server has not really failed but that there is a temporary network glitch (like someone pulling out the cable for a while). When the network comes back, the client will be able to automatically reconnect its session(s) to the session(s) still remaining on the server and carry on as if nothing had happened (this is transparent reconnection). In your case the server has really failed, so on recovery the server sessions will clearly not be there any more (since the server was restarted) and the client won't be able to reconnect to them. That's why you don't get the exception until the client has tried to reconnect and not found the session(s) there. If you turn off auto reconnection you'll get the exception sooner.
The clients are calling send() on the same MessageProducer instance, and this isn't synchronized. This hasn't caused any noticeable problems in the past (when the MessageProducer was using ActiveMQ) or since we started evaluating JBM (although the loads on those 2 servers are still very low). This makes me think I really should queue up the messages to be sent myself and have one thread call send(), which will also get rid of the blocking problem.
As for the exception listener stuff - yes, I understand how that works with the different re-connect values and it makes sense.
You may have just got lucky in the past. All sorts of weird subtle errors might start happening if you use a JMS session concurrently with different threads.
"...A Session object is a single-threaded context for producing and consuming messages..."
Perhaps you also need a thread pool in your application to stop a potentially unlimited number of threads being created for your HTTP requests...
We have a thread pool - the problem is, we really *do* need to be able to handle thousands of simultaneous request threads. The application in this case is an audio streaming server which ideally should only be limited by IO reading files from disk. Currently we have this scaling up to a few thousand simultaneous users but the idea is for this to go up into the tens of thousands.
I will code up a message queue so message sending happens in one thread. Thanks for the input.