Let's keep this specific issue in this thread:
To make transparent failover work, two features are needed:
1) The client proxy needs to know where the server failed over to
without any work done by the client application writer.
2) The server's client-specific state needs to be replicated.
1) The client proxy doing automatic failover
The client proxy needs to re-get the ServerIL (connection factory) from the new
singleton and reinitialise.
This could be done:
a) Simply, but not very reliably: use HAJNDI. The HAJNDI stub would
need regular updates of the active cluster locations - which wouldn't happen unless there
are regular invocations over HAJNDI. (A rough sketch of this approach follows the list below.)
b) Use the HA mechanism to do the JMS communication, i.e. piggy-back
changed cluster views on the back of JMS invocations.
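
To illustrate option a), something like the following is roughly what the client proxy would have to do internally on failure, shown here as plain JMS client code for clarity. The class name, the "ConnectionFactory" JNDI binding and the provider URL are just assumptions for the sketch, not JBossMQ internals:

import java.util.Properties;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.ExceptionListener;
import javax.jms.JMSException;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

// Illustrative only - not JBossMQ internals.
public class ReconnectingClient implements ExceptionListener {

    // Assumed HA-JNDI provider URL; 1100 is the usual HA-JNDI port.
    private static final String HAJNDI_URL = "jnp://cluster-node:1100";

    private volatile Connection connection;

    public void connect() throws NamingException, JMSException {
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "org.jnp.interfaces.NamingContextFactory");
        env.put(Context.PROVIDER_URL, HAJNDI_URL);

        Context ctx = new InitialContext(env);
        // Re-get the connection factory from whichever node currently
        // hosts the singleton, then reinitialise the connection.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        connection = cf.createConnection();
        connection.setExceptionListener(this);
        connection.start();
    }

    // Invoked by the provider when the connection to the old node dies.
    public void onException(JMSException failure) {
        try {
            connection.close();
        } catch (JMSException ignored) {
        }
        try {
            connect();
        } catch (Exception e) {
            // real code would retry with a backoff here
        }
    }
}

The whole point of feature 1) is that this happens inside the proxy, so the application writer never sees it.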
2) Maintaining client state across the cluster (similar to HTTP/SFSB replication)
This assumes we are not going to do full replication, which is a lot more work
and much less performant.
The server maintains a list of messages that each client has received but not
yet acknowledged.
In the event of a singleton failover, this information would need to be on the new
server to make sure recovering the destination messages from the db doesn't
make the messages available again.
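
As a minimal sketch of the state in question (all names are illustrative, not actual JBossMQ internals): per client, the ids of messages delivered but not yet acknowledged, which the new singleton would consult while recovering the destination from the db:

import java.io.Serializable;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative names only - not JBossMQ internals.
public class DeliveryState implements Serializable {

    // clientId -> ids of messages delivered to that client but not yet acked
    private final Map<String, Set<String>> unacked = new HashMap<String, Set<String>>();

    public synchronized void delivered(String clientId, String messageId) {
        Set<String> ids = unacked.get(clientId);
        if (ids == null) {
            ids = new HashSet<String>();
            unacked.put(clientId, ids);
        }
        ids.add(messageId);
    }

    public synchronized void acknowledged(String clientId, String messageId) {
        Set<String> ids = unacked.get(clientId);
        if (ids != null) {
            ids.remove(messageId);
        }
    }

    // After failover the new singleton checks this while recovering the
    // destination from the db, so already-delivered messages are not made
    // available again.
    public synchronized boolean isDelivered(String messageId) {
        for (Set<String> ids : unacked.values()) {
            if (ids.contains(messageId)) {
                return true;
            }
        }
        return false;
    }
}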
Without this processing, messages already delivered to a client by the initial server
would be recovered from the db on the new server and made available for delivery again.
Again, two possible solutions:
1) Update the db on every message receipt to log which client has which message (a sketch follows this list)
2) Replicate this state across the cluster.
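
A rough sketch of solution 1), assuming a hypothetical JMS_DELIVERY table rather than the real JBossMQ schema; the extra write on every delivery is exactly the cost being traded off against replication:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

import javax.sql.DataSource;

// Illustrative table/column names - not the actual JBossMQ schema.
public class DeliveryLog {

    private final DataSource ds;

    public DeliveryLog(DataSource ds) {
        this.ds = ds;
    }

    // Called for every message handed to a client; this extra write per
    // delivery is what makes solution 1 expensive.
    public void logDelivery(String clientId, String messageId) throws SQLException {
        Connection con = ds.getConnection();
        try {
            PreparedStatement ps = con.prepareStatement(
                "INSERT INTO JMS_DELIVERY (CLIENT_ID, MESSAGE_ID) VALUES (?, ?)");
            ps.setString(1, clientId);
            ps.setString(2, messageId);
            ps.executeUpdate();
            ps.close();
        } finally {
            con.close();
        }
    }

    // Called on acknowledge; the delivery record is no longer needed.
    public void clearDelivery(String clientId, String messageId) throws SQLException {
        Connection con = ds.getConnection();
        try {
            PreparedStatement ps = con.prepareStatement(
                "DELETE FROM JMS_DELIVERY WHERE CLIENT_ID = ? AND MESSAGE_ID = ?");
            ps.setString(1, clientId);
            ps.setString(2, messageId);
            ps.executeUpdate();
            ps.close();
        } finally {
            con.close();
        }
    }
}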
Two additional comments:
1) Without full replication (including the list of messages in a destination), NON_PERSISTENT messages will be lost at failover.
2) There needs to be a mechanism to cope with both the client and server
failing at the same time.
e.g. the server fails and, during the failover, Client1 also fails. This would need some sort of timeout to detect
that Client1 didn't transparently reconnect (a rough sketch of such a timeout follows).
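
A minimal sketch of such a timeout, with an assumed 60-second grace period and illustrative names; if a client never reconnects within the period, its unacknowledged messages would be released back to the destination:

import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Illustrative only; the timeout value and names are assumptions.
public class ReconnectMonitor {

    private static final long RECONNECT_TIMEOUT_MS = 60 * 1000;

    // clientId -> time at which we started waiting for it to reconnect
    private final Map<String, Long> waiting = new HashMap<String, Long>();

    public synchronized void failedOver(String clientId) {
        waiting.put(clientId, Long.valueOf(System.currentTimeMillis()));
    }

    public synchronized void reconnected(String clientId) {
        waiting.remove(clientId);
    }

    // Run periodically on the new singleton after failover.
    public synchronized void sweep() {
        long now = System.currentTimeMillis();
        for (Iterator<Map.Entry<String, Long>> it = waiting.entrySet().iterator(); it.hasNext();) {
            Map.Entry<String, Long> entry = it.next();
            if (now - entry.getValue().longValue() > RECONNECT_TIMEOUT_MS) {
                it.remove();
                // The client never reconnected: release its unacknowledged
                // messages back to the destination (not shown).
            }
        }
    }
}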