You are confusing singleton deployments with singleton services. These are different things - although the underlying server infrastructure is common to both.
A singleton service is an MSC service, typically installed via a ServiceActivator, that only starts on one node in a cluster at a time. If that node fails, the service automatically starts on another node.
A singleton deployment is an application that will only be deployed on one node in a cluster at a time. If that node fails, the application will automatically deploy on another node.
Singleton deployments in WF10 function much as they did in AS 5.x, with a few differences:
- Instead of scanning for deployments in a special folder (e.g. deploy-hasingleton directory in AS5), singleton deployments in WF10 are identified by a deployment descriptor. Alternatively, this descriptor can be overlaid onto an existing application archive - and applied via the management API.
- WF10 allows each deployment to specify a specific singleton policy. While this was technically possible in AS5, it would require manual configuration of distinct deployment directories.
- Singleton policies in WF10 can require a quorum to prevent multiple servers from deploying a given application in the event of a network partition. This was not possible in AS5.
- Failover of the primary provider of an application to a backup is significantly faster in WF10.
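To illustrate the descriptor mentioned above, a sketch of what it might look like (the policy name "my-policy" is an assumption for illustration; the schema is the WF10 singleton-deployment namespace):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- META-INF/singleton-deployment.xml (WEB-INF for a war).
     The optional policy attribute references a singleton policy
     defined in the singleton subsystem; omit it to use the default. -->
<singleton-deployment xmlns="urn:jboss:singleton-deployment:1.0"
                      policy="my-policy"/>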
I was not clear about where to put the deployment descriptor in order to enable HA singleton, but I got that worked out. Thanks.
I have another question:
For an existing running environment that is set up as an HA singleton (two instances, with node1 active and node2 passive): is there any way to make node1 passive and node2 active directly, without shutting down node1?
The reason I want this capability is a scenario where messages local to node1 are normally accessible through the clustered queue, but once node1 is shut down they are no longer visible to the clustered queue and not accessible to node2. Node1 would have to be restarted for its local messages to become part of the clustered queue again so that node2 could process them.
If there were a way to simply make node1 passive without shutting it down, I would expect the messages to still be accessible to node2, and node2 would keep processing until the messages local to node1 were completely drained. Then I could shut down node1 and do some maintenance work.
Additional question is:
How do we check whether all the messages on node1 have been processed in this case?
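One way to check might be to read the queue's runtime counters on node1 via jboss-cli (a sketch; it assumes the WF10 messaging-activemq subsystem, the default server name, and a hypothetical queue named ExampleQueue):

```shell
# Messages currently sitting in the queue on this node
/subsystem=messaging-activemq/server=default/jms-queue=ExampleQueue:read-attribute(name=message-count)

# Messages delivered to consumers but not yet acknowledged
/subsystem=messaging-activemq/server=default/jms-queue=ExampleQueue:read-attribute(name=delivering-count)
```

When both counters reach zero, the local messages should have been fully processed.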
The singleton election policy controls which node is elected as the singleton provider of the service. Unfortunately, elections are only triggered when the cluster topology changes. You can, however, build your singleton service such that it depends on some resource that you can add/remove from the management console, e.g. a DataSource. When you remove the data source on the active node, this causes the singleton to stop and forces a new election.
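As a sketch of that idea (the data source name ExampleDS and the controller address are assumptions), the jboss-cli operations on the active node might look like this:

```shell
# Connect to the currently active node (address is an assumption)
jboss-cli.sh --connect --controller=node1:9990

# Removing the data source the singleton service depends on
# stops the service on this node and triggers a new election
/subsystem=datasources/data-source=ExampleDS:remove()

# Once the other node has become the provider, the data source
# can be added back so this node is eligible again
```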
My previous experience with shutting down the master node (without switching the master node first) was that messages originally sent to the master node were not available to the new master node until the old master node was restarted. That was the motivation for trying to switch the master node (instead of shutting it down), so that the residual messages on the newly passive node could be processed by the new master node.
The approach is to set the "name-preferences" attribute of the election policy so that a specific node is preferred as master. The script is executed on all available nodes. This approach seems to work.
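For reference, a sketch of what that might look like via jboss-cli (the policy name "default" and the node names are assumptions; in WF10 the simple election policy lives under the singleton subsystem):

```shell
# Prefer node2 over node1 as the singleton provider
# (run against each server; a reload may be required to take effect)
/subsystem=singleton/singleton-policy=default/election-policy=simple:write-attribute(name=name-preferences,value=[node2,node1])
```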
In my scenario there are two servers/nodes, with node#1 being master and node#2 being passive, and I sent a number of JMS messages to node#1. After a few seconds, I switched the master node to node#2 while node#1 was still processing the messages through an MDB. Node#1 was not shut down, but it became passive, and node#2 picked up the job and started to process the JMS messages.
Please note that in a production environment I will have no idea whether the messages sent to node#1 have been completely processed, since the new master node (node#2) can also receive and process new messages.
For the test, the only thing I could do was to shut down node#1 (the old master node, now passive) while node#2 was still processing messages, and node#2 continued to process them. It seems to me that the messages sent to node#1 were made available to node#2 at the time the master node was switched.
This is all good. However, what is the minimum time I need to wait before the messages from node#1 are made available to the clustered queue?
Not sure. That's a question for the messaging team.