I have three boxes sitting on two different subnets. Because of this setup I am not using UDP multicast for discovery; instead I am configuring TCP only.
Let me provide a little background:
Node 1 192.168.10.1
Node 2 192.168.10.2
Node 3 192.168.0.1
Service 1 (HA Singleton)
Service 2 (Depends on S1)
Service 3 (Depends on S1 and S2)
For each node I have supplied the appropriate list of IPs in cluster-service.xml.
I generally start the servers in the following fashion: Node 1, wait for it to be fully up, then Node 2, and finally Node 3.
The number of members in the cluster increases gradually as the servers start up, as expected.
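For reference, the TCP-based PartitionConfig I am describing looks roughly like the sketch below. The port numbers, timeouts, and exact protocol attributes are illustrative and depend on the JGroups version shipped with your JBoss release; the key parts are the TCP transport and the TCPPING initial_hosts list naming all three nodes:

```xml
<Config>
  <!-- TCP transport instead of UDP multicast; bind_addr is this node's own IP -->
  <TCP bind_addr="192.168.10.1" start_port="7800"/>
  <!-- Static discovery: every cluster member must be listed here on every node -->
  <TCPPING initial_hosts="192.168.10.1[7800],192.168.10.2[7800],192.168.0.1[7800]"
           port_range="1" timeout="3500"
           num_initial_members="3"
           up_thread="true" down_thread="true"/>
  <MERGE2 min_interval="5000" max_interval="10000"/>
  <FD timeout="2500" max_tries="5" up_thread="true" down_thread="true"/>
  <VERIFY_SUSPECT timeout="1500" up_thread="false" down_thread="false"/>
  <pbcast.NAKACK gc_lag="100" retransmit_timeout="3000"
                 up_thread="true" down_thread="true"/>
  <pbcast.STABLE desired_avg_gossip="20000" up_thread="false" down_thread="false"/>
  <pbcast.GMS join_timeout="5000" join_retry_timeout="2000" shun="false"
              print_local_addr="true" up_thread="true" down_thread="true"/>
  <pbcast.STATE_TRANSFER up_thread="true" down_thread="true"/>
</Config>
```

Note that with TCPPING every node carries the same full member list, only bind_addr differs per box.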
I also have a singleton service (S1) that is present on all nodes. Node 3 starts S1, and together with it a series of farmed services (i.e. deployed via the farm directory) that depend on it (S2 and S3).
This brings me to my first question: why are the farmed services not started on the other nodes?
As I understand it, the normal operating situation would be:
Node 1 -> S2 S3
Node 2 -> S2 S3
Node 3 -> S1 S2 S3
Instead, all I have is Node 3; the other two nodes show services S2 and S3 as DESTROYED.
Now the fun starts:
If I shut down Node 3, S1 should be started on only one of the two remaining nodes. It is, however, started on both of them.
And when I check HASingletonDeployer, none of the nodes reports being the master node.
I would appreciate any help that would shed light on this midsummer night's dilemma.
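In case it helps, the singleton is wired up along the lines of the standard HASingletonController pattern. This is a sketch, not my exact file: the service names (jboss:service=MySingleton) and the start/stop method names are placeholders for my actual S1 service:

```xml
<!-- Controller that ensures the target MBean runs on only one cluster node -->
<mbean code="org.jboss.ha.singleton.HASingletonController"
       name="jboss:service=MySingletonController">
  <!-- The HA partition this controller coordinates through -->
  <depends>jboss:service=DefaultPartition</depends>
  <!-- The actual singleton service (S1); name is a placeholder -->
  <depends optional-attribute-name="TargetName">jboss:service=MySingleton</depends>
  <!-- Invoked on the elected master when it becomes / stops being master -->
  <attribute name="TargetStartMethod">startSingleton</attribute>
  <attribute name="TargetStopMethod">stopSingleton</attribute>
</mbean>
```

With this pattern the controller is deployed on every node, but only the partition's elected master invokes TargetStartMethod, which is why seeing S1 active on two nodes at once suggests the nodes do not agree on cluster membership.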
Thanking you in advance
We have a similar configuration, with singleton schedulers that start on every node.
If the cluster is set up on top of IP multicast, then the singleton works properly.