Have you got a copy of the clustering documentation?
Also in what way does the client need the list of servers?
If you are just accessing session beans, the client will automatically have a list of the servers to use once the first server in the cluster has been contacted.
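A rough sketch of what that bootstrap looks like on the client side (assumptions: the standard JBoss naming factory classes; "node1" is a placeholder host and 1100 the usual HA-JNDI port):

```java
import java.util.Hashtable;
import javax.naming.Context;

public class HaJndiEnv {

    // JNDI environment for HA-JNDI with one known bootstrap node.
    // node is e.g. "node1:1100" -- placeholder host, usual HA-JNDI port.
    static Hashtable<String, String> bootstrapEnv(String node) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
        env.put(Context.URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces");
        env.put(Context.PROVIDER_URL, node);
        return env;
    }

    public static void main(String[] args) {
        // Pass this table to new InitialContext(env). After the first lookup
        // the HA-JNDI smart stub carries the full cluster view and fails over
        // on its own, so the bootstrap node no longer matters.
        Hashtable<String, String> env = bootstrapEnv("node1:1100");
        System.out.println(env.get(Context.PROVIDER_URL));
    }
}
```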
I bought the clustering documentation yesterday and found the "HA-JNDI" client auto-discovery topic, but I'm having trouble implementing it.
I need the list of nodes without any configuration, and I want to be able to stop and start as many nodes as I like without having to restart the client.
I know I can get the server list from the first server, but the problem is that I don't want the client to have to know a first server.
Thanks for your help.
What happens when you don't provide an initial server in 'java.naming.provider.url'? - I have used the feature in the past without any problems to automatically find a three-node cluster that the client knows nothing about.
I found the problem I was having with auto-discovery:
the jbossha jar was not on my classpath, and there were no errors pointing to it.
So now it works fine, thanks a lot.
I see you found the solution to your problem.
Could you please provide snippets from your cluster-service.xml, where you define the HAJNDI-NamingService, and from your client code where you connect to the cluster via multicast? I think that could be a real help.
Thank you very much!
I use the standard cluster-service.xml from the "all" server configuration, so I didn't have to configure anything.
Here is the snippet to use:
Hashtable env = new Hashtable();
env.put("java.naming.factory.initial", "org.jnp.interfaces.NamingContextFactory");
env.put("java.naming.factory.url.pkgs", "org.jboss.naming:org.jnp.interfaces");
// no java.naming.provider.url, so the client finds the cluster via multicast auto-discovery
Context ctx = new InitialContext(env);
Object obj = ctx.lookup("/Context");
If you want the complete code, I can send it to you by email.
Or if you're having trouble with yours, I can try to debug it.
Thank you very much for your snippet! I have tried it and it seems to work. At least I got a connection a few seconds ago.
I had nearly the same properties, but "java.naming.factory.url.pkgs" was missing. I hadn't found it in the examples.
How did you get the idea to use these properties?
Thank you very much! Now I only have to solve my clustered singleton problem and then I'm happy.
I don't know why, but I'm still (again?) facing problems concerning cluster node auto detection.
I now have the following properties for the InitialContext:
Properties prop = System.getProperties();
I think these are right, because sometimes I get a connection and am able to look up my stateless session bean.
But most of the time it doesn't work. It seems to work more often when the cluster nodes have been freshly started, but I don't know if that is just my imagination.
It would be really great if somebody could help me with this problem. Perhaps someone out of the JGroups team.
By the way, here is the stack trace of the exception when my client doesn't get a connection:
javax.naming.CommunicationException: Receive timed out. Root exception is java.net.SocketTimeoutException: Receive timed out
at java.net.PlainDatagramSocketImpl.receive(Native Method)
Have a nice day,
Try adding that line to your properties.
I searched my documentation for information on what exactly that property does and found nothing.
I just reproduced the same error on my side: it happens when I start my JBoss servers and don't wait until they have finished initializing.
Maybe that can help.
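For completeness, these are the properties such a client usually ends up with; they can also live in a jndi.properties file on the client classpath instead of being set on the System properties (a sketch, assuming the standard JBoss naming factory - leaving out java.naming.provider.url is what enables auto-discovery):

```properties
java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces
# no java.naming.provider.url here -> HA-JNDI falls back to multicast auto-discovery
```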
The SocketTimeoutException only occurs when I have a local instance of JBoss running as a cluster node.
When there are only remote cluster nodes I get the javax.naming.CommunicationException: Failed to connect to server 0.0.0.0:1100.
So it seems as if
1. The multicast works
2. The local instance doesn't respond
3. The remote instances respond with a wrong address.
My local machine runs Windows XP. I have changed the UDP connection settings to loopback="true" and added the bind_addr attribute to point to the right IP address.
That doesn't help because we're *not* using JGroups for the discovery! Try 2 things:
- set BindAddress/RmiBindAddress in the HANamingService MBean in cluster-service.xml and/or
- Start JBoss with --host=<NIC to bind to>
You could also try setting the RMI system property (java.rmi.server.hostname).
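For illustration, the BindAddress change goes into the HANamingService entry in cluster-service.xml, roughly like this (attribute names as in the JBoss 3.2-era file; 192.168.0.10 is a placeholder for the node's real address - check your own file for the exact shape):

```xml
<mbean code="org.jboss.ha.jndi.HANamingService"
       name="jboss:service=HAJNDI">
  <!-- bind HA-JNDI and its RMI stubs to a real NIC instead of 0.0.0.0 -->
  <attribute name="BindAddress">192.168.0.10</attribute>
  <attribute name="RmiBindAddress">192.168.0.10</attribute>
  <attribute name="Port">1100</attribute>
</mbean>
```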
Simple question (from the 'never overlook the obvious' category): your 'remote' server is on the same subnet as your 'local' server, or client? Meaning, there are no network hops between local and remote? And both are using the same mcast_addr and discovery group?
Autodiscovery (in default config) only works if you can UDP/multicast between the nodes, or between client and server.
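One quick sanity check on the client side is to confirm that the configured discovery address is actually a multicast (class D) address; a small sketch, assuming the usual HA-JNDI default group 230.0.0.4:1102 (confirm against your cluster-service.xml):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class DiscoveryCheck {

    // True when addr is a class D (224.0.0.0-239.255.255.255) multicast address.
    static boolean isMulticast(String addr) {
        try {
            return InetAddress.getByName(addr).isMulticastAddress();
        } catch (UnknownHostException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // 230.0.0.4 is the usual HA-JNDI auto-discovery default
        // (an assumption here -- check AutoDiscoveryAddress in cluster-service.xml).
        System.out.println(isMulticast("230.0.0.4"));
    }
}
```

If this prints false for your configured address, auto-discovery can never work, no matter what the network between the nodes looks like.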
Thank you for your tip, but the network configuration is OK.
The connection works in one specific case: the remote machine has a JBoss running and my local JBoss has just been started (no connection has yet been made). Then I am able to connect my client once.
That's why I don't have a clue what the problem might be.
Thanks for any further hints!