Sorry, for some reason parts of my message are not getting posted, so I am trying to post the rest of this.
I hope you don't mind an email. I read your article on the master node stuff, and the JBoss 3 clustering article from July 2002. At the bottom you indicate that the JBoss 3.2.1 release does NOT have these services. Can you explain specifically what you meant by this?
Also, I just want to be clear: in the singleton pattern, if you have a 3-node partition, one node is the master and the other two are standbys, ready to become master should the current master fail, correct? They still get used as load-balanced nodes and don't just sit idle doing nothing, correct?
Is there any advantage to the singleton setup you describe, as opposed to just putting 3 nodes in a partition using the default cluster setup that ships with JBoss?
While we are on this subject, I do have a question regarding farming. I found others having the same problem I was: a number of exceptions were thrown when using the ./farm dir, specifically that oracle-ds.xml can't be found, etc. I see Sacha posted saying he has fixed this, and if I am reading the bug report correctly it is closed, and there is now a 3.2.2beta in CVS at SourceForge. I have yet to try it.

However, from what I read in the docs, if I deploy my .ear in the ./deploy folder of /all, does it not get picked up and deployed on the other nodes? I understand that this is what farming does, but I am not sure why I read, I believe in the July 2002 doc, that it is done in the /deploy dir. My best guess is that the article preceded the idea of a ./farm dir?

Anyway, what exactly should go into the ./farm dir? Just the .ear? Or should I also be placing the oracle-ds.xml, mail-service.xml, and so forth that I want on all the servers? Will mail-service.xml, login-config.xml, and so on get copied over and used properly? Our login-config.xml goes in the /conf dir, so I am guessing I have to set up each and every JBoss /all dir the same and only farm the .ear file. But it would be great if modified service .xml files could be farmed as well. If this could be explained a little more, that would be great.
Lastly, are any further documents in the works regarding 3.2.2 and possibly 4.0 clustering?
Regarding the singleton service:
The master node is the one that has the startSingleton() method invoked.
There are no further presumptions regarding the work that the master or slave nodes do.
Since they are all MBeans deployed independently, they can decide how to behave based on whether each individual MBean is master or slave.
Load balancing is not part of the singleton service's responsibilities.
For example, if you had a lot of small tasks that can be distributed, you would use the master/slave service to ensure that only one node distributes the load, and then have the master node use JBoss RMI to invoke the individual task handlers. Since JBoss RMI is load balanced, it will ensure that the work is fairly distributed among all nodes.
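A minimal sketch of the startSingleton()/stopSingleton() contract described above. The interface and class names here are hypothetical, for illustration only; the actual JBoss MBean API differs, and in a real deployment the partition, not your code, invokes the callbacks on election:

```java
// Hypothetical rendering of the singleton-service callbacks; the real
// JBoss 3.x MBean interface is not reproduced here.
interface HASingleton {
    void startSingleton(); // invoked on the node elected master
    void stopSingleton();  // invoked when mastership moves to another node
}

class TaskDistributor implements HASingleton {
    private volatile boolean master = false;

    public void startSingleton() {
        master = true;
        // Only the master hands out work. If it does so through
        // load-balanced RMI proxies, the invocations themselves are
        // still spread fairly across all nodes in the partition.
    }

    public void stopSingleton() {
        master = false;
    }

    public boolean isMaster() { return master; }
}

public class SingletonDemo {
    public static void main(String[] args) {
        TaskDistributor node = new TaskDistributor();
        System.out.println(node.isMaster()); // false: not yet elected
        node.startSingleton();               // partition elects this node
        System.out.println(node.isMaster()); // true: now the master
    }
}
```

The slaves are ordinary, fully functional nodes; mastership only gates the one activity that must not run twice.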
Hope this helps.
I will let the field experts cover the rest of your questions.
Thanks, Ivelin, for the reply. I apologize, but I am a bit confused by something you brought up. You mention the master determines how to distribute the load, I assume to other servers. Is this correct? If so, this has me confused, in that it seems quite the opposite of what I considered a cluster. I was under the impression that our Swing client, by setting 2 (or 3) JNDI properties, would automatically load balance between them in round-robin fashion. In fact, in my other post later on, we got this working. We have two test servers, identical in JBoss setup, with our application .jar deployed. They both use the oracle-ds.xml service, and from what we were able to see via the jmx-console, both are deploying, pooling connections, etc. The only caveat is that we are not seeing the round robin occur: we do see the client hit both servers, but then it seems to "stick" to one (so far the one with the highest IP address, although I am not sure whether that has any bearing on why it happens this way). If we take that server down, the next request gets bounced over to the one remaining server. Evidently, through the JavaGroups multicast machinery, each server notices the other is down (we pulled out the network cable rather than shutting one down).
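For what it's worth, the client-side setup being described is usually just a comma-separated provider URL in the JNDI properties. A sketch, assuming hypothetical host names server1/server2 and the conventional HA-JNDI port 1100 (your ports and factory class may differ depending on configuration):

```java
import java.util.Properties;
import javax.naming.Context;

public class ClusterLookup {

    // Builds the JNDI environment for a clustered lookup.
    // server1/server2 are placeholder host names; 1100 is the
    // customary HA-JNDI port, not necessarily yours.
    static Properties clusterProps() {
        Properties p = new Properties();
        p.put(Context.INITIAL_CONTEXT_FACTORY,
              "org.jnp.interfaces.NamingContextFactory");
        // Comma-separated list: the client tries each node in turn.
        p.put(Context.PROVIDER_URL, "server1:1100,server2:1100");
        p.put(Context.URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces");
        return p;
    }

    public static void main(String[] args) {
        Properties p = clusterProps();
        System.out.println(p.getProperty(Context.PROVIDER_URL));
        // Against a live partition you would then do:
        // Context ctx = new javax.naming.InitialContext(p);
        // Object home = ctx.lookup("MyBean");
    }
}
```

The actual lookup is commented out because it needs a running partition; the proxy the lookup returns is what carries the load-balancing behavior.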
So I guess I am not sure then whether we would even want to use the singleton. From my experience, the best setup is a partition of 3 nodes sized so that 2 of them can carry the maximum load: with all 3 running, each sits at around 65% or so, and should one fail, the remaining two can still handle the full 100% load. If more load is needed, add another partition with 3 more nodes. Because we are completely stateless and not web based (Swing client direct to EJB through RMI), we simply need to balance the load between two or three nodes per partition to handle more load, that is all. Even if we were stateful (HttpSession or EJB session state), we'd still balance load in the same manner, but add a lot more memory than normally needed on each server for in-memory state replication.
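As a quick sanity check on the sizing arithmetic above, assuming peak load is defined as exactly what two full nodes can carry:

```java
public class CapacityCheck {
    public static void main(String[] args) {
        double peakLoad = 2.0;  // peak load, in units of one node's capacity
        int nodes = 3;

        double normal = peakLoad / nodes;            // all three nodes up
        double afterFailure = peakLoad / (nodes - 1); // one node down

        System.out.printf("normal: %.0f%%, after one failure: %.0f%%%n",
                normal * 100, afterFailure * 100);
        // prints: normal: 67%, after one failure: 100%
    }
}
```

So the "around 65% or so" figure is right: each node idles near two-thirds utilization, leaving exactly enough headroom for the surviving pair to absorb a single failure.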
Does this sound about right, or am I completely missing the boat here?
Sounds like your needs are completely satisfied by the smart RMI proxies, which do the load balancing and failover.
You do not need the singleton service.
The JBoss Clustering book describes in depth the behavior of the RMI proxy. Hopefully Sacha will be able to respond to your other concerns.