JBossMQ is not clustered, which means that a single node in the cluster has to host the JBossMQ server.
The JBoss team is currently working on a new version that might be clustered (Ivelein has made a port of Joram - the JMS implementation used in JOnAS - to JBoss).
I appreciate your answer, but perhaps I should be more specific.
We use multiple nodes in a JBoss cluster. Each node runs the same set of queues and MDBs, and HAJNDI provides round-robin load balancing across the nodes. Obviously there is no fail-over support, so it is not truly a JBossMQ cluster, but it provides what we need.
Each node uses its own persistent store for the queues: a netboot-style start stores the Hypersonic database in a location specific to that node.
We would like to use a more robust persistent store. I am quite sure it would work fine to run MySQL on each node in the cluster. However, we have a large Oracle installation, and I am wondering what would happen if all nodes were pointed at the same database instead of each having its own. Since the JMS SQL statement that creates the transaction id performs a select max(txid), I would think this is possible. Has anyone tried this, or can anyone predict the outcome? Thanks.
Ehm, if one of your cluster nodes goes down, then the messages stored on that node do not get consumed and processed ...
Well, why don't you use a "single" MySQL with a master-slave architecture?
With the design of this system, lost messages are acceptable. We have small client programs that can recreate a message and insert it into the proper queue. The messages themselves do not contain any substantial data; they are mostly triggers that start processing at different points in a computational pipeline. All data is stored on an NFS share while it is processed.
What we cannot afford to do is stop processing entirely. We are processing roughly 1 TB of data per week, and getting behind is not a good situation, as it can take long periods of time to catch up. Setting up a single JBossMQ server with multiple processing nodes listening to its queues would create a single point of failure that could potentially bring the whole system down. This, combined with the fact that we can live with lost messages (although that is not desirable), pushed us toward multiple JBossMQ servers in a load-balancing arrangement via HAJNDI.
Now, my goal is to have a single configuration for all nodes, with messages stored preferably in an Oracle database. I can certainly do this now by giving each JBossMQ instance its own Oracle user/schema, etc., but then I need a different configuration per node. At the moment this problem is avoided because each node uses its own Hypersonic database. I am sure I could also run a MySQL instance on each node with the configuration pointing at localhost.
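For concreteness, a per-node Oracle datasource would look something along these lines (a JBoss 3.x-style *-ds.xml fragment; the host, SID, and credentials are placeholders, and the user-name is the part that would have to differ per node under the schema-per-node approach):

```xml
<datasources>
  <local-tx-datasource>
    <jndi-name>OracleDS</jndi-name>
    <!-- host/SID are placeholders -->
    <connection-url>jdbc:oracle:thin:@dbhost:1521:SID</connection-url>
    <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
    <!-- this is what varies per node if each instance gets its own schema -->
    <user-name>jbossmq_node1</user-name>
    <password>secret</password>
  </local-tx-datasource>
</datasources>
```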
What I do not know is whether I can use a single set of tables in Oracle that all JBossMQ nodes would write messages to. Once I do that, I believe the notion of which node is handling a given message is lost, which I suspect would cause problems.
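One concrete way the shared tables could go wrong: if id allocation is done with a select max(txid)-style read, two nodes that both read before either writes will pick the same id. A hypothetical sketch (Python, with an in-memory SQLite table standing in for the shared Oracle schema; the table and column names are made up, not JBossMQ's actual ones):

```python
import sqlite3

# Stand-in for the shared transaction table; names are illustrative only.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE jms_transactions (txid INTEGER PRIMARY KEY)")
db.execute("INSERT INTO jms_transactions VALUES (1)")
db.commit()

def next_txid(conn):
    """Allocate an id the way a select max(txid) scheme does."""
    (current,) = conn.execute(
        "SELECT MAX(txid) FROM jms_transactions").fetchone()
    return current + 1

# Two nodes each read MAX(txid) before either inserts its new row...
node_a = next_txid(db)
node_b = next_txid(db)
print(node_a, node_b)  # both are 2

# ...so the second insert of the "new" id collides.
db.execute("INSERT INTO jms_transactions VALUES (?)", (node_a,))
try:
    db.execute("INSERT INTO jms_transactions VALUES (?)", (node_b,))
    collided = False
except sqlite3.IntegrityError:
    collided = True
print("collision:", collided)  # collision: True
```

Whether this bites in practice depends on whether the allocation and insert happen atomically (e.g. under a lock or in one serialized transaction) in the actual persistence manager.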
I do not understand your master-slaves suggestion. Are you talking about database replication? Or are you talking about a single JBossMQ instance backed by a single MySQL database, with the slave nodes listening to the queues? If it is the latter, I think I have explained above why I do not want to do that.
Thanks again for your help.
Yes, I was talking about replication, for the case where the MySQL master goes down.
You can certainly install a MySQL box on each cluster node and use a DS pointing to localhost.
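That localhost approach has the nice property that the datasource file is identical on every node. A rough sketch (JBoss 3.x-style *-ds.xml; database name and credentials are placeholders):

```xml
<datasources>
  <local-tx-datasource>
    <jndi-name>MySqlDS</jndi-name>
    <!-- localhost, so this file can be byte-identical on all nodes -->
    <connection-url>jdbc:mysql://localhost:3306/jbossmq</connection-url>
    <driver-class>com.mysql.jdbc.Driver</driver-class>
    <user-name>jbossmq</user-name>
    <password>secret</password>
  </local-tx-datasource>
</datasources>
```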