I have successfully set up a domain and clustered 2 main server groups.
Feature-wise, it doesn't matter whether you are using domain mode or standalone mode; the difference is only about management.
i. On what port should incoming requests come through? Port 80 or port 10001?
Commonly, you will use port 80 (the HTTP default) for user requests. You need to make sure that the virtual host you have on port 10001 only accepts requests from within the internal network, i.e. from the WildFly servers, and that it's not exposed to the outside world.
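As a rough sketch, the restriction on the management virtual host might look like this in httpd.conf (assuming Apache 2.4 with mod_cluster loaded; `10.0.0.0/8` is a placeholder for your actual internal subnet):

```
# Management virtual host for the mod_cluster protocol (MCMP).
# Only the WildFly servers on the internal network may connect here.
<VirtualHost *:10001>
    EnableMCPMReceive
    <Location />
        Require ip 10.0.0.0/8   # placeholder: your internal subnet
    </Location>
</VirtualHost>
```

Port 80 then carries only the regular user-facing virtual host, with nothing mod_cluster-specific exposed on it.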
ii. How do I configure these 2 load_balancer groups in httpd.conf so that some requests are routed to the first load balancer group and other requests are routed to the second load balancer group? Do I configure separate virtual hosts to achieve this?
Hm, not entirely sure what you mean. For example, if you have 4 nodes composed of 2 clusters of 2 nodes, you would set up one load-balancing group (LBG) per cluster. This way, if a failure happens, the failover will happen within the LBG. Only if there are no servers left in the cluster will a different LBG be used. So everything will be using just one virtual host.
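For illustration, the grouping could be set on the WildFly side roughly like this in the CLI (a sketch only: the profile names and the `lbg-A`/`lbg-B` values are placeholders, and the exact resource path of the modcluster subsystem varies between WildFly versions):

```
# Run inside jboss-cli.sh, connected to the domain controller.
# Each cluster's profile advertises its own load-balancing group:
/profile=ha:/subsystem=modcluster/mod-cluster-config=configuration:write-attribute(name=load-balancing-group,value=lbg-A)
/profile=ha-second:/subsystem=modcluster/mod-cluster-config=configuration:write-attribute(name=load-balancing-group,value=lbg-B)
```

mod_cluster then registers the workers dynamically, so httpd needs no per-group virtual host.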
I really hope someone posts an answer or at least some suggestions this time, as I have left several posts in the last week without anyone replying.
You can point me to those and I can reply on there too.
You've been of great help and I'm actually just happy someone finally replied to my question.
As regards the question posted, your answer is most appropriate.
Though I must say I posted the question virtually a week ago. As a result I've had time to do more research and play around with things a little bit more, so I had figured most of it out already.
However I'm now faced with a different challenge.
I find that once I create a new node for a cluster on a different physical server, I keep getting transaction issues. It seems as though both application instances on the 2 different nodes (on 2 different physical servers) are trying to process the same JMS message at the same time. As a result I get the following error: SynchronizationCallbackCoordinatorNonTrackingImpl:183 - HHH000346: Error during managed flush [Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect)]
This error happens at least once in every 4 JMS messages processed. I'm using an MDB as a message listener for the queue.
Please understand that this was not happening when I ran the application as a standalone deployment without any sort of clustering. I thus feel this is still related to the posted question.
Your advice or recommendations would be appreciated!
Sorry, I can't help with JMS, but HHH looks more like a JPA/Hibernate issue (pessimistic locking and 2 nodes accessing the same data?).
Seems to me this is a concurrency issue (not directly related to the cluster).
Could it be that the JMS message processing changes the same DB record for different messages?
As long as the messages are processed in sequence everything is fine, but if you have more instances, message #1 and message #n are processed in parallel: the DB record can be read by one process, but the other process writes an update before this one does, and this is detected by Hibernate (there are different so-called "optimistic lock" policies).
Does that make sense?
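The race can be sketched in plain Java (a simulation only: the `Row` class, `update` method, and `balance` field are made up for illustration; in Hibernate the same check is done via a `@Version` column):

```java
import java.util.concurrent.atomic.AtomicReference;

// Simulation only: "Row" stands in for an entity with a version column,
// like the one Hibernate maintains for a field annotated with @Version.
class Row {
    final int version;
    final int balance;
    Row(int version, int balance) { this.version = version; this.balance = balance; }
}

public class OptimisticLockDemo {
    // The simulated database record.
    static final AtomicReference<Row> db = new AtomicReference<>(new Row(0, 100));

    // The write succeeds only if the row we read is still the current one,
    // mirroring Hibernate's "UPDATE ... WHERE id = ? AND version = ?" check.
    static boolean update(Row read, int newBalance) {
        return db.compareAndSet(read, new Row(read.version + 1, newBalance));
    }

    public static void main(String[] args) {
        Row a = db.get(); // instance 1 reads the row
        Row b = db.get(); // instance 2 reads the same row in parallel
        System.out.println(update(a, 90)); // true: first writer wins
        System.out.println(update(b, 80)); // false: stale version; this is the
                                           // situation Hibernate reports at flush
    }
}
```

The second writer loses because the version it read is no longer current, which is exactly the failure the HHH000346 flush error reports.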
If you find more information I would recommend creating a new thread within the WildFly forum for this and posting the relevant information.
You are right regarding the optimistic locking in Hibernate. I was using a Hibernate library which had optimistic locking enforced by default, so I couldn't even play around with this after I read some posts on optimistic vs. pessimistic locking in this regard.
I had to refactor my code a bit to be more sensitive to the fact that I had multiple threads running in parallel. I also had to discard my former Hibernate library and switch to the JPA EntityManager with the criteria queries API. Everything seems to be working fine now regarding concurrency!
However I have a new problem with Infinispan. I noticed that once I add more than 2 nodes to any given cluster I get the following exceptions:
1. "Failed to start rebalance for cache dist TimeoutException One of the nodes null timed out" on one of the existing 2 nodes which are working fine
2. "Initial state transfer timed out for cache dist" on the new 3rd node I'm trying to add to the cluster.
Are WildFly clusters limited to just 2 nodes per cluster? Please advise.