What profile is your server-group configured to use?
I am also facing the same problem.
I followed all the steps in this article
I am running WildFly in domain mode with the master on Windows 7 and the slave on a CentOS 6.5 VM. I am testing with a simple cluster-demo app, but the session is not being replicated. I have configured the full-ha profile on both master and slave.
When the server handling the request is brought down and the request goes to the other server in the cluster, it prints the session attribute as null. I am using HAProxy as the load balancer, hosted on the slave machine.
When the slave starts, I see this message only in the master server log:
[Server:server-three] 11:57:43,607 INFO [stdout] (ServerService Thread Pool -- 71) GMS: address=master:server-three/web, cluster=web, physical address=x.x.x.x:55450
[Host Controller] 11:57:31,455 INFO [org.jboss.as.domain] (Host Controller Service Threads - 28) JBAS010918: Registered remote slave host "slave", WildFly 8.2.0.Final "Tweek"
Please help me understand why session replication does not work. What am I missing?
I'm going to need a lot more details about your configuration.
- Which profile are you using?
- Does your web.xml include <distributable/>?
- Do you see cluster formation messages in your log?
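For reference, a deployment is marked for session replication with the `<distributable/>` element. A minimal web.xml sketch (assuming the Servlet 3.1 schema that WildFly 8 supports) looks like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         version="3.1">
    <!-- Tells the container to replicate HTTP sessions across the cluster -->
    <distributable/>
</web-app>
```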
Thanks for your response. I am running WildFly in domain mode and using the full-ha profile.
Yes, the web.xml includes the <distributable/> tag.
I see a cluster view message in the log like this: Received new cluster view: [node1/web|0] (1) [node1/web]
node1 = master node
node2 = slave node
The above log is from the master's server.log. I see the same in the slave's server.log at startup and at deployment of the cluster-demo app: Received new cluster view: [node2/web|0] (1) [node2/web]
I think this message should show both nodes in the cluster. Not sure what I am missing.
That tells me that your nodes cannot see each other (i.e. neither node has the other in its cluster view).
Are you using the default JGroups subsystem configuration? The default configuration relies on UDP multicasting. For this to work, it is imperative that multicasting is enabled/allowed on your network, and that each node uses the same mcast_addr and mcast_port.
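The mcast_addr and mcast_port come from the socket-binding group that your server group references (full-ha-sockets for the full-ha profile). As a sketch of the WildFly 8 defaults in domain.xml (check your own copy, the values may differ):

```xml
<socket-binding-group name="full-ha-sockets" default-interface="public">
    <!-- ... other bindings ... -->
    <!-- Discovery and transport multicast address/port; must be identical on every node -->
    <socket-binding name="jgroups-mping" port="0"
        multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45700"/>
    <socket-binding name="jgroups-udp" port="55200"
        multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45688"/>
</socket-binding-group>
```

Note that the `-u` startup flag sets the jboss.default.multicast.address system property referenced above, so passing the same `-u` value to both hosts keeps the addresses aligned; the multicast ports come from the bindings themselves.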
Yes, I am using the default JGroups configuration. Multicast is enabled on the network; I verified it using McastReceiverTest and McastSenderTest.
When you say each node should use the same mcast_addr and mcast_port, which configuration is that? I am starting both master and slave with -u set to the same multicast address, the one I used for the McastReceiverTest and McastSenderTest runs.
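Roughly, the test commands look like this (a sketch; the jgroups.jar path and the multicast address/port here are placeholders, not my exact values):

```shell
# On one node, start the receiver first:
java -cp jgroups.jar org.jgroups.tests.McastReceiverTest -mcast_addr 230.0.0.4 -port 45688

# On the other node, start the sender; lines typed here should show up on the receiver:
java -cp jgroups.jar org.jgroups.tests.McastSenderTest -mcast_addr 230.0.0.4 -port 45688
```

Messages typed on the sender did appear on the receiver in both directions.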
Is there a problem because the master runs on Windows and the slave on CentOS?