Are you running standalone-ha.xml or standalone-full-ha.xml for the backend nodes?
Configuring Undertow to load-balance between two nodes doesn't imply proper cluster replication between those two nodes.
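Also note that the application itself has to opt in to session replication: the web.xml needs the distributable element. A generic sketch (not your actual descriptor):

```xml
<!-- web.xml: the <distributable/> marker tells the container to
     replicate HTTP sessions across the cluster -->
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         version="3.1">
    <distributable/>
</web-app>
```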
I just ran them with domain.bat. Can the distributed deployment work without replication working?
It does indeed appear to be a replication problem: if I run put.jsp directly on the master and then get.jsp on the slave, I also get "null".
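For reference, I'm assuming put.jsp/get.jsp do roughly the following (the attribute name is my guess, not your actual pages):

```jsp
<%-- put.jsp: store a value in the HTTP session --%>
<% session.setAttribute("test", "hello"); %>
stored

<%-- get.jsp: read the value back; this prints "null" when the
     session was not replicated to the node serving the request --%>
<%= session.getAttribute("test") %>
```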
Strange. The web.xml has the distributable tag, and deploying the application from the master console deployed it to the slave as well. I can see the slave and master connecting. And in domain.xml I see:
<server-group name="other-server-group" profile="full-ha">
    <heap size="64m" max-size="512m"/>
    <deployment name="cluster-demo.war" runtime-name="cluster-demo.war"/>
</server-group>
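As a sanity check, you can also read the server group's profile from the CLI; it must be an ha profile (ha or full-ha) for session replication to work at all. A sketch, assuming the group name from your domain.xml and a default install layout:

```shell
# connect to the domain controller and read the profile
# assigned to the server group
bin/jboss-cli.sh --connect \
  --command="/server-group=other-server-group:read-attribute(name=profile)"
```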
To debug, you can run the app we use to test the server: check it out,
deploy it, and when you hit the debug servlet
you should see the members of the cache:
Session ID: eh5srCB4fxlhI3GLwwMthGhFzWoybp3EGe6PqYd-
Current time: Fri May 20 12:04:43 CEST 2016
Node name: node1
Members: [node1, node2]
Physical addresses: 127.0.0.1:55200; 127.0.0.1:55300;
I don't know if that would be useful, but I can't vouch for what the app you are using does.
I have three servers on the master and three on the slave. If I hit any of the servers on the master, I see the three master nodes, and the same on the slave host, but for some reason they are not all visible in the same server group. The output of the debug servlet confirms this: if I use "put.jsp" on any of the master nodes, "get.jsp" on another master node sees the value, but no server on the slave host sees it. However, the server group in the master web console does show all six nodes in the group...
OK, and while accessing the /debug servlet, do you see:
- counter increasing by 1 with every invocation
- session ID stays the same
- members always include all expected members
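If you want to verify the first two points without the /debug servlet, a minimal counter page does the same job (a sketch I'd deploy alongside the app, not part of it):

```jsp
<%-- counter.jsp: increments a session-scoped counter on every hit.
     Behind the load balancer the counter must keep increasing by 1
     and the session ID must stay the same, even when requests land
     on different nodes --%>
<%
    Integer c = (Integer) session.getAttribute("counter");
    c = (c == null) ? 1 : c + 1;
    session.setAttribute("counter", c);
%>
Session ID: <%= session.getId() %><br/>
Counter: <%= c %>
```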