-
1. Re: Infinispan cache is not updated properly
rvansa Dec 12, 2014 2:52 AM (in response to sakthiprabhu)
Infinispan 6 does not support node reconnection; this scenario is known as 'split brain', and handling it is called partition handling. Upgrade to Infinispan 7 and enable partition handling in the configuration.
See http://blog.infinispan.org/2014/08/partitioned-clusters-tell-no-lies.html
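For reference, enabling it declaratively in an Infinispan 7 library-mode configuration file might look like the sketch below. This is an illustration, not a drop-in config: the cache name, transport stack, and file layout are placeholders.

```xml
<!-- Sketch of an Infinispan 7 library-mode configuration (infinispan.xml).
     The cache name "default" and stack "udp" are placeholders. -->
<infinispan xmlns="urn:infinispan:config:7.0">
    <cache-container default-cache="default">
        <transport stack="udp"/>
        <distributed-cache name="default" mode="SYNC">
            <!-- Deny writes while the cluster is split instead of
                 serving possibly stale/divergent data -->
            <partition-handling enabled="true"/>
        </distributed-cache>
    </cache-container>
</infinispan>
```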
-
2. Re: Infinispan cache is not updated properly
sakthiprabhu Dec 15, 2014 9:23 AM (in response to rvansa)
Thanks Radim.
Does Infinispan 7 work with WildFly 8.1?
-
3. Re: Infinispan cache is not updated properly
sakthiprabhu Dec 16, 2014 1:19 AM (in response to rvansa)
Thanks Radim.
Here is my set-up:
NODE 1 (MASTER)
----------------------------
Server Cache : NODE01
Server Nodes : NODE01/server
EJB Cache : NODE01
EJB Nodes : NODE01/ejb
Later, I add NODE 2 to NODE 1:
NODE 1 (MASTER)
-----------------------------
Received new cluster view: [NODE01/server|1] (2) [NODE01/server, NODE02/server]
Server Cache : NODE01, NODE02
Server Nodes : NODE01/server, NODE02/server
EJB Cache : NODE01, NODE02
EJB Nodes : NODE01/ejb, NODE02/ejb
NODE 2
------------
Server Cache : NODE01, NODE02
Server Nodes : NODE01/server, NODE02/server
EJB Cache : NODE01, NODE02
EJB Nodes : NODE01/ejb, NODE02/ejb
Then I disconnect NODE 2 from the cluster:
Now each node acts as a separate singleton node.
NODE 1 (MASTER)
----------------------------
Received new cluster view: [NODE01/server|2] (1) [NODE01/server]
Server Cache : NODE01
Server Nodes : NODE01/server
EJB Cache : NODE01
EJB Nodes : NODE01/ejb
NODE 2 (MASTER)
----------------------------
Received new cluster view: [NODE02/server|2] (1) [NODE02/server]
Server Cache : NODE02
Server Nodes : NODE02/server
EJB Cache : NODE02
EJB Nodes : NODE02/ejb
Later, I connect NODE 2 back to NODE 1:
Both nodes receive a MERGE view, but the caches do not update accordingly.
NODE 1 (MASTER)
----------------------------
Received new, MERGED cluster view: MergeView::[NODE01/ejb|3] (2) [NODE01/ejb, NODE02/ejb], 2 subgroups: [NODE01/ejb|2] (1) [NODE01/ejb], [NODE02/ejb|2] (1) [NODE02/ejb]
Server Cache : NODE01
Server Nodes : NODE01/server, NODE02/server
EJB Cache : NODE01
EJB Nodes : NODE01/ejb, NODE02/ejb
NODE 2 (MASTER)
-----------------------------
Received new, MERGED cluster view: MergeView::[NODE01/ejb|3] (2) [NODE01/ejb, NODE02/ejb], 2 subgroups: [NODE01/ejb|2] (1) [NODE01/ejb], [NODE02/ejb|2] (1) [NODE02/ejb]
Server Cache : NODE02
Server Nodes : NODE01/server, NODE02/server
EJB Cache : NODE02
EJB Nodes : NODE01/ejb, NODE02/ejb
I found that NODE 2 receives the view as [Server2, Server1] even though NODE 1 is the MASTER, and as per MERGE3, Server2 (NODE 2, the first node of the view TreeSet) becomes the merge leader.
I am new to Infinispan and unable to track down the issue. Please guide me if I am wrong.
Thanks,
Sathya Prabhu R.
-
4. Re: Infinispan cache is not updated properly
rvansa Dec 16, 2014 4:53 AM (in response to sakthiprabhu)
I am not sure what you are asking for. As I've said, Infinispan 6 does not support the split-brain scenario; you cannot solve your problem with that version. Infinispan 7 should work inside WildFly when used in library mode (embedded into your WAR), but I am not sure how that would work for session replication. A quick Google search turned up http://stackoverflow.com/questions/27031481/use-infinispan-7-for-wildfly-8-1-0
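For anyone trying the library-mode route, the idea is to bundle the Infinispan 7 jars inside the WAR instead of relying on the server's own modules. A hedged sketch of the Maven dependency is below; the artifact coordinates and version are illustrative, so check the Infinispan downloads page for the release you actually need.

```xml
<!-- Hypothetical pom.xml fragment: embed Infinispan 7 in the WAR
     (library mode). Version 7.0.2.Final is illustrative. -->
<dependency>
    <groupId>org.infinispan</groupId>
    <artifactId>infinispan-core</artifactId>
    <version>7.0.2.Final</version>
</dependency>
```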
-
5. Re: Infinispan cache is not updated properly
sakthiprabhu Dec 16, 2014 9:53 AM (in response to rvansa)
Radim,
The Infinispan cache is not getting updated according to the MERGE view. I am facing the same issue in Infinispan 7 too.
-
6. Re: Infinispan cache is not updated properly
rvansa Dec 16, 2014 10:06 AM (in response to sakthiprabhu)
The data cannot simply be "merged" (see the CAP theorem); JGroups merges are a different issue.
You need to enable partition handling in your standalone.xml in order to deny any writes to a broken cluster. See the configuration schema for urn:infinispan:server:core:7.0 (found in the server's docs/schema/jboss-infinispan-core_7.0.xsd); there you will find the partition-handling element, which you have to set with enabled="true" in each clustered cache configuration.
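As a sketch of what that looks like inside the Infinispan subsystem of standalone.xml: the cache name "myCache" and the cache container are placeholders for whatever your deployment actually uses.

```xml
<!-- Hypothetical standalone.xml fragment (schema urn:infinispan:server:core:7.0).
     Repeat partition-handling for each clustered cache you rely on. -->
<distributed-cache name="myCache" mode="SYNC">
    <!-- With partition handling enabled, a minority partition refuses
         writes instead of diverging and "forgetting" them on merge -->
    <partition-handling enabled="true"/>
</distributed-cache>
```

With this enabled, a disconnected node degrades to read-only (or unavailable) rather than silently forking its cache state, which is why the merged views in the earlier posts were not accompanied by merged data.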
You need to enable partition handling in your standalone.xml in order to deny any writes to a broken cluster. See configuration schema for urn:infinispan:server:core:7.0 (found in server's docs/schema/jboss-infinispan-core_7.0.xsd), here you will find element partition-handling which you have to set with enabled=true in each clustered cache configuration.