1 Reply Latest reply on Mar 4, 2016 10:24 AM by galder.zamarreno

    Configuring Remerge of simple master slave replication cache after split brain

    eliot.clingman

  This simplified topology might make my question simpler (and more focused) than the one I asked yesterday.

       

  I have two embedded nodes set up for async replication. Let's call them the "cloud node", located in the cloud, and the "customer slave node", located at the customer's physical location. There is only one cache, named "candy:1", where 1 is the customer id. The "customer slave node" only supports reads... the application owning the "customer slave node" only gets objects from the cache; it never puts one. The "cloud node" is able to keep its data totally accurate. By the way, both the "cloud node" and the "customer slave node" have single file store persistence turned on.
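  For context, here is roughly how I am configuring each node (a sketch only; the cluster name and store location are made-up placeholders, and I am assuming the embedded `ConfigurationBuilder` API):

```java
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class CandyCacheSetup {
    public static void main(String[] args) {
        // Clustered cache manager; "candy-cluster" is a hypothetical cluster name.
        GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
        global.transport().clusterName("candy-cluster");

        ConfigurationBuilder cfg = new ConfigurationBuilder();
        // Async replication between the two embedded nodes.
        cfg.clustering().cacheMode(CacheMode.REPL_ASYNC);
        // Single file store persistence on each node; path is a placeholder.
        cfg.persistence().addSingleFileStore()
           .location("/var/data/infinispan");

        DefaultCacheManager cm = new DefaultCacheManager(global.build());
        cm.defineConfiguration("candy:1", cfg.build());
        // The application then does cm.getCache("candy:1") — reads only,
        // on the customer slave node.
    }
}
```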

       

  Because the "customer slave node" physically resides in the customer's building, network partitions (split brain) need to be handled. In the event of a split brain, the "customer slave node" will gradually become stale, but that is not a disaster... the customer's application will continue to read from that cache. On the other hand, the "cloud node" will stay totally up to date.

       

  How do I configure things so that when the partitions re-merge, Infinispan always elects the "cloud node" as the coordinator and its side as the "winning partition", and thus copies the "cloud node" state over to the "customer slave node"?
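  In case it helps frame an answer: I understand later Infinispan releases expose a merge-policy hook on partition handling (`EntryMergePolicy`). A sketch of what I am imagining — note the `Candy` value type, its version stamp, and the policy class are my own assumptions, not anything from Infinispan itself. The idea is that since only the cloud node ever writes, a cloud-assigned version number on each value would always make the cloud-side entry win the merge:

```java
import java.util.List;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.conflict.EntryMergePolicy;
import org.infinispan.container.entries.CacheEntry;
import org.infinispan.partitionhandling.PartitionHandling;

// Hypothetical value type: the cloud node stamps every write with a
// monotonically increasing version number.
record Candy(long version, String payload) {}

// Custom merge policy: after the partitions re-merge, keep the entry with
// the highest cloud-assigned version. Because only the cloud node writes,
// its entries carry the newest versions and therefore always win.
class CloudWinsPolicy implements EntryMergePolicy<String, Candy> {
    @Override
    public CacheEntry<String, Candy> merge(CacheEntry<String, Candy> preferredEntry,
                                           List<CacheEntry<String, Candy>> otherEntries) {
        CacheEntry<String, Candy> winner = preferredEntry;
        for (CacheEntry<String, Candy> other : otherEntries) {
            boolean otherHasValue = other != null && other.getValue() != null;
            boolean winnerHasValue = winner != null && winner.getValue() != null;
            if (otherHasValue
                    && (!winnerHasValue
                        || other.getValue().version() > winner.getValue().version())) {
                winner = other;
            }
        }
        return winner;
    }
}

class PartitionConfig {
    static ConfigurationBuilder candyCacheConfig() {
        ConfigurationBuilder cfg = new ConfigurationBuilder();
        cfg.clustering().cacheMode(CacheMode.REPL_ASYNC)
           .partitionHandling()
              // Both sides keep serving during the split (the slave keeps
              // answering reads, as described above).
              .whenSplit(PartitionHandling.ALLOW_READ_WRITES)
              // On re-merge, resolve each conflicting key with the policy.
              .mergePolicy(new CloudWinsPolicy());
        return cfg;
    }
}
```

  If that hook isn't available in my version, is there another supported way to force the cloud side to be the winning partition?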