2 Replies Latest reply on Oct 28, 2017 4:15 PM by cd cd

    Cluster startup with Partition

    cd cd Newbie

      Hello:

       

      I just recently enabled partition handling via '&lt;partition-handling enabled="true"/&gt;' in my distributed cache (cross-site configuration, v8.2.8). It is also configured with owners="2". When I start my domain controller, followed by a single Infinispan server (one JVM server), I am able to successfully create a cache entry. Is that how it should work? I thought it wouldn't be able to create an entry given there is only a single instance of the server... or is it that, because this is not a 'split-brain' case, it allows creating the entry?

       

      My cache definition:

       

                          <distributed-cache name="default" mode="ASYNC" segments="20" owners="2" remote-timeout="30000" start="EAGER">

                              <locking acquire-timeout="30000" concurrency-level="1000" striping="false"/>

                              <transaction mode="NONE"/>

                              <backups>

                                <backup site="PROD1" strategy="ASYNC" failure-policy="WARN" enabled="true" />

                              </backups>

                              <partition-handling enabled="true"/>

                          </distributed-cache>

       

       

      I also spun up 6 nodes (3 per site with cross replication). I would put a cache entry, then gracefully shut down a single node, then perform a get of the entry (using the HotRod client), until I was left with one node, where I was still able to put and get entries. Is that the correct behaviour?

       

       

      Kind Regards,

      cd

        • 1. Re: Cluster startup with Partition
          Ryan Emerson Newbie

          Hi! Partition handling does not affect the availability of a cache on startup when fewer than num-owners nodes exist. Instead, you should use the transport property "initial-cluster-size" to ensure that your cache does not become available until at least "initial-cluster-size" nodes exist within your cluster, e.g.:

           

                              <distributed-cache name="default" mode="ASYNC" segments="20" owners="2" remote-timeout="30000" start="EAGER">

                                  <transport initial-cluster-size="2"/>

                                  <locking acquire-timeout="30000" concurrency-level="1000" striping="false"/>

                                  <transaction mode="NONE"/>

                                  <backups>

                                    <backup site="PROD1" strategy="ASYNC" failure-policy="WARN" enabled="true" />

                                  </backups>

                                  <partition-handling enabled="true"/>

                              </distributed-cache>

           

          I also spun up 6 nodes (3 per site with cross replication). I would put a cache entry, then gracefully shut down a single node, then perform a get of the entry (using the HotRod client), until I was left with one node, where I was still able to put and get entries. Is that the correct behaviour?

          Yes, this is the correct behaviour, as a rebalance is triggered each time a node gracefully shuts down. So the final single node should always contain the previously put values, assuming this node has the capacity to hold the entire contents of the cache and that sufficient time elapsed between nodes leaving for each rebalance to complete.
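          To see why the last node still holds every entry, here is a toy, self-contained sketch of the segment-based rebalance described above. It is not the Infinispan API or its real consistent hash; all class and method names are illustrative. Each of 20 segments is assigned to 2 owning nodes; when a node leaves gracefully, ownership is recomputed and each segment's data is transferred to its new owners before the leaver's copies disappear:

```java
import java.util.*;

// Toy model of segment-based distribution with owners="2" and segments="20".
// Illustrative only: names and the assignment function are assumptions, not
// Infinispan internals. It models the claim that graceful shutdowns plus
// completed rebalances leave the final node with the full cache contents.
public class RebalanceSketch {
    static final int SEGMENTS = 20, OWNERS = 2;

    // Compute segment -> owning nodes for the current cluster membership.
    static Map<Integer, List<String>> assign(List<String> nodes) {
        Map<Integer, List<String>> ch = new HashMap<>();
        for (int s = 0; s < SEGMENTS; s++) {
            List<String> owners = new ArrayList<>();
            for (int i = 0; i < Math.min(OWNERS, nodes.size()); i++)
                owners.add(nodes.get((s + i) % nodes.size()));
            ch.put(s, owners);
        }
        return ch;
    }

    public static void main(String[] args) {
        List<String> nodes = new ArrayList<>(List.of("n1","n2","n3","n4","n5","n6"));
        // node -> (segment -> keys held for that segment)
        Map<String, Map<Integer, Set<String>>> store = new HashMap<>();
        Map<Integer, List<String>> ch = assign(nodes);

        // Put 100 entries; each key maps to a segment and is stored on every owner.
        for (int k = 0; k < 100; k++) {
            int seg = k % SEGMENTS;
            for (String owner : ch.get(seg))
                store.computeIfAbsent(owner, n -> new HashMap<>())
                     .computeIfAbsent(seg, s -> new HashSet<>()).add("key" + k);
        }

        // Gracefully shut nodes down one at a time, rebalancing after each leave.
        while (nodes.size() > 1) {
            String leaver = nodes.remove(nodes.size() - 1);
            Map<Integer, List<String>> next = assign(nodes);
            for (int s = 0; s < SEGMENTS; s++) {
                // State transfer: new owners fetch the segment from the old owners
                // (the leaver still has its data until the transfer completes).
                Set<String> data = new HashSet<>();
                for (String old : ch.get(s)) {
                    Map<Integer, Set<String>> held = store.get(old);
                    if (held != null && held.containsKey(s)) data.addAll(held.get(s));
                }
                for (String owner : next.get(s))
                    store.computeIfAbsent(owner, n -> new HashMap<>())
                         .computeIfAbsent(s, x -> new HashSet<>()).addAll(data);
            }
            store.remove(leaver); // the leaver's copies are gone after shutdown
            ch = next;
        }

        // The single surviving node now owns every segment and every entry.
        int held = store.get(nodes.get(0)).values().stream().mapToInt(Set::size).sum();
        System.out.println("entries on last node: " + held); // prints 100
    }
}
```

          The sketch glosses over timing: in the model, transfer always finishes before the next node leaves, which matches the caveat above that sufficient time must elapse between departures for each rebalance to complete.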

          • 2. Re: Cluster startup with Partition
            cd cd Newbie

            Thanks Ryan, this is just what I needed.

             

            cd.