
    Cache reconciliation

    prese

      Hi Guys,

      I'm trying to solve state reconciliation for a clustered TreeCache.
      I'm using JBoss 3.2.6.

      Bela suggested some time ago that I implement my own TreeCache listener and put the reconciliation logic in the viewAccepted() method.
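
      Roughly, I have something like this in mind (a sketch assuming the JGroups MembershipListener API; doReconciliation() is just a placeholder for my own logic):

      import org.jgroups.Address;
      import org.jgroups.MembershipListener;
      import org.jgroups.MergeView;
      import org.jgroups.View;

      public class ReconciliationListener implements MembershipListener {

          public void viewAccepted(View newView) {
              // a MergeView signals that previously separated partitions
              // have merged back together
              if (newView instanceof MergeView) {
                  MergeView merge = (MergeView) newView;
                  // merge.getSubgroups() lists the views the partitions had
                  // before the merge; a rejoining node can use this to
                  // decide whose state wins
                  doReconciliation(merge);
              }
              // a plain View is just a normal membership change (join/leave)
          }

          public void suspect(Address suspected) { /* no-op */ }

          public void block() { /* no-op */ }

          private void doReconciliation(MergeView merge) {
              // placeholder: override the local state with the state
              // from the other subgroup here
          }
      }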

      I have tried the following scenario:
      - at the beginning, nodes A, B and C are in the same cluster
      - node B is disconnected; I wait some time and reconnect it

      I want to detect that node B is reconnecting, so that I can override its state with the state from the other cluster members.

      I expected to be notified with a MergeView, but that is not the case: when B is reconnected I receive a plain View containing all 3 nodes. It is true that sometimes a MergeView is sent across the cluster, but this seems unpredictable to me.

      Does anyone have an idea how to solve the following mystery:
      - can a cluster node find out whether it is rejoining the cluster?

      Any help is welcome.

      Thanks
      Sebi

        • 1. Problem with persistence cache in 1.1
          prese

          hi,
          I'm trying to get the persistent cache working. The problem I am encountering is that JBossCache seems to be in an endless loop when trying to persist 160k+ instances of a simple object containing a few attributes. The data size for the 160k+ objects is about 10 MB as CSV, but the archive persisted by the CacheLoader just keeps growing. I've tried both the FileCacheLoader and the BdbjeCacheLoader; they both show the same behaviour.

          eddie

          • 2. Re: Cache reconciliation
            belaban

            If you use FD.shun=true and GMS.shun=true, then the following will happen:
            - C is disconnected
            - C will form a view {C}
            - A and B will form a view {A,B}
             - When you reconnect, both partitions will shun each other, causing them to leave and rejoin. There is *never* a merge going on!

             When you disable shunning (shun=false), the partitions {C} and {A,B} will merge back into {A,B,C}, so there you *will* get a MergeView.
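
             For reference, here is a hypothetical fragment of the JGroups stack showing where the two shun attributes live (the other attribute values are only illustrative):

             <FD timeout="2500" max_tries="5" shun="false"
                 up_thread="false" down_thread="false"/>
             ...
             <pbcast.GMS join_timeout="5000" join_retry_timeout="2000"
                         shun="false" print_local_addr="true"
                         up_thread="false" down_thread="false"/>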

            Bela

            • 3. Re: Cache reconciliation
              prese

              Hi Bela,

              Thanks for the answer, but I still have some problems :((

              Node C is the coordinator.
              If I remove B and then reconnect it, I have the following situation:
              - nodes A and B receive a MergeView with members A, B
              - node C is declared suspect and does not receive any view

              I have set shun=false on both GMS and FD.
              I'm using one channel configuration for the ClusterPartition and a different channel configuration for the cache clustering. Could this be the cause of the problem?

              What other cluster parameters should I check?

              Thanks
              Sebi

              • 4. Re: Cache reconciliation
                belaban

                You have to use the exact same config for A, B and C.
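
                 For JBossCache 1.x, that typically means the ClusterConfig attribute of the TreeCache MBean; a hypothetical skeleton (the service name is illustrative, and the stack inside <config> has to be identical on every node):

                 <mbean code="org.jboss.cache.TreeCache"
                        name="jboss.cache:service=TreeCache">
                   <attribute name="ClusterName">TreeCache-Cluster</attribute>
                   <attribute name="ClusterConfig">
                     <config>
                       <!-- UDP, PING, FD, pbcast.GMS, ...:
                            identical protocols and attributes on A, B and C -->
                     </config>
                   </attribute>
                 </mbean>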

                I don't understand the rest of your post...

                Bela

                • 5. Re: Cache reconciliation
                  prese

                  I'm using exactly the same cluster configuration for all the nodes, but I still have the problem:

                  Consider the following:
                  - node A is the coordinator of the clustered cache
                  - nodes B and C are also in the cluster
                  - node B is disconnected for a while
                  - I reconnect it
                  - I receive a MergeView only on nodes B and C; the MergeView contains just those 2 nodes
                  - node A (the old coordinator) is declared suspect and does not receive any MergeView

                  I'm using shun=false for both GMS and FD.

                  I have no idea why node A is declared suspect, or why the MergeView does not contain all the nodes.

                  Thanks
                  Sebi