2 Replies Latest reply on Nov 16, 2010 6:12 AM by Galder Zamarreño

    Keys not being invalidated on older node when new node joins

    Abhishek Gupta Newbie

      I am using infinispan-4.1.0.FINAL.


      Points 6-11 on the http://community.jboss.org/wiki/DIST-distributedcachemode page say that the joiner broadcasts an invalidation message to all other nodes for the keys that it now "owns". I don't see this working correctly.

       

      Here's how I have set up my cache.
      * I have a cache in distributed mode with numOwners="1" (L1 disabled) on 2 JVMs. The one on JVM A is configured with a fileStore loader; the one on B has no loader configured.
      The fileStore holds 100 entries. I bring up JVM A and it loads the 100 entries from the store; then I bring up B, which receives some 'n' entries from A based on the hash of the keys. However, even though B now holds 'n' entries, A continues to hold all 100.
      In A
      cache.size() returns 100
      In B
      cache.size() returns n (n < 100, e.g. 42)

       

      Should A not lose the entries once they move to 'B', since numOwners="1"? Can this lead to stale entries in 'A' when the entry on B is updated?
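      To make the expected behaviour concrete, here is a minimal, self-contained simulation in plain Java (this is not Infinispan code; the two-node modulo hash below is a hypothetical stand-in for Infinispan's consistent hash). With numOwners="1" and L1 disabled, each key should end up on exactly one node after B joins, so the sizes should sum to 100 with no duplicates:

```java
import java.util.HashMap;
import java.util.Map;

// Toy simulation of the expected rehash behaviour with numOwners=1:
// each key hashes to exactly one owner, so once B joins, the entries
// it owns should be invalidated on (removed from) A.
public class RehashSim {
    public static void main(String[] args) {
        Map<Integer, String> nodeA = new HashMap<>();
        Map<Integer, String> nodeB = new HashMap<>();

        // Node A starts alone and preloads all 100 entries from its store.
        for (int key = 0; key < 100; key++) {
            nodeA.put(key, "value-" + key);
        }

        // Node B joins: keys whose hash maps to B move there and,
        // per the wiki, should be invalidated on A.
        for (int key = 0; key < 100; key++) {
            if (owner(key) == 1) {
                nodeB.put(key, nodeA.remove(key));
            }
        }

        System.out.println("A size = " + nodeA.size());                    // 50
        System.out.println("B size = " + nodeB.size());                    // 50
        System.out.println("total  = " + (nodeA.size() + nodeB.size()));   // 100, no duplicates
    }

    // Stand-in for the consistent hash: 0 means node A owns the key, 1 means node B.
    static int owner(int key) {
        return key % 2;
    }
}
```

      The point of the simulation: the two local sizes should add up to the total entry count, whereas the behaviour reported above gives 100 + n.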

       

      If there is something obvious here that I'm missing, I'd appreciate being pointed to a page that explains this better.

       

      Thanks,

      Abhi

       

       

       

      P.S. Configuration details -

      Node A:

      <default>
            <clustering mode="distribution">
              <l1 enabled="false" lifespan="60000"/>  
               <hash numOwners="1" rehashRpcTimeout="120000"/>
               <sync/>
            </clustering>
            <loaders passivation="false" shared="true" preload="true">
               <loader class="org.infinispan.loaders.file.FileCacheStore" fetchPersistentState="true" purgerThreads="3" purgeSynchronously="true" ignoreModifications="false" >
                  <properties>
                     <property name="location" value="V:\TEMP\cache-data.dat"/>
                  </properties>
               </loader>
            </loaders>
           
         </default>

       

      Node B:

      <default>
            <clustering mode="distribution">
              <l1 enabled="false" lifespan="60000"/>  
               <hash numOwners="1" rehashRpcTimeout="120000"/>
               <sync/>
            </clustering>
         </default>

        • 1. Re: Keys not being invalidated on older node when new node joins
          Abhishek Gupta Newbie

          To add to the above, I've done some further testing that suggests there is a problem. I set ignoreModifications="true" for the loader on node A.

          Once node A loads all the data from the fileStore, I start node B. Node B gets 'n' entries from node A. I then modify one of those 'n' entries on node 'B' and print the values of all the entries on node 'A'. Node 'A' does not see the change, as suspected above.

          Further, if an entry that is not local to 'B' is updated, the change is reflected in A.

           

          I would really appreciate a comment on this issue.

           

          Thanks,

          Abhi

          • 2. Re: Keys not being invalidated on older node when new node joins
            Galder Zamarreño Master

            Wrt the first comment, that looks like a bug. Could you try with the latest 4.2.0.BETA1 to see if the issue is still present? If it is, please open a JIRA in https://jira.jboss.org/browse/ISPN so that we can investigate it.

             

            Wrt the 2nd comment, note that retrieving all entries from a distributed cache only returns the entries that are local to that node; it does not attempt to fetch anything from B. So it's probably right that you don't see the change. If the entry is not local to B, it's logical that the update is present in A, because the entry is local to A and hence should have been updated there.
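            A small plain-Java illustration of this point (again a simulation, not the Infinispan API, since iterating a DIST cache's entries on a node only walks that node's local data container):

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of why printing "all entries" on A misses an update
// made on B: each node only iterates the entries it holds locally.
public class LocalViewSim {
    public static void main(String[] args) {
        Map<String, String> localToA = new HashMap<>();
        Map<String, String> localToB = new HashMap<>();

        localToA.put("k1", "v1"); // owned by A
        localToB.put("k2", "v2"); // owned by B; A holds no copy (L1 off, numOwners=1)

        // An update to a key B owns lands only on B...
        localToB.put("k2", "v2-updated");

        // ...so iterating A's local view never shows it.
        System.out.println("A sees: " + localToA); // {k1=v1} -- no k2 at all

        // An update to a key A owns does land on A, matching the observation
        // that updates to keys *not* local to B show up in A.
        localToA.put("k1", "v1-updated");
        System.out.println("A sees: " + localToA); // {k1=v1-updated}
    }
}
```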

             

            The only thing that looks odd here is that A still holds entries that belong to B when L1 is disabled and numOwners=1.