4 Replies Latest reply on Aug 18, 2014 12:03 AM by Phillip Atkinson

    Reading dirty / updated entries,  simple replication, misuse or shall I use different configuration?

    Paris Apostolopoulos Newbie

Hello, I am using Infinispan in library mode with the cache mode REPL_SYNC (replicated, synchronous, non-transactional). It works like a charm. I have some simple questions regarding replicated sync mode and the pass-by-reference behaviour of Infinispan.

       

      Question1:

I have inherited a code base that was not cluster aware. It used a cache data holder object to group all the cached data it needed between calls. So when I migrated to Infinispan, I created a new cache with a String as the key and this bag of properties as the value.

       

      Cache<String,SessionDataHolder>
      
      

       

Eventually, different calls on different nodes using the same key (session ID) want to get a reference to a SessionDataHolder instance and change a property.

       

In the above mode, if I do a get() from the cache, I get a reference on Node 1.

      NODE1

      SessionDataHolder cached = myCache.get(sessionId);
      cached.setName("aNewName");
cached.setVATID("aNewVatId");
      myCache.replace(sessionId,cached);
      
      

       

QUESTION: Since I am not currently implementing the DeltaAware interface (as I have read in a related post), is the way to update a property and have the cache distribute the change to get a reference to the object, set a new value for a certain property, and then call replace() to notify the cache that the overall entry was updated? Is there a difference between put and replace in this case?

       

       

      Question2:

In non-transactional, replicated sync mode, suppose two nodes hold a reference to the same 'updated' cache element at the same moment, but their calls take different amounts of time, and each of them needs to update a property in the cache when it completes. Are the changes going to be applied on top of the previous cache commits? Example:

       

      Node 1: Time 12:00

      SessionDataHolder cached = myCache.get(sessionId);
      
      

      Node 2 Time 12:00

      SessionDataHolder cached = myCache.get(sessionId);
      
      

       

      Node 1: Time 12:05

      cached.setName("aNewName");
myCache.replace(sessionId,cached);
      
      

      Node 2 : Time 12:10

      cached.setName("ANode2Name");
myCache.replace(sessionId,cached);
      
      

       

At 12:05, Node 1 is going to issue a replace to the cluster for the entry with the specific ID, and the entry will be replaced. BUT Node 2 will still be holding the reference from its local cache that it obtained with the get at 12:00, and it might take some time until it is in a state to update the cache. When it is ready, it is going to write a change based on an older version of the cache value, overwriting the changes made by Node 1. Is my understanding correct?
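The timeline above can be replayed against a plain java.util.concurrent.ConcurrentHashMap standing in for the replicated cache (class and key names here are just for illustration; strings stand in for SessionDataHolder):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class LostUpdateSketch {
    // Replays the timeline from the post: two readers, two blind writes.
    static String replay() {
        ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();
        cache.put("sessionId", "original");

        String node1Copy = cache.get("sessionId");    // 12:00 - Node 1 reads
        String node2Copy = cache.get("sessionId");    // 12:00 - Node 2 reads

        cache.put("sessionId", node1Copy + "+node1"); // 12:05 - Node 1 writes
        cache.put("sessionId", node2Copy + "+node2"); // 12:10 - Node 2 writes,
                                                      // based on the stale 12:00 read
        return cache.get("sessionId");
    }

    public static void main(String[] args) {
        // Node 1's update is gone: the final value is "original+node2".
        System.out.println(replay());
    }
}
```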

       

Is there any way, in the above mode, to make my code always check for changes in the remote cache? Should it be implemented using JTA transactions?

       

Many thanks for any help or tips!

        • 1. Re: Reading dirty / updated entries,  simple replication, misuse or shall I use different configuration?
          Phillip Atkinson Newbie

I'm having almost exactly the same issue as Question 2, and would like to know if it's a configuration or code issue. I've set the cache isolation level to SERIALIZABLE and enabled batching, so my code does something like:

           

          Cache<String, List> cache = cacheContainer.getCache();

          cache.startBatch();

          List data = cache.get(key);

data.add(someValue);

          cache.put(key, data);

          cache.endBatch(true);

           

Try as I might, if 2 nodes call this same block at the same time, every so often the scenario in Question 2 still happens, where newer data is overwritten by a write based on stale data.  I've also tried using an AdvancedCache, where my code would first do this to try to acquire the transaction lock as early as possible:

           

          Cache<String, List> cache = cacheContainer.getCache();

          AdvancedCache<String, List> advancedCache =

                      cache.getAdvancedCache().withFlags(Flag.FORCE_WRITE_LOCK);  // javadocs on this flag seem to indicate this is a good idea if doing a get-update-put

           

          Is there some code or configuration I'm missing or that is incorrect?

           

          (I've also played around with setting the transaction mode, but I don't think that matters if I'm not actually using big-T Transactions?)

           

          *edit* sorry, my versions are Infinispan 'Brahma' 5.1.8.Final, JBoss EAP 6.0.1.GA (AS 7.1.3.Final-redhat-4)

          • 2. Re: Re: Reading dirty / updated entries,  simple replication, misuse or shall I use different configuration?
            William Burns Expert

            I will try to answer both of these posts here:

             

            Paris Apostolopoulos wrote:

             

QUESTION: Since I am not currently implementing the DeltaAware interface (as I have read in a related post), is the way to update a property and have the cache distribute the change to get a reference to the object, set a new value for a certain property, and then call replace() to notify the cache that the overall entry was updated? Is there a difference between put and replace in this case?

First, I can't quite comment on how DeltaAware would behave, but you are correct in saying you would need to use replace. However, there are a few issues in the code you posted. There are two replace methods: one that does the replace as long as a mapping exists, and one that does the replace only if the current value in the map equals an expected value. The one you need is the latter: http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ConcurrentHashMap.html#replace(K,%20V,%20V). This is so we can ensure that no other concurrent update to this map is silently ignored. It means we require two instances of the same object, so unfortunately you have to clone or create a new instance in order to retain the original value alongside the new "updated" value. An example of this is

             

            boolean success = false;
            while (!success) {
               SessionDataHolder cached = myCache.get(sessionId);
               SessionDataHolder cachedOrig = cached.clone(); // or just make a new one with the same values
            
               cached.setName("aNewName");
               cached.setVATID("aNewVatId");
            
               success = myCache.replace(sessionId, cachedOrig, cached);
            }
            

             

So the important things to remember are: the replace needs two distinct instances with differing values that would affect equality, and the replace can fail, in which case you must either retry by retrieving the value again (as in my example), or throw some kind of error, or do whatever processing you need.

             

This should also answer your second question. The key is the conditional methods on the ConcurrentMap interface.

             

             

            Phillip Atkinson wrote:

            I'm having almost exactly the same issues as Question 2, and would like to know if it's some configuration / code issue. I've set the cache isolation level to SERIALIZABLE

Serializable is not currently supported; this just falls back to Repeatable Read.

             

            Unfortunately batching by itself would have the same issue as explained above.

             

            Phillip Atkinson wrote:

            I've also tried using an AdvancedCache, where my code would first do this to try to get the transaction lock as early as possible:

This approach should work, but you need to enable pessimistic transactions, otherwise the flag won't acquire the lock.  Also, you should look out for [ISPN-3266] Pessimistic Force Write Lock doesn't acquire remote lock - JBoss Issue Tracker, which I found while working on some other pieces.  My guess is this bug has been around since Infinispan 5.1.0.
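For reference, a minimal sketch of enabling pessimistic transactions programmatically. This assumes the Infinispan 5.x ConfigurationBuilder API; check the method and enum names against the javadocs for your version:

```java
// Sketch only -- assumes org.infinispan.configuration.cache.ConfigurationBuilder
// and the transaction enums from Infinispan 5.x; verify against your version.
ConfigurationBuilder cb = new ConfigurationBuilder();
cb.clustering().cacheMode(CacheMode.REPL_SYNC);
cb.transaction()
  .transactionMode(TransactionMode.TRANSACTIONAL)
  .lockingMode(LockingMode.PESSIMISTIC);
// With this in place, FORCE_WRITE_LOCK on a get() should take the write lock eagerly:
// cache.getAdvancedCache().withFlags(Flag.FORCE_WRITE_LOCK).get(key);
```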

            1 of 1 people found this helpful
            • 3. Re: Reading dirty / updated entries,  simple replication, misuse or shall I use different configuration?
              Paris Apostolopoulos Newbie

Hello William, many thanks for your answer. I will definitely check my replace semantics. One small clarification: wouldn't using put produce the same effect? Eventually, what I actually want is to overwrite a new value on top, using the same key. I guess I need to use a copy constructor or otherwise create a new, similar instance.

              • 4. Re: Reading dirty / updated entries,  simple replication, misuse or shall I use different configuration?
                Phillip Atkinson Newbie

Thanks William! I've since upgraded to 6.0.2.Final and it seems to fix the issue.