3 Replies Latest reply on Apr 15, 2014 10:06 AM by ajcmartins

    Why cache misses get higher over time?

    ajcmartins Newbie


  I have a cache where I'm experiencing weird behaviour: I start getting far too many cache misses over time in ISPN 6.0.1. Some details:

      1. The cache is configured with a JDBC store
      2. It is in replicated mode, but I have been testing with only one node in the cluster
      3. Queries to the cache are done through infinispan-query, because the entries are indexed by Lucene
      4. The test consists of executing the exact same query in a loop
      5. Cache configuration:
          <namedCache name="cache1">
              <eviction maxEntries="10000" strategy="LRU" />
              <expiration lifespan="43200000" />
          </namedCache>


      When I start the cache and load data into it, everything is OK and I get a high hit ratio. The thing is that over time (1-2 days), as new entries/updates are added to the cache, performance starts degrading badly. Observing the cache statistics, I can see that the hit ratio gets lower and lower, with most accesses hitting the datastore. The number of entries in the cache is lower than the configured maxEntries, but that shouldn't matter anyway..


      I have already tried both LRU and LIRS, always getting the same behaviour.


      Has anyone experienced this, or have a clue as to what may be wrong?



        • 1. Re: Why cache misses get higher over time?
          ajcmartins Newbie

          I just noticed that even after lowering the maxEntries value to something as low as 100, the cache never goes over 64 entries. It's like it stops bringing objects into memory after a certain percentage of the maxEntries value.. :/

          I never really paid attention to this. Is this the normal behaviour?



          • 2. Re: Why cache misses get higher over time?
            William Burns Expert

            The reason for fewer entries than expected is explained here:  http://infinispan.org/docs/7.0.x/faqs/faqs.html#_cache_s_number_of_entries_never_reaches_configured_maxentries_why_is_th…  LRU and LIRS should both give you the same number of entries in memory, since they both use the segments. However, LIRS should hopefully get a better hit ratio, since it keeps more frequently accessed values for longer.  I am guessing the hashCode for your objects is somewhat distributed?
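            For illustration, here is one plausible piece of arithmetic that would produce the "never goes over 64 entries with maxEntries=100" observation above. It assumes the bounded container splits maxEntries across 32 lock segments and rounds each segment's share down to a power of two; both numbers are assumptions for the sketch, not confirmed Infinispan internals.

```java
public class SegmentCapacityDemo {

    // Hypothetical arithmetic: split maxEntries across `segments` lock
    // segments and round each segment's share DOWN to a power of two.
    // This is an assumption for illustration, not confirmed internals.
    static int effectiveCapacity(int maxEntries, int segments) {
        int perSegment = Integer.highestOneBit(Math.max(1, maxEntries / segments));
        return perSegment * segments;
    }

    public static void main(String[] args) {
        // 100 / 32 = 3, rounded down to 2 per segment -> 2 * 32 = 64
        System.out.println(effectiveCapacity(100, 32)); // prints 64
    }
}
```

            Under these assumptions the cache can hold at most 64 of the 100 configured entries, which matches the ceiling observed in reply 1.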

            • 3. Re: Re: Why cache misses get higher over time?
              ajcmartins Newbie

              Hello William,

              It's weird that after so much time searching I never landed on that page. That does indeed explain the lower number of entries, so there is no problem there. Thank you!


              My main issue persists, though: ISPN is accessing the cache store when (to my knowledge) it shouldn't. Meanwhile, I have been trying to narrow the scope of the issue by updating ISPN to 6.0.2, making the cache local, adding eviction/load listeners to the cache, and removing Lucene from the equation. I am now at a point where I observe the following:

              1. I have a few thousand entries in the cache store
              2. When i start the cache it's empty as expected
              3. I then issue an access to objects with keys A and B several times
              4. The listener log for each round of requests is always:
              Loaded object: A
              Evicted object: B
              Returning object A
              Loaded object: B
              Evicted object: A
              Returning object B


              5. If I repeat the access with only the same key, A or B, no access to the store is made.
              6. The statistics report a hit ratio of 1 and 0 cache misses, which is incorrect, although the cache loader stats report the correct number of loads.
                • I can't be 100% sure at this point, but I think this started after the upgrade to 6.0.2


              At this point the cache statistics say that there is only 1 entry in the cache. So my question is: with maxEntries=100, why is ISPN cycling through load/eviction with only two entries?
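              For what it's worth, the load/evict pattern above is exactly what a plain access-order LRU produces when the effective in-memory capacity for the keys involved is 1. A self-contained toy simulation (plain Java, not Infinispan code; the capacity of 1 is taken from the statistics above):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class LruThrashDemo {

    // Simulates an access-order LRU cache backed by a store. `capacity` is
    // the number of entries kept in memory; a miss "loads" from the store.
    static List<String> run(int capacity, String[] keys) {
        List<String> log = new ArrayList<>();
        Map<String, String> cache = new LinkedHashMap<String, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                boolean evict = size() > capacity;
                if (evict) log.add("Evicted object: " + eldest.getKey());
                return evict;
            }
        };
        for (String key : keys) {
            if (!cache.containsKey(key)) {
                log.add("Loaded object: " + key); // simulated cache-store load
                cache.put(key, key);
            }
            cache.get(key); // touch the entry for LRU ordering
            log.add("Returning object: " + key);
        }
        return log;
    }

    public static void main(String[] args) {
        run(1, new String[] {"A", "B", "A", "B"}).forEach(System.out::println);
    }
}
```

              With capacity 1, every access to A evicts B and vice versa, reproducing the alternating Loaded/Evicted/Returning lines; with capacity 2 or more, each key is loaded once and then served from memory.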

              Now the weird part: if I replace one of the two keys, A or B, with a third one, say C, I get:

              Loaded Object: A
              Returning Object: A
              Loaded Object: C
              Returning Object: C
              Returning Object: A
              Returning Object: C
              Returning Object: A
              Returning Object: C

              This is the behaviour I would expect every time.
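              One possible reading, tying back to William's hashCode question: if eviction is per segment and A and B happen to hash to the same segment (whose local capacity is 1), they evict each other, while C lands in a different segment and can coexist with A. A toy sketch of that idea; the explicit segment argument is a hypothetical stand-in for Infinispan's actual key hashing:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class SegmentedLruDemo {

    // Toy segmented cache: each segment is an access-order LRU map holding
    // perSegmentCapacity entries. The caller picks the segment explicitly
    // here; a real cache would derive it from the key's hashCode.
    static class SegmentedCache {
        final List<String> log = new ArrayList<>();
        final List<Map<String, String>> segments = new ArrayList<>();

        SegmentedCache(int nSegments, int perSegmentCapacity) {
            for (int i = 0; i < nSegments; i++) {
                segments.add(new LinkedHashMap<String, String>(16, 0.75f, true) {
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                        boolean evict = size() > perSegmentCapacity;
                        if (evict) log.add("Evicted object: " + eldest.getKey());
                        return evict;
                    }
                });
            }
        }

        String get(String key, int segment) {
            Map<String, String> seg = segments.get(segment);
            if (!seg.containsKey(key)) {
                log.add("Loaded object: " + key); // simulated cache-store load
                seg.put(key, key);
            }
            log.add("Returning object: " + key);
            return seg.get(key);
        }
    }

    public static void main(String[] args) {
        SegmentedCache c = new SegmentedCache(2, 1);
        // A and B forced into the same segment: they keep evicting each other.
        c.get("A", 0); c.get("B", 0); c.get("A", 0); c.get("B", 0);
        // A and C in different segments: each loaded once, then served from memory.
        c.get("A", 0); c.get("C", 1); c.get("A", 0); c.get("C", 1);
        c.log.forEach(System.out::println);
    }
}
```

              In this simulation the A/B loop thrashes exactly like the first log, while the A/C loop settles into the "Returning" pattern shown above; whether this is actually what Infinispan is doing here would need confirmation.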

              Does this extra info ring any bells for anyone?