
    Do I need Tombstones?

    bill.burke

      I have Infinispan in front of a custom data store.  We perform cache invalidation manually: when a write to the DB happens, we remove the entry from the cache.  I'm seeing this problem:

       

      1. Thread 1 does cache lookup for "key1"

      2. Thread 1 does not find "key1" in cache

      3. Thread 1 loads data for "key1" from DB into memory

      4. Thread 2 updates "key1" in DB

      5. Thread 2 removes "key1" from cache

      6. Thread 1 saves stale "key1" into cache

       

      I tried using write skew checks with repeatable read isolation, but the write skew check doesn't fire because the entry is not in the cache yet in steps #2 and #3.
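
      To make the race concrete, the read and invalidation paths are roughly the following (simplified sketch, not our actual code; loadFromDatabase/writeToDatabase are just placeholders for the custom data store):

          // Assumes org.infinispan.Cache is imported.
          // Read path (threads doing lookups) -- simplified sketch.
          Object read(Cache<String, Object> cache, String key) {
              Object cached = cache.get(key);            // step 1
              if (cached != null) {
                  return cached;                         // cache hit
              }
              Object fromDb = loadFromDatabase(key);     // steps 2-3: miss, load from the DB
              cache.put(key, fromDb);                    // step 6: can store a stale value if
                                                         // another thread ran steps 4-5 meanwhile
              return fromDb;
          }

          // Write path (the updater) -- steps 4-5.
          void update(Cache<String, Object> cache, String key, Object newValue) {
              writeToDatabase(key, newValue);            // step 4
              cache.remove(key);                         // step 5: invalidate
          }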

       

      Here's my config:

       

          ConfigurationBuilder invalidationConfigBuilder = new ConfigurationBuilder();
          invalidationConfigBuilder
              .transaction().transactionManagerLookup(new DummyTransactionManagerLookup())
              .locking().isolationLevel(IsolationLevel.REPEATABLE_READ).writeSkewCheck(true)
              .versioning().enable().scheme(VersioningScheme.SIMPLE);
        • 1. Re: Do I need Tombstones?
          rvansa

          Which version are you using? PersistenceUtil.loadAndStoreInDataContainer performs the cache store load & data container save under lock.

          • 2. Re: Do I need Tombstones?
            bill.burke

            We do not use a CacheLoader.

            • 3. Re: Do I need Tombstones?
              rvansa

              By 'in front of a custom data store' I thought you meant you were implementing the CacheLoader/CacheWriter SPI (usually referred to as a cache store). So you update the cache yourselves, using the Cache API? In that case you really have to implement some sort of tombstones, or do the locking yourselves. Note that tombstones are generally troublesome if you want to use eviction, as at the moment there's no way to say 'ignore eviction for these entries'. But expiration works quite fine.
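
              As a rough illustration only (Tombstone, loadFromDatabase and the 60-second lifespan below are made up, not a drop-in implementation), a tombstone scheme with the plain Cache API could look like this: the writer leaves an expiring marker instead of removing the entry, and the reader refuses to install a value it loaded before the marker was written.

                  // Assumes org.infinispan.Cache and java.util.concurrent.TimeUnit are imported.
                  // Illustrative tombstone marker.
                  final class Tombstone implements java.io.Serializable {
                      final long removedAt;
                      Tombstone(long removedAt) { this.removedAt = removedAt; }
                  }

                  // Writer: leave an expiring tombstone instead of removing the entry.
                  void invalidate(Cache<String, Object> cache, String key) {
                      cache.put(key, new Tombstone(System.currentTimeMillis()), 60, TimeUnit.SECONDS);
                  }

                  // Reader: only install the DB value if no newer tombstone appeared while loading.
                  Object readThrough(Cache<String, Object> cache, String key) {
                      Object cached = cache.get(key);
                      if (cached != null && !(cached instanceof Tombstone)) {
                          return cached;                                // plain hit
                      }
                      long loadStarted = System.currentTimeMillis();
                      Object fromDb = loadFromDatabase(key);            // placeholder DB lookup
                      Object current = cache.get(key);
                      if (current instanceof Tombstone && ((Tombstone) current).removedAt >= loadStarted) {
                          return fromDb;                                // invalidated during the load: don't cache
                      }
                      if (current == null) {
                          cache.putIfAbsent(key, fromDb);               // won't overwrite a concurrent tombstone
                      } else {
                          cache.replace(key, current, fromDb);          // atomic swap only if the observed tombstone is still there
                      }
                      return fromDb;
                  }

              The point is that the invalidation now leaves evidence behind, so the reader can detect that its loaded value may be stale before caching it. The lifespan on the tombstone bounds how long the marker lives, which is why expiration is easier to live with here than eviction.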

               

              I had quite similar requirements for Hibernate ORM's second-level cache, so you can check out the implementation in ORM 5.1 to see how this can be accomplished. For efficient operation you'll probably need to write your own interceptors, though you could also try a different approach with the functional API (the ORM implementation was written against Infinispan 7.2, which lacked the functional API).