8 Replies Latest reply on Sep 21, 2018 5:23 AM by nadirx

    Using RocksDB Data Store shows "malformed WriteBatch" error

    edh

      Hi.  I've been using Infinispan Embedded Caches in my application, and recently I've started using a RocksDB data store for one of my Infinispan Caches.  This is in a Spring Boot application.  My configuration looks like this:

       

      @Bean
      public org.infinispan.configuration.cache.Configuration persistentCacheTemplate() {
           String cacheDirPath = deriveCachePath();
           if (cacheDirPath == null) {
                return null;
           }
           logger.info("Using following directory to write cache data stores: {}", cacheDirPath);

           return new ConfigurationBuilder()
                .locking()
                     .lockAcquisitionTimeout(2, TimeUnit.MINUTES)
                .persistence()
                     .passivation(false)
                     .addStore(RocksDBStoreConfigurationBuilder.class)
                          .location(cacheDirPath + "/data-")
                          .expiredLocation(cacheDirPath + "/expired-")
                          .shared(false)
                          .async()
                               .enable() // Write-Behind
                .build();
      }

       

      @Bean
      public InfinispanCacheConfigurer prodStatsCacheConfigurer(
           @Nullable org.infinispan.configuration.cache.Configuration persistentCacheTemplate
      ) {
           if (persistentCacheTemplate == null) {
                return null;
           }

           return manager -> {
                org.infinispan.configuration.cache.Configuration cacheConfig = new ConfigurationBuilder()
                     .read(persistentCacheTemplate)
                     .build();
                manager.defineConfiguration("prodStatsCache", cacheConfig);
           };
      }
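      For completeness, my other caches reuse the same template in the same way; a second configurer looks like this (the cache and bean names here are just illustrative):

      ```java
      @Bean
      public InfinispanCacheConfigurer otherCacheConfigurer(
           @Nullable org.infinispan.configuration.cache.Configuration persistentCacheTemplate
      ) {
           if (persistentCacheTemplate == null) {
                return null;
           }
           // Each cache gets its own Configuration built by copying the shared template
           return manager -> manager.defineConfiguration("otherCache",
                new ConfigurationBuilder().read(persistentCacheTemplate).build());
      }
      ```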

       

      I've defined a common template so I can re-use the same basic definition for a number of caches.  This seems to work for a while, but then I get these exceptions:

       

      org.infinispan.persistence.spi.PersistenceException: org.rocksdb.RocksDBException: malformed WriteBatch (too small)
           at org.infinispan.persistence.rocksdb.RocksDBStore.writeBatch(RocksDBStore.java:412)
           at org.infinispan.persistence.async.AsyncCacheWriter.applyModificationsSync(AsyncCacheWriter.java:226)
           at org.infinispan.persistence.async.AsyncCacheWriter$AsyncStoreProcessor.retryWork(AsyncCacheWriter.java:463)
           at org.infinispan.persistence.async.AsyncCacheWriter$AsyncStoreProcessor.run(AsyncCacheWriter.java:423)
           at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
           at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
           at java.base/java.lang.Thread.run(Thread.java:844)
      Caused by: org.rocksdb.RocksDBException: malformed WriteBatch (too small)
           at org.rocksdb.RocksDB.write0(Native Method)
           at org.rocksdb.RocksDB.write(RocksDB.java:602)
           at org.infinispan.persistence.rocksdb.RocksDBStore.writeBatch(RocksDBStore.java:422)
           at org.infinispan.persistence.rocksdb.RocksDBStore.writeBatch(RocksDBStore.java:403)
           ... 6 common frames omitted

       

      I've checked with RocksDB, and it sounds like the underlying buffer being written is smaller than the WriteBatch header (the header is 12 bytes).  I've tried to work my way through the code, but it's not clear to me what is being persisted.  I can see Modifications, and the cache in question is a ConcurrentMap using Java objects as key/value pairs (java.lang.UUID as the key, and a Serializable Java object as the value).

       

      Would this be happening when the value in the map exists but doesn't have much data in its attributes, so the resulting byte stream is less than 12 bytes?  It seems a bit of a stretch, but it's the only thing I can think of.
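      To sanity-check that theory, I measured how small a serialized entry can actually get using plain JDK serialization.  Infinispan marshals entries with its own marshaller rather than ObjectOutputStream, so the exact sizes will differ, but even a near-empty Serializable value comes out far larger than 12 bytes (the TinyValue class below is just a stand-in for my real value type):

      ```java
      import java.io.ByteArrayOutputStream;
      import java.io.IOException;
      import java.io.ObjectOutputStream;
      import java.io.Serializable;
      import java.util.UUID;

      public class SerializedSizeCheck {

          // Minimal Serializable value with no attributes at all, mirroring the
          // "value with hardly any data" scenario (this class is hypothetical).
          static class TinyValue implements Serializable {
              private static final long serialVersionUID = 1L;
          }

          // Serialize an object with JDK serialization and return the byte count.
          static int serializedSize(Object o) throws IOException {
              ByteArrayOutputStream bos = new ByteArrayOutputStream();
              try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                  oos.writeObject(o);
              }
              return bos.size();
          }

          public static void main(String[] args) throws IOException {
              System.out.println("UUID key bytes:   " + serializedSize(UUID.randomUUID()));
              System.out.println("Tiny value bytes: " + serializedSize(new TinyValue()));
          }
      }
      ```

      Both come out well over 12 bytes, so if the theory holds, the undersized buffer would have to be coming from somewhere other than an individual marshalled entry.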

       

      Is there some configuration that ensures modifications are only written once they reach a minimum size?  I know the doco references blockSize and cacheSize.  Would configuring block_size help with this?  Although, again, according to the doco, block_size defaults to 4KB, so I don't know if I'm chasing shadows here.
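      If I'm reading the doco right, those options would be set on the store builder like this (the values here are illustrative, not what I'm actually running):

      ```java
      .addStore(RocksDBStoreConfigurationBuilder.class)
           .location(cacheDirPath + "/data-")
           .expiredLocation(cacheDirPath + "/expired-")
           .blockSize(4096)         // doco says 4KB is already the default
           .cacheSize(10_000_000L)  // illustrative value
      ```

      But since block_size already defaults to 4KB and I'm still seeing the error, I'm not convinced this is the right knob.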

       

      Any help/insight would be appreciated.  Thanks!