1 Reply Latest reply on Apr 7, 2017 10:38 AM by ryanemerson

    Reinitialize cluster with CacheStore

    suenda

      We are running a cluster of Infinispan nodes (v8.2.1, embedded inside a Tomcat container) configured with JDBC-store persistence in synchronous mode. The nodes write to the database correctly when we put data into the cache. We want to be able to restart the cluster and initialise it with the data from the persistent store. However, when we restart the cluster while the store contains data, the nodes do not appear to fetch that data into memory: the JMX metrics of the cache show 0 entries. Querying the cache does, on the other hand, return the data fetched from the store. Worse, after several invocations of Cache.values(), a node becomes unresponsive and requires a full cluster restart (this happens only when the cluster is started with a store that already contains data).

       

      We have also noticed a large difference in the response time of Cache.values() when the cluster is configured with persistence compared to without: 20 seconds vs. 3 seconds for 10,000 objects (about 20 MB) in a three-node cluster.

       

      The configuration:

       

      <?xml version="1.0" encoding="UTF-8"?>
      <infinispan>
          <jgroups>
              <stack-file name="tcp2" path="jgroups.xml"/>
          </jgroups>

          <cache-container default-cache="dist-default" name="CacheContainer" statistics="true">
              <transport cluster="My_Cluster" stack="tcp2"/>

              <distributed-cache name="dist-default"
                                 mode="SYNC"
                                 segments="60"
                                 owners="2"
                                 l1-lifespan="300000"
                                 l1-cleanup-interval="60000"
                                 remote-timeout="15000"
                                 statistics="true">

                  <state-transfer enabled="true"
                                  timeout="240000"
                                  await-initial-transfer="true"/>

                  <persistence>
                      <string-keyed-jdbc-store fetch-state="false"
                                               read-only="false"
                                               purge="false"
                                               shared="true"
                                               key-to-string-mapper="com.example.mapper.UUID2StringTwoWayMapper">
                          <connection-pool connection-url="jdbc:h2:tcp://h2host:9092/D:/temp/infinispan-persistence"
                                           username="" driver="org.h2.Driver"/>
                          <string-keyed-table drop-on-exit="false"
                                              create-on-start="true"
                                              prefix="ISPN">
                              <id-column name="KEY" type="VARCHAR(36)"/>
                              <data-column name="VALUE" type="BINARY"/>
                              <timestamp-column name="TIMESTAMP" type="BIGINT"/>
                          </string-keyed-table>
                      </string-keyed-jdbc-store>
                  </persistence>

                  <partition-handling enabled="true"/>
              </distributed-cache>
          </cache-container>
      </infinispan>

       

       

      Thanks in advance for any hints on how to fix this issue.

        • 1. Re: Reinitialize cluster with CacheStore
          ryanemerson

          For entries to be loaded into the cache upon restart, you need to set the "preload" attribute on your string-keyed-jdbc-store to true. Here's the relevant paragraph from the user guide:

           

          "preload (false by default) if true, when the cache starts, data stored in the cache loader will be pre-loaded into memory. This is particularly useful when data in the cache loader is needed immediately after startup and you want to avoid cache operations being delayed as a result of loading this data lazily. Can be used to provide a 'warm-cache' on startup, however there is a performance penalty as startup time is affected by this process. Note that preloading is done in a local fashion, so any data loaded is only stored locally in the node. No replication or distribution of the preloaded data happens. Also, Infinispan only preloads up to the maximum configured number of entries in eviction."

           

          Regarding the performance of Cache::values, this is an inherently expensive operation; however, it is possible to fetch entries only from memory, skipping the store, by using the SKIP_CACHE_LOAD flag, which might be appropriate in your case.
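
           

          A sketch of how that flag is applied with the embedded API (the configuration file name is an assumption based on your posted config; the cache name matches your "dist-default" cache, and Flag.SKIP_CACHE_LOAD is part of Infinispan's public API):

          ```java
          import org.infinispan.AdvancedCache;
          import org.infinispan.Cache;
          import org.infinispan.context.Flag;
          import org.infinispan.manager.DefaultCacheManager;

          public class ValuesWithoutStore {
              public static void main(String[] args) throws Exception {
                  // Assumes infinispan.xml contains the configuration posted above.
                  DefaultCacheManager manager = new DefaultCacheManager("infinispan.xml");
                  Cache<String, Object> cache = manager.getCache("dist-default");

                  // SKIP_CACHE_LOAD tells Infinispan not to consult the JDBC store,
                  // so values() only returns entries already held in memory.
                  AdvancedCache<String, Object> memoryOnly =
                          cache.getAdvancedCache().withFlags(Flag.SKIP_CACHE_LOAD);

                  for (Object value : memoryOnly.values()) {
                      System.out.println(value);
                  }

                  manager.stop();
              }
          }
          ```

          Note that without preload, an in-memory-only values() on a freshly restarted node will return few or no entries, which is consistent with the JMX metrics you observed.
          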
