3 Replies Latest reply on Sep 22, 2015 3:47 AM by rvansa

    Cluster with 5 nodes and invalidation mode

    natw

      Hello, Infinispan community.

       

      We are using Infinispan 7.2 for our application with the following set up:

      - cluster with 5 nodes on 2 servers

      - Infinispan configured with invalidation mode and udp transport (default config file). All nodes have the same configuration:

      <?xml version="1.0" encoding="UTF-8"?>
      <infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="urn:infinispan:config:7.2 http://www.infinispan.org/schemas/infinispan-config-7.2.xsd"
          xmlns="urn:infinispan:config:7.2">
      
          <jgroups transport="org.infinispan.remoting.transport.jgroups.JGroupsTransport">
              <stack-file name="defaultUdp" path="default-configs/default-jgroups-udp.xml" />
          </jgroups>
      
          <cache-container statistics="true">
              <jmx domain="org.infinispan" duplicate-domains="true" >
                  <property name="cacheManagerName">CacheManager</property>
                  <property name="enabled">true</property>
              </jmx>
              <transport stack="defaultUdp"/>
              <invalidation-cache mode="ASYNC" name="cachedValues" statistics="true"/>
      

      ...

       

      The cluster seems to build up successfully. At least we get the

      "o.i.r.t.jgroups.JGroupsTransport - ISPN000094: Received new cluster view for channel" messages containing all nodes.

       

      However, when an entry is evicted from the cache on one machine, it seems that no invalidation messages are sent to the other machines, and the caches on the other nodes are not invalidated.

       

      We also tried the setup with only 2 nodes on 2 different servers; there the invalidation works fine.

      Furthermore, we ran the Receiver/Sender test provided with JGroups, which works.

       

      The logs do not show any error messages. The only strange messages are like this one:

      org.jgroups.protocols.UNICAST3 - pgd02-01e-37791: removing expired connection for pgd02-02e-1304 (10002 ms old) from send_table

       

      Could it be that the cluster is losing its nodes after startup?

      Does anyone have experience with this setup who can give us some hints regarding the correct configuration?

        • 1. Re: Cluster with 5 nodes and invalidation mode
          rvansa

          It seems you're triggering the eviction manually by calling cache.evict(), right? That command works only locally; it is not replicated to the other nodes (it does not cause an invalidation message to be sent). If you want to remove the entry on the other nodes as well, simply use cache.remove() (you don't have a cache store configured, so the eviction should have a similar effect). I'm not sure why you saw the eviction happen in the other setup.
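          To illustrate the difference, here is a minimal plain-Java sketch (deliberately NOT the Infinispan API) of the semantics described above: evict() acts only on the local node, while remove() in invalidation mode also invalidates the entry on every other node.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of an invalidation-mode cluster: evict() is local-only,
// remove() additionally sends an invalidation to all peers.
class Node {
    final Map<String, String> cache = new HashMap<>();
    final List<Node> peers = new ArrayList<>();

    // evict(): purely local, no message to peers
    void evict(String key) {
        cache.remove(key);
    }

    // remove(): removes locally and invalidates the entry on all peers
    void remove(String key) {
        cache.remove(key);
        for (Node peer : peers) {
            peer.cache.remove(key); // simulated invalidation message
        }
    }
}

public class InvalidationSketch {
    public static void main(String[] args) {
        Node a = new Node();
        Node b = new Node();
        a.peers.add(b);
        b.peers.add(a);

        a.cache.put("k", "v");
        b.cache.put("k", "v");

        a.evict("k");                                 // local only
        System.out.println(b.cache.containsKey("k")); // true: b keeps its copy

        a.cache.put("k", "v");
        a.remove("k");                                // invalidates on b too
        System.out.println(b.cache.containsKey("k")); // false
    }
}
```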

          • 2. Re: Cluster with 5 nodes and invalidation mode
            natw

            Thanks for the reply. Actually, we are using Infinispan as a Spring cache provider, so the @CacheEvict annotation is used.

            The SpringCache.evict() method executes

            this.nativeCache.remove(key);
            

             

            That should be the correct way to evict, right?
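            A plain-Java sketch (NOT the real infinispan-spring classes) of what the quoted SpringCache.evict() does: it delegates straight to the native cache's remove(key), which in invalidation mode should propagate to the other nodes.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Simplified stand-in for a Spring Cache adapter wrapping a native cache.
class SpringCacheSketch {
    private final ConcurrentMap<Object, Object> nativeCache;

    SpringCacheSketch(ConcurrentMap<Object, Object> nativeCache) {
        this.nativeCache = nativeCache;
    }

    void put(Object key, Object value) { nativeCache.put(key, value); }

    // Spring's Cache.evict() maps to remove(), i.e. a cluster-wide
    // removal in invalidation mode, not a local-only evict.
    void evict(Object key) { nativeCache.remove(key); }

    Object get(Object key) { return nativeCache.get(key); }
}

public class EvictDemo {
    public static void main(String[] args) {
        SpringCacheSketch cache = new SpringCacheSketch(new ConcurrentHashMap<>());
        cache.put("k", "v");
        cache.evict("k");
        System.out.println(cache.get("k")); // null: entry removed
    }
}
```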

            • 3. Re: Cluster with 5 nodes and invalidation mode
              rvansa

              There's no @CacheEvict in Infinispan; there's the deprecated @CacheEntryEvicted (not called anymore, afaik) and then @CacheEntriesEvicted. Neither one is the right one here: if you call cache.remove(), @CacheEntryRemoved will be triggered on the originator, and @CacheEntryInvalidated on the remote nodes (in invalidation mode, the remote nodes receive an invalidation after any modification).
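              A plain-Java sketch (again not the Infinispan listener API) of which event fires where when cache.remove() is called in invalidation mode: the originator records a "removed" event, every other node records an "invalidated" event.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy node that records which notification each side would observe.
class EventNode {
    final Map<String, String> cache = new HashMap<>();
    final List<EventNode> peers = new ArrayList<>();
    final List<String> events = new ArrayList<>();

    void remove(String key) {
        cache.remove(key);
        events.add("CacheEntryRemoved(" + key + ")");       // on the originator
        for (EventNode peer : peers) {
            peer.cache.remove(key);
            peer.events.add("CacheEntryInvalidated(" + key + ")"); // on remote nodes
        }
    }
}

public class ListenerSketch {
    public static void main(String[] args) {
        EventNode a = new EventNode();
        EventNode b = new EventNode();
        a.peers.add(b);

        a.cache.put("k", "v");
        b.cache.put("k", "v");
        a.remove("k");

        System.out.println(a.events); // [CacheEntryRemoved(k)]
        System.out.println(b.events); // [CacheEntryInvalidated(k)]
    }
}
```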