Cluster with 5 nodes and invalidation mode
natw Sep 18, 2015 8:44 AM
Hello, Infinispan community.
We are using Infinispan 7.2 for our application with the following setup:
- cluster with 5 nodes on 2 servers
- Infinispan configured with invalidation mode and UDP transport (default config file). All nodes share the same configuration:
<?xml version="1.0" encoding="UTF-8"?>
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="urn:infinispan:config:7.2 http://www.infinispan.org/schemas/infinispan-config-7.2.xsd"
            xmlns="urn:infinispan:config:7.2">
    <jgroups transport="org.infinispan.remoting.transport.jgroups.JGroupsTransport">
        <stack-file name="defaultUdp" path="default-configs/default-jgroups-udp.xml" />
    </jgroups>
    <cache-container statistics="true">
        <jmx domain="org.infinispan" duplicate-domains="true">
            <property name="cacheManagerName">CacheManager</property>
            <property name="enabled">true</property>
        </jmx>
        <transport stack="defaultUdp"/>
        <invalidation-cache mode="ASYNC" name="cachedValues" statistics="true"/>
...
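For context, the cache is obtained and modified roughly like this (a minimal sketch; the class name, file name, and keys are illustrative, not our actual code):

```java
import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;

public class InvalidationExample {
    public static void main(String[] args) throws Exception {
        // Load the XML configuration shown above (file name is illustrative).
        DefaultCacheManager cacheManager = new DefaultCacheManager("infinispan.xml");
        Cache<String, Object> cache = cacheManager.getCache("cachedValues");

        cache.put("someKey", "someValue");

        // remove() is the operation that should broadcast an invalidation
        // message to the other nodes in invalidation mode.
        cache.remove("someKey");

        cacheManager.stop();
    }
}
```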
The cluster seems to form successfully. At least we see the
"o.i.r.t.jgroups.JGroupsTransport - ISPN000094: Received new cluster view for channel" messages listing all nodes.
However, when the cache is invalidated on one machine, it seems that the invalidation messages are not sent to the other machines, and the caches on the other nodes are not invalidated.
We also tried the setup with only 2 nodes on 2 different servers; there the invalidation works fine.
Furthermore, we tried the Receiver/Sender test shipped with JGroups, which also works.
The logs do not show any error messages. The only strange messages are these:
org.jgroups.protocols.UNICAST3 - pgd02-01e-37791: removing expired connection for pgd02-02e-1304 (10002 ms old) from send_table
Could it be that the cluster is losing its nodes after startup?
Does anyone have experience with this setup and can give us some hints regarding the correct configuration?