Wildfly 10 - Infinispan replicated cluster cache not starting
coenenpe Jul 28, 2016 9:13 AM

Hi,
I'm trying to set up a Full-HA cluster (2 nodes) in Wildfly 10.0.0.Final with a shared/replicated Infinispan cache, but it doesn't seem to work.
Configuration
I created my own cache container:
CLI:
/profile=full-ha/subsystem=infinispan/cache-container=myclustercache/:add(default-cache=default,jndi-name=java:jboss/infinispan/container/myclustercache)
/profile=full-ha/subsystem=infinispan/cache-container=myclustercache/replicated-cache=default/:add(mode=SYNC,queue-flush-interval=10,queue-size=0,remote-timeout=17500,indexing=NONE)
XML:
<cache-container name="myclustercache" default-cache="default" jndi-name="java:jboss/infinispan/container/myclustercache">
    <replicated-cache name="default" mode="SYNC" queue-flush-interval="10" queue-size="0" remote-timeout="17500"/>
</cache-container>
In my code I'm using the following cache producer to fetch the cache:
public class CacheContainerProducer {

    private static final Logger LOG = LoggerFactory.getLogger(CacheContainerProducer.class);

    @Resource(lookup = "java:jboss/infinispan/container/myclustercache")
    private EmbeddedCacheManager cacheManager;

    @Produces
    public Cache<String, MyObject> createCache() {
        Cache<String, MyObject> cache = cacheManager.getCache("default");
        LOG.debug("Fetched cache with name: {}, version: {}, status: {}",
                cache.getName(), cache.getVersion(), cache.getStatus().toString());
        return cache;
    }
}
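(For completeness: elsewhere in the code the produced cache is consumed via plain CDI injection; the field below is just a sketch of how I use the producer.)

```java
@Inject
private Cache<String, MyObject> cache;
```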
I defined the module dependencies in the manifest file via Maven:
<manifestEntries>
    <Dependencies>org.infinispan export</Dependencies>
    <Dependencies>org.infinispan, org.infinispan.commons, org.jboss.as.clustering.infinispan export</Dependencies>
</manifestEntries>
Jgroups:
I'm using the default JGroups configuration; I only switched the stack from UDP to TCP.
#Configure jgroups to use the tcp protocol
/profile=full-ha/subsystem=jgroups/channel=ee:write-attribute(name=stack,value=tcp)
/profile=full-ha/subsystem=jgroups/:write-attribute(name=default-stack,value=tcp)
Maven dependencies:
I have the following dependency because I'm using some methods from the cache-api.
<dependency>
    <groupId>org.infinispan</groupId>
    <artifactId>infinispan-core</artifactId>
    <version>8.1.0.Final</version>
    <scope>provided</scope>
</dependency>
Problem
The problem is that the cache is not being replicated across the cluster. Each node seems to have its own copy of the cache.
Also, when reading from the cache I sometimes get ClassCastExceptions saying that a MyClass cannot be cast to MyClass. So I assume there's a classloader problem somewhere.
On multiple forums I found that these kinds of problems can be caused by Wildfly not having started the cache within the container, but I can't find out how to start it. In previous versions there was a 'start' attribute which could be set to EAGER, but this is no longer supported.
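One way I tried to verify whether the cache binding exists at all was dumping the JNDI tree from the CLI via the naming subsystem's jndi-view operation (the host and server names below are placeholders for my domain setup):

#Dump the JNDI tree of a running server (host/server names are placeholders)
/host=master/server=server-one/subsystem=naming:jndi-view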
I also tried injecting the cache directly, as below, but in this case it throws a NameNotFoundException. This solution is presented here by pferraro:
@Resource(lookup = "java:jboss/infinispan/cache/myclustercache/default")
Cache<String, MyObject> cache;
The strange thing is that even when I define a custom cache within my container, it is never started or shown in the JNDI view of the Wildfly server.
Log:
I see my container using its JGroups channel:
ISPN000078: Starting JGroups channel myclustercache
I see the server's default cache being started, but I never see the 'myclustercache' default cache being started.
WFLYCLINF0002: Started default cache from server container
Other options I tried are:
- Running with the latest Wildfly 10.1 snapshot.
- Defining the cluster cache programmatically, without using Wildfly (but this causes some lib and naming conflicts).
- Defining it in the application.xml descriptor file of the ear as follows:
<resource-ref>
    <res-ref-name>infinispan/cache/myclustercache/default</res-ref-name>
    <lookup-name>java:jboss/infinispan/cache/myclustercache/default</lookup-name>
</resource-ref>
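For reference, the programmatic variant I tried (second option above) looked roughly like this. It's only a minimal sketch with the Infinispan 8 embedded API: the cluster name is just an example, and this manager runs outside Wildfly's managed container, which is exactly where the lib and naming conflicts appeared.

```java
import org.infinispan.Cache;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

public class ProgrammaticCacheSetup {

    public static EmbeddedCacheManager createClusteredManager() {
        // Global config: enable a JGroups transport so caches can replicate
        GlobalConfigurationBuilder global = new GlobalConfigurationBuilder();
        global.transport().defaultTransport().clusterName("myclustercache");

        // Default cache config: synchronous replication
        ConfigurationBuilder config = new ConfigurationBuilder();
        config.clustering().cacheMode(CacheMode.REPL_SYNC);

        return new DefaultCacheManager(global.build(), config.build());
    }

    public static void main(String[] args) {
        EmbeddedCacheManager manager = createClusteredManager();
        Cache<String, Object> cache = manager.getCache("default");
        cache.put("key", "value");
        manager.stop();
    }
}
```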
Could anyone please advise me what to do? I've been stuck on this problem for days and I don't see any more options to try.
Many thanks in advance!
Peter