HotRod client behavior
justcono · Oct 12, 2017 8:23 PM

Hi,
My Infinispan 8 environment consists of a cross-site configuration running in domain mode.
192.168.11.237 is my server running in the PROD1 site and 192.168.11.239 is my server running in the PROD2 site.
When I run my Hot Rod client, server 192.168.11.239 is NOT running.
The client detects that 192.168.11.239 is down and removes it from the pool, but then for some reason it adds it back as a new server. Is that expected? I would expect the client to log some informational messages, but as long as there is at least one surviving server, operations should still succeed. I can put entries into the cache using the RESTful interface without any problems. I'm sure it's something I've done incorrectly... but if someone can point me to where I should look or what to think about, I would REALLY appreciate it.
I get the following exception on cache.put:
Oct 12, 2017 7:59:58 PM org.infinispan.client.hotrod.impl.protocol.Codec20 readNewTopologyAndHash
INFO: ISPN004006: 192.168.11.237:11222 sent new topology view (id=1, age=0) containing 1 addresses: [192.168.11.237:11222]
Oct 12, 2017 7:59:58 PM org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory updateTopologyInfo
INFO: ISPN004016: Server not in cluster anymore(192.168.11.239:11222), removing from the pool.
Oct 12, 2017 7:59:58 PM org.infinispan.client.hotrod.RemoteCacheManager start
INFO: ISPN004021: Infinispan version: 8.2.8.Final
Oct 12, 2017 7:59:58 PM org.infinispan.client.hotrod.impl.protocol.Codec20 readNewTopologyAndHash
INFO: ISPN004006: 192.168.11.237:11222 sent new topology view (id=1, age=0) containing 1 addresses: [192.168.11.237:11222]
Oct 12, 2017 7:59:58 PM org.infinispan.client.hotrod.RemoteCacheManager lambda$warnAboutUberJarDuplicates$0
WARN: ISPN004065: Classpath does not look correct. Make sure you are not mixing uber and jars
Oct 12, 2017 8:00:00 PM org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory updateTopologyInfo
INFO: ISPN004014: New server added(192.168.11.239:11222), adding to the pool.
Oct 12, 2017 8:00:02 PM org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation logTransportErrorAndThrowExceptionIfNeeded
ERROR: ISPN004007: Exception encountered. Retry 10 out of 10
org.infinispan.client.hotrod.exceptions.TransportException:: Could not fetch transport
at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory.borrowTransportFromPool(TcpTransportFactory.java:409)
at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory.getTransport(TcpTransportFactory.java:249)
at org.infinispan.client.hotrod.impl.operations.AbstractKeyOperation.getTransport(AbstractKeyOperation.java:43)
at org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation.execute(RetryOnFailureOperation.java:53)
at org.infinispan.client.hotrod.impl.RemoteCacheImpl.put(RemoteCacheImpl.java:328)
at org.infinispan.client.hotrod.impl.RemoteCacheSupport.put(RemoteCacheSupport.java:79)
at HotrodClient.main(HotrodClient.java:24)
Caused by: org.infinispan.client.hotrod.exceptions.TransportException:: Could not connect to server: 192.168.11.239:11222
at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransport.<init>(TcpTransport.java:82)
at org.infinispan.client.hotrod.impl.transport.tcp.TransportObjectFactory.makeObject(TransportObjectFactory.java:37)
at org.infinispan.client.hotrod.impl.transport.tcp.TransportObjectFactory.makeObject(TransportObjectFactory.java:16)
at infinispan.org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1220)
at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory.borrowTransportFromPool(TcpTransportFactory.java:404)
... 6 more
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:111)
at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransport.<init>(TcpTransport.java:72)
... 10 more
Process finished with exit code 0
My config file:
infinispan.client.hotrod.ping_on_startup = true;
infinispan.client.hotrod.server_list = 192.168.11.237:11222;192.168.11.239:11222;
infinispan.client.hotrod.socket_timeout = 500
infinispan.client.hotrod.connect_timeout = 10
## below is connection pooling config
maxActive=-1
maxTotal = -1
maxIdle = -1
whenExhaustedAction = 1
timeBetweenEvictionRunsMillis=120000
minEvictableIdleTimeMillis=1800000
testWhileIdle = true
minIdle = 1
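For reference, here is how I understand the same settings would look with the programmatic ConfigurationBuilder API instead of hotrod-client.properties (a sketch based on my reading of the 8.x javadoc, not tested). Note that both timeouts are in milliseconds, so connect_timeout = 10 gives the client only 10 ms to establish a TCP connection, and whenExhaustedAction = 1 should correspond to WAIT:

```java
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.Configuration;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.infinispan.client.hotrod.configuration.ExhaustedAction;

public class HotrodConfigSketch {

    public static void main(String[] args) {
        // Sketch: programmatic equivalent of the properties file above
        Configuration cfg = new ConfigurationBuilder()
            .addServers("192.168.11.237:11222;192.168.11.239:11222")
            .socketTimeout(500)       // milliseconds
            .connectionTimeout(10)    // milliseconds -- 10 ms is very aggressive
            .connectionPool()
                .maxActive(-1)
                .maxIdle(-1)
                .minIdle(1)
                .exhaustedAction(ExhaustedAction.WAIT) // whenExhaustedAction = 1
                .timeBetweenEvictionRuns(120000)
                .minEvictableIdleTime(1800000)
                .testWhileIdle(true)
            .build();

        RemoteCacheManager manager = new RemoteCacheManager(cfg);
        // ... use manager.getCache("default") as in the sample below ...
        manager.stop();
    }
}
```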
My VERY naive code (just running a sample):
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

public class HotrodClient {

    public static void main(String[] args) {
        // Reads hotrod-client.properties from the classpath and starts the manager
        RemoteCacheManager cacheContainer = new RemoteCacheManager(true);
        // Obtain a handle to the remote default cache
        RemoteCache<String, String> cache = cacheContainer.getCache("default");
        // Now add something to the cache and make sure it is there
        cache.put("car", "ferrari");
        if (cache.get("car").equals("ferrari")) {
            System.out.println("Cache Hit!");
        } else {
            System.out.println("Cache Miss!");
        }
        // Remove the data
        cache.remove("car");
        cacheContainer.stop();
    }
}