
    RemoteCacheManager(props) fails if one member isn't started

    rhbillmeyer

      Hi all,

       

      I have three Hot Rod servers configured on the same host on different ports (10001, 10002, 10003).  I start only the first server (10001) and leave the rest down.  That server comes up clean, no issues.

       

      When I try to connect to the grid using the following RemoteCacheManager configuration:

       

          Properties props = new Properties();
          props.put(ConfigurationProperties.SERVER_LIST, "mbp1:10001;mbp1:10002;mbp1:10003");
          props.put(ConfigurationProperties.REQUEST_BALANCING_STRATEGY, RoundRobinBalancingStrategy.class.getName());
          props.put("maxActive", "10"); // pool settings go into Properties as Strings, not ints
          System.out.println("Connecting to the cache...");
          CacheContainer cacheContainer = new RemoteCacheManager(props);
          Cache cache = cacheContainer.getCache();
          RemoteCache c = (RemoteCache) cache;

       

      I get the following chain of exceptions:

       

       

      Exception in thread "main" org.infinispan.client.hotrod.exceptions.TransportException:: Could not fetch transport
           at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory.borrowTransportFromPool(TcpTransportFactory.java:247)
           at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory.getTransport(TcpTransportFactory.java:136)
           at org.infinispan.client.hotrod.RemoteCacheManager.ping(RemoteCacheManager.java:511)
           at org.infinispan.client.hotrod.RemoteCacheManager.createRemoteCache(RemoteCacheManager.java:493)
           at org.infinispan.client.hotrod.RemoteCacheManager.getCache(RemoteCacheManager.java:435)
           at org.infinispan.client.hotrod.RemoteCacheManager.getCache(RemoteCacheManager.java:430)
           at org.infinispan.client.hotrod.RemoteCacheManager.getCache(RemoteCacheManager.java:147)
           at com.sample.DistributedCacheQuickstart.main(DistributedCacheQuickstart.java:25)
      Caused by: org.infinispan.client.hotrod.exceptions.TransportException:: Could not connect to server: mbp1/192.168.2.184:10002
           at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransport.<init>(TcpTransport.java:84)
           at org.infinispan.client.hotrod.impl.transport.tcp.TransportObjectFactory.makeObject(TransportObjectFactory.java:54)
           at org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1179)
           at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory.borrowTransportFromPool(TcpTransportFactory.java:243)
           ... 7 more
      Caused by: java.net.ConnectException: Connection refused
           at sun.nio.ch.Net.connect(Native Method)
           at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:500)
           at java.nio.channels.SocketChannel.open(SocketChannel.java:146)
           at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransport.<init>(TcpTransport.java:74)
           ... 10 more

       

      It appears to require every member specified in SERVER_LIST to be available before it will connect to any of them; in this example it fails on the first member that is down (10002).  Clearly that isn't fault tolerant, so what am I doing wrong?
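
       

      For now I'm working around it with something like the sketch below: try each server from my list on its own and keep the first one that answers.  (FailoverConnect and firstReachable are just my names, nothing from the Infinispan API; getCache() is what triggers the ping, per the trace above.)

          import java.util.Properties;
          import org.infinispan.client.hotrod.RemoteCacheManager;
          import org.infinispan.client.hotrod.impl.ConfigurationProperties;

          public class FailoverConnect {

              private static final String[] SERVERS =
                  { "mbp1:10001", "mbp1:10002", "mbp1:10003" };

              // Try each server individually; return a manager for the first one
              // that responds, so one dead node doesn't take the client down.
              static RemoteCacheManager firstReachable() {
                  for (String server : SERVERS) {
                      try {
                          Properties props = new Properties();
                          props.put(ConfigurationProperties.SERVER_LIST, server);
                          RemoteCacheManager rcm = new RemoteCacheManager(props);
                          rcm.getCache(); // getCache() pings, so this throws if the node is down
                          return rcm;
                      } catch (Exception e) {
                          System.out.println("Skipping " + server + ": " + e.getMessage());
                      }
                  }
                  throw new IllegalStateException("No Hot Rod server reachable");
              }
          }

      Obviously that throws away the round-robin balancing across the full list, so it's a stopgap at best.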

       

      I want to be able to specify at least three nodes so that the client still comes up if any one of my initial connection points happens to be down.
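
       

      In other words, what I expected the initial connect to do is roughly this (a hand-wavy plain-sockets sketch of the behavior I want, not the actual Infinispan internals):

          import java.net.ConnectException;
          import java.net.InetSocketAddress;
          import java.net.Socket;
          import java.util.List;

          public class ExpectedConnect {
              // Take the first reachable node instead of failing on a dead one.
              static Socket firstLiveServer(List<InetSocketAddress> servers) throws ConnectException {
                  for (InetSocketAddress server : servers) {
                      try {
                          return new Socket(server.getAddress(), server.getPort());
                      } catch (Exception e) {
                          // node down -- try the next one
                      }
                  }
                  throw new ConnectException("no servers in the list are reachable");
              }
          }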

       

      Thanks!

       

      Bill