
    HotRod client behavior

    justcono

      Hi,

       

      My Infinispan 8 environment consists of a cross-site configuration running in domain mode.

      192.168.11.237 is my server running at the PROD1 site and 192.168.11.239 is my server running at the PROD2 site.

      When I run my Hot Rod client, server 192.168.11.239 is NOT running.

       

      The Hot Rod client knows that 192.168.11.239 is down and is not in the cluster, but then for some reason it adds it back to the pool as a new server. Is that right? I would imagine the Hot Rod client issuing some informational messages, but as long as there is a surviving server it should work. I can put entries into the cache using the RESTful interface without any problems. I'm sure it's something I've done incorrectly... but if someone can point me to where I should look or what to think about, I would REALLY appreciate it.

       

       

      I get the following exception on cache.put:

       

      Oct 12, 2017 7:59:58 PM org.infinispan.client.hotrod.impl.protocol.Codec20 readNewTopologyAndHash

      INFO: ISPN004006: 192.168.11.237:11222 sent new topology view (id=1, age=0) containing 1 addresses: [192.168.11.237:11222]

      Oct 12, 2017 7:59:58 PM org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory updateTopologyInfo

      INFO: ISPN004016: Server not in cluster anymore(192.168.11.239:11222), removing from the pool.

      Oct 12, 2017 7:59:58 PM org.infinispan.client.hotrod.RemoteCacheManager start

      INFO: ISPN004021: Infinispan version: 8.2.8.Final

      Oct 12, 2017 7:59:58 PM org.infinispan.client.hotrod.impl.protocol.Codec20 readNewTopologyAndHash

      INFO: ISPN004006: 192.168.11.237:11222 sent new topology view (id=1, age=0) containing 1 addresses: [192.168.11.237:11222]

      Oct 12, 2017 7:59:58 PM org.infinispan.client.hotrod.RemoteCacheManager lambda$warnAboutUberJarDuplicates$0

      WARN: ISPN004065: Classpath does not look correct. Make sure you are not mixing uber and jars

      Oct 12, 2017 8:00:00 PM org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory updateTopologyInfo

      INFO: ISPN004014: New server added(192.168.11.239:11222), adding to the pool.

      Oct 12, 2017 8:00:02 PM org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation logTransportErrorAndThrowExceptionIfNeeded

      ERROR: ISPN004007: Exception encountered. Retry 10 out of 10

      org.infinispan.client.hotrod.exceptions.TransportException:: Could not fetch transport

      at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory.borrowTransportFromPool(TcpTransportFactory.java:409)

      at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory.getTransport(TcpTransportFactory.java:249)

      at org.infinispan.client.hotrod.impl.operations.AbstractKeyOperation.getTransport(AbstractKeyOperation.java:43)

      at org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation.execute(RetryOnFailureOperation.java:53)

      at org.infinispan.client.hotrod.impl.RemoteCacheImpl.put(RemoteCacheImpl.java:328)

      at org.infinispan.client.hotrod.impl.RemoteCacheSupport.put(RemoteCacheSupport.java:79)

      at HotrodClient.main(HotrodClient.java:24)

      Caused by: org.infinispan.client.hotrod.exceptions.TransportException:: Could not connect to server: 192.168.11.239:11222

      at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransport.<init>(TcpTransport.java:82)

      at org.infinispan.client.hotrod.impl.transport.tcp.TransportObjectFactory.makeObject(TransportObjectFactory.java:37)

      at org.infinispan.client.hotrod.impl.transport.tcp.TransportObjectFactory.makeObject(TransportObjectFactory.java:16)

      at infinispan.org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1220)

      at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory.borrowTransportFromPool(TcpTransportFactory.java:404)

      ... 6 more

      Caused by: java.net.ConnectException: Connection refused

      at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)

      at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)

      at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:111)

      at org.infinispan.client.hotrod.impl.transport.tcp.TcpTransport.<init>(TcpTransport.java:72)

      ... 10 more

       

      Process finished with exit code 0

       

      My config file

       

      infinispan.client.hotrod.ping_on_startup = true;
      infinispan.client.hotrod.server_list = 192.168.11.237:11222;192.168.11.239:11222;
      infinispan.client.hotrod.socket_timeout = 500
      infinispan.client.hotrod.connect_timeout = 10
      ## below is connection pooling config
      maxActive=-1
      maxTotal = -1
      maxIdle = -1
      whenExhaustedAction = 1
      timeBetweenEvictionRunsMillis=120000
      minEvictableIdleTimeMillis=1800000
      testWhileIdle = true
      minIdle = 1
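
      If it helps, I believe those properties map roughly onto the following programmatic configuration (untested sketch; the class name is only for illustration):

      import org.infinispan.client.hotrod.RemoteCacheManager;
      import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
      import org.infinispan.client.hotrod.configuration.ExhaustedAction;

      public class HotrodClientConfig {

         public static RemoteCacheManager build() {
            ConfigurationBuilder builder = new ConfigurationBuilder();
            builder.addServers("192.168.11.237:11222;192.168.11.239:11222")
                   .socketTimeout(500)      // infinispan.client.hotrod.socket_timeout, in ms
                   .connectionTimeout(10)   // infinispan.client.hotrod.connect_timeout, in ms
                   .connectionPool()
                      .maxActive(-1)
                      .maxTotal(-1)
                      .maxIdle(-1)
                      .minIdle(1)
                      .exhaustedAction(ExhaustedAction.WAIT)   // whenExhaustedAction = 1, as I read the pool docs
                      .timeBetweenEvictionRuns(120000)
                      .minEvictableIdleTime(1800000)
                      .testWhileIdle(true);
            return new RemoteCacheManager(builder.build());
         }
      }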

       

      My VERY naive code (just running a sample):

       

      import org.infinispan.client.hotrod.RemoteCache;
      import org.infinispan.client.hotrod.RemoteCacheManager;

      /**
       * Minimal Hot Rod smoke test; RemoteCacheManager(true) starts the manager
       * using the hotrod-client.properties file found on the classpath.
       */
      public class HotrodClient {

         public static void main(String[] args) {

            RemoteCacheManager cacheContainer = new RemoteCacheManager(true);

            // obtain a handle to the remote default cache
            RemoteCache<String, String> cache = cacheContainer.getCache("default");

            // now add something to the cache and make sure it is there
            cache.put("car", "ferrari");

            // constant-first equals avoids an NPE if the entry is missing
            if ("ferrari".equals(cache.get("car"))) {
               System.out.println("Cache Hit!");
            } else {
               System.out.println("Cache Miss!");
            }

            // remove the data
            cache.remove("car");

            cacheContainer.stop();
         }

      }

        • 1. Re: HotRod client behavior
          justcono

          I made some progress.

           

          In my cache definition I had

           

          <backup site="PROD2" strategy="SYNC" failure-policy="WARN" enabled="true" />

           

          and changed it to the following:

          <backup site="PROD2" strategy="ASYNC" failure-policy="WARN" enabled="true" />

           

          This cleared the error. I think leaving the strategy as SYNC and setting failure-policy to IGNORE would have resolved the issue as well.

          • 2. Re: HotRod client behavior
            galder.zamarreno

            If the nodes you are trying to fail over between are in different sites, you need extra configuration:

            http://infinispan.org/docs/stable/user_guide/user_guide.html#site_cluster_failover

            • 3. Re: HotRod client behavior
              galder.zamarreno

              Essentially what happens is that the server list is the list of servers in the main cluster to which the client connects.

               

              To add server information for nodes in other sites, you have to use the clusters configuration and specify one or more servers there.
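
              For example, roughly along these lines (just a sketch, reusing the addresses from your post; the class name is only for illustration):

              import org.infinispan.client.hotrod.RemoteCacheManager;
              import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

              public class CrossSiteClient {

                 public static void main(String[] args) {

                    ConfigurationBuilder builder = new ConfigurationBuilder();

                    // main cluster: the servers the client normally talks to (PROD1)
                    builder.addServer().host("192.168.11.237").port(11222);

                    // backup cluster "PROD2": tried only when the whole main cluster is unreachable
                    builder.addCluster("PROD2")
                           .addClusterNode("192.168.11.239", 11222);

                    RemoteCacheManager cacheContainer = new RemoteCacheManager(builder.build());

                    // use cacheContainer.getCache(...) as usual
                    cacheContainer.stop();
                 }
              }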

              • 4. Re: HotRod client behavior
                justcono

                Hi Galder,

                 

                Thanks for the response. We have 2 sites (cross-site replication enabled both ways), 3 nodes per site, owners=2. Each site has a virtual IP, and all six nodes are fronted by the vIP. The clients on site1 use the higher-priority nodes local to that site, and the other 3 nodes on the secondary site are only used if the 3 nodes local to site1 are down... same flow vice versa. The Hot Rod configuration will only ever have the single IP, which is the virtual IP.

                 

                Is there any value at this point in still using addCluster(), given the load balancer will always have a member available at either site? If we just use .addServer(), does it make a difference in this scenario? Would it affect client behavior, e.g. would the cluster hash details be unknown to the client?

                 

                Kind Regards,

                Cono

                • 5. Re: HotRod client behavior
                  galder.zamarreno

                  Not sure how this virtual IP thing will work. Even if you initially use the virtual IP, when the first request gets a reply the server will send back the real IP addresses, so that the client can distribute requests for each key to the server that owns it.

                   

                  I also don't know how site failover will work in the case of using virtual IPs to switch things around. You'll have to try that yourself and see if it works.

                  • 6. Re: HotRod client behavior
                    justcono

                    Hi Galder,

                     

                    Thanks for your response. I won't have load balancers for a few months, so I'll update this post with my findings then. I think I'll use .addCluster() with the vIP for each site as part of the Hot Rod client config.

                     

                    Kind Regards,

                    cd