9 Replies Latest reply on Feb 15, 2012 9:07 AM by Andrei Deftu

    Round robin vs. consistent hash in distribution mode

    Andrei Deftu Newbie

      Hi,

       

      I have the following configuration file used by 2 Infinispan Hotrod servers:

       

      <infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xsi:schemaLocation="urn:infinispan:config:5.1 http://www.infinispan.org/schemas/infinispan-config-5.1.xsd"
                  xmlns="urn:infinispan:config:5.1">
          <global>
              <transport>
                  <properties>
                      <property name="configurationFile" value="jgroups.xml"/>
                  </properties>
              </transport>
          </global>
      
          <namedCache name="cache1">
              <clustering mode="distribution">
                  <sync/>
                  <hash numOwners="1"/>
              </clustering>
          </namedCache>
      
          <namedCache name="cache2">
              <clustering mode="distribution">
                  <sync/>
                  <hash numOwners="1"/>
              </clustering>
          </namedCache>
      </infinispan>
      
      

       

      As you can see, I want to use these two distributed caches in a Hotrod cluster of servers, so I start two server instances:

       

      ./startServer.sh -r hotrod -c infinispan.xml -l 127.0.0.1 -p 11222
      ./startServer.sh -r hotrod -c infinispan.xml -l 127.0.0.1 -p 11223
      

       

      And then I start the client using the following properties:

       

      Properties properties = new Properties();
      properties.put("infinispan.client.hotrod.server_list", "127.0.0.1:11222");
      properties.put("infinispan.client.hotrod.hash_function_impl.2", "org.infinispan.client.hotrod.impl.consistenthash.ConsistentHashV2");
      RemoteCacheManager cacheManager = new RemoteCacheManager(properties);
      

       

      However, the client is using RoundRobinBalancingStrategy to choose a server for each request:

       

      TRACE [pool-4-thread-1] (ConsistentHashFactory.java:63) - Processing consistent hash: infinispan.client.hotrod.hash_function_impl.2
      TRACE [pool-4-thread-1] (ConsistentHashFactory.java:69) - Added consistent hash version 2: org.infinispan.client.hotrod.impl.consistenthash.ConsistentHashV2
      ...
      DEBUG [pool-4-thread-1] (TcpTransportFactory.java:90) - Load balancer class: org.infinispan.client.hotrod.impl.transport.tcp.RoundRobinBalancingStrategy
      ...
      TRACE [pool-4-thread-1] (Codec11.java:62) - Not using a consistent hash function (hash function version == 0)
      

       

      So, instead of using the hash function to compute the owning node, the client picks a server in round-robin fashion, resulting in many unnecessary extra hops in the network.

       

      My question is: how do I correctly configure topology-aware clients so that they use consistent hashing to pick the owning server in the cluster and send each request directly to it?

       

      Thanks.

       

      EDIT: Using Infinispan-5.1.0-FINAL

        • 1. Re: Round robin vs. consistent hash in distribution mode
          Dan Berindei Expert

          Andrei, you haven't shown the code that is accessing the remote caches. Are you sure you're not accessing the default cache at all?

           

          If you have something like cacheManager.getCache().put(k, v) you are accessing the default cache, and the default cache is in local mode - you have only configured cache1 and cache2 in distributed mode.

          • 2. Re: Round robin vs. consistent hash in distribution mode
            Andrei Deftu Newbie

            I think I'm using it correctly:

             

            RemoteCache<String, String> cache1 = cacheManager.getCache("cache1");
            RemoteCache<String, String> cache2 = cacheManager.getCache("cache2");
            

             

            And all the get() and put() methods are called for these objects.

            • 3. Re: Round robin vs. consistent hash in distribution mode
              Dan Berindei Expert

              Andrei, it looks like your log messages come from an automatic ping request that the HotRod client's connection pool is doing on the default cache. You can disable it with a line like this:

               

                 properties.put("infinispan.client.hotrod.ping_on_startup", "false");
              

               

              Your program will work the same with or without the ping; that is, requests for cache1 and cache2 will always go to the key owner.

              1 of 1 people found this helpful
              • 4. Re: Round robin vs. consistent hash in distribution mode
                Andrei Deftu Newbie

                Dan, thanks for your help. It seems that with this property set, the client now uses the hash function to select servers. In the beginning, without this property set to false, I had:

                 

                TRACE [pool-5-thread-2] (TcpTransportFactory.java:157) - Using the balancer for determining the server: /127.0.0.1:11222
                

                 

                Now:

                 

                TRACE [pool-4-thread-2] (Codec11.java:69) - Topology change request: newTopologyId=1, numKeyOwners=1, hashFunctionVersion=2, hashSpaceSize=2147483647, clusterSize=2, numVirtualNodes=1
                ...
                TRACE [pool-4-thread-2] (ConsistentHashV1.java:70) - Positions (2 entries) are: {221100605=/127.0.0.1:11223, 1770528460=/127.0.0.1:11222}
                ...
                TRACE [pool-4-thread-2] (ConsistentHashV1.java:83) - Found possible candidates: {221100605=/127.0.0.1:11223, 1770528460=/127.0.0.1:11222}
                TRACE [pool-4-thread-2] (ConsistentHashV1.java:96) - Found candidate: /127.0.0.1:11223
                TRACE [pool-4-thread-2] (TcpTransportFactory.java:152) - Using consistent hash for determining the server: /127.0.0.1:11223
                

                 

                It is strange that I have to set this property (infinispan.client.hotrod.ping_on_startup), which, judging by its name, seems to have no connection to choosing between a hash function and a load balancer... Anyway, I have two questions now:

                 

                1. Why does it use ConsistentHashV1 and not ConsistentHashV2, given that

                properties.put("infinispan.client.hotrod.hash_function_impl.2", "org.infinispan.client.hotrod.impl.consistenthash.ConsistentHashV2");
                

                 

                2. If I run with just one server (127.0.0.1:11222) and a few hundred thousand requests (put, get) for each cache, I get roughly 18s runtime. With the same number of requests but 2 servers (127.0.0.1:11222 and 127.0.0.1:11223), I get 20s, and with 4 servers I still get 20s. The client sends the requests in parallel. Shouldn't it be faster with more servers? Now, if I use 8 servers, I get the following exception in the client:

                 

                WARN [Thread-16] (Codec10.java:152) - ISPN004005: Error received from the server: org.infinispan.util.concurrent.TimeoutException: Timed out waiting for 15 seconds for valid responses from any of [Sender{address=e0f77b40-be03-41e9-f71e-b2bc94184e94, responded=false}].
                <function0>: caught org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for message id[137714] returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Timed out waiting for 15 seconds for valid responses from any of [Sender{address=e0f77b40-be03-41e9-f71e-b2bc94184e94, responded=false}].
                org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for message id[137714] returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Timed out waiting for 15 seconds for valid responses from any of [Sender{address=e0f77b40-be03-41e9-f71e-b2bc94184e94, responded=false}].
                

                 

                Is this expected?

                • 5. Re: Round robin vs. consistent hash in distribution mode
                  Dan Berindei Expert

                  Andrei Deftu wrote:

                   

                  It is strange that I have to set this property (infinispan.client.hotrod.ping_on_startup) which from its name seems to have no correlation with using a hash function or a load balancer...

                   

                   

                  This is a bug in the client - I have created https://issues.jboss.org/browse/ISPN-1855.

                   

                  1. Why does it use ConsistentHashV1 and not ConsistentHashV2, given that
                  properties.put("infinispan.client.hotrod.hash_function_impl.2", "org.infinispan.client.hotrod.impl.consistenthash.ConsistentHashV2");
                  

                   

                  The consistent hash version is determined by the server, not the client. The client property you used only configures the CH class to use if the server requests version 2 - and you shouldn't use it unless you also installed a custom CH on the server.

                   

                  The log messages you see on the client are a bit misleading because ConsistentHashV2 extends ConsistentHashV1, so the category name in the logs appears as ConsistentHashV1.
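
                  For readers unfamiliar with how the client maps a key hash to a server, the wheel-style lookup these ConsistentHash classes perform can be sketched in plain Java. This is a simplified illustration built from the positions in the TRACE log earlier in this thread, not the actual Infinispan ConsistentHashV1/V2 code:

```java
import java.util.Map;
import java.util.TreeMap;

// Simplified sketch of a wheel-style consistent-hash lookup: each server sits
// at a position on the hash wheel, and a key is served by the first server at
// or after its hash, wrapping around past the last position.
public class HashWheelSketch {
    private final TreeMap<Integer, String> positions = new TreeMap<>();

    public void addServer(int position, String address) {
        positions.put(position, address);
    }

    public String locate(int keyHash) {
        Map.Entry<Integer, String> entry = positions.ceilingEntry(keyHash);
        if (entry == null) {
            entry = positions.firstEntry(); // wrap around the wheel
        }
        return entry.getValue();
    }

    public static void main(String[] args) {
        HashWheelSketch wheel = new HashWheelSketch();
        // Positions taken from the TRACE output earlier in the thread.
        wheel.addServer(221100605, "/127.0.0.1:11223");
        wheel.addServer(1770528460, "/127.0.0.1:11222");

        System.out.println(wheel.locate(100));           // /127.0.0.1:11223
        System.out.println(wheel.locate(500_000_000));   // /127.0.0.1:11222
        System.out.println(wheel.locate(2_000_000_000)); // wraps: /127.0.0.1:11223
    }
}
```

                  A hash past the last position wraps to the first server, which is what makes the structure a "wheel" rather than a flat range lookup.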

                   

                   

                  2. If I run with just one server (127.0.0.1:11222) and a few hundred thousand requests (put, get) for each cache, I get roughly 18s runtime. With the same number of requests but 2 servers (127.0.0.1:11222 and 127.0.0.1:11223), I get 20s, and with 4 servers I still get 20s. The client sends the requests in parallel. Shouldn't it be faster with more servers? Now, if I use 8 servers, I get the following exception in the client:

                   

                  WARN [Thread-16] (Codec10.java:152) - ISPN004005: Error received from the server: org.infinispan.util.concurrent.TimeoutException: Timed out waiting for 15 seconds for valid responses from any of [Sender{address=e0f77b40-be03-41e9-f71e-b2bc94184e94, responded=false}].
                  <function0>: caught org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for message id[137714] returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Timed out waiting for 15 seconds for valid responses from any of [Sender{address=e0f77b40-be03-41e9-f71e-b2bc94184e94, responded=false}].
                  org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for message id[137714] returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Timed out waiting for 15 seconds for valid responses from any of [Sender{address=e0f77b40-be03-41e9-f71e-b2bc94184e94, responded=false}].
                  

                   

                  Is this expected?

                   

                  It's not unexpected behaviour, in the sense that a timeout like this can happen at any time. But you weren't very specific about your test: how many client machines, how many threads on each machine, how many different keys you're using, or how big the keys and the values are.

                   

                  Attaching a runnable test project here would be even better.

                  1 of 1 people found this helpful
                  • 6. Re: Round robin vs. consistent hash in distribution mode
                    Andrei Deftu Newbie

                    2. If I run with just one server (127.0.0.1:11222) and a few hundred thousand requests (put, get) for each cache, I get roughly 18s runtime. With the same number of requests but 2 servers (127.0.0.1:11222 and 127.0.0.1:11223), I get 20s, and with 4 servers I still get 20s. The client sends the requests in parallel. Shouldn't it be faster with more servers? Now, if I use 8 servers, I get the following exception in the client:

                     

                    WARN [Thread-16] (Codec10.java:152) - ISPN004005: Error received from the server: org.infinispan.util.concurrent.TimeoutException: Timed out waiting for 15 seconds for valid responses from any of [Sender{address=e0f77b40-be03-41e9-f71e-b2bc94184e94, responded=false}].
                    <function0>: caught org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for message id[137714] returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Timed out waiting for 15 seconds for valid responses from any of [Sender{address=e0f77b40-be03-41e9-f71e-b2bc94184e94, responded=false}].
                    org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for message id[137714] returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Timed out waiting for 15 seconds for valid responses from any of [Sender{address=e0f77b40-be03-41e9-f71e-b2bc94184e94, responded=false}].

                     

                    Is this expected?

                     

                    It's not unexpected behaviour, in the sense that a timeout like this can happen at any time. But you weren't very specific about your test: how many client machines, how many threads on each machine, how many different keys you're using, or how big the keys and the values are.

                     

                    Attaching a runnable test project here would be even better.

                     

                    I ran the test (both the servers and the client) on a single machine. The client uses a fork-join strategy drawing threads from an 8-thread pool, but I don't think that matters: I tried running single-threaded and got the same result - the more servers I add, the slower it gets. I also ran the client on one machine and all the servers on another, with the same result. The keys are generated randomly each time and are between 1 and 8 bytes long; the values are 32 bytes long.

                     

                    I cannot attach the project because it's quite complex.

                    • 7. Re: Round robin vs. consistent hash in distribution mode
                      Andrei Deftu Newbie

                      I am posting below a simple benchmark using the default cache in distribution mode. It's written in Scala, but should be quite readable.

                       

                       

                      import java.util.Properties
                      import scala.collection.mutable.ArrayBuffer
                      import scala.util.Random
                      import org.infinispan.client.hotrod.RemoteCacheManager
                      
                      val random = new Random
                      
                      val properties = new Properties
                      properties.put("infinispan.client.hotrod.server_list", "127.0.0.1:11222")
                      properties.put("infinispan.client.hotrod.hash_function_impl.2", "org.infinispan.client.hotrod.impl.consistenthash.ConsistentHashV2")
                      properties.put("infinispan.client.hotrod.ping_on_startup", "false")
                      val cacheManager = new RemoteCacheManager(properties)
                      val cache = cacheManager.getCache[Array[Byte], Array[Byte]]()
                      
                      def put() = {
                        val key = new Array[Byte](32)
                        random.nextBytes(key)
                        val value = new Array[Byte](32)
                        random.nextBytes(value)
                        cache.put(key, value)
                      }
                      
                      def get() = {
                        val key = new Array[Byte](32)
                        random.nextBytes(key)
                        cache.get(key)
                      }
                      
                      
                      val startTimePut = System.nanoTime 
                      val putTasks = new ArrayBuffer[scala.actors.Future[Any]]
                      for (i <- 0 until 100000) putTasks += scala.actors.Futures.future { put() }
                      putTasks.foreach(_())
                      println("[PUT] Elapsed time: %.2f (ms)" format (System.nanoTime - startTimePut)/1000000.0)
                      
                      val startTimeGet = System.nanoTime
                      val getTasks = new ArrayBuffer[scala.actors.Future[Any]]
                      for (i <- 0 until 100000) getTasks += scala.actors.Futures.future { get() }
                      getTasks.foreach(_())
                      println("[GET] Elapsed time: %.2f (ms)" format (System.nanoTime - startTimeGet)/1000000.0)
                      
                      

                       

                      The requests are sent in parallel using Scala's default fork-join actor model, but this is not important because, as I said in an earlier post, the same happens with a single-threaded client. The problem is that when running with one server on the same machine, I get 8.8s for all the put() calls and 3.6s for all the get() calls, while with 2 servers (also on the same machine) I get 12.1s for put() and 3.7s for get(). With more servers it is even slower. It seems strange that performance decreases as the number of servers increases; the same happens when running the servers on a separate machine.

                      • 8. Re: Round robin vs. consistent hash in distribution mode
                        Dan Berindei Expert

                        Thanks Andrei, I have converted your test to Java (https://gist.github.com/1789279) and have partially confirmed your findings, although I did not see the TimeoutException you reported.

                         

                        First off, with only 8 client threads, a single HotRod server is perfectly capable of handling all the client requests simultaneously, so I would not expect better performance with more than one server. Adding another server might be worth it once the number of concurrent requests exceeds 2 x the number of cores, but only if the new server is on a different machine. The only reason I can think of for starting more than one server on the same machine is to keep the individual heaps small and limit the duration of GC pauses.

                         

                        On the other hand there are certain overheads when you run multiple servers on the same machine: more threads, more context switching etc. The consistent hash implementation also has to do more work to search for the owner (obviously) so I would expect the performance to get slightly worse as the number of servers increases.

                         

                        You reported a huge performance drop instead, so I looked deeper and I think I have an explanation: with more servers, each server receives a smaller share of the requests, so it takes longer for the JIT to warm up and compile the HotRod server code. The keys are also not evenly balanced, so in some runs it takes longer to warm up all servers - ISPN-1801 fixes that in 5.1.1.FINAL.
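
                        The imbalance mentioned here is visible in the positions from the earlier TRACE log: with numVirtualNodes=1, each server owns a single arc of the hash wheel, and those arcs can be very unequal. A back-of-envelope check in plain Java, using only the two positions and the hashSpaceSize logged above (this is arithmetic on the logged values, not Infinispan code):

```java
// Estimate each server's share of the hash wheel from the TRACE log values
// (hashSpaceSize=2147483647, positions 221100605 and 1770528460).
public class WheelImbalance {
    public static void main(String[] args) {
        long hashSpace = 2147483647L;  // hashSpaceSize from the topology log
        long pos11223 = 221100605L;    // position of /127.0.0.1:11223
        long pos11222 = 1770528460L;   // position of /127.0.0.1:11222

        // /127.0.0.1:11223 owns the arc after pos11222, wrapping around to
        // its own position; /127.0.0.1:11222 owns the remainder.
        long arc11223 = (hashSpace - pos11222) + pos11223;
        long arc11222 = hashSpace - arc11223;

        System.out.printf("11223 owns %.1f%% of the keys%n", 100.0 * arc11223 / hashSpace);
        System.out.printf("11222 owns %.1f%% of the keys%n", 100.0 * arc11222 / hashSpace);
        // Roughly 27.8% vs 72.2%: with uniformly random keys, one server sees
        // almost three times the load of the other.
    }
}
```

                        With a split like that, the busier server warms up much sooner than the other, which is consistent with the uneven timings above.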

                         

                        I ran the test with 1000000 keys and the times are significantly closer. The only big difference is with 8 servers, but that gap should also get narrower with an even higher number of requests:

                         

                        Running test with 1000000 loops, 8 client threads, 1 servers
                        [PUT] Elapsed time: 50413.07 (ms)
                        [GET] Elapsed time: 29963.79 (ms)
                        
                        Running test with 1000000 loops, 8 client threads, 2 servers
                        [PUT] Elapsed time: 53000.35 (ms)
                        [GET] Elapsed time: 31202.55 (ms)
                        
                        Running test with 1000000 loops, 8 client threads, 4 servers
                        [PUT] Elapsed time: 50426.42 (ms)
                        [GET] Elapsed time: 33072.42 (ms)
                        
                        Running test with 1000000 loops, 8 client threads, 8 servers
                        [PUT] Elapsed time: 52049.07 (ms)
                        [GET] Elapsed time: 46405.90 (ms)
                        
                        • 9. Re: Round robin vs. consistent hash in distribution mode
                          Andrei Deftu Newbie

                          Thanks, Dan. Your answer was very helpful.

                          However, I tested with the client on one machine and the server on another and got:

                           

                          [PUT] Elapsed time: 67486 (ms)
                          [GET] Elapsed time: 59544 (ms)
                          
                          

                           

                          Then, I added another server to the cluster (also on a different machine):

                           

                          [PUT] Elapsed time: 78480 (ms)
                          [GET] Elapsed time: 67866 (ms)
                          

                           

                          How is it possible that the performance decreased?

                           

                          I used the following input for the client:

                          * 'put' operations: 500k

                          * 'get' operations: 500k

                          * key size: 8B

                          * value size: 32B

                           

                          The client is multithreaded, with a worker pool of 4 threads.