15 Replies · Latest reply on Oct 18, 2011 2:54 AM by zvrablik79

    ISPN 5.0.0 hotrod client

    zvrablik79

      Hi,

       

      I have tested Infinispan 4.2.1 and 5.0.0 with the Hot Rod client. I would like to cache larger objects in Infinispan.

       

      I have created a small test program to check Infinispan/Hot Rod performance with an object containing a big array.

      My environment:

      2 laptops with Debian (64-bit)

      Java 6 update 26

      Infinispan 5.0.0

      Steps to reproduce:

      1. Start the Infinispan server (without any modifications, just unzipped): bin/startServer.sh --host=<IPAddress> -r hotrod

      2. Run this code:

      ...

       

      <pre>
        // obtain a handle to the default cache on the remote server
        Cache<String, Car> cache = cacheContainer.getCache();
        Car newCar = new Car(new Timestamp(System.currentTimeMillis()), "zdenek", "ferrari1");

        // ~40M ints at 4 bytes each, i.e. roughly 160 MB of payload
        int size = new Random().nextInt(10) + 40 * 1024 * 1024;
        int[] carData = new int[size];
        for (int i = 0; i < size; i++) {
          carData[i] = i;
        }
        newCar.setCarData(carData);

        long estimate = new MemoryCounter().estimate(newCar);
        System.out.println("NewCar size in MB: " + (estimate * 1.0 / (1024 * 1024)));

        // first get: the cache is expected to be empty on a fresh server
        System.out.println("before get \t" + new Timestamp(System.currentTimeMillis()));
        Car carFromCache = cache.get("car");
        System.out.println("after get \t" + new Timestamp(System.currentTimeMillis()));

        // now add something to the cache and make sure it is there
        if (carFromCache == null || carFromCache.getCarDataLength() != newCar.getCarDataLength()) {
          System.out.println("before put \t" + new Timestamp(System.currentTimeMillis()));
          cache.put("car", newCar);
          System.out.println("after put \t" + new Timestamp(System.currentTimeMillis()));
        }

        // second get: this is the call that fails against a remote server
        System.out.println("before get \t" + new Timestamp(System.currentTimeMillis()));
        Car carFromCache2 = cache.get("car");
        System.out.println("after get \t" + new Timestamp(System.currentTimeMillis()));
        System.out.println("should be car : \t" + carFromCache2);
      </pre>

      ...

       

       

      For details, see the attached project.
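      Since the attached project isn't reproduced inline, here is a minimal sketch of what the value class is assumed to look like; only the constructor shape, setCarData and getCarDataLength are taken from the snippet above, the field names are assumptions, and MemoryCounter is assumed to be a simple reflection-based size estimator:

      <pre>
        // Minimal sketch of the value class (an assumption, not the attached code).
        // It must be Serializable so the Hot Rod client can marshall it.
        public class Car implements java.io.Serializable {
          private final java.sql.Timestamp created;
          private final String owner;
          private final String model;
          private int[] carData;

          public Car(java.sql.Timestamp created, String owner, String model) {
            this.created = created;
            this.owner = owner;
            this.model = model;
          }

          public void setCarData(int[] carData) {
            this.carData = carData;
          }

          public int getCarDataLength() {
            return carData == null ? 0 : carData.length;
          }

          @Override
          public String toString() {
            return "Car{owner=" + owner + ", model=" + model
                + ", carDataLength=" + getCarDataLength() + "}";
          }
        }
      </pre>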

       

      When I run the code locally it works fine, but when I move the server to another machine on the local network I get an exception during the second get operation:   Car carFromCache2 = cache.get("car");

       

      NewCar size in MB: 160.00008392333984

      before get           2011-08-09 13:51:58.119

      after get           2011-08-09 13:51:58.248

      before put           2011-08-09 13:51:58.248

      after put           2011-08-09 13:52:15.798

      before get           2011-08-09 13:52:15.799

      Exception in thread "main" org.infinispan.client.hotrod.exceptions.InvalidResponseException:: Invalid message id. Expected 5 and received 0

                at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:124)

                at org.infinispan.client.hotrod.impl.operations.AbstractKeyOperation.sendKeyOperation(AbstractKeyOperation.java:70)

                at org.infinispan.client.hotrod.impl.operations.GetOperation.executeOperation(GetOperation.java:48)

                at org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation.execute(RetryOnFailureOperation.java:62)

                at org.infinispan.client.hotrod.impl.RemoteCacheImpl.get(RemoteCacheImpl.java:320)

                at org.vrablik.test.infinispan.cluster.hotrod.DistributedDirectoryHotRod.main(DistributedDirectoryHotRod.java:52)

       

      When I run the test code again, the put operation causes the server to throw these exceptions (multiple times):

       

      2011-08-09 15:22:19,447 ERROR (HotRodServerWorker-1-2) [org.infinispan.server.hotrod.HotRodDecoder] ISPN005003: Exception reported

      org.infinispan.server.hotrod.InvalidMagicIdException: Error reading magic byte or message id: 3

              at org.infinispan.server.hotrod.HotRodDecoder.readHeader(HotRodDecoder.scala:63)

              at org.infinispan.server.core.AbstractProtocolDecoder.decodeHeader(AbstractProtocolDecoder.scala:91)

              at org.infinispan.server.core.AbstractProtocolDecoder.decode(AbstractProtocolDecoder.scala:68)

              at org.infinispan.server.core.AbstractProtocolDecoder.decode(AbstractProtocolDecoder.scala:45)

              at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:471)

              at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:444)

              at org.infinispan.server.core.AbstractProtocolDecoder.messageReceived(AbstractProtocolDecoder.scala:347)

              at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:80)

              at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:545)

              at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:540)

              at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:274)

              at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:261)

              at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:350)

              at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:281)

              at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:201)

              at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)

              at org.jboss.netty.util.internal.IoWorkerRunnable.run(IoWorkerRunnable.java:46)

              at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

              at java.lang.Thread.run(Thread.java:662)

       

       

       

      ISPN 4.2.1 works fine but it is very slow.

       

      Have the default settings changed between 4.2.1 and 5.0.0?

      Do I have to change the default server/Hot Rod client configuration in 5.0.0 when the serialized objects are big?

      Is it a bug?

       

      Thank you for any suggestions.


      Regards,
      Zdenek

        • 1. Re: ISPN 5.0.0 hotrod client
          zvrablik79

          The full code:

          <pre>
            int port = 11222;
            if (args.length >= 2 && args[1] != null) {
              port = Integer.parseInt(args[1]);
            }
            System.out.println("server port: " + port);

            // serverIpAddress comes from the program arguments (see the attached project)
            CacheContainer cacheContainer = new RemoteCacheManager(serverIpAddress, port);
            // obtain a handle to the remote default cache
            Cache<String, Car> cache = cacheContainer.getCache();
            Car newCar = new Car(new Timestamp(System.currentTimeMillis()), "zdenek", "ferrari1");

            // ~40M ints at 4 bytes each, i.e. roughly 160 MB of payload
            int size = new Random().nextInt(10) + 40 * 1024 * 1024;
            int[] carData = new int[size];
            for (int i = 0; i < size; i++) {
              carData[i] = i;
            }
            newCar.setCarData(carData);

            long estimate = new MemoryCounter().estimate(newCar);
            System.out.println("NewCar size in MB: " + (estimate * 1.0 / (1024 * 1024)));

            System.out.println("before get \t" + new Timestamp(System.currentTimeMillis()));
            Car carFromCache = cache.get("car");
            System.out.println("after get \t" + new Timestamp(System.currentTimeMillis()));

            // now add something to the cache and make sure it is there
            if (carFromCache == null || carFromCache.getCarDataLength() != newCar.getCarDataLength()) {
              System.out.println("before put \t" + new Timestamp(System.currentTimeMillis()));
              cache.put("car", newCar);
              System.out.println("after put \t" + new Timestamp(System.currentTimeMillis()));
            }

            // the get that fails against a remote server
            System.out.println("before get \t" + new Timestamp(System.currentTimeMillis()));
            Car carFromCache2 = cache.get("car");
            System.out.println("after get \t" + new Timestamp(System.currentTimeMillis()));
            System.out.println("should be car : \t" + carFromCache2);
          </pre>
          • 2. Re: ISPN 5.0.0 hotrod client
            galder.zamarreno

             Zdenek, this sounds like a bug. Could you please repeat the test, starting the server and client from scratch, and enable TRACE logging on the org.infinispan package for both the client and the server? You can use https://github.com/infinispan/infinispan/blob/master/core/src/test/resources/log4j.xml as a base, tweaking it accordingly (i.e. get rid of all categories and have just one for org.infinispan with TRACE).
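             A minimal sketch of that tweak (assuming the appenders and root logger from the linked base file are kept as they are):

             <pre>
               <!-- replace the existing categories in log4j.xml with just this one -->
               <category name="org.infinispan">
                  <priority value="TRACE"/>
               </category>
             </pre>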

            • 3. Re: ISPN 5.0.0 hotrod client
              zvrablik79

              Server and client logs are attached.

               

              infinispanHotRodServer.log.tar.gz and infinispanHotRodClient.log.tar.gz contain the logs from the first run of the client application.

               

               

              I put Thread.sleep(20000) before  Car carFromCache2 = cache.get("car");  and the exception didn't appear until the object size reached 280 MB.

              It seems like the object hasn't been loaded into the cache before the next get request?
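              For reference, the pause was placed roughly like this (a sketch of the change described above; the enclosing method must declare or handle InterruptedException):

              <pre>
                // give the server extra time before asking for the value again
                Thread.sleep(20000);
                Car carFromCache2 = cache.get("car");
              </pre>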

               

               

              I ran the test Hot Rod client application again after it had finished with the "Invalid message id. Expected 5 and received 0" exception.

              It never finished, and I had to kill the Hot Rod server. I have log files, a jstack thread dump and a heap dump, but they are too big to attach here.

              Let me know if you are interested in these files too.


              Regards,
              Zdenek

              • 4. Re: ISPN 5.0.0 hotrod client
                galder.zamarreno

                I suspect what's happening here is that the server is running out of memory when trying to retrieve what you stored, and so it's not able to write the full get response properly. The server logs show:

                 

                2011-08-10 10:39:54,458 TRACE (HotRodServerWorker-1-1) [org.infinispan.server.hotrod.HotRodEncoder$] Encode msg ErrorResponse{messageId=0, operation=ErrorResponse, status=ServerError, msg=java.lang.OutOfMemoryError: Java heap space}

                 

                What memory settings are you giving the server? Are these memory settings the same as the ones you used with 4.2.1?

                 

                It'd be very useful if you could put your heap dump on an FTP server or somewhere like that and post the link here. That way we can inspect it and see if anything odd is in there.

                • 5. Re: ISPN 5.0.0 hotrod client
                  zvrablik79

                  Heap dump and logs are here: http://zdenek.vrablik.org/tmp/ispn/

                   

                  I started the server with default memory settings, for both 4.2.1 and 5.0.0. According to JVisualVM the server could allocate 1 GB of RAM. I didn't insert any data into the server other than the 160 MB object.
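                  As a rough back-of-the-envelope check (a sketch that ignores object headers and serialization overhead), the payload alone is already a sizeable slice of a 1 GB heap once the server also buffers a serialized copy for the response:

                  <pre>
                    // ~40M ints at 4 bytes each
                    long ints  = 40L * 1024 * 1024;
                    long bytes = ints * 4;                        // 167,772,160 bytes
                    System.out.println(bytes / (1024.0 * 1024));  // prints ~160 MB; the stored value plus
                                                                  // the serialized response buffer multiply
                                                                  // this figure before anything is sent back
                  </pre>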

                  • 6. Re: ISPN 5.0.0 hotrod client
                    galder.zamarreno

                    Zdenek, to clarify, what Infinispan version was being tested in the files you uploaded there?

                     

                    And what's the difference between the hotrodserverKilledDuringNextRun.jmap.tar.gz and infinispanHotRodServerKilledDuringNextRun.log.tar.gz files?

                    • 7. Re: ISPN 5.0.0 hotrod client
                      galder.zamarreno

                      Btw, I'm mostly interested in the behaviour of 5.0.0.Final, which is the most recent version.

                      • 8. Re: ISPN 5.0.0 hotrod client
                        zvrablik79

                        To produce all the log and dump files I was using the 5.0.0.Final version.

                        The log file contains all logs from before I created the heap dump.

                        • 9. Re: ISPN 5.0.0 hotrod client
                          galder.zamarreno

                          Zdenek, apologies for the delay. I'm currently investigating the issue. The data you've provided is pretty valuable. I'll get back to you asap.

                          • 10. Re: ISPN 5.0.0 hotrod client
                            zvrablik79

                            Hi Galder,

                            No problem. I am currently using embedded Infinispan; I was just evaluating Hot Rod limits when I ran into this issue.

                            Let me know if you need me to test this issue in a different environment ...

                            • 11. Re: ISPN 5.0.0 hotrod client
                              galder.zamarreno

                              Thanks Zdenek. I've spotted a couple of issues in the dump you attached. See ISPN-1383 for more information.

                              • 12. Re: ISPN 5.0.0 hotrod client
                                galder.zamarreno

                                Zdenek, could you post your configuration?

                                • 13. Re: ISPN 5.0.0 hotrod client
                                  zvrablik79

                                  I am sorry for the long delay. I just forgot to reply.

                                  I was using the default settings shipped with ISPN 5.0.0 and started the server without any other parameters.

                                   

                                  Regards,
                                  Zdenek

                                  • 14. Re: ISPN 5.0.0 hotrod client
                                    galder.zamarreno

                                    OK, the issue's sorted now and the fix will be included in 5.1.0.BETA2, to be released next week.
