
    ISPN 5.0.0 hotrod client

      Zdenek Henek

      Hi,

       

      I have tested Infinispan 4.2.1 and 5.0.0 with the HotRod client. I would like to cache larger objects in Infinispan.

       

      I have created a small test program to test Infinispan/HotRod performance with an object containing a big array.
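
      The cached object looks roughly like this (simplified here; the real class is in the attached project and the field names are only illustrative):

      <pre>
        // simplified version of the test object; the real class is in the attached project
        // Serializable so the default HotRod marshaller can handle it
        public class Car implements java.io.Serializable {
          private final java.sql.Timestamp created;
          private final String owner;
          private final String model;
          private int[] carData;

          public Car(java.sql.Timestamp created, String owner, String model) {
            this.created = created;
            this.owner = owner;
            this.model = model;
          }

          public void setCarData(int[] carData) { this.carData = carData; }
          public int getCarDataLength() { return carData == null ? 0 : carData.length; }
        }
      </pre>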

      My env:

      2 laptops with Debian (64bit)

      java 6 update 26

      Infinispan 5.0.0

      steps to reproduce:

       

      1. start the Infinispan server (without any modifications, just unzipped): bin/startServer.sh --host=<IPAddress> -r hotrod

      2. run this code:

      ...

       

      <pre>
        Cache<String, Car> cache = cacheContainer.getCache();
        Car newCar = new Car(new Timestamp(System.currentTimeMillis()), "zdenek", "ferrari1");

        // ~40M ints * 4 bytes, i.e. roughly a 160 MB payload
        int size = new Random().nextInt(10) + 40 * 1024 * 1024;
        int[] carData = new int[size];
        for (int i = 0; i < size; i++) {
          carData[i] = i;
        }
        newCar.setCarData(carData);

        long estimate = new MemoryCounter().estimate(newCar);
        System.out.println("NewCar size in MB: " + (estimate * 1.0 / (1024 * 1024)));

        System.out.println("before get \t" + new Timestamp(System.currentTimeMillis()));
        Car carFromCache = cache.get("car");
        System.out.println("after get \t" + new Timestamp(System.currentTimeMillis()));

        // now add something to the cache and make sure it is there
        if (carFromCache == null || carFromCache.getCarDataLength() != newCar.getCarDataLength()) {
          System.out.println("before put \t" + new Timestamp(System.currentTimeMillis()));
          cache.put("car", newCar);
          System.out.println("after put \t" + new Timestamp(System.currentTimeMillis()));
        }

        // the second get is where the remote run fails
        System.out.println("before get \t" + new Timestamp(System.currentTimeMillis()));
        Car carFromCache2 = cache.get("car");
        System.out.println("after get \t" + new Timestamp(System.currentTimeMillis()));
        System.out.println("should be car : \t" + carFromCache2);
      </pre>

      ...

       

       

      For details, see the attached project.
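
      The cacheContainer used above is created roughly like this (a minimal sketch, not the exact code from the attached project; it assumes the default HotRod port 11222):

      <pre>
        import java.util.Properties;
        import org.infinispan.client.hotrod.RemoteCacheManager;

        // point the HotRod client at the remote server; 11222 is the default HotRod port
        Properties props = new Properties();
        props.put("infinispan.client.hotrod.server_list", "<IPAddress>:11222");
        RemoteCacheManager cacheContainer = new RemoteCacheManager(props);
      </pre>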

       

      When I run the code locally it works fine, but when I move the server to another machine on the local network I get an exception during the second get operation:   Car carFromCache2 = cache.get("car");

       

      NewCar size in MB: 160.00008392333984

      before get           2011-08-09 13:51:58.119

      after get           2011-08-09 13:51:58.248

      before put           2011-08-09 13:51:58.248

      after put           2011-08-09 13:52:15.798

      before get           2011-08-09 13:52:15.799

      Exception in thread "main" org.infinispan.client.hotrod.exceptions.InvalidResponseException:: Invalid message id. Expected 5 and received 0

                at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:124)

                at org.infinispan.client.hotrod.impl.operations.AbstractKeyOperation.sendKeyOperation(AbstractKeyOperation.java:70)

                at org.infinispan.client.hotrod.impl.operations.GetOperation.executeOperation(GetOperation.java:48)

                at org.infinispan.client.hotrod.impl.operations.RetryOnFailureOperation.execute(RetryOnFailureOperation.java:62)

                at org.infinispan.client.hotrod.impl.RemoteCacheImpl.get(RemoteCacheImpl.java:320)

                at org.vrablik.test.infinispan.cluster.hotrod.DistributedDirectoryHotRod.main(DistributedDirectoryHotRod.java:52)

       

      When I run the test code again, the put operation causes the server to throw these exceptions (multiple times):

       

      2011-08-09 15:22:19,447 ERROR (HotRodServerWorker-1-2) [org.infinispan.server.hotrod.HotRodDecoder] ISPN005003: Exception reported

      org.infinispan.server.hotrod.InvalidMagicIdException: Error reading magic byte or message id: 3

              at org.infinispan.server.hotrod.HotRodDecoder.readHeader(HotRodDecoder.scala:63)

              at org.infinispan.server.core.AbstractProtocolDecoder.decodeHeader(AbstractProtocolDecoder.scala:91)

              at org.infinispan.server.core.AbstractProtocolDecoder.decode(AbstractProtocolDecoder.scala:68)

              at org.infinispan.server.core.AbstractProtocolDecoder.decode(AbstractProtocolDecoder.scala:45)

              at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:471)

              at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:444)

              at org.infinispan.server.core.AbstractProtocolDecoder.messageReceived(AbstractProtocolDecoder.scala:347)

              at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:80)

              at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:545)

              at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:540)

              at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:274)

              at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:261)

              at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:350)

              at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:281)

              at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:201)

              at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)

              at org.jboss.netty.util.internal.IoWorkerRunnable.run(IoWorkerRunnable.java:46)

              at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

              at java.lang.Thread.run(Thread.java:662)

       

       

       

      ISPN 4.2.1 works fine, but it is very slow.

       

      Have the default settings changed between 4.2.1 and 5.0.0?

      Do I have to change the default server/HotRod client configuration in 5.0.0 when the serialized objects are big?

      Is it a bug?
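
      If it is only a matter of tuning, I am guessing something along these lines would be needed on the client side? (The socket_timeout value below is just my guess; I have not verified that it helps.)

      <pre>
        // guessed client tuning for large values -- not verified
        Properties props = new Properties();
        props.put("infinispan.client.hotrod.server_list", "<IPAddress>:11222");
        // default socket timeout appears to be 60000 ms; a ~160 MB put/get can take longer
        props.put("infinispan.client.hotrod.socket_timeout", "300000");
        RemoteCacheManager cacheContainer = new RemoteCacheManager(props);
      </pre>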

       

      Thank you for any suggestions.


      Regards,
      Zdenek
