7 Replies Latest reply on Oct 19, 2010 5:08 AM by mircea.markus

    Using Infinispan without local cache

    ntsankov

      Our goal is to have an Infinispan cluster of dedicated servers and to use a light client to connect to the cache from our application servers. We want to avoid caching in the application server JVMs, so we don't have to deal with large heaps and long GC pauses.

       

      After some reading, I understand that the recommended way of doing this is to run Hot Rod servers and use the Hot Rod client in the application servers to access the cache. This is a suboptimal solution for us, as we want the option of using AtomicMap for fine-grained replication of our shared state, and Hot Rod doesn't support AtomicMap (yet?).
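
      For reference, this is roughly what the Hot Rod route would look like on the application server side (just a sketch based on my reading of the client API; the host name, port and keys are placeholders, not something we have running):

      // sketch of a Hot Rod client on the app server: nothing is cached in this JVM,
      // all entries live on the dedicated cache servers (host name is a placeholder)
      import org.infinispan.client.hotrod.RemoteCache;
      import org.infinispan.client.hotrod.RemoteCacheManager;

      public class HotRodClientSketch {
          public static void main(String[] args) {
              // connect to one of the dedicated cache servers on the default Hot Rod port
              RemoteCacheManager rcm = new RemoteCacheManager("cacheserver1", 11222);
              RemoteCache<String, String> cache = rcm.getCache();
              cache.put("session-42", "some shared state"); // stored on the grid, not in this heap
              System.out.println(cache.get("session-42"));
              rcm.stop();
          }
      }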

       

      So here is my idea: extend DefaultConsistentHash to "skip" some of the nodes in the cluster when mapping keys to addresses. The "client" nodes get a special nodeName attribute in the <clustering> configuration, and my class removes them from the List<Address> before handing it to DefaultConsistentHash. The client nodes also have L1 caching disabled.
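
      To make that concrete, this is roughly the kind of configuration I have in mind for a "client" node (written from memory, so element and attribute names may not match the schema exactly; the KerriganHash class and the noStorage prefix are my own):

      <!-- sketch of a "client" node config: nodeName marks the node so the custom hash can skip it,
           the custom consistent hash class is plugged in, and L1 is disabled -->
      <global>
         <transport clusterName="demoCluster" nodeName="noStorage-app1"/>
      </global>
      <default>
         <clustering mode="distribution">
            <hash consistentHashClass="com.mb.kerrigan.hash.KerriganHash" numOwners="2"/>
            <l1 enabled="false"/>
         </clustering>
      </default>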

       

      Here is the initial code:

       

      import java.io.Serializable;
      import java.util.ArrayList;
      import java.util.Collections;
      import java.util.List;

      import org.infinispan.distribution.DefaultConsistentHash;
      import org.infinispan.remoting.transport.Address;

      public class KerriganHash extends DefaultConsistentHash implements Serializable {

          private static final String PREFIX_NO_STORAGE = "noStorage";

          // cluster members that must not own any data (the "client" nodes)
          private List<Address> noStorageNodes = Collections.emptyList();

          @Override
          public void setCaches(List<Address> caches) {
              System.out.println("KerriganHash.setCaches() : " + caches);
              List<Address> storageNodes = new ArrayList<Address>(caches.size());
              noStorageNodes = new ArrayList<Address>();
              for (Address address : caches) {
                  if (String.valueOf(address).startsWith(PREFIX_NO_STORAGE)) {
                      noStorageNodes.add(address);
                  } else {
                      storageNodes.add(address);
                  }
              }
              System.out.println("KerriganHash caches after removing: " + storageNodes);
              // only the storage nodes take part in the key -> address mapping
              super.setCaches(storageNodes);
          }

          @Override
          public List<Address> getCaches() { // have to return the full list here or I get errors
              List<Address> allCaches = new ArrayList<Address>(super.getCaches());
              allCaches.addAll(noStorageNodes);
              return allCaches;
          }
      }

       

      This approach somewhat works, but I get a NullPointerException on one of the server nodes when I connect a second client node to the cluster.

       

              at org.infinispan.distribution.DefaultConsistentHash.locate(DefaultConsistentHash.java:66)
              at org.infinispan.commands.control.RehashControlCommand.shouldTransferOwnershipToJoinNode(RehashControlCommand.java:217)
              at org.infinispan.commands.control.RehashControlCommand.pullStateForJoin(RehashControlCommand.java:147)
              at org.infinispan.commands.control.RehashControlCommand.perform(RehashControlCommand.java:127)
              at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:76)
              at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:176)
              at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:148)
              at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:575)

      But if I stick to one client node, it works fine: the client holds 0 entries locally and everything is spread out over the server nodes.

      Any comments, suggestions and ideas will be greatly appreciated! There is probably a better way to do this, but I'm new to Infinispan.

      Thanks

        • 1. Re: Using Infinispan without local cache
          ntsankov

          There seems to be a problem with extending DefaultConsistentHash. I tested with a simple class:

           

          import java.io.Serializable;
          import java.util.List;

          import org.infinispan.distribution.DefaultConsistentHash;
          import org.infinispan.remoting.transport.Address;

          public class ExtendedHash extends DefaultConsistentHash implements Serializable {

              @Override
              public List<Address> locate(Object key, int replCount) {
                  try {
                      return super.locate(key, replCount);
                  } catch (RuntimeException e) {
                      // log and rethrow so the failure is visible on the node where it happens
                      e.printStackTrace();
                      throw e;
                  }
              }
          }

           

          I started a cluster of 3 or 4 nodes (on different ports), and after stopping one of the nodes I get this:

           

           

          2010-10-01 15:34:15,714 ERROR [org.infinispan.remoting.rpc.RpcManagerImpl] (Rehasher-nts-50584) unexpected error while replicating
          java.lang.NullPointerException
                  at org.infinispan.distribution.DefaultConsistentHash.locate(DefaultConsistentHash.java:66)
                  at test.ExtendedHash.locate(ExtendedHash.java:12)
                  at org.infinispan.commands.control.RehashControlCommand.shouldTransferOwnershipFromLeftNodes(RehashControlCommand.java:191)
                  at org.infinispan.commands.control.RehashControlCommand.pullStateForLeave(RehashControlCommand.java:169)
                  at org.infinispan.commands.control.RehashControlCommand.perform(RehashControlCommand.java:129)
                  at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:76)
                  at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:176)
                  at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:148)
                  at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:575)
                  at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:486)
                  at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:362)
                  at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:771)
                  at org.jgroups.JChannel.up(JChannel.java:1453)
                  at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:887)
                  at org.jgroups.protocols.pbcast.FLUSH.up(FLUSH.java:435)
                  at org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER.up(STREAMING_STATE_TRANSFER.java:265)
                  at org.jgroups.protocols.FRAG2.up(FRAG2.java:188)
                  at org.jgroups.protocols.FC.up(FC.java:494)
                  at org.jgroups.protocols.pbcast.GMS.up(GMS.java:888)
                  at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:234)
                  at org.jgroups.protocols.UNICAST.handleDataReceived(UNICAST.java:576)
                  at org.jgroups.protocols.UNICAST.up(UNICAST.java:294)
                  at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:707)
                  at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:132)
                  at org.jgroups.protocols.FD.up(FD.java:266)
                  at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:270)
                  at org.jgroups.protocols.MERGE2.up(MERGE2.java:210)
                  at org.jgroups.protocols.Discovery.up(Discovery.java:281)
                  at org.jgroups.protocols.TP.passMessageUp(TP.java:1009)
                  at org.jgroups.protocols.TP.access$100(TP.java:56)
                  at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1549)
                  at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1531)
                  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
                  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
                  at java.lang.Thread.run(Thread.java:619)

           

           

          I saw that DefaultConsistentHash is not Serializable but is marshalled via an Externalizer class. User code can't adopt the same approach, since the class name has to be registered in ConstantObjectTable.MARSHALLABLES at compile time.

          Please provide some comments: is it possible to extend DefaultConsistentHash without running into these serialization problems?

          • 2. Re: Using Infinispan without local cache
            eboily

            Hi,

             

            I actually have a very similar requirement down the road.

             

            I also have to implement a light client of the grid and it cannot act as an actual caching node.

             

            Looking forward to seeing how this is resolved.

             

            - Edouard

            • 3. Re: Using Infinispan without local cache
              mircea.markus

              Sorry for the late answer. This looks like a bug: mind creating a JIRA with a unit test for this?

              • 4. Re: Using Infinispan without local cache
                galder.zamarreno

                Nikolay, just because DefaultConsistentHash uses an Externalizer, it doesn't mean you can't extend it and provide your own serialization, either by implementing Serializable or Externalizable. The latter is preferred, but you'll have to make sure you write out everything that's needed.
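
                Something along these lines, roughly (an untested sketch; you'd need to verify that the caches list really is all the state the superclass needs, and that a no-arg constructor is available for deserialization):

                import java.io.Externalizable;
                import java.io.IOException;
                import java.io.ObjectInput;
                import java.io.ObjectOutput;
                import java.util.ArrayList;
                import java.util.List;

                import org.infinispan.distribution.DefaultConsistentHash;
                import org.infinispan.remoting.transport.Address;

                // rough sketch: the subclass handles its own serialization via Externalizable,
                // writing out the state it knows about (here, just the list of cache addresses)
                public class MyConsistentHash extends DefaultConsistentHash implements Externalizable {

                    public MyConsistentHash() {
                        // public no-arg constructor required for Externalizable deserialization
                    }

                    @Override
                    public void writeExternal(ObjectOutput out) throws IOException {
                        out.writeObject(new ArrayList<Address>(getCaches()));
                    }

                    @SuppressWarnings("unchecked")
                    @Override
                    public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
                        setCaches((List<Address>) in.readObject());
                    }
                }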

                 

                Regarding Hot Rod and AtomicMap: Hot Rod treats keys and values as byte[], so fine-grained replication is not supported. Treating keys/values as byte[] is what enables interoperability between different environments.

                • 5. Re: Using Infinispan without local cache
                  mircea.markus

                  @Galder - the DefaultConsistentHash subclass in the example is Serializable. It doesn't look like a serialization issue to me - am I wrong?

                  • 6. Re: Using Infinispan without local cache
                    ntsankov

                    The NPE happens because the "addresses" field of DefaultConsistentHash is null. I assumed this is due to deserialization going wrong.

                    • 7. Re: Using Infinispan without local cache
                      mircea.markus

                      "addresses" is local to the node where the call is made, and it seems like it doesn't get initialized there. It's not clear to me why DCH needs to be serializable anyway.