6 Replies Latest reply on May 14, 2007 2:31 PM by jamieqho

    Cache startup problems

    jamieqho

      Hi,

      Does anyone know what the exception below means? I have one particular machine that only seems to work if I start it before any of the other machines in my cluster; startup order seems to matter. If I don't start this machine first, I get the error below.

      Thanks,
      Jamie


      09 May 2007 10:50:12 [main] INFO org.jgroups.protocols.UDP - sockets will use interface 10.133.192.106
      09 May 2007 10:50:12 [main] INFO org.jgroups.protocols.UDP - socket information:
      local_addr=10.133.192.106:3327, mcast_addr=238.10.10.10:45599, bind_addr=/10.133.192.106, ttl=2
      sock: bound to 10.133.192.106:3327, receive buffer size=20000000, send buffer size=640000
      mcast_recv_sock: bound to 10.133.192.106:45599, send buffer size=640000, receive buffer size=25000000
      mcast_send_sock: bound to 10.133.192.106:3328, send buffer size=640000, receive buffer size=25000000

      -------------------------------------------------------
      GMS: address is 10.133.192.106:3327
      -------------------------------------------------------
      09 May 2007 10:50:14 [main] INFO org.jboss.cache.CacheImpl.JBossCache-Cluster - viewAccepted(): [10.133.192.183:1154|26] [10.133.192.183:1154, 10.133.192.170:1167, 10.133.192.106:3327]
      09 May 2007 10:50:15 [main] INFO org.jboss.cache.CacheImpl.JBossCache-Cluster - CacheImpl local address is 10.133.192.106:3327
      09 May 2007 10:50:15 [main] INFO org.jgroups.protocols.pbcast.STATE_TRANSFER - Successful flush at 10.133.192.106:3327
      09 May 2007 10:50:15 [Incoming Thread] INFO org.jboss.cache.statetransfer.StateTransferManager - starting state integration at node UnversionedNode[ / data=[] RL]
      09 May 2007 10:50:15 [Incoming Thread] WARN org.jboss.cache.statetransfer.DefaultStateTransferIntegrator - transient state integration failed, removing all children of UnversionedNode[ / data=[] RL]
      09 May 2007 10:50:15 [Incoming Thread] ERROR org.jboss.cache.marshall.VersionAwareMarshaller - Unable to read version id from first two bytes of stream, barfing.
      09 May 2007 10:50:15 [Incoming Thread] ERROR org.jboss.cache.CacheImpl.JBossCache-Cluster - failed setting state
      java.io.EOFException
      at java.io.DataInputStream.readShort(DataInputStream.java:298)
      at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2750)
      at java.io.ObjectInputStream.readShort(ObjectInputStream.java:928)
      at org.jboss.cache.marshall.VersionAwareMarshaller.objectFromObjectStream(VersionAwareMarshaller.java:223)
      at org.jboss.cache.statetransfer.DefaultStateTransferIntegrator.integrateAssociatedState(DefaultStateTransferIntegrator.java:116)
      at org.jboss.cache.statetransfer.DefaultStateTransferIntegrator.integrateState(DefaultStateTransferIntegrator.java:63)
      at org.jboss.cache.statetransfer.StateTransferManager.setState(StateTransferManager.java:201)
      at org.jboss.cache.statetransfer.StateTransferManager.setState(StateTransferManager.java:152)
      at org.jboss.cache.CacheImpl$MessageListenerAdaptor.setState(CacheImpl.java:3304)
      at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.handleUpEvent(MessageDispatcher.java:636)
      at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:722)
      at org.jgroups.JChannel.up(JChannel.java:991)
      at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:326)
      at org.jgroups.protocols.pbcast.FLUSH.up(FLUSH.java:509)
      at org.jgroups.protocols.pbcast.STATE_TRANSFER.handleStateRsp(STATE_TRANSFER.java:432)
      at org.jgroups.protocols.pbcast.STATE_TRANSFER.up(STATE_TRANSFER.java:132)
      at org.jgroups.protocols.FRAG2.up(FRAG2.java:197)
      at org.jgroups.protocols.pbcast.GMS.up(GMS.java:717)
      at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:226)
      at org.jgroups.protocols.UNICAST.handleDataReceived(UNICAST.java:535)
      at org.jgroups.protocols.UNICAST.up(UNICAST.java:214)
      at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:577)
      at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:154)
      at org.jgroups.protocols.FD.up(FD.java:328)
      at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:301)
      at org.jgroups.protocols.MERGE2.up(MERGE2.java:145)
      at org.jgroups.protocols.Discovery.up(Discovery.java:224)
      at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1541)
      at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1495)
      at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)

        • 1. Re: Cache startup problems
          jamieqho

          I am having better luck if every machine in the cluster runs the exact same version of Java. Does anyone know why?
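
          For reference, a minimal way to confirm what each node is actually running (the class name here is arbitrary):

          // Prints the JVM version properties of the node it runs on.
          public class PrintJavaVersion {
              public static void main(String[] args) {
                  System.out.println("java.version    = " + System.getProperty("java.version"));
                  System.out.println("java.vm.vendor  = " + System.getProperty("java.vm.vendor"));
                  System.out.println("java.vm.version = " + System.getProperty("java.vm.version"));
              }
          }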

          Thanks,
          Jamie

          • 2. Re: Cache startup problems
            genman

            The TTL in your config is set to 2, which AFAIK is either milliseconds or seconds; either way, that's quite short. Are you using an example config, or one you've modified?

            • 3. Re: Cache startup problems
              brian.stansberry

              TTL is the number of network hops a packet can traverse before it is dropped; it is not time-based.
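
              For context on where that number ends up: for IP multicast, the TTL is the hop limit set on the outgoing socket. A standalone illustration using the plain JDK API (not JGroups code):

              // Sets a hop limit of 2 on outgoing multicast packets, matching the "ttl=2" in the log above.
              import java.net.MulticastSocket;

              public class MulticastTtlDemo {
                  public static void main(String[] args) throws Exception {
                      MulticastSocket sock = new MulticastSocket();
                      sock.setTimeToLive(2); // packets are discarded once they have crossed 2 routers
                      System.out.println("ttl = " + sock.getTimeToLive());
                      sock.close();
                  }
              }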

              I expect there is an issue with the way the ObjectOutputStream is encoding messages. What are the Java versions you are using when it fails?
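
              For what the trace itself says: the EOFException comes from the very first readShort(), i.e. the state stream ends before the marshaller can even read its two-byte version id, which suggests an empty or truncated stream rather than a corrupt payload. A minimal sketch of that version-prefixed pattern (an illustration only, not JBoss Cache's actual code):

              import java.io.*;

              public class VersionedStreamDemo {
                  public static void main(String[] args) throws Exception {
                      // Writer side: a short version id followed by the serialized state.
                      ByteArrayOutputStream buf = new ByteArrayOutputStream();
                      ObjectOutputStream out = new ObjectOutputStream(buf);
                      out.writeShort(20); // hypothetical version id
                      out.writeObject("state payload");
                      out.close();

                      // Reader side: version id first, then the state. On an empty or
                      // truncated stream, readShort() throws java.io.EOFException.
                      ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(buf.toByteArray()));
                      short version = in.readShort();
                      Object state = in.readObject();
                      System.out.println("version=" + version + ", state=" + state);
                  }
              }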

              • 4. Re: Cache startup problems
                jamieqho

                I get this exception when using Java 1.6.0. I seem to have better luck with 1.5.0_06, but I am now seeing this new exception intermittently during startup:

                java.lang.IllegalAccessError: tried to access class java.util.AbstractMap$SimpleEntry from class org.jboss.cache.util.MapCopy
                at org.jboss.cache.util.MapCopy.&lt;init&gt;(MapCopy.java:43)
                at org.jboss.cache.UnversionedNode.getDataDirect(UnversionedNode.java:208)
                at org.jboss.cache.CacheImpl._put(CacheImpl.java:2197)
                at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
                at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
                at java.lang.reflect.Method.invoke(Method.java:585)
                at org.jgroups.blocks.MethodCall.invoke(MethodCall.java:330)
                at org.jboss.cache.interceptors.CallInterceptor.invoke(CallInterceptor.java:49)
                at org.jboss.cache.interceptors.Interceptor.invoke(Interceptor.java:75)
                at org.jboss.cache.interceptors.EvictionInterceptor.invoke(EvictionInterceptor.java:88)
                at org.jboss.cache.interceptors.Interceptor.invoke(Interceptor.java:75)
                at org.jboss.cache.interceptors.UnlockInterceptor.invoke(UnlockInterceptor.java:33)

                I am using the example config.

                Thanks,
                Jamie

                • 5. Re: Cache startup problems
                  genman

                  Looks like in 1.6 there is a new public "SimpleEntry" class in AbstractMap. I wrote my own "SimpleEntry" for MapCopy, since 1.5 did not have this exposed.

                  It seems there might be a compatibility issue if JBoss Cache was compiled for 1.6 and later run on 1.5. I assume this is not the case, and instead there's some sort of Java bug.
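
                  To make the failure mode concrete, a minimal sketch (the class name is just for illustration): compiled against the 1.6 class libraries (e.g. a 1.6 javac with -source 1.5 -target 1.5, as a library build might use), the reference below binds to the public java.util.AbstractMap$SimpleEntry; run on a 1.5 JRE, where that class is not public, the JVM throws the same IllegalAccessError shown above.

                  import java.util.AbstractMap;
                  import java.util.Map;

                  public class SimpleEntryDemo {
                      public static void main(String[] args) {
                          // Resolves against the public SimpleEntry that only the 1.6 class libraries expose.
                          Map.Entry<String, Integer> entry =
                                  new AbstractMap.SimpleEntry<String, Integer>("count", 1);
                          System.out.println(entry.getKey() + " = " + entry.getValue());
                      }
                  }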

                  How was your Cache 2.0 built?

                  • 6. Re: Cache startup problems
                    jamieqho

                    Thanks for the help! The JDK version was the issue: the machine that compiled with JDK 1.6 but ran on 1.5 had problems. Everything works fine if I compile and run with 1.5.

                    Thanks,
                    Jamie