5 Replies Latest reply on Oct 23, 2012 5:38 PM by vblagojevic

    InvalidMagicIdException with Infinispan 5.1.6FINAL

    jacob1111

      Hi,

       

      We want to run Infinispan on clustered nodes using TCP only,

      and I am trying to run it using TCPPING.

       

      If I comment out the TCPPING section in the jgroups-tcp.xml file,

      the clustered nodes work and the application works as well.

       

      However, if I use TCPPING like this:

         <TCPPING timeout="3000"

                  initial_hosts="10.100.9.28[7900],10.100.9.9[7900]"

                  port_range="1"

                  num_initial_members="2"

                  ergonomics="false"

              />

       

      the first HotRod server produces an exception right after it starts (before

      the application starts and before the second server starts):

       

      /home/jnikom/Infinispan/dev/TcpRemoteCache_516>runHotRodServer.sh

      2012-10-15 16:44:54,224 ERROR [HotRodDecoder] (HotRodClientMaster-1) ISPN005003: Exception reported

      org.infinispan.server.hotrod.InvalidMagicIdException: Error reading magic byte or message id: 98

              at org.infinispan.server.hotrod.HotRodDecoder.readHeader(HotRodDecoder.scala:62)

              at org.infinispan.server.hotrod.HotRodDecoder.readHeader(HotRodDecoder.scala:46)

              at org.infinispan.server.core.AbstractProtocolDecoder.decodeHeader(AbstractProtocolDecoder.scala:92)

              at org.infinispan.server.core.AbstractProtocolDecoder.decode(AbstractProtocolDecoder.scala:68)

              at org.infinispan.server.core.AbstractProtocolDecoder.decode(AbstractProtocolDecoder.scala:45)

              at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:561)

              at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:450)

              at org.infinispan.server.core.AbstractProtocolDecoder.messageReceived(AbstractProtocolDecoder.scala:369)

              at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)

              at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)

              at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)

              at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)

              at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)

              at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:94)

              at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:372)

              at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:246)

              at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:38)

              at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:102)

              at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)

              at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

              at java.lang.Thread.run(Thread.java:662)

       

      Here is my HotRod server startup script:

      ===============================================================

      cd /home/jnikom/Infinispan/infinispan-5.1.6.FINAL/bin

      startServer.sh \

      --port=7900 \

      --host=10.100.9.28 \

      -r hotrod \

      -c /home/jnikom/Infinispan/dev/TcpRemoteCache_516/cluster.xml \

      -Djava.net.preferIPv4Stack=true \

      -Djgroups.bind_addr=10.100.9.28 \

      -Djgroups.start_port=7900 \

      -Djgroups.tcpiping.initial_hosts=10.100.9.28[7900],10.100.9.9[7900]

      ===============================================================

       

      Here is my Infinispan configuration file cluster.xml:

       

      /home/jnikom/Infinispan/dev/TcpRemoteCache_516>more cluster.xml

      ===============================================================

      <infinispan>

          <global>

              <transport nodeName="Infinispan-Node1" clusterName="infinispan-cluster" >

                  <properties>

                     <property name="configurationFile" value="/home/jnikom/Infinispan/infinispan-5.1.6.FINAL/etc/jgroups-tcp.xml">

                  </property></properties>

              </transport>

          </global>

       

          <default>

            <jmxStatistics enabled="true"> </jmxStatistics>

            <clustering mode="replication">

                    <sync></sync>

            </clustering>

          </default>

       

          <namedCache name="remoteCache_9_9">

       

              <clustering mode="replication">

          

                   <stateTransfer

                      chunkSize="0"

                      fetchInMemoryState="false"

                      timeout="240000">

                   </stateTransfer>

         

                   <sync replTimeout="20000"/>

          

                </clustering>

       

          </namedCache>

       

      </infinispan>

      ===============================================================
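
      (For context, startServer.sh is given this file with the -c option, and the HotRod
      server builds its caches from it. Loading the same file in embedded mode would look
      roughly like the sketch below - illustrative only, not code from my application;
      the path and cache name are the ones above.)

          import org.infinispan.Cache;
          import org.infinispan.manager.DefaultCacheManager;

          public class EmbeddedSketch {
              public static void main(String[] args) throws Exception {
                  // Parses cluster.xml, which in turn points JGroups at jgroups-tcp.xml
                  DefaultCacheManager manager =
                      new DefaultCacheManager("/home/jnikom/Infinispan/dev/TcpRemoteCache_516/cluster.xml");
                  Cache<String, String> cache = manager.getCache("remoteCache_9_9");
                  cache.put("key", "value");
                  manager.stop();
              }
          }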

       

      Here is my Infinispan JGroups configuration file jgroups-tcp.xml:

       

      /home/jnikom/Infinispan/infinispan-5.1.6.FINAL/etc> more jgroups-tcp.xml

      ===============================================================

      <!--

        ~ JBoss, Home of Professional Open Source

        ~ Copyright 2010 Red Hat Inc. and/or its affiliates and other

        ~ contributors as indicated by the @author tags. All rights reserved.

        ~ See the copyright.txt in the distribution for a full listing of

        ~ individual contributors.

        ~

        ~ This is free software; you can redistribute it and/or modify it

        ~ under the terms of the GNU Lesser General Public License as

        ~ published by the Free Software Foundation; either version 2.1 of

        ~ the License, or (at your option) any later version.

        ~

        ~ This software is distributed in the hope that it will be useful,

        ~ but WITHOUT ANY WARRANTY; without even the implied warranty of

        ~ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU

        ~ Lesser General Public License for more details.

        ~

        ~ You should have received a copy of the GNU Lesser General Public

        ~ License along with this software; if not, write to the Free

        ~ Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA

        ~ 02110-1301 USA, or see the FSF site: http://www.fsf.org.

        -->

      <config xmlns="urn:org:jgroups"

              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"

              xsi:schemaLocation="urn:org:jgroups file:schema/JGroups-3.0.xsd">

         <TCP

              bind_addr="${jgroups.tcp.address:127.0.0.1}"

              bind_port="${jgroups.tcp.port:7800}"

              loopback="true"

              port_range="30"

              recv_buf_size="20m"

              send_buf_size="640k"

              discard_incompatible_packets="true"

              max_bundle_size="64000"

              max_bundle_timeout="30"

              enable_bundling="true"

              use_send_queues="true"

              enable_diagnostics="false"

              bundler_type="old"

       

              thread_naming_pattern="pl"

       

              thread_pool.enabled="true"

              thread_pool.min_threads="2"

              thread_pool.max_threads="30"

              thread_pool.keep_alive_time="60000"

              thread_pool.queue_enabled="true"

              thread_pool.queue_max_size="100"

              thread_pool.rejection_policy="Discard"

       

              oob_thread_pool.enabled="true"

              oob_thread_pool.min_threads="2"

              oob_thread_pool.max_threads="30"

              oob_thread_pool.keep_alive_time="60000"

              oob_thread_pool.queue_enabled="false"

              oob_thread_pool.queue_max_size="100"

              oob_thread_pool.rejection_policy="Discard"       

               />

       

         <!-- Ergonomics, new in JGroups 2.11, are disabled by default in TCPPING until JGRP-1253 is resolved -->

         <TCPPING timeout="3000"

                  initial_hosts="10.100.9.28[7900],10.100.9.9[7900]"

                  port_range="1"

                  num_initial_members="2"

                  ergonomics="false"

              />

       

         <!--

         <MPING bind_addr="${jgroups.bind_addr:127.0.0.1}" break_on_coord_rsp="true"

            mcast_addr="${jgroups.mping.mcast_addr:228.2.4.6}"

            mcast_port="${jgroups.mping.mcast_port:43366}"

            ip_ttl="${jgroups.udp.ip_ttl:2}"

            num_initial_members="3"/>

         -->

       

         <MERGE2 max_interval="30000" min_interval="10000"/>

         <FD_SOCK/>

         <FD timeout="3000" max_tries="3"/>

         <VERIFY_SUSPECT timeout="1500"/>

         <pbcast.NAKACK

               use_mcast_xmit="false"

               retransmit_timeout="300,600,1200,2400,4800"

               discard_delivered_msgs="false"/>

         <UNICAST2 timeout="300,600,1200"

                   stable_interval="5000"

                   max_bytes="1m"/>

         <pbcast.STABLE stability_delay="500" desired_avg_gossip="5000" max_bytes="1m"/>

         <pbcast.GMS print_local_addr="false" join_timeout="3000" view_bundling="true"/>

         <UFC max_credits="200k" min_threshold="0.20"/>

         <MFC max_credits="200k" min_threshold="0.20"/>

         <FRAG2 frag_size="60000"/>

         <RSVP timeout="60000" resend_interval="500" ack_on_delivery="false" />

      </config>

      ===============================================================

       

      Why does everything work when I comment out the TCPPING section, but

      fail when TCPPING is left in?

       

      Thank you,

       

      Jacob Nikom

        • 1. Re: InvalidMagicIdException with Infinispan 5.1.6FINAL
          vblagojevic

          Jacob, I would recommend you take the Chat demo from JGroups with this stack and try to get it working there first. It will be easier to troubleshoot!
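
          The demo boils down to something like this (a rough sketch along the lines of the JGroups tutorial, not the exact demo code; the class name is made up and the props path is just your stack file):

              import org.jgroups.JChannel;
              import org.jgroups.Message;
              import org.jgroups.ReceiverAdapter;

              public class ChatCheck extends ReceiverAdapter {

                  // Print whatever arrives from the other members
                  public void receive(Message msg) {
                      System.out.println(msg.getSrc() + ": " + msg.getObject());
                  }

                  public static void main(String[] args) throws Exception {
                      // Point the channel at the same TCPPING stack you use with Infinispan
                      JChannel channel = new JChannel("/home/jnikom/Infinispan/infinispan-5.1.6.FINAL/etc/jgroups-tcp.xml");
                      channel.setReceiver(new ChatCheck());
                      channel.connect("ChatCluster");
                      channel.send(new Message(null, null, "hello from " + channel.getAddress()));
                      Thread.sleep(60000);   // keep the node up long enough to see the other one join
                      channel.close();
                  }
              }

          If the two nodes see each other here, TCPPING discovery itself is fine and you can look at the Infinispan/HotRod side next.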

          • 2. Re: InvalidMagicIdException with Infinispan 5.1.6FINAL
            jacob1111

            Hi Vladimir,

             

            Thank you for your reply.

             

            I successfully implemented the SimpleChat application from the JGroups tutorial.

            It runs well on both nodes.

             

            What should I do now?

             

            How can I use the SimpleChat application to debug my TCPPING problem?

             

            I would also like to tell you a little more about my environment. It is pretty standard:

            JDK 1.6.0_35, CentOS 5.5, JGroups 3.11, ISPN 5.1.6.

             

            Best regards,

             

            Jacob Nikom

            • 3. Re: InvalidMagicIdException with Infinispan 5.1.6FINAL
              vblagojevic

              Jacob,

               

              Well, I was thinking that you would try your Infinispan setup with the JGroups configuration file used in SimpleChat. Have you tried that?

               

              Regards,

              Vladimir

              • 4. Re: InvalidMagicIdException with Infinispan 5.1.6FINAL
                jacob1111

                Hi Vladimir,

                 

                Thank you for your help.

                 

                I did not do it because, to me, the SimpleChat.java client is very different from

                my TwoNodes1RemoteBench.java client.

 

                SimpleChat.java does not have a RemoteCacheManager, it does not have a

                RemoteCache object and, most important, it does not have a HotRod server

                that reads the cluster.xml configuration file, which in turn uses the jgroups-tcp.xml file.
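
                To give you an idea, by RemoteCacheManager/RemoteCache usage I mean something along these lines (a simplified sketch, not my actual benchmark code; the address and cache name are the ones from my setup above):

                    import org.infinispan.client.hotrod.RemoteCache;
                    import org.infinispan.client.hotrod.RemoteCacheManager;

                    public class RemoteClientSketch {
                        public static void main(String[] args) {
                            // Connect to the HotRod endpoint started by runHotRodServer.sh
                            RemoteCacheManager rcm = new RemoteCacheManager("10.100.9.28", 7900);
                            RemoteCache<String, String> cache = rcm.getCache("remoteCache_9_9");
                            cache.put("key", "value");
                            System.out.println(cache.get("key"));
                            rcm.stop();
                        }
                    }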

                 

                The jgroups-3.2.0.CR2.jar file contains multiple XML files, such as:

                .................................

                flush-tcp.xml

                flush-udp.xml

                jg-magic-map.xml

                jg-messages.properties

                jg-messages_de.properties

                jg-protocol-ids.xml

                mping.xml

                relay1.xml

                relay2.xml

                sequencer.xml

                settings.xml

                tcp-nio.xml

                tcp.xml

                tcpgossip.xml

                 

                Which particular one is used for the communication?

                Which one should I replace with my jgroups-tcp.xml file?

                I assume I also have to rename my file to match the one it replaces.

                Is that correct?

                 

                I don't even know whether SimpleChat.java uses the TCP or UDP

                protocol - there is nothing about it in the tutorial.

                 

                The documentation is so limited that I feel like a blind man.

                For example, how can I start SimpleChat with parameters like these:

                 

                --port=7900 \

                --host=10.100.9.28 \

                -r hotrod \

                -c /home/jnikom/Kiva/Downloads/Java/Infinispan/dev/TcpRemoteCache_516/cluster.xml \

                -Djava.net.preferIPv4Stack=true \

                -Djgroups.bind_addr=10.100.9.28 \

                -Djgroups.start_port=7900 \

                -Djgroups.tcpiping.initial_hosts=10.100.9.28[7900],10.100.9.9[7900]

                 

                Best regards,

                 

                Jacob Nikom

                • 5. Re: InvalidMagicIdException with Infinispan 5.1.6FINAL
                  vblagojevic

                  Jacob, apologies, I meant DrawDemo. If I recall correctly you can select the JGroups file by passing the --props parameter. Check the sources of DrawDemo. For all HotRod server questions ping Galder :-)
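
                  Something along these lines should do it (a sketch only; I am assuming the demo class is org.jgroups.demos.Draw and that the flag is spelled -props, so check the demo's usage output if it complains):

                      java -cp jgroups-3.2.0.CR2.jar \
                           -Djava.net.preferIPv4Stack=true \
                           org.jgroups.demos.Draw \
                           -props /home/jnikom/Infinispan/infinispan-5.1.6.FINAL/etc/jgroups-tcp.xml

                  Run it on both machines with the same file and you should see both instances join the same cluster.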

                   

                  Regards,

                  Vladimir