6 Replies Latest reply on Dec 5, 2013 9:29 PM by Justin Bertram

    Warning message in the log if shutting down one clustered server

    mike just Master

      I have configured two clustered servers on one machine. This error appears in the second server's log when the first one is shut down.

       

      2013-12-02 15:43:06,792;[Thread-0 (HornetQ-client-global-threads-2087096917)];WARN ;org.hornetq.core.server;HQ222141: Connection failed with failedOver=false: HornetQException[errorType=INTERNAL_ERROR message=HQ119015: Exception in Netty transport]
        at org.hornetq.core.remoting.impl.netty.HornetQChannelHandler.exceptionCaught(HornetQChannelHandler.java:107)
        at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:130)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:787)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.exceptionCaught(SimpleChannelUpstreamHandler.java:153)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:112)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:555)
        at org.jboss.netty.channel.Channels.fireExceptionCaught(Channels.java:525)
        at org.jboss.netty.channel.socket.oio.AbstractOioWorker.run(AbstractOioWorker.java:77)
        at org.jboss.netty.channel.socket.oio.OioWorker.run(OioWorker.java:51)
        at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
        at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
        at org.jboss.netty.util.VirtualExecutorService$ChildExecutorRunnable.run(VirtualExecutorService.java:175)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
        at java.lang.Thread.run(Thread.java:619)
      Caused by: java.net.SocketException: Connection reset
        at java.net.SocketInputStream.read(SocketInputStream.java:168)
        at java.net.SocketInputStream.read(SocketInputStream.java:182)
        at java.io.FilterInputStream.read(FilterInputStream.java:66)
        at java.io.PushbackInputStream.read(PushbackInputStream.java:122)
        at org.jboss.netty.channel.socket.oio.OioWorker.process(OioWorker.java:64)
        at org.jboss.netty.channel.socket.oio.AbstractOioWorker.run(AbstractOioWorker.java:73)
        at org.jboss.netty.channel.socket.oio.OioWorker.run(OioWorker.java:51)
        ... 4 more
      
      

       

      What is this problem related to? How can I eliminate this kind of warning message?

        • 1. Re: Warning message in the log if shutting down one clustered server
          Justin Bertram Master

          This looks like the cluster bridge between the nodes going down.
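
          For context, a hedged sketch (not from this thread): when one node is shut down, the surviving node's cluster bridge sees its Netty socket reset and logs this WARN before retrying, so on a clean shutdown the message is usually benign. Depending on your HornetQ version, the bridge's retry behavior can be tuned on the cluster connection, for example (hypothetical values; verify the element names against your version's documentation):

          <cluster-connections>
              <cluster-connection name="my-cluster">
                  <address>jms</address>
                  <connector-ref>netty</connector-ref>
                  <!-- how long to wait between reconnect attempts, in ms -->
                  <retry-interval>500</retry-interval>
                  <!-- -1 means retry forever; a small number fails fast -->
                  <reconnect-attempts>5</reconnect-attempts>
                  <discovery-group-ref discovery-group-name="dg-group1"/>
              </cluster-connection>
          </cluster-connections>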

          • 3. Re: Re: Warning message in the log if shutting down one clustered server
            mike just Master

             I have checked the documentation and found that the related configuration is already in standalone-full-ha.xml, as shown below. But the error still appears when shutting down one server. Does anybody have experience with this?

             Below is the cluster configuration under hornetq-server in standalone-full-ha.xml. Most of it is the default from standalone-full-ha.xml.

             The clusteruser/Password1@c credentials were created by add-user.bat. Is anything wrong?

             

            <clustered>true</clustered>
            <cluster-user>clusteruser</cluster-user>
            <cluster-password>Password1@c</cluster-password>
            <failover-on-shutdown>true</failover-on-shutdown>
            <shared-store>true</shared-store>
            <persistence-enabled>false</persistence-enabled>
            <security-enabled>false</security-enabled>
            <journal-type>NIO</journal-type>
            <journal-file-size>10485760</journal-file-size>
            <journal-min-files>10</journal-min-files>
            <journal-sync-transactional>false</journal-sync-transactional>
            <journal-sync-non-transactional>false</journal-sync-non-transactional>
            <connectors>
                <netty-connector name="netty" socket-binding="messaging"/>
                <netty-connector name="netty-throughput" socket-binding="messaging-throughput">
                    <param key="batch-delay" value="50"/>
                </netty-connector>
                <in-vm-connector name="in-vm" server-id="0"/>
            </connectors>
            <acceptors>
                <netty-acceptor name="netty" socket-binding="messaging"/>
                <netty-acceptor name="netty-throughput" socket-binding="messaging-throughput">
                    <param key="batch-delay" value="50"/>
                    <param key="direct-deliver" value="false"/>
                </netty-acceptor>
                <in-vm-acceptor name="in-vm" server-id="0"/>
            </acceptors>
            <broadcast-groups>
                <broadcast-group name="bg-group1">
                    <socket-binding>messaging-group</socket-binding>
                    <broadcast-period>5000</broadcast-period>
                    <connector-ref>netty</connector-ref>
                </broadcast-group>
            </broadcast-groups>
            <discovery-groups>
                <discovery-group name="dg-group1">
                    <socket-binding>messaging-group</socket-binding>
                    <refresh-timeout>10000</refresh-timeout>
                </discovery-group>
            </discovery-groups>
            <cluster-connections>
                <cluster-connection name="my-cluster">
                    <address>jms</address>
                    <connector-ref>netty</connector-ref>
                    <discovery-group-ref discovery-group-name="dg-group1"/>
                </cluster-connection>
            </cluster-connections>
            <security-settings>
                <security-setting match="#">
                    <permission type="send" roles="guest"/>
                    <permission type="consume" roles="guest"/>
                    <permission type="createNonDurableQueue" roles="guest"/>
                    <permission type="deleteNonDurableQueue" roles="guest"/>
                </security-setting>
            </security-settings>
            
            • 4. Re: Re: Warning message in the log if shutting down one clustered server
              Justin Bertram Master

              The config looks fine from a clustering perspective.  Have you noticed any functional impact from this WARN message or can you simply ignore it?

              • 5. Re: Re: Warning message in the log if shutting down one clustered server
                mike just Master

                 That is not what I expected. I think that when one server shuts down, JMS requests should be forwarded to the other server and continue to be handled, but they are not. So I am looking for more instructions on how to configure HornetQ. I have followed the HornetQ user guide but have not been able to get it working so far, as I asked in another thread.

                Can not forward request to another clustered server

                • 6. Re: Re: Warning message in the log if shutting down one clustered server
                  Justin Bertram Master

                  I think that when one server shuts down, JMS requests should be forwarded to the other server and continue to be handled, but they are not.

                  As I attempted to explain on that other thread, HornetQ clustering doesn't provide that functionality, but HornetQ HA does.  You need to configure it properly for the behavior you expect.
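
                  For readers finding this later, a minimal shared-store live/backup sketch, assuming the element names described in the HornetQ HA documentation (verify them against your AS/EAP version). Note that shared-store failover relies on the persistent journal, so <persistence-enabled>false</persistence-enabled> as in the config above would work against it. On the backup node's hornetq-server:

                  <backup>true</backup>
                  <shared-store>true</shared-store>
                  <failover-on-shutdown>true</failover-on-shutdown>
                  <!-- both nodes must point at the same shared journal directories -->
                  <paging-directory path="/shared/paging"/>
                  <bindings-directory path="/shared/bindings"/>
                  <journal-directory path="/shared/journal"/>
                  <large-messages-directory path="/shared/large-messages"/>

                  Clients also need an HA-aware connection factory so they fail over to the backup, e.g. <ha>true</ha> together with <retry-interval> and <reconnect-attempts> on the connection factory.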