
    Automatic Client Connection Failover with NettyServlet (HTTPS servlet based connector)

    skidvd

      I am using HornetQ v2.2.14.Final with JBoss 6.1.0.Final (I upgraded the default HornetQ install of that environment).

      I have defined a NettyServlet (HTTPS servlet based) connector to access HornetQ via HTTPS due to firewall restrictions.  I have tested and verified that this connector is functioning properly between a single server and client.

       

      My problem is related to clustering and automatic client failover.  We need the client connections that are connected via the HTTPS based NettyServlet connector to automatically fail over when one of the servers in the cluster fails.  Unfortunately, this does not appear to be working as I understand it from reading the associated chapters of the HornetQ User Manual: 24. Client Reconnection and Session Reattachment, 38. Clusters, and 39. High Availability and Failover.

       

      Here is the hornetq-configuration file for server 1:

       

       

      <!--
        ~ Copyright 2009 Red Hat, Inc.
        ~  Red Hat licenses this file to you under the Apache License, version
        ~  2.0 (the "License"); you may not use this file except in compliance
        ~  with the License.  You may obtain a copy of the License at
        ~     http://www.apache.org/licenses/LICENSE-2.0
        ~  Unless required by applicable law or agreed to in writing, software
        ~  distributed under the License is distributed on an "AS IS" BASIS,
        ~  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
        ~  implied.  See the License for the specific language governing
        ~  permissions and limitations under the License.
        -->
      
      <configuration xmlns="urn:hornetq"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xsi:schemaLocation="urn:hornetq /schema/hornetq-configuration.xsd">
         <clustered>true</clustered>
         <cluster-user>user_O</cluster-user>
         <cluster-password>password_O</cluster-password>
      
      
         <failover-on-shutdown>true</failover-on-shutdown>
      
         <!--  Don't change this name.
               This is used by the dependency framework on the deployers,
               to make sure this deployment is done before any other deployment -->
         <name>HornetQ.main.config</name>
      
         <log-delegate-factory-class-name>org.hornetq.integration.logging.Log4jLogDelegateFactory</log-delegate-factory-class-name>
      
         <bindings-directory>${jboss.server.data.dir}/hornetq/bindings</bindings-directory>
      
         <journal-directory>${jboss.server.data.dir}/hornetq/journal</journal-directory>
      
         <!-- Default journal file size is set to 1Mb for faster first boot -->
         <journal-file-size>${hornetq.journal.file.size:1048576}</journal-file-size>
      
         <!-- Default journal min file is 2, increase for higher average msg rates -->
         <journal-min-files>${hornetq.journal.min.files:2}</journal-min-files> 
      
      
         <large-messages-directory>${jboss.server.data.dir}/hornetq/largemessages</large-messages-directory>
      
         <paging-directory>${jboss.server.data.dir}/hornetq/paging</paging-directory>
      
         <connectors>
            <connector name="netty">
               <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
               <param key="host"  value="${jboss.bind.address:localhost}"/>
               <param key="port"  value="${hornetq.remoting.netty.port:5445}"/>
            </connector>
      
            <connector name="netty-throughput">
               <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
               <param key="host"  value="${jboss.bind.address:localhost}"/>
               <param key="port"  value="${hornetq.remoting.netty.batch.port:5455}"/>
               <param key="batch-delay" value="50"/>
            </connector>
      
            <connector name="netty-servlet">
                <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
                <param key="host" value="${jboss.bind.address:127.0.0.1}"/>
                <param key="port" value="443"/>
                <param key="use-servlet" value="true"/>
                <param key="servlet-path" value="/NettyServlet/HornetQServlet"/>
                <param key="ssl-enabled" value="true"/>
                <param key="key-store-path" value="s01.jks"/>
                <param key="key-store-password" value="passwd"/>
            </connector>
      
            <connector name="in-vm">
               <factory-class>org.hornetq.core.remoting.impl.invm.InVMConnectorFactory</factory-class>
               <param key="server-id" value="${hornetq.server-id:0}"/>
            </connector>
      
         </connectors>
      
         <acceptors>   
            <acceptor name="netty">
               <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
               <param key="host"  value="${jboss.bind.address:localhost}"/>
               <param key="port"  value="${hornetq.remoting.netty.port:5445}"/>
            </acceptor>
      
            <acceptor name="netty-throughput">
               <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
               <param key="host"  value="${jboss.bind.address:localhost}"/>
               <param key="port"  value="${hornetq.remoting.netty.batch.port:5455}"/>
               <param key="batch-delay" value="50"/>
               <param key="direct-deliver" value="false"/>
            </acceptor>
      
            <acceptor name="netty-invm">
               <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
               <param key="use-invm" value="true"/>
               <param key="host" value="org.hornetq"/>
            </acceptor>                    
      
            <acceptor name="in-vm">
              <factory-class>org.hornetq.core.remoting.impl.invm.InVMAcceptorFactory</factory-class>
              <param key="server-id" value="0"/>
            </acceptor>
      
         </acceptors>
      
         <!-- Clustering configuration -->
         <broadcast-groups>
            <broadcast-group name="O-broadcast-group">
               <local-bind-address>192.168.1.1</local-bind-address>
               <local-bind-port>5432</local-bind-port>
               <group-address>231.7.7.7</group-address>
               <group-port>9876</group-port>
               <broadcast-period>100</broadcast-period>
               <connector-ref>netty</connector-ref>
            </broadcast-group>
         </broadcast-groups>
      
         <discovery-groups>
            <discovery-group name="O-discovery-group">
               <local-bind-address>192.168.1.1</local-bind-address>
               <group-address>231.7.7.7</group-address>
               <group-port>9876</group-port>
               <refresh-timeout>10000</refresh-timeout>
            </discovery-group>
         </discovery-groups>
      
         <cluster-connections>
            <cluster-connection name="O-cluster">
               <address>jms</address>
               <connector-ref>netty</connector-ref>
               <retry-interval>500</retry-interval>
               <use-duplicate-detection>true</use-duplicate-detection>
               <forward-when-no-consumers>true</forward-when-no-consumers>
               <max-hops>1</max-hops>
               <discovery-group-ref discovery-group-name="O-discovery-group"/>
            </cluster-connection>
         </cluster-connections>
      
         <security-settings>
            <!--
            <security-setting match="#">
               <permission type="createNonDurableQueue" roles="guest"/>
               <permission type="deleteNonDurableQueue" roles="guest"/>
               <permission type="consume" roles="guest"/>
               <permission type="send" roles="guest"/>
            </security-setting>
            -->
      
            <security-setting match="#">
               <permission type="createNonDurableQueue" roles="p1"/>
               <permission type="deleteNonDurableQueue" roles="p1"/>
               <permission type="consume" roles="p1"/>
               <permission type="send" roles="p1"/>
               <permission type="manage" roles="p1"/>
            </security-setting>
         </security-settings>
      
         <address-settings>
            <!--default for catch all-->
            <address-setting match="#">
               <dead-letter-address>jms.queue.DLQ</dead-letter-address>
               <expiry-address>jms.queue.ExpiryQueue</expiry-address>
               <redelivery-delay>0</redelivery-delay>
               <max-size-bytes>10485760</max-size-bytes>       
               <message-counter-history-day-limit>10</message-counter-history-day-limit>
               <address-full-policy>BLOCK</address-full-policy>
            </address-setting>
         </address-settings>
      
      </configuration>
      

       

       

      Here is the hornetq-configuration file for server 2:

       

       

      <!--
        ~ Copyright 2009 Red Hat, Inc.
        ~  Red Hat licenses this file to you under the Apache License, version
        ~  2.0 (the "License"); you may not use this file except in compliance
        ~  with the License.  You may obtain a copy of the License at
        ~     http://www.apache.org/licenses/LICENSE-2.0
        ~  Unless required by applicable law or agreed to in writing, software
        ~  distributed under the License is distributed on an "AS IS" BASIS,
        ~  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
        ~  implied.  See the License for the specific language governing
        ~  permissions and limitations under the License.
        -->
      
      <configuration xmlns="urn:hornetq"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xsi:schemaLocation="urn:hornetq /schema/hornetq-configuration.xsd">
         <clustered>true</clustered>
         <cluster-user>user_O</cluster-user>
         <cluster-password>password_O</cluster-password>
      
         <backup>true</backup>
         <failover-on-shutdown>true</failover-on-shutdown>
      
         <!--  Don't change this name.
               This is used by the dependency framework on the deployers,
               to make sure this deployment is done before any other deployment -->
         <name>HornetQ.main.config</name>
      
         <log-delegate-factory-class-name>org.hornetq.integration.logging.Log4jLogDelegateFactory</log-delegate-factory-class-name>
      
         <bindings-directory>${jboss.server.data.dir}/hornetq/bindings</bindings-directory>
      
         <journal-directory>${jboss.server.data.dir}/hornetq/journal</journal-directory>
      
         <!-- Default journal file size is set to 1Mb for faster first boot -->
         <journal-file-size>${hornetq.journal.file.size:1048576}</journal-file-size>
      
         <!-- Default journal min file is 2, increase for higher average msg rates -->
         <journal-min-files>${hornetq.journal.min.files:2}</journal-min-files> 
      
      
         <large-messages-directory>${jboss.server.data.dir}/hornetq/largemessages</large-messages-directory>
      
         <paging-directory>${jboss.server.data.dir}/hornetq/paging</paging-directory>
      
         <connectors>
            <connector name="netty">
               <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
               <param key="host"  value="${jboss.bind.address:localhost}"/>
               <param key="port"  value="${hornetq.remoting.netty.port:5445}"/>
            </connector>
      
            <connector name="netty-throughput">
               <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
               <param key="host"  value="${jboss.bind.address:localhost}"/>
               <param key="port"  value="${hornetq.remoting.netty.batch.port:5455}"/>
               <param key="batch-delay" value="50"/>
            </connector>
      
            <connector name="netty-servlet">
                <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
                <param key="host" value="${jboss.bind.address:127.0.0.1}"/>
                <param key="port" value="443"/>
                <param key="use-servlet" value="true"/>
                <param key="servlet-path" value="/NettyServlet/HornetQServlet"/>
                <param key="ssl-enabled" value="true"/>
                <param key="key-store-path" value="s02.jks"/>
                <param key="key-store-password" value="passwd"/>
            </connector>
      
            <connector name="in-vm">
               <factory-class>org.hornetq.core.remoting.impl.invm.InVMConnectorFactory</factory-class>
               <param key="server-id" value="${hornetq.server-id:0}"/>
            </connector>
      
         </connectors>
      
         <acceptors>   
            <acceptor name="netty">
               <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
               <param key="host"  value="${jboss.bind.address:localhost}"/>
               <param key="port"  value="${hornetq.remoting.netty.port:5445}"/>
            </acceptor>
      
            <acceptor name="netty-throughput">
               <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
               <param key="host"  value="${jboss.bind.address:localhost}"/>
               <param key="port"  value="${hornetq.remoting.netty.batch.port:5455}"/>
               <param key="batch-delay" value="50"/>
               <param key="direct-deliver" value="false"/>
            </acceptor>
      
            <acceptor name="netty-invm">
               <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
               <param key="use-invm" value="true"/>
               <param key="host" value="org.hornetq"/>
            </acceptor>                    
      
            <acceptor name="in-vm">
              <factory-class>org.hornetq.core.remoting.impl.invm.InVMAcceptorFactory</factory-class>
              <param key="server-id" value="0"/>
            </acceptor>
      
         </acceptors>
      
         <!-- Clustering configuration -->
         <broadcast-groups>
            <broadcast-group name="O-broadcast-group">
               <local-bind-address>192.168.1.2</local-bind-address>
               <local-bind-port>5432</local-bind-port>
               <group-address>231.7.7.7</group-address>
               <group-port>9876</group-port>
               <broadcast-period>100</broadcast-period>
               <connector-ref>netty</connector-ref>
            </broadcast-group>
         </broadcast-groups>
      
         <discovery-groups>
            <discovery-group name="O-discovery-group">
               <local-bind-address>192.168.1.2</local-bind-address>
               <group-address>231.7.7.7</group-address>
               <group-port>9876</group-port>
               <refresh-timeout>10000</refresh-timeout>
            </discovery-group>
         </discovery-groups>
      
         <cluster-connections>
            <cluster-connection name="O-cluster">
               <address>jms</address>
               <connector-ref>netty</connector-ref>
               <retry-interval>500</retry-interval>
               <use-duplicate-detection>true</use-duplicate-detection>
               <forward-when-no-consumers>true</forward-when-no-consumers>
               <max-hops>1</max-hops>
               <discovery-group-ref discovery-group-name="O-discovery-group"/>
            </cluster-connection>
         </cluster-connections>
      
         <security-settings>
            <!--
            <security-setting match="#">
               <permission type="createNonDurableQueue" roles="guest"/>
               <permission type="deleteNonDurableQueue" roles="guest"/>
               <permission type="consume" roles="guest"/>
               <permission type="send" roles="guest"/>
            </security-setting>
            -->
      
            <security-setting match="#">
               <permission type="createNonDurableQueue" roles="p1"/>
               <permission type="deleteNonDurableQueue" roles="p1"/>
               <permission type="consume" roles="p1"/>
               <permission type="send" roles="p1"/>
               <permission type="manage" roles="p1"/>
            </security-setting>
         </security-settings>
      
         <address-settings>
            <!--default for catch all-->
            <address-setting match="#">
               <dead-letter-address>jms.queue.DLQ</dead-letter-address>
               <expiry-address>jms.queue.ExpiryQueue</expiry-address>
               <redelivery-delay>0</redelivery-delay>
               <max-size-bytes>10485760</max-size-bytes>       
               <message-counter-history-day-limit>10</message-counter-history-day-limit>
               <address-full-policy>BLOCK</address-full-policy>
            </address-setting>
         </address-settings>
      
      </configuration>
      

       

      Here is how I establish the connection in the client code.  Note that I am using the core API, as JNDI is not available across the firewall:

       

       

      // Imports required by this snippet:
      import java.util.HashMap;
      
      import javax.jms.Connection;
      import javax.jms.MessageConsumer;
      import javax.jms.Session;
      import javax.jms.Topic;
      
      import org.hornetq.api.core.TransportConfiguration;
      import org.hornetq.api.jms.HornetQJMSClient;
      import org.hornetq.api.jms.JMSFactoryType;
      import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;
      import org.hornetq.core.remoting.impl.netty.TransportConstants;
      import org.hornetq.jms.client.HornetQConnectionFactory;
      
      ......
      
      // Transport configuration for server 1 (HTTPS-tunneled Netty servlet connector)
      HashMap<String, Object> connectionParams1 = new HashMap<String, Object>();
      connectionParams1.put( TransportConstants.HOST_PROP_NAME, HOST_NAME );
      connectionParams1.put( TransportConstants.PORT_PROP_NAME, 443 );
      connectionParams1.put( TransportConstants.USE_SERVLET_PROP_NAME, true );
      connectionParams1.put( TransportConstants.SERVLET_PATH, "/NettyServlet/HornetQServlet" );
      connectionParams1.put( TransportConstants.SSL_ENABLED_PROP_NAME, true );
      connectionParams1.put( TransportConstants.KEYSTORE_PATH_PROP_NAME, "s01.jks" );
      connectionParams1.put( TransportConstants.KEYSTORE_PASSWORD_PROP_NAME, "passwd" );
      
      TransportConfiguration transportConfiguration1 = new TransportConfiguration( NettyConnectorFactory.class.getName(), connectionParams1 );
      
      // Transport configuration for server 2
      HashMap<String, Object> connectionParams2 = new HashMap<String, Object>();
      connectionParams2.put( TransportConstants.HOST_PROP_NAME, HOST_NAME_2 );
      connectionParams2.put( TransportConstants.PORT_PROP_NAME, 443 );
      connectionParams2.put( TransportConstants.USE_SERVLET_PROP_NAME, true );
      connectionParams2.put( TransportConstants.SERVLET_PATH, "/NettyServlet/HornetQServlet" );
      connectionParams2.put( TransportConstants.SSL_ENABLED_PROP_NAME, true );
      connectionParams2.put( TransportConstants.KEYSTORE_PATH_PROP_NAME, "s02.jks" );
      connectionParams2.put( TransportConstants.KEYSTORE_PASSWORD_PROP_NAME, "passwd" );
      
      TransportConfiguration transportConfiguration2 = new TransportConfiguration( NettyConnectorFactory.class.getName(), connectionParams2 );
      
      // HA connection factory seeded with both servers; cluster topology updates
      // are expected to arrive once the initial connection is made
      HornetQConnectionFactory connectionFactory = HornetQJMSClient.createConnectionFactoryWithHA( JMSFactoryType.CF, transportConfiguration1, transportConfiguration2 );
      connectionFactory.setClientFailureCheckPeriod( 2500 );
      connectionFactory.setRetryInterval( 3000 );
      connectionFactory.setReconnectAttempts( 5 );
      connectionFactory.setFailoverOnInitialConnection( true );
      System.out.println( "isHA: " + connectionFactory.isHA() );   // this returns true in the client output
      
      Connection connection = connectionFactory.createConnection( "user", "pswd" );
      connection.setExceptionListener( this );
      
      ......
      
      Session session = connection.createSession( false, Session.AUTO_ACKNOWLEDGE );
      
      ......
      
      Topic topic = HornetQJMSClient.createTopic( TOPIC_NAME );
      
      MessageConsumer subscriber = session.createConsumer( topic );
      subscriber.setMessageListener( this );
      
      connection.start();
      
      
      
      
      

       

      Both servers start successfully.  I proceed to start publication on the servers and then bring up my test client (a subscriber).  The client successfully sees and receives the publication traffic.  I then proceed to kill one of the servers in order to test the automatic client connection failover.  However, the connection does not fail over and the client never receives any further publications (despite the fact that the server has successfully failed over to the other server - the two servers are also JBoss clustered and the surviving server is the new master).

      Here is the error message I receive after the configured retry attempts on the client side:

       

      Apr 5, 2013 11:53:50 AM org.hornetq.core.logging.impl.JULLogDelegate warn
      WARNING: Tried 5 times to connect. Now giving up on reconnecting it.
      Exception caught: javax.jms.JMSException: HornetQException[errorCode=0 message=Netty exception]
      Apr 5, 2013 11:53:50 AM org.hornetq.core.logging.impl.JULLogDelegate warn
      WARNING: Failed to connect to server.
      javax.jms.JMSException: HornetQException[errorCode=0 message=Netty exception]
          at org.hornetq.jms.client.HornetQConnection$JMSFailureListener.connectionFailed(HornetQConnection.java:643)
          at org.hornetq.core.client.impl.ClientSessionFactoryImpl.callFailureListeners(ClientSessionFactoryImpl.java:906)
          at org.hornetq.core.client.impl.ClientSessionFactoryImpl.failoverOrReconnect(ClientSessionFactoryImpl.java:691)
          at org.hornetq.core.client.impl.ClientSessionFactoryImpl.handleConnectionFailure(ClientSessionFactoryImpl.java:557)
          at org.hornetq.core.client.impl.ClientSessionFactoryImpl.connectionException(ClientSessionFactoryImpl.java:395)
          at org.hornetq.core.remoting.impl.netty.NettyConnector$Listener$2.run(NettyConnector.java:728)
          at org.hornetq.utils.OrderedExecutorFactory$OrderedExecutor$1.run(OrderedExecutorFactory.java:100)
          at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
          at java.lang.Thread.run(Thread.java:662)
      Caused by: HornetQException[errorCode=0 message=Netty exception]
          at org.hornetq.core.remoting.impl.netty.HornetQChannelHandler.exceptionCaught(HornetQChannelHandler.java:108)
          at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:142)
          at org.jboss.netty.channel.StaticChannelPipeline.sendUpstream(StaticChannelPipeline.java:372)
          at org.jboss.netty.channel.StaticChannelPipeline$StaticChannelHandlerContext.sendUpstream(StaticChannelPipeline.java:534)
          at org.jboss.netty.channel.SimpleChannelUpstreamHandler.exceptionCaught(SimpleChannelUpstreamHandler.java:148)
          at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:122)
          at org.jboss.netty.channel.StaticChannelPipeline.sendUpstream(StaticChannelPipeline.java:372)
          at org.jboss.netty.channel.StaticChannelPipeline.sendUpstream(StaticChannelPipeline.java:367)
          at org.jboss.netty.channel.Channels.fireExceptionCaught(Channels.java:432)
          at org.jboss.netty.channel.socket.http.HttpTunnelingClientSocketChannel$ServletChannelHandler.exceptionCaught(HttpTunnelingClientSocketChannel.java:398)
          at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:122)
          at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
          at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:783)
          at org.jboss.netty.handler.codec.replay.ReplayingDecoder.exceptionCaught(ReplayingDecoder.java:461)
          at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:122)
          at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
          at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:783)
          at org.jboss.netty.handler.ssl.SslHandler.exceptionCaught(SslHandler.java:510)
          at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:122)
          at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
          at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
          at org.jboss.netty.channel.Channels.fireExceptionCaught(Channels.java:432)
          at org.jboss.netty.channel.socket.oio.OioWorker.run(OioWorker.java:85)
          at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
          at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
          at org.jboss.netty.util.VirtualExecutorService$ChildExecutorRunnable.run(VirtualExecutorService.java:181)
          ... 3 more
      Caused by: java.net.SocketException: Connection reset
          at java.net.SocketInputStream.read(SocketInputStream.java:168)
          at java.net.SocketInputStream.read(SocketInputStream.java:182)
          at java.io.FilterInputStream.read(FilterInputStream.java:66)
          at java.io.PushbackInputStream.read(PushbackInputStream.java:122)
          at org.jboss.netty.channel.socket.oio.OioWorker.run(OioWorker.java:76)
          ... 4 more

       

      Lastly, here is the error message I receive on the surviving server (roughly coincident with failing the first server):

       

       

      11:53:28,000 INFO  [STDOUT] Removing clusterName=5e43d7f8-ca51-4e13-b1c5-6f159df975c81d3d60fc-9dfb-11e2-b73f-a75fdceaebbb on ClusterConnectionImpl@25086162[nodeUUID=59e24865-9dfb-11e2-8503-d1877f948bed, connector=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=s02, address=jms, server=HornetQServerImpl::serverUUID=59e24865-9dfb-11e2-8503-d1877f948bed]
      11:53:33,162 WARN  [org.hornetq.core.server.cluster.impl.BridgeImpl] ClusterConnectionBridge@729ba9 [name=sf.O-cluster.1d3d60fc-9dfb-11e2-b73f-a75fdceaebbb, queue=QueueImpl[name=sf.O-cluster.1d3d60fc-9dfb-11e2-b73f-a75fdceaebbb, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=59e24865-9dfb-11e2-8503-d1877f948bed]]@49a3a9 targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@729ba9 [name=sf.O-cluster.1d3d60fc-9dfb-11e2-b73f-a75fdceaebbb, queue=QueueImpl[name=sf.O-cluster.1d3d60fc-9dfb-11e2-b73f-a75fdceaebbb, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=59e24865-9dfb-11e2-8503-d1877f948bed]]@49a3a9 targetConnector=ServerLocatorImpl [initialConnectors=[org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=s01], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@25086162[nodeUUID=59e24865-9dfb-11e2-8503-d1877f948bed, connector=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=s02, address=jms, server=HornetQServerImpl::serverUUID=59e24865-9dfb-11e2-8503-d1877f948bed])) [initialConnectors=[org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=s01], discoveryGroupConfiguration=null]]::Connection failed with failedOver=false-HornetQException[errorCode=0 message=Netty exception]: HornetQException[errorCode=0 message=Netty exception]
              at org.hornetq.core.remoting.impl.netty.HornetQChannelHandler.exceptionCaught(HornetQChannelHandler.java:108) [:]
              at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:142) [:]
              at org.jboss.netty.channel.StaticChannelPipeline.sendUpstream(StaticChannelPipeline.java:372) [:]
              at org.jboss.netty.channel.StaticChannelPipeline$StaticChannelHandlerContext.sendUpstream(StaticChannelPipeline.java:534) [:]
              at org.jboss.netty.channel.SimpleChannelUpstreamHandler.exceptionCaught(SimpleChannelUpstreamHandler.java:148) [:]
              at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:122) [:]
              at org.jboss.netty.channel.StaticChannelPipeline.sendUpstream(StaticChannelPipeline.java:372) [:]
              at org.jboss.netty.channel.StaticChannelPipeline.sendUpstream(StaticChannelPipeline.java:367) [:]
              at org.jboss.netty.channel.Channels.fireExceptionCaught(Channels.java:432) [:]
              at org.jboss.netty.channel.socket.oio.OioWorker.run(OioWorker.java:85) [:]
              at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [:]
              at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44) [:]
              at org.jboss.netty.util.VirtualExecutorService$ChildExecutorRunnable.run(VirtualExecutorService.java:181) [:]
              at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source) [:1.6.0_23]
              at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [:1.6.0_23]
              at java.lang.Thread.run(Unknown Source) [:1.6.0_23]
      Caused by: java.net.SocketException: Connection reset
              at java.net.SocketInputStream.read(Unknown Source) [:1.6.0_23]
              at java.net.SocketInputStream.read(Unknown Source) [:1.6.0_23]
              at java.io.FilterInputStream.read(Unknown Source) [:1.6.0_23]
              at java.io.PushbackInputStream.read(Unknown Source) [:1.6.0_23]
              at org.jboss.netty.channel.socket.oio.OioWorker.run(OioWorker.java:76) [:]
              ... 4 more

       

      I'd greatly appreciate your assistance with this.  I'm sure I must be missing a critical step, but cannot seem to locate it in the documentation and/or examples.

       

      Thanks!

        • 1. Re: Automatic Client Connection Failover with NettyServlet (HTTPS servlet based connector)
          jbertram

          Are the two servers actually sharing the same physical journal?

          • 2. Re: Automatic Client Connection Failover with NettyServlet (HTTPS servlet based connector)
            skidvd

            No.  I do not want to use shared storage and I am not worried about message recovery/loss as our application logic already handles that separately.  As the two servers are separate physical servers without any shared disk, this is not an option.  I just need the client connections to survive the failover.

            • 3. Re: Automatic Client Connection Failover with NettyServlet (HTTPS servlet based connector)
              jbertram

              In the version of HornetQ you are using you can't have HA functionality (i.e. fail-over) without a shared journal.

              • 4. Re: Automatic Client Connection Failover with NettyServlet (HTTPS servlet based connector)
                skidvd

                Hmm, that was not my understanding from the docs.  However, it would not be the first time I was wrong.  The client reconnection sections all seem to indicate that this can be accomplished regardless of shared storage.

                 

                Is there another version (not a Beta) that provides this capability and will work with JBoss 6?

                • 5. Re: Automatic Client Connection Failover with NettyServlet (HTTPS servlet based connector)
                  jbertram

                  Fail-over != reconnection.  Fail-over is a function of HA as described here.  At this point, reconnection is for intermittent network failures, quick server restarts, etc.  Reconnection tries to get back to the original server; it won't try other servers in the cluster.

                   

                  In your code you're invoking org.hornetq.api.jms.HornetQJMSClient.createConnectionFactoryWithHA(JMSFactoryType, TransportConfiguration...).  The JavaDoc for this method states:

                   

                      Create a HornetQConnectionFactory which will receive cluster topology updates from the cluster as servers leave or join and new backups are appointed or removed.  The initial list of servers supplied in this method is simply to make an initial connection to the cluster; once that connection is made, up to date cluster topology information is downloaded and automatically updated whenever the cluster topology changes.  If the topology includes backup servers that information is also propagated to the client so that it can know which server to failover onto in case of live server failure.

                      @param initialServers The initial set of servers used to make a connection to the cluster.  Each one is tried in turn until a successful connection is made.  Once a connection is made, the cluster topology is downloaded and the rest of the list is ignored.

                   

                  No version of HornetQ currently supports the functionality you're looking for.  However, you can open a JIRA and request that we implement it.  I believe we've talked about implementing this exact feature in the past.

                   

                  At this point I'd recommend you put logic in your ExceptionListener to reconnect.
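
                  For example, a minimal sketch of that approach (purely illustrative: the host list, retry count, delay, and the createConnectionTo() helper are assumptions, not HornetQ API; keystore parameters are omitted for brevity):

                  import java.util.HashMap;

                  import javax.jms.Connection;
                  import javax.jms.ExceptionListener;
                  import javax.jms.JMSException;

                  import org.hornetq.api.core.TransportConfiguration;
                  import org.hornetq.api.jms.HornetQJMSClient;
                  import org.hornetq.api.jms.JMSFactoryType;
                  import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;
                  import org.hornetq.core.remoting.impl.netty.TransportConstants;
                  import org.hornetq.jms.client.HornetQConnectionFactory;

                  // Illustrative only: on failure, close the dead connection and try each
                  // known server in turn until a new connection succeeds or we give up.
                  public class ReconnectingListener implements ExceptionListener
                  {
                     private static final String[] HOSTS = { "s01", "s02" };  // assumed server hosts

                     private volatile Connection connection;

                     public void onException(JMSException failure)
                     {
                        try { connection.close(); } catch (Exception ignored) { }  // best-effort cleanup

                        for (int attempt = 0; attempt < 5; attempt++)
                        {
                           for (String host : HOSTS)
                           {
                              try
                              {
                                 connection = createConnectionTo(host);
                                 connection.setExceptionListener(this);
                                 // ...recreate the session, consumer and message listener here,
                                 // exactly as in the original connection setup...
                                 connection.start();
                                 return;  // reconnected successfully
                              }
                              catch (Exception e) { /* fall through and try the next host */ }
                           }
                           try { Thread.sleep(3000); } catch (InterruptedException ie) { return; }
                        }
                     }

                     // Hypothetical helper: builds a plain (non-HA) connection to one host,
                     // mirroring the servlet transport parameters from the original post.
                     private Connection createConnectionTo(String host) throws Exception
                     {
                        HashMap<String, Object> params = new HashMap<String, Object>();
                        params.put(TransportConstants.HOST_PROP_NAME, host);
                        params.put(TransportConstants.PORT_PROP_NAME, 443);
                        params.put(TransportConstants.USE_SERVLET_PROP_NAME, true);
                        params.put(TransportConstants.SERVLET_PATH, "/NettyServlet/HornetQServlet");
                        params.put(TransportConstants.SSL_ENABLED_PROP_NAME, true);
                        TransportConfiguration tc =
                              new TransportConfiguration(NettyConnectorFactory.class.getName(), params);
                        HornetQConnectionFactory cf =
                              HornetQJMSClient.createConnectionFactoryWithoutHA(JMSFactoryType.CF, tc);
                        return cf.createConnection("user", "pswd");
                     }
                  }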

                  • 6. Re: Automatic Client Connection Failover with NettyServlet (HTTPS servlet based connector)
                    skidvd

                    Justin,

                     

                    Thanks for your reply.  I guess I am confused by some of the wording.  For example, the title of section 39.2.1 from the HA link you reference is "Automatic Client Failover".  Additionally, from the Javadoc you listed: "... is also propagated to the client so that it can know which server to failover onto in case of live server failure".  Both of these seem to indicate the behavior I am looking for.  However, as I said, I have been wrong before.

                     

                    Based upon what you are saying, it sounds like I should not even bother to cluster the two HornetQ instances - correct?  I will create some reconnection logic in the exception handler and see how that goes.

                     

                    FYI, JIRA feature request added: HORNETQ-1171 - Automatic Client Connection Failover with NettyServlet (HTTPS servlet based connector)

                    • 7. Re: Automatic Client Connection Failover with NettyServlet (HTTPS servlet based connector)
                      jbertram

                      Thanks for your reply.  I guess I am confused by some of the wording.  For example, the title of section 39.2.1 from the HA link you reference is "Automatic Client Failover".  Additionally, from the Javadoc you listed: "... is also propagated to the client so that it can know which server to failover onto in case of live server failure".  Both of these seem to indicate the behavior I am looking for.  However, as I said, I have been wrong before.

                      As I said before, fail-over is a function of HA (i.e. live/backup server configuration), but you have not configured HA in your environment so you won't get fail-over.  Is this still unclear?

                       

                       

                      Based upon what you are saying, it sounds like I should not even bother to cluster the two HornetQ instances - correct?  I will create some reconnection logic in the exception handler and see how that goes.

                      In general, I don't have enough information to say whether or not you should cluster your servers.  However, I can say that one live server cannot fail over to another live server, so if the only reason you were clustering your servers was to get fail-over functionality then you shouldn't cluster.

                       

                       

                      FYI, JIRA feature request added: HORNETQ-1171 - Automatic Client Connection Failover with NettyServlet (HTTPS servlet based connector)

                      I tweaked the title and description of your feature request to more accurately reflect what you're requesting.

                      • 8. Re: Automatic Client Connection Failover with NettyServlet (HTTPS servlet based connector)
                        skidvd

                        Why do you think I have two live servers?  In the hornetq-configuration files above, I do specify the 2nd server as a backup.  Are you saying that it will not successfully become a backup without the shared storage?  Other than shared storage, I believe I have fully configured the cluster, including both a live and backup server.  To my reading of the docs, this appears to be what is required to get the functionality I am looking for - but perhaps not.

                        • 9. Re: Automatic Client Connection Failover with NettyServlet (HTTPS servlet based connector)
                          ataylor

                          No.  I do not want to use shared storage and I am not worried about message recovery/loss as our application logic already handles that separately.  As the two servers are separate physical servers without any shared disk, this is not an option.  I just need the client connections to survive the failover.

                          This implies that you have 2 live servers.  Like Justin said, for live/backup you need shared store, or use replication in 2.3.
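
                          Roughly, the replication variant in 2.3 is selected by turning the shared store off in hornetq-configuration.xml.  The element names below are recalled from the 2.3 manual, so treat this as an outline rather than a tested configuration:

                          <!-- live server -->
                          <shared-store>false</shared-store>
                          <check-for-live-server>true</check-for-live-server>

                          <!-- backup server -->
                          <backup>true</backup>
                          <shared-store>false</shared-store>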

                          • 10. Re: Automatic Client Connection Failover with NettyServlet (HTTPS servlet based connector)
                            skidvd

                            Thanks for your help!  That helps to explain it.  Has anyone made replication in 2.3 work with JBoss 6 (I had heard it was JBoss 7 only)?

                             

                            Either way, just so that I am totally clear on two points:

                             

                            1) Despite having cluster and backup configuration entries (that are accepted without error during startup) in the above hornetq-configuration files, I really do not have a valid cluster (i.e. no live and backup) without shared storage - is that the correct understanding?

                             

                            2) Based on #1, it seems pointless to even add the cluster/backup configuration entries without the shared storage.

                            • 11. Re: Automatic Client Connection Failover with NettyServlet (HTTPS servlet based connector)
                              jbertram

                              1) Despite having cluster and backup configuration entries (that are accepted without error during startup) in the above hornetq-configuration files, I really do not have a valid cluster (i.e. no live and backup) without shared storage - is that the correct understanding?

                               

                              2) Based on #1, it seems pointless to even add the cluster/backup configuration entries without the shared storage.

                              You seem to be confusing a clustered configuration (which is useful for load-balancing) and an HA configuration (which is useful for fail-over).  These two things are completely independent of one another:

                              • You can have a valid cluster without having HA.
                              • You can have a valid HA setup without having a cluster.
                              • You can have both a cluster and HA configuration.

                               

                              In HornetQ 2.2.14, HA requires shared storage.  A cluster does not require shared storage.
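
                              As a rough outline (not a tested configuration - the /mnt/shared paths below are placeholders), a shared-store live/backup pair in 2.2.14 would point both hornetq-configuration.xml files at the same directories on a shared filesystem:

                              <!-- both servers -->
                              <shared-store>true</shared-store>
                              <failover-on-shutdown>true</failover-on-shutdown>
                              <bindings-directory>/mnt/shared/hornetq/bindings</bindings-directory>
                              <journal-directory>/mnt/shared/hornetq/journal</journal-directory>
                              <paging-directory>/mnt/shared/hornetq/paging</paging-directory>
                              <large-messages-directory>/mnt/shared/hornetq/largemessages</large-messages-directory>

                              <!-- backup server only -->
                              <backup>true</backup>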

                              • 12. Re: Automatic Client Connection Failover with NettyServlet (HTTPS servlet based connector)
                                skidvd

                                Good point.  I had temporarily forgotten about load balancing.  Thanks again for all of your help!

                                 

                                Has anyone made replication in 2.3 work with JBoss 6 (I had heard it was JBoss 7 only)?

                                • 13. Re: Automatic Client Connection Failover with NettyServlet (HTTPS servlet based connector)
                                  jbertram

                                  Has anyone made replication in 2.3 work with JBoss 6 (I had heard it was JBoss 7 only)?

                                  I don't know of anyone.

                                  • 14. Re: Automatic Client Connection Failover with NettyServlet (HTTPS servlet based connector)
                                    vikvis

                                    I was thinking exactly like Todd.  Glad I read this; otherwise I would have wasted my time doing what Todd did.

                                     

                                    In HA, when automatic retry happens to the backup server, does the old connection object on the client side remain valid?

                                     

                                    I am going to try 2.3.0.CR2 failover now, and clustering with data replication using JGroups in a few days.