24 Replies. Latest reply on Mar 16, 2017 11:50 PM by jiayu hu

    Client failover with static clusters in HA mode

    Hyrax Wang Newbie

      Hi,

  I'm using HornetQ 2.3.0.Final. I was trying to set up a live-backup server pair using a static cluster. What I expect to see is that when I shut down the live server, the client automatically switches to the backup server. FYI, I am using a jms url (like jms://localhost:5445) to connect the client to the HornetQ server. What I have achieved so far: from the logs I can see the backup server got announced and became live when I shut down the live server, but the consumer got disconnected, as indicated in the log:

      2013-05-28T18:37:22.600 WARNING: Reconnect start failed.

      javax.jms.JMSException: Failed to create session factory

                at org.hornetq.jms.client.HornetQConnectionFactory.createConnectionInternal(HornetQConnectionFactory.java:605)

                at org.hornetq.jms.client.HornetQConnectionFactory.createConnection(HornetQConnectionFactory.java:119)

                at org.hornetq.jms.client.HornetQConnectionFactory.createConnection(HornetQConnectionFactory.java:114)

                at com.xxxxx.jms.JmsMessageCenter.start(JmsMessageCenter.java:120)

                at com.xxxxx.MessageCenter.reconnect(MessageCenter.java:106)

                at com.xxxxx.jms.JmsMessageCenter.reconnect(JmsMessageCenter.java:159)

                at com.xxxxx.jms.JmsMessageCenter$Receiver.run(JmsMessageCenter.java:351)

                at java.lang.Thread.run(Thread.java:722)

      Caused by: HornetQException[errorCode=2 message=Cannot connect to server(s). Tried with all available servers.]

                at org.hornetq.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:774)

                at org.hornetq.jms.client.HornetQConnectionFactory.createConnectionInternal(HornetQConnectionFactory.java:601)

                ... 7 more

       

      The key part of configuration files are listed below:

       

      On the live server side:

       

      hornetq-configuration.xml

       

         <clustered>true</clustered>
      
      
         <failover-on-shutdown>true</failover-on-shutdown>
      
      
         <allow-failback>true</allow-failback>
      
      
         <shared-store>true</shared-store>
        
         ....
      
         <connectors>      
            <connector name="netty">
               <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
               <param key="host"  value="localhost"/>
               <param key="port"  value="5445"/>
            </connector>
      
      
            <connector name="netty-backup">
               <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
               <param key="host"  value="localhost"/>
               <param key="port"  value="5446"/>
            </connector>
            
            <connector name="netty-throughput">
               <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
               <param key="host"  value="localhost"/>
               <param key="port"  value="5455"/>
               <param key="batch-delay" value="50"/>
            </connector>
         </connectors>
      
      
         <acceptors>
            <acceptor name="netty">
               <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
               <param key="host"  value="localhost"/>
               <param key="port"  value="5445"/>
            </acceptor>
            
            <acceptor name="netty-throughput">
               <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
               <param key="host"  value="localhost"/>
               <param key="port"  value="5455"/>
               <param key="batch-delay" value="50"/>
               <param key="direct-deliver" value="false"/>
            </acceptor>
         </acceptors>
      ...
         <cluster-connections>
            <cluster-connection name="my-cluster">
               <address>jms</address>  
               <connector-ref>netty</connector-ref>
                    <discovery-group-ref discovery-group-name="dg-group1"/>
            </cluster-connection>
         </cluster-connections>
      ...
      

       

      hornetq-jms.xml

       

      ... 
      <connection-factory name="NettyConnectionFactory">
            <xa>false</xa>
            <connectors>
               <connector-ref connector-name="netty"/>
      <connector-ref connector-name="netty-backup"/> 
           </connectors>
            <entries>
               <entry name="/ConnectionFactory"/>
            </entries>
      
            <ha>true</ha>
      <confirmation-window-size>1</confirmation-window-size>      
            <!-- Pause 1 second between connect attempts -->
            <retry-interval>1000</retry-interval>
      
      
            <!-- Multiply subsequent reconnect pauses by this multiplier. This can be used to
            implement an exponential back-off. For our purposes we just set to 1.0 so each reconnect
            pause is the same length -->
            <retry-interval-multiplier>1.0</retry-interval-multiplier>
      
      
            <!-- Try reconnecting an unlimited number of times (-1 means "unlimited") -->
            <reconnect-attempts>-1</reconnect-attempts>
      
         </connection-factory>
      
      ...
      

       

      On the backup server side:

       

      <clustered>true</clustered>
      
      
         <failover-on-shutdown>true</failover-on-shutdown>
      
      
         <allow-failback>true</allow-failback>
      
      
         <shared-store>true</shared-store>
      
      
         <backup>true</backup>
      ...
         <connectors>      
            <connector name="netty">
               <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
               <param key="host"  value="localhost"/>
               <param key="port"  value="5446"/>
            </connector>
      
      
            <connector name="netty-live">
               <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
               <param key="host"  value="localhost"/>
               <param key="port"  value="5445"/>
            </connector>
            
            <connector name="netty-throughput">
               <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
               <param key="host"  value="localhost"/>
               <param key="port"  value="5455"/>
               <param key="batch-delay" value="50"/>
            </connector>
      
         </connectors>
      
      
         <acceptors>
            <acceptor name="netty">
               <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
               <param key="host"  value="localhost"/>
               <param key="port"  value="5446"/>
            </acceptor>
            
            <acceptor name="netty-throughput">
               <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
               <param key="host"  value="localhost"/>
               <param key="port"  value="5455"/>
               <param key="batch-delay" value="50"/>
               <param key="direct-deliver" value="false"/>
            </acceptor>
         </acceptors>
      ...
         <cluster-connections>
            <cluster-connection name="my-cluster">
               <address>jms</address>  
               <connector-ref>netty</connector-ref>
      <retry-interval>500</retry-interval>
          <use-duplicate-detection>true</use-duplicate-detection>
          <forward-when-no-consumers>true</forward-when-no-consumers>
          <max-hops>1</max-hops>
          <static-connectors>
      <connector-ref>netty</connector-ref>      
      <connector-ref>netty-live</connector-ref>
          </static-connectors>
            </cluster-connection>
         </cluster-connections>
      ...
      

       

      hornetq-jms.xml

       

      ...
         <connection-factory name="NettyConnectionFactory">
            <xa>false</xa>
            <connectors>
               <connector-ref connector-name="netty"/>
               <connector-ref connector-name="netty-live"/>
            </connectors>
            <entries>
               <entry name="/ConnectionFactory"/>
            </entries>
      
            <ha>true</ha>
      
      
      <confirmation-window-size>1</confirmation-window-size>
            
            <!-- Pause 1 second between connect attempts -->
            <retry-interval>1000</retry-interval>
      
      
            <!-- Multiply subsequent reconnect pauses by this multiplier. This can be used to
            implement an exponential back-off. For our purposes we just set to 1.0 so each reconnect
            pause is the same length -->
            <retry-interval-multiplier>1.0</retry-interval-multiplier>
      
      
            <!-- Try reconnecting an unlimited number of times (-1 means "unlimited") -->
            <reconnect-attempts>-1</reconnect-attempts>
      
         </connection-factory>
      ...
      

       

      My client side code is Java based and it supports two ways of connecting to a HornetQ server: by jnp and by jms. In my case, the jnp way (url: jnp://localhost:1099) works but the jms way doesn't.
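(For reference, the jnp path presumably boils down to a standard JBoss JNDI lookup along these lines. This is a hypothetical sketch: the naming properties and the /ConnectionFactory binding name are assumptions based on the hornetq-jms.xml entries above, not code from the original post.)

```java
import java.util.Properties;

import javax.jms.ConnectionFactory;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class JnpLookupExample {
    public static ConnectionFactory lookupFactory(String jnpUrl) throws NamingException {
        Properties env = new Properties();
        // Standard JBoss/JNP naming properties (assumed here)
        env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
        env.put(Context.PROVIDER_URL, jnpUrl); // e.g. "jnp://localhost:1099"
        env.put(Context.URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces");

        Context ctx = new InitialContext(env);
        try {
            // "/ConnectionFactory" matches the <entry> in hornetq-jms.xml
            return (ConnectionFactory) ctx.lookup("/ConnectionFactory");
        } finally {
            ctx.close();
        }
    }
}
```

The relevant difference between the two paths: a JNDI lookup returns the factory exactly as configured in the server's hornetq-jms.xml (both connector-refs, ha=true, retry settings), whereas building the factory in client code only carries whatever the code passes in.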

      To create a connection factory with a url like jms://&lt;host&gt;:&lt;port&gt;, we use code like:

       

      Map<String, Object> connectionParams = new HashMap<>();
      connectionParams.put(TransportConstants.PORT_PROP_NAME, port);
      connectionParams.put(TransportConstants.HOST_PROP_NAME, host);
      TransportConfiguration transportConfiguration =
            new TransportConfiguration(NettyConnectorFactory.class.getName(), connectionParams);
      return HornetQJMSClient.createConnectionFactoryWithHA(JMSFactoryType.CF, transportConfiguration);
      

       

      I guess there's something missing in my client side code, but after a long time of searching I can't figure out what it is. Also, I don't want to do the same as demonstrated in 38.2.2.2.2. Configuring client discovery using JMS, because I'm not allowed to pass the backup server's port to the app.

      Any thoughts or ideas will be appreciated.

      Thanks in advance,

      Hyrax

        • 1. Re: Client failover with static clusters in HA mode
          Andy Taylor Master

          It looks from the stack trace like you are starting your client after failover has occurred; if so, you will need to pass the backup's transport configuration as well as the live's.

          • 2. Re: Client failover with static clusters in HA mode
            Hyrax Wang Newbie

            Hi Andy,

            Thanks a lot for your reply.

            As a matter of fact, I started my client before failover occurred, and I saw those exceptions right after shutting down the live server.

            But you are right, I might have failed to pass the backup server's transport configuration to my client, or maybe my client didn't fetch the configuration the correct way.

            I only provided the host and port of the live server to the client, because I assumed that with the configuration above the client would learn about the cluster from the live server's configuration.

            Could you please tell me how I can do it right?

            Many many thanks!!!

            Hyrax

            • 3. Re: Client failover with static clusters in HA mode
              Hyrax Wang Newbie

              And if I use a static cluster, does that mean I can get rid of the discovery group?

              Also, do I have to provide the ports of both live and backup servers to the client?

              Thanks,

              Hyrax

              • 4. Re: Client failover with static clusters in HA mode
                Andy Taylor Master

                And if I use a static cluster, does that mean I can get rid of the discovery group?

                yes

                Also, do I have to provide the ports of both live and backup servers to the client?

                you need to provide the host and port of at least one server you know is live; in your case, both live and backup
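A minimal sketch of this suggestion, adapted from the snippet in the original question (the hosts and ports are assumptions taken from the posted configs; this is a sketch, not a verified fix):

```java
import java.util.HashMap;
import java.util.Map;

import javax.jms.ConnectionFactory;

import org.hornetq.api.core.TransportConfiguration;
import org.hornetq.api.jms.HornetQJMSClient;
import org.hornetq.api.jms.JMSFactoryType;
import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;
import org.hornetq.core.remoting.impl.netty.TransportConstants;

public class HaFactoryExample {
    public static ConnectionFactory createFactory() {
        // Initial connector for the live server (port assumed from the configs above)
        Map<String, Object> liveParams = new HashMap<String, Object>();
        liveParams.put(TransportConstants.HOST_PROP_NAME, "localhost");
        liveParams.put(TransportConstants.PORT_PROP_NAME, 5445);
        TransportConfiguration live =
                new TransportConfiguration(NettyConnectorFactory.class.getName(), liveParams);

        // Initial connector for the backup server, so the client can still make its
        // first connection even if the live is already down
        Map<String, Object> backupParams = new HashMap<String, Object>();
        backupParams.put(TransportConstants.HOST_PROP_NAME, "localhost");
        backupParams.put(TransportConstants.PORT_PROP_NAME, 5446);
        TransportConfiguration backup =
                new TransportConfiguration(NettyConnectorFactory.class.getName(), backupParams);

        // createConnectionFactoryWithHA accepts a varargs list of initial connectors
        return HornetQJMSClient.createConnectionFactoryWithHA(JMSFactoryType.CF, live, backup);
    }
}
```

One thing also worth checking: when the factory is built in client code like this rather than looked up over JNDI, the retry-interval and reconnect-attempts settings in the server's hornetq-jms.xml do not apply to it; they may need to be set on the returned factory itself (e.g. setReconnectAttempts(-1)) for reconnection to be attempted at all.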

                • 5. Re: Client failover with static clusters in HA mode
                  Hyrax Wang Newbie

                  Hi Andy,

                  Thanks a lot for your response, it is very helpful. I think I'm very close to what I want!

                  So is there any way I can just provide the host and port of the live server to the client? How can I make the client know about the backup server from the live server?

                  Thanks a lot,

                  Hao

                  • 6. Re: Client failover with static clusters in HA mode
                    Andy Taylor Master

                    I think you are misunderstanding the API. The connectors you create the factory with are "initial" connectors; these are used to locate a server in the cluster. Once connected, the client will receive a full list of all available lives and backups.

                    • 7. Re: Client failover with static clusters in HA mode
                      Hyrax Wang Newbie

                      Hi Andy,

                      That's exactly how I understood the API I used; it just worries me that the client can't switch to the backup when I shut down the live. Moreover, to make the client receive a full list of all available lives and backups, I added:

                      ...
                      <connection-factory name="NettyConnectionFactory">
                         <xa>false</xa>
                         <connectors>
                            <connector-ref connector-name="netty"/>
                            <connector-ref connector-name="netty-backup"/>
                         </connectors>
                         <entries>
                            <entry name="/ConnectionFactory"/>
                         </entries>
                         <ha>true</ha>
                         <confirmation-window-size>1</confirmation-window-size>
                         <!-- Pause 1 second between connect attempts -->
                         <retry-interval>1000</retry-interval>
                         <!-- Multiply subsequent reconnect pauses by this multiplier. This can be used to
                         implement an exponential back-off. For our purposes we just set to 1.0 so each reconnect
                         pause is the same length -->
                         <retry-interval-multiplier>1.0</retry-interval-multiplier>
                         <!-- Try reconnecting an unlimited number of times (-1 means "unlimited") -->
                         <reconnect-attempts>-1</reconnect-attempts>
                      </connection-factory>
                      ...

                      in the live's hornetq-jms.xml, where I think

                      <connectors>
                         <connector-ref connector-name="netty"/>
                         <connector-ref connector-name="netty-live"/>
                      </connectors>

                      will do the job, won't they?

                      By the way, netty and netty-backup are both defined in the live's hornetq-configuration.xml as:

                      <connectors>
                         <connector name="netty">
                            <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
                            <param key="host"  value="localhost"/>
                            <param key="port"  value="5445"/>
                         </connector>
                         <connector name="netty-backup">
                            <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
                            <param key="host"  value="localhost"/>
                            <param key="port"  value="5446"/>
                         </connector>
                         <connector name="netty-throughput">
                            <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
                            <param key="host"  value="localhost"/>
                            <param key="port"  value="5455"/>
                            <param key="batch-delay" value="50"/>
                         </connector>
                      </connectors>

                      I also made the corresponding configurations on the backup side.

                      Could you please tell me what else I should set up so that the client can switch to the backup when the live is shut down?

                      Thanks a lot!

                      Have a good weekend!

                      Hyrax

                      • 8. Re: Client failover with static clusters in HA mode
                        Andy Taylor Master

                        I would start by running the static failover example; if that doesn't work then maybe you have an env problem.

                        • 9. Re: Client failover with static clusters in HA mode
                          Hyrax Wang Newbie

                          Thanks for your quick response, Andy. (Sorry for replying late; the forum doesn't allow me to post more often than once every 900 seconds...)

                          As a matter of fact, I looked at all the examples related to failover in example/jms/:

                          application-layer-failover

                          client-side-failoverlistener

                          multiple-failover

                          multiple-failover-failback

                          non-transaction-failover

                          replicated-multiple-failover

                          replicated-transaction-failover

                          stop-server-failover

                          transaction-failover

                          I can't find any that use static connectors. I tried to run one of them (actually I have run it before, since my configs came from there) and it succeeds.

                          Could you please tell me which example is closest to what I want to implement?

                          Thanks a lot!

                          • 10. Re: Client failover with static clusters in HA mode
                            Andy Taylor Master

                            copy one of the static clustered examples and tweak one of the servers to be a backup

                            • 11. Re: Client failover with static clusters in HA mode
                              Andy Taylor Master

                              do you see the backup announcing itself?

                              • 12. Re: Client failover with static clusters in HA mode
                                Hyrax Wang Newbie

                                Yes, I do. I can observe the following logs, which should be positive:

                                11:35:09,930 INFO  [org.hornetq.core.server] HQ221033: ** got backup lock

                                11:35:10,075 INFO  [org.hornetq.core.server] HQ221013: Using NIO Journal

                                11:35:10,098 WARN  [org.hornetq.core.server] HQ222007: Security risk! HornetQ is running with the default cluster admin user and default password. Please see the HornetQ user guide, cluster chapter, for instructions on how to change this.

                                11:35:10,397 INFO  [org.hornetq.core.server] HQ221109: HornetQ Backup Server version 2.3.0.SNAPSHOT (colonizer, 123) [ac4d2e9a-c7ba-11e2-8792-b90015497439] started, waiting live to fail before it gets active

                                11:35:10,541 INFO  [org.hornetq.core.server] HQ221031: backup announced

                                I assigned port 5445 to the live and 5446 to the backup, and in both sides' hornetq-configuration.xml and hornetq-jms.xml I defined the connectors for both.

                                Could you please tell me how to make it work?

                                Thanks,

                                Hyrax

                                • 13. Re: Client failover with static clusters in HA mode
                                  Hyrax Wang Newbie

                                  Hi Andy,

                                  I'm actually using a jms url to connect my client app to the HornetQ server, like jms://localhost:5445. Is that a problem?

                                  Thanks,

                                  Hyrax

                                  • 14. Re: Client failover with static clusters in HA mode
                                    Hyrax Wang Newbie

                                    Hi Andy,

                                    The good news is that after I fixed my configs following the replicated-failback-static example, I can make the failover happen if I'm using a jnp url like jnp://localhost:1099.

                                    But the jms url still won't work, and I have to support both ways here.

                                    Any suggestions?

                                    Thanks in advance!

                                    Hyrax
