40 Replies, latest reply on Dec 5, 2011 10:25 AM by sv_srinivaas

    Client to access cluster (possibly hitting backup server)

    jhannah

      As you can likely tell, I'm relatively new to HornetQ configuration.  I suspect the solution is straightforward, but I haven't come across it yet.

       

      I've set up a Live-Backup server configuration, but now I'm attempting to access that cluster from a client.  In the past my client had a connector with the host and port of my server; however, now that I have a Live-Backup configuration I cannot rely on my server IP being static.  How should the client be configured to connect to the cluster (possibly hitting the Backup server) instead of just an individual server?

       

      Thanks,

       

      J

        • 1. Re: Client to access cluster (possibly hitting backup server)
          clebert.suconic

          When you connect to that node, the node will send a notification to the client telling it which server is the backup.

           

          How are you setting up the initial connection?  If manually, you could set up both connectors on the ServerLocator.
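
          For example, something along these lines with the core API (just a sketch; the hosts and ports are placeholders):

          import java.util.HashMap;
          import java.util.Map;

          import org.hornetq.api.core.TransportConfiguration;
          import org.hornetq.api.core.client.ClientSessionFactory;
          import org.hornetq.api.core.client.HornetQClient;
          import org.hornetq.api.core.client.ServerLocator;

          public class LocatorExample {
              public static void main(String[] args) throws Exception {
                  String netty = "org.hornetq.core.remoting.impl.netty.NettyConnectorFactory";

                  // One connector per server, so the initial connection can be made
                  // through either the live or the backup.
                  Map<String, Object> live = new HashMap<String, Object>();
                  live.put("host", "10.90.101.87"); // placeholder live host
                  live.put("port", 5445);

                  Map<String, Object> backup = new HashMap<String, Object>();
                  backup.put("host", "10.90.101.88"); // placeholder backup host
                  backup.put("port", 5445);

                  ServerLocator locator = HornetQClient.createServerLocatorWithHA(
                          new TransportConfiguration(netty, live),
                          new TransportConfiguration(netty, backup));
                  locator.setReconnectAttempts(-1); // keep retrying after a failure

                  ClientSessionFactory sf = locator.createSessionFactory();
                  // ... create sessions from sf; close everything when done
                  sf.close();
                  locator.close();
              }
          }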

           

           

          Also: what version are you using?

          • 2. Re: Client to access cluster (possibly hitting backup server)
            jhannah

            I'm using JBoss 6.1.0.Final, which contains HornetQ 2.2.5.Final.

             

            I've tried a number of configurations, as I'm not sure of the best way to access the Live-Backup pair.  My initial connection from my JBoss web server is via a connection-factory configured locally to point at two connectors.  Currently the connection-factory in my hornetq-jms.xml looks like:

             

            <connection-factory name="NettyConnectionFactory">
              <xa>true</xa>
              <connectors>
                <connector-ref connector-name="netty"/>
                <connector-ref connector-name="netty-backup"/>
              </connectors>
              <entries>
                <entry name="/ConnectionFactory"/>
                <entry name="/XAConnectionFactory"/>
              </entries>
              <ha>true</ha>

              <!-- Pause 1 second between connect attempts -->
              <retry-interval>1000</retry-interval>

              <!-- Multiply subsequent reconnect pauses by this multiplier -->
              <retry-interval-multiplier>1.0</retry-interval-multiplier>

              <!-- Try reconnecting an unlimited number of times (-1 means "unlimited") -->
              <reconnect-attempts>-1</reconnect-attempts>
            </connection-factory>


            This is pointing at two connectors within my hornetq-configuration.xml, which look like:

            <connector name="netty">
            <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
            <param key="host"  value="10.90.101.87"/>
            <param key="port"  value="5445"/>
            </connector>

            <connector name="netty-backup">
            <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
            <param key="host"  value="10.90.101.88"/>
            <param key="port"  value="5445"/>
            </connector>

             

            Obviously I'd prefer not to specify the IPs, but it seems to be working (at least until I attempt failover).

             

             

            Could you explain what you mean by ServerLocator, in reference to where the connectors should be set up?

             

            Thanks,

             

            J

            • 3. Re: Client to access cluster (possibly hitting backup server)
              ataylor

              The main thing to check is that your backup is actually connecting to the live server to announce itself; once this happens it should work fine.  You should see a 'backup announced' message on the backup server.  Are you seeing this?

              • 4. Re: Client to access cluster (possibly hitting backup server)
                jhannah

                I did have some issues previously, but I have been able to get the backup announcing itself.

                 

                When the Live HornetQ server starts, the log output includes:
                08:32:45,278 INFO  [HornetQServerImpl] live server is starting with configuration HornetQ Configuration (clustered=true,backup=false,sharedStore=true,journalDirectory=/mnt/jwh1/jhannah/hornetq/journal,bindingsDirectory=/mnt/jwh1/jhannah/hornetq/bindings,largeMessagesDirectory=/mnt/jwh1/jhannah/hornetq/largemessages,pagingDirectory=/mnt/jwh1/jhannah/hornetq/paging)
                08:32:45,302 INFO  [HornetQServerImpl] Waiting to obtain live lock
                08:32:45,393 INFO  [JournalStorageManager] Using NIO Journal
                08:32:45,598 INFO  [FileLockNodeManager] Waiting to obtain live lock
                08:32:45,599 INFO  [FileLockNodeManager] Live Server Obtained live lock
                08:32:46,446 INFO  [NettyAcceptor] Started Netty Acceptor version 3.2.3.Final-r${buildNumber} 10.90.101.87:5445 for CORE protocol
                08:32:46,449 INFO  [NettyAcceptor] Started Netty Acceptor version 3.2.3.Final-r${buildNumber} 10.90.101.87:5446 for CORE protocol
                08:32:46,451 INFO  [NettyAcceptor] Started Netty Acceptor version 3.2.3.Final-r${buildNumber} 0.0.0.0:5455 for CORE protocol
                08:32:46,492 INFO  [HornetQServerImpl] Server is now live
                08:32:46,493 INFO  [HornetQServerImpl] HornetQ Server version 2.2.5.Final (HQ_2_2_5_FINAL_AS7, 121) [069e2c0a-e60c-11e0-915a-005056b50c86] started

                 

                And when the Backup HornetQ server starts, the log output includes:
                08:33:07,830 INFO  [HornetQServerImpl] backup server is starting with configuration HornetQ Configuration (clustered=true,backup=true,sharedStore=true,journalDirectory=/mnt/jwh1/jhannah/hornetq/journal,bindingsDirectory=/mnt/jwh1/jhannah/hornetq/bindings,largeMessagesDirectory=/mnt/jwh1/jhannah/hornetq/largemessages,pagingDirectory=/mnt/jwh1/jhannah/hornetq/paging)
                08:33:07,889 INFO  [FileLockNodeManager] Waiting to become backup node
                08:33:07,890 INFO  [FileLockNodeManager] ** got backup lock
                08:33:08,025 INFO  [JournalStorageManager] Using NIO Journal
                08:33:08,940 INFO  [ClusterManagerImpl] announcing backup
                08:33:08,943 INFO  [HornetQServerImpl] HornetQ Backup Server version 2.2.5.Final (HQ_2_2_5_FINAL_AS7, 121) [44e7f699-e611-11e0-bf71-005056b50c88] started, waiting live to fail before it gets active

                 

                On startup there are no errors in either log, so I believe everything is set up correctly with respect to the Live-Backup configuration.  My issue is that my client doesn't behave well when I shut down my Live server.  I expect that requests should still be processed, as the Backup server is supposed to take over; however, when the Live server goes down and I submit a request from the client, my server log states:

                 

                2011-09-27 08:45:07,626 WARN  [org.hornetq.core.client.impl.ClientSessionFactoryImpl] (http-0.0.0.0-8080-1) Tried 1 times to connect. Now giving up on reconnecting it.

                 

                which of course results in a number of client application errors.  The logs read:

                 

                ERROR 27 Sep 2011 08:45:07,627 http-0.0.0.0-8080-1 net.rim.serviceplan.services.workers.QueueWorker - Problem while pushing message to queue named '/queue/RequestQueue'.
                javax.jms.JMSException: Failed to create session factory
                at org.hornetq.jms.client.HornetQConnectionFactory.createConnectionInternal(HornetQConnectionFactory.java:615)
                at org.hornetq.jms.client.HornetQConnectionFactory.createQueueConnection(HornetQConnectionFactory.java:133)
                at net.rim.serviceplan.services.workers.QueueWorker.setup(QueueWorker.java:44)
                at net.rim.serviceplan.services.workers.QueueWorker.pushMessage(QueueWorker.java:111)
                at net.rim.serviceplan.services.ClientServices.subscribe(ClientServices.java:99)
                at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
                at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
                at java.lang.reflect.Method.invoke(Method.java:597)
                at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:140)
                at org.jboss.resteasy.core.ResourceMethod.invokeOnTarget(ResourceMethod.java:255)
                at org.jboss.resteasy.core.ResourceMethod.invoke(ResourceMethod.java:220)
                at org.jboss.resteasy.core.ResourceMethod.invoke(ResourceMethod.java:209)
                at org.jboss.resteasy.core.SynchronousDispatcher.getResponse(SynchronousDispatcher.java:519)
                at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:496)
                at org.jboss.resteasy.core.SynchronousDispatcher.invokePropagateNotFound(SynchronousDispatcher.java:155)
                at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:212)
                at org.jboss.resteasy.plugins.server.servlet.FilterDispatcher.doFilter(FilterDispatcher.java:59)
                at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:274)
                at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242)
                at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:275)
                at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
                at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:181)
                at org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.event(CatalinaContext.java:285)
                at org.jboss.modcluster.catalina.CatalinaContext$RequestListenerValve.invoke(CatalinaContext.java:261)
                at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:88)
                at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:100)
                at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:159)
                at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
                at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
                at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
                at org.jboss.web.tomcat.service.request.ActiveRequestResponseCacheValve.invoke(ActiveRequestResponseCacheValve.java:53)
                at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:362)
                at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:877)
                at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:654)
                at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:951)
                at java.lang.Thread.run(Thread.java:662)
                Caused by: HornetQException[errorCode=2 message=Cannot connect to server(s). Tried with all available servers.]
                at org.hornetq.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:619)
                at org.hornetq.jms.client.HornetQConnectionFactory.createConnectionInternal(HornetQConnectionFactory.java:611)
                ... 36 more


                So, to me it appears that my client isn't configured properly to work with a Live-Backup configuration.  Any suggestions?

                 

                Thanks,

                 

                J

                • 5. Re: Client to access cluster (possibly hitting backup server)
                  clebert.suconic

                  How do you create the connection at your client?  In net.rim.serviceplan.services.workers.QueueWorker?

                  • 6. Re: Client to access cluster (possibly hitting backup server)
                    jhannah

                    Yes, the QueueWorker creates the connection with the following code:

                     

                    // Assumes javax.jms.* and javax.naming.* imports; queueConnection,
                    // queue, jmsSession and sender are instance fields, and queueName
                    // is supplied by the caller.
                    javax.naming.Context initialContext = new InitialContext();
                    QueueConnectionFactory connectionFactory =
                            (QueueConnectionFactory) initialContext.lookup("ConnectionFactory");
                    queueConnection = connectionFactory.createQueueConnection("user", "pass");
                    queue = (Queue) initialContext.lookup(queueName);
                    jmsSession = queueConnection.createQueueSession(false, QueueSession.AUTO_ACKNOWLEDGE);
                    queueConnection.start();
                    sender = jmsSession.createSender(queue);

                     

                    Messages are then sent using the sender's send() method.

                     

                    I suspect I have not correctly configured my hornetq-jms.xml or my hornetq-configuration.xml on the client JBoss (see earlier reply).  Thoughts?

                     

                    Thanks,

                     

                    J

                    • 7. Re: Client to access cluster (possibly hitting backup server)
                      clebert.suconic

                      QueueWorker is on a remote JBoss, as I understand it, right?

                       

                      Why are you doing new InitialContext() in that case?  This will look at your local instance.

                       

                      You probably need to set the remote instances on your new InitialContext().
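
                      Something like this, for instance (just a sketch, assuming the default jnp naming port 1099 on both boxes):

                      java.util.Properties env = new java.util.Properties();
                      env.put(javax.naming.Context.INITIAL_CONTEXT_FACTORY,
                              "org.jnp.interfaces.NamingContextFactory");
                      env.put(javax.naming.Context.URL_PKG_PREFIXES,
                              "org.jboss.naming:org.jnp.interfaces");
                      // Comma-separated list: the jnp client tries each host in turn,
                      // so the factory can still be downloaded when the live is down.
                      env.put(javax.naming.Context.PROVIDER_URL,
                              "jnp://10.90.101.87:1099,jnp://10.90.101.88:1099");
                      javax.naming.Context initialContext = new javax.naming.InitialContext(env);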

                      • 8. Re: Client to access cluster (possibly hitting backup server)
                        jhannah

                        No, QueueWorker is a class within my application, but it is sending to a queue that exists within a HornetQ instance on a remote server.  It works fine until the attempted failover occurs.  Then it can't seem to connect to the backup HornetQ server.

                         

                        J

                        • 9. Re: Client to access cluster (possibly hitting backup server)
                          clebert.suconic

                          Did you specify <ha>true</ha> on your connection-factory in hornetq-jms.xml?

                           

                           

                          Take a look at /examples/jms/multiple-failover in the hornetq distribution zip; it may help you with the configs.

                          • 10. Re: Client to access cluster (possibly hitting backup server)
                            jhannah

                            I do have <ha>true</ha> set.  I've been following the examples, and that's how I got the Live-Backup pair configured.  The client issues I'm experiencing are my current hurdle.

                            • 11. Re: Client to access cluster (possibly hitting backup server)
                              clebert.suconic

                              It seems to me that you are opening the connection every time, right?

                               

                              Failover will play nicely if you keep your connection open; it will fail over in the event of a server failure.

                               

                               

                              Also: there were a couple of wrinkles in the EAP integration regarding deployers and the point at which the backup server is activated.  You need to look up the proper server after the failure has happened.  Make sure the old node is not active and that you are not somehow getting a reference to the old backup (say, if you just stopped the server while JNDI is still active on your backup).

                               

                              If you connect every time, you should define both JNDI endpoints in your InitialContext and download the factory from the other node.

                               

                               

                              I don't have the whole picture of what you're doing, though, but it seems like something along those lines.
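
                              To illustrate the keep-it-open approach (a hypothetical sketch; I don't know what your QueueWorker actually looks like):

                              import javax.jms.JMSException;
                              import javax.jms.QueueConnection;
                              import javax.jms.QueueConnectionFactory;

                              public class QueueWorker {
                                  // Create the connection once and reuse it across requests,
                                  // so the HornetQ client can fail over to the backup.
                                  private static QueueConnection sharedConnection;

                                  static synchronized QueueConnection getConnection(QueueConnectionFactory cf)
                                          throws JMSException {
                                      if (sharedConnection == null) {
                                          sharedConnection = cf.createQueueConnection("user", "pass");
                                          sharedConnection.start();
                                      }
                                      return sharedConnection;
                                  }
                              }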

                              • 12. Re: Client to access cluster (possibly hitting backup server)
                                jhannah

                                I'm still unclear on how I should be connecting my client to a cluster of HornetQ servers.  Perhaps I'll just ask the following straightforward question... What should the connector look like for a client attempting to access a cluster of HornetQ servers?

                                 

                                Currently I have:

                                 

                                <connector name="netty">

                                <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>

                                <param key="host" value="10.90.101.23"/>

                                <param key="port" value="5445"/>

                                </connector>

                                 

                                however, with a hard-coded IP address I don't see how failover could ever work.

                                 

                                Thanks,


                                J

                                • 13. Re: Client to access cluster (possibly hitting backup server)
                                  clebert.suconic

                                  I meant when you download the initial definition of the ConnectionFactory.

                                   

                                  You are downloading the ConnectionFactory every time. Maybe you are getting it from the system where it failed.

                                   

                                  If you just create a Connection, keep it open... and kill the live, does failover kick in?  (The same way as is done in the example?)
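
                                  In other words, a standalone check like this (just a sketch; the queue name is the one from your error log, and the sleep is only a window in which to kill the live server by hand):

                                  import javax.jms.Queue;
                                  import javax.jms.QueueConnection;
                                  import javax.jms.QueueConnectionFactory;
                                  import javax.jms.QueueSender;
                                  import javax.jms.QueueSession;
                                  import javax.jms.Session;
                                  import javax.naming.InitialContext;

                                  public class FailoverSmokeTest {
                                      public static void main(String[] args) throws Exception {
                                          InitialContext ctx = new InitialContext();
                                          QueueConnectionFactory cf =
                                                  (QueueConnectionFactory) ctx.lookup("ConnectionFactory");
                                          Queue queue = (Queue) ctx.lookup("/queue/RequestQueue");

                                          // One long-lived connection, created before the failure.
                                          QueueConnection conn = cf.createQueueConnection("user", "pass");
                                          conn.start();
                                          QueueSession session = conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                                          QueueSender sender = session.createSender(queue);

                                          sender.send(session.createTextMessage("before failover"));
                                          System.out.println("First send OK; kill the live server now...");
                                          Thread.sleep(30000); // window in which to kill the live server

                                          // If failover kicked in, this send reaches the backup.
                                          sender.send(session.createTextMessage("after failover"));
                                          System.out.println("Second send OK; failover worked.");
                                          conn.close();
                                      }
                                  }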

                                  • 14. Re: Client to access cluster (possibly hitting backup server)
                                    jhannah

                                    I do create the connection upon each request.  Is it best practice to open a single connection and keep it open indefinitely?

                                    You are correct that once the Live server fails, my connections are still being directed to the Live server (which is now down).  This is because of the connector I've posted above, which points to the Live server only.  I'm not sure how to point the client connector at a cluster instead of a specific server.

                                     

                                    Any thoughts?

                                     

                                    Thanks,

                                     

                                    J
