
    Redistribute messages from a backup node of a cluster towards a live node of another cluster

    cheick

      Hi,

       

        This is my first time on this forum, and I would like to thank all the guys helping here.

       

      I am configuring two HornetQ live/backup pairs (2 clusters) on 2 JBoss nodes. Currently the failover is working as expected: once one JBoss node goes down, the messages of the live server are replicated on the corresponding backup node on the second JBoss node.

       

      My issue right now is how to access the messages on the HornetQ backup server. I am using HermesJMS from SoapUI to read the queue on the different JBoss nodes. From HermesJMS I can only see the messages on the active node, but using the JBoss CLI I can see the messages on the backup node.

       

      I would like to know whether there is a configuration tip that helps redistribute all the backup node's messages towards the live node, which is on the same JBoss node but in a different cluster?

       

      Thank you in advance

       

      Cheick

        • 1. Re: Redistribute messages from a backup node of a cluster towards a live node of another cluster
          jbertram

          I am configuring two HornetQ live/backup pairs (2 clusters) on 2 JBoss nodes.

          Just a word here on terminology...

           

          A live/backup pair is indeed part of a cluster, but what you're creating here isn't 2 individual clusters but one single cluster with 2 live/backup pairs.
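          For reference, such a cluster is normally defined by a single cluster-connection element shared by all four HornetQ instances. Here's a minimal sketch of what that usually looks like (the connector and discovery-group names are assumptions, not taken from your configuration):

              <cluster-connections>
                  <cluster-connection name="my-cluster">
                      <!-- only addresses under this prefix participate in the cluster -->
                      <address>jms</address>
                      <!-- the connector other nodes use to connect back to this node -->
                      <connector-ref>netty</connector-ref>
                      <discovery-group-ref discovery-group-name="dg-group1"/>
                  </cluster-connection>
              </cluster-connections>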

           

          My issue right now is how to access the messages on the HornetQ backup server. I am using HermesJMS from SoapUI to read the queue on the different JBoss nodes. From HermesJMS I can only see the messages on the active node, but using the JBoss CLI I can see the messages on the backup node.

          How are you attempting to access the messages on the now-live backup?  Can you share your configuration?

           

          I would like to know whether there is a configuration tip that helps redistribute all the backup node's messages towards the live node, which is on the same JBoss node but in a different cluster?

          Typically messages will be redistributed from one node to another based on demand.

           

          To be clear, if the nodes are part of different clusters then there won't be any automated redistribution.  That said, I think you're just mixing up your terminology here and the nodes really are part of the same cluster.

          • 2. Re: Redistribute messages from a backup node of a cluster towards a live node of another cluster
            cheick

            Hi Justin,

             

            Thank you for the response and the clarification concerning the cluster of 2 live/backup pairs.

             

            How are you attempting to access the messages on the now-live backup? Can you share your configuration?

            Indeed, my goal is to access the messages on the now-live backup with HermesJMS using the RemoteConnectionFactory (remote://host:4447). Please find my HornetQ server configurations for each JBoss node.

             

            Currently, even after the failover I can only access the messages on the live. My final goal is to additionally have the messages from the now-live backup available on the live, so that the failover is seamless for the consumer connected to the live.

             

            Thank you again for your support

             

            C.

            • 3. Re: Redistribute messages from a backup node of a cluster towards a live node of another cluster
              jbertram

              Indeed, my goal is to access the messages on the now-live backup with HermesJMS using the RemoteConnectionFactory (remote://host:4447). Please find my HornetQ server configurations for each JBoss node.

              Looking at your configuration I can see that on each instance of JBoss AS the "live" HornetQ instance has a <connection-factory name="RemoteConnectionFactory"> (among other connection factories) while the "backup" instances have no connection factories.  If you connect to an instance of JBoss AS (e.g. using remote://host:4447) and lookup "jms/RemoteConnectionFactory" then you're going to get a connection factory pointing to the "live" instance, not the "backup" instance.  This means that if you create a connection with that connection factory you will have access to the messages on the "live" instance, not the "backup" instance.  I hope that makes sense.
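              For illustration, such a factory on the "live" instance typically looks something like this sketch (the connector name is an assumption, not taken from your actual configuration):

                  <connection-factory name="RemoteConnectionFactory">
                      <connectors>
                          <connector-ref connector-name="netty"/>
                      </connectors>
                      <entries>
                          <!-- the java:jboss/exported prefix is what makes the factory
                               visible to remote JNDI clients as "jms/RemoteConnectionFactory" -->
                          <entry name="java:jboss/exported/jms/RemoteConnectionFactory"/>
                      </entries>
                  </connection-factory>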

               

              Currently, even after the failover I can only access the messages on the live. My final goal is to additionally have the messages from the now-live backup available on the live, so that the failover is seamless for the consumer connected to the live.

              Any client connected to a "live" instance with a connection properly configured for fail-over will automatically fail-over to the corresponding "backup" instance in the event that the "live" instance fails.  The clients which fail-over will be able to continue consuming and producing messages just as if they were connected to the now-dead "live" instance.  It should be "seamless," as you say.
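              For that to happen the connection factory the clients use has to be configured for HA, e.g. something along these lines (the values are illustrative, not from your configuration):

                  <connection-factory name="RemoteConnectionFactory">
                      ...
                      <!-- clients download the live/backup topology and fail over automatically -->
                      <ha>true</ha>
                      <!-- milliseconds between reconnection attempts -->
                      <retry-interval>1000</retry-interval>
                      <!-- -1 means keep trying to reconnect indefinitely -->
                      <reconnect-attempts>-1</reconnect-attempts>
                  </connection-factory>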

               

              If you really want the messages from the now-live "backup" instance to be redistributed to the colocated "live" instance then you'll need to consume all the messages from the "live" instance and then the messages from the "backup" instance will be redistributed.  This is the normal redistribution semantic for nodes in a cluster.
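              Redistribution itself is governed by the redistribution-delay address setting, which has to be >= 0 for messages to move between nodes at all (the default in standalone HornetQ is -1, i.e. disabled). A minimal sketch, where the match pattern is just an assumption:

                  <address-settings>
                      <address-setting match="#">
                          <!-- redistribute messages 1 second after the last local
                               consumer goes away; -1 disables redistribution -->
                          <redistribution-delay>1000</redistribution-delay>
                      </address-setting>
                  </address-settings>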

               

              Couple of additional things:

              1. The "soft.jms.xaconnectionfactory" pooled-connection-factory doesn't make sense to me as it is configured for fail-over.  A pooled-connection-factory is restricted to clients running in the same JVM as the broker which means that if the broker dies then the client will almost certainly be dead as well.
              2. The "test" cluster-connection doesn't appear to have any valid use.
              3. You might have trouble with remote HornetQ clients or even with clustering itself if you bind JBoss AS to 0.0.0.0.  This has been covered on the forum lots of times (e.g. here).
              • 4. Re: Redistribute messages from a backup node of a cluster towards a live node of another cluster
                cheick

                Hi Justin,

                 

                Looking at your configuration I can see that on each instance of JBoss AS the "live" HornetQ instance has a <connection-factory name="RemoteConnectionFactory"> (among other connection factories) while the "backup" instances have no connection factories. If you connect to an instance of JBoss AS (e.g. using remote://host:4447) and lookup "jms/RemoteConnectionFactory" then you're going to get a connection factory pointing to the "live" instance, not the "backup" instance. This means that if you create a connection with that connection factory you will have access to the messages on the "live" instance, not the "backup" instance. I hope that makes sense.

                You're right, that makes sense. The documentation mentioned creating the connection factory and JMS destinations only on the live server.

                 

                Currently, even after the failover I can only access the messages on the live. My final goal is to additionally have the messages from the now-live backup available on the live, so that the failover is seamless for the consumer connected to the live.

                Any client connected to a "live" instance with a connection properly configured for fail-over will automatically fail-over to the corresponding "backup" instance in the event that the "live" instance fails. The clients which fail-over will be able to continue consuming and producing messages just as if they were connected to the now-dead "live" instance. It should be "seamless," as you say.

                 

                If you really want the messages from the now-live "backup" instance to be redistributed to the colocated "live" instance then you'll need to consume all the messages from the "live" instance and then the messages from the "backup" instance will be redistributed. This is the normal redistribution semantic for nodes in a cluster.

                It looks like the configuration is done on the server side and the remaining part should be managed on the client side with automatic client failover. Finally, exposing the 2 endpoints remote://hostname1:4447 and remote://hostname2:4447 is enough.

                 

                Couple of additional things:

                1. The "soft.jms.xaconnectionfactory" pooled-connection-factory doesn't make sense to me as it is configured for fail-over. A pooled-connection-factory is restricted to clients running in the same JVM as the broker which means that if the broker dies then the client will almost certainly be dead as well.
                2. The "test" cluster-connection doesn't appear to have any valid use.

                My bad, I added it even though I am not using it. I initially introduced the "test" cluster-connection to check how it works.

                 

                3. You might have trouble with remote HornetQ clients or even with clustering itself if you bind JBoss AS to 0.0.0.0. This has been covered on the forum lots of times (e.g. here).

                After facing multiple issues, I set the box's IP in the startup script.

                 

                I am going to test the different use cases listed (automatic client failover and consuming all the messages) and let you know quickly. You helped me clarify a lot. Thank you

                • 5. Re: Redistribute messages from a backup node of a cluster towards a live node of another cluster
                  jbertram

                  You're right, that makes sense. The documentation mentioned creating the connection factory and JMS destinations only on the live server.

                  Having connection factories only for the "live" instance is the generally recommended configuration as the "backup" instance typically doesn't need them for the following reasons:

                  1. Clients will automatically be connected to the backup if the live fails.
                  2. Client connections will automatically be load-balanced to the backup even if they look up the connection factory on another node in the cluster.

                   

                  It's also worth noting that in the case of colocated live/backup pairs a node failure is particularly problematic because:

                  • You lose half of the computing resources for message processing (since the work of 2 distinct machines is reduced to just 1).
                  • You lose the backup for the other node.

                   

                  In a dedicated live/backup topology where every instance of the broker runs on its own machine you can survive the failure of both live nodes.  That's not true in a colocated topology.

                   

                  It looks like the configuration is done on the server side and the remaining part should be managed on the client side with automatic client failover. Finally, exposing the 2 endpoints remote://hostname1:4447 and remote://hostname2:4447 is enough.

                  I'm not exactly sure what you're saying here.

                  • 6. Re: Redistribute messages from a backup node of a cluster towards a live node of another cluster
                    cheick

                    Unlike my initial thought of having a specific URL to access the backup nodes, I am just saying that the existing URL for accessing the messages through port 4447 is sufficient.

                    • 7. Re: Redistribute messages from a backup node of a cluster towards a live node of another cluster
                      jbertram

                      Understood.

                      • 8. Re: Redistribute messages from a backup node of a cluster towards a live node of another cluster
                        cheick

                        Hi Justin,

                         

                        I would like to thank you for the support. As of now, I consider this thread resolved.

                         

                        Cheers