
    Clustering ActiveMQ Master/Slave pairs topology

    gbm_systems

      How can I combine a cluster topology with master/slave?

       

      Here is what I need:

       

      I have two groups of master/slave servers to ensure high availability:

       

         Group One: AMQMaster1, AMQSlaveM11, AMQSlaveM12, in a Master/Slave Shared Storage topology

       

        Group Two: AMQMaster2, AMQSlaveM21, AMQSlaveM22, in a Master/Slave Shared Storage topology

       

      Secondly, to spread the load and increase performance, I would like to cluster the two groups above (group one and group two) together.

       

      Can anyone explain how to cluster the two Master/Slave nodes?

       

      Edited by: gbm on Mar 12, 2010 12:43 PM

        • 1. Re: Clustering ActiveMQ Master/Slave pairs topology
          l1nk

          Hi.

           

          I'm having the same problem.

           

          Did you get any answer?!

          • 2. Re: Clustering ActiveMQ Master/Slave pairs topology
            smays@edmunds.com

            You can do this by embedding failover: inside your static: network connector directives, like:

            static:(failover:(tcp://broker1:1102,tcp://failover-broker1:1102),failover:(tcp://broker2:1102,tcp://failover-broker2:1102))
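
            To make that concrete, here is a rough sketch (not from the original post) of wiring such a directive into an embedded broker for group one. The host names and port 1102 are just the ones used in the URI above; the duplex flag and the choice to list only the other group are my assumptions:

            import org.apache.activemq.broker.BrokerService;
            import org.apache.activemq.network.NetworkConnector;

            public class GroupOneBroker {
                public static void main(String[] args) throws Exception {
                    BrokerService broker = new BrokerService();
                    broker.setBrokerName("AMQMaster1");        // name taken from the question above
                    broker.addConnector("tcp://0.0.0.0:1102"); // transport for clients and inbound bridges

                    // Bridge to the other master/slave group: failover: makes the bridge
                    // reconnect to whichever broker in that group is currently the master.
                    NetworkConnector bridge = broker.addNetworkConnector(
                            "static:(failover:(tcp://broker2:1102,tcp://failover-broker2:1102))");
                    bridge.setDuplex(true);                    // assumption: forward messages in both directions

                    broker.start();
                    broker.waitUntilStopped();
                }
            }

            The equivalent networkConnector element with the same uri in activemq.xml does the same job if you configure the brokers through XML rather than in code.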

             

            Big pluses:

            1. You can scale wide without multicast

            2. You can sustain a single failure within each group of VMs because of the failover (see the client-side sketch below)
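
            The same failover: transport also helps on the client side of each group, which the reply above does not spell out but which fits the original HA requirement: a client lists every broker in its master/slave group and reconnects to whichever one holds the shared-storage lock. A minimal sketch, assuming the host names from the question and port 1102 from the URI above:

            import javax.jms.Connection;
            import javax.jms.Session;
            import org.apache.activemq.ActiveMQConnectionFactory;

            public class GroupOneClient {
                public static void main(String[] args) throws Exception {
                    // failover: retries the list until it reaches the broker that currently
                    // holds the shared-storage lock, i.e. the active master of group one.
                    ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                            "failover:(tcp://AMQMaster1:1102,tcp://AMQSlaveM11:1102,tcp://AMQSlaveM12:1102)");
                    Connection connection = factory.createConnection();
                    connection.start();
                    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    // ... produce/consume as usual; the transport reconnects transparently on failover ...
                    session.close();
                    connection.close();
                }
            }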

             

            BAD stuff:

            1. There is a known bug that causes a major problem when you fail over to your backup broker and then attempt to fail BACK to your primary broker.

               a. Messages are not sent from the primary broker after fail BACK!

               b. See this ticket: https://issues.apache.org/activemq/browse/AMQ-2114

            2. To fail BACK to your primary from your secondary, you must stop all brokers, nuke the backing stores, and then start the primaries followed by the secondaries.

               a. BOOOO!

               b. Trying to get this issue addressed

               c. It's still there as of 3.5.1