2 Replies Latest reply on Aug 5, 2009 8:07 AM by Joe Luo

    How to achieve ActiveMQ load balancing and failover

    joealex1 Newbie

      I was setting up JDBC Master/Slave as per the following doc: http://activemq.apache.org/jdbc-master-slave.html. This setup gives you failover, since the messages are always in the DB and are not owned by a single broker.
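      For reference, the broker side of that setup is just a shared JDBC persistence adapter in activemq.xml. A minimal sketch, following the jdbc-master-slave doc (the datasource bean name, driver, host and credentials below are placeholders, not taken from this thread):

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker1">
  <persistenceAdapter>
    <!-- All brokers point at the same database. Whichever broker grabs the
         exclusive DB lock first becomes the master; the others block as slaves. -->
    <jdbcPersistenceAdapter dataDirectory="${activemq.base}/data"
                            dataSource="#mysql-ds"/>
  </persistenceAdapter>
</broker>

<!-- Placeholder datasource bean; adjust driver/URL/credentials for your database -->
<bean id="mysql-ds" class="org.apache.commons.dbcp.BasicDataSource"
      destroy-method="close">
  <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
  <property name="url" value="jdbc:mysql://dbhost/activemq?relaxAutoCommit=true"/>
  <property name="username" value="activemq"/>
  <property name="password" value="activemq"/>
</bean>
```

      Every broker in the master/slave group uses the same configuration, differing only in brokerName.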

       

      The downside is that there is only one master (active) broker. In most cases this is fine, but for a high-volume setup it may overload the single master. Is there a way to achieve a multiple-masters (master-master) setup with JDBC persistence? Ideally this setup would give both of the desired results: load balancing and failover.

       

      Another question: for a JDBC master/slave I see the messages in the ACTIVEMQ_MSGS table. For a dynamically created queue, is any data stored in each broker, or is everything in the DB? And when is ACTIVEMQ_ACKS used? I did not see it being used.

       

      Can I use the networkConnector to achieve this along with the DB persistence ?

       


      And assume the client will connect via the failover protocol:

      failover:(tcp://host0:61616,tcp://host1:61616)

        • 1. Re: How to achieve ActiveMQ load balancing and failover
          Pedro Neveu Newbie

          Hi,

           

          You can potentially have a cluster built on a master/slave topology.  That is, you could have a cluster of brokers with each broker participating in a master/slave pair (in your case JDBC-based).  In your client you then have to list all the brokers in the system.  For instance, if you have 4 brokers running on different hosts, say brokers A, B, C and D, you could set up all the brokers in a network of brokers where each broker points to every other broker (for static discovery), or just use multicast in the networkConnector (http://activemq.apache.org/networks-of-brokers.html).  However, pairs of the brokers would also be set up as master/slave, based on what you already know.
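          As a sketch of that networkConnector wiring (the hostnames and ports here just follow the A-D example, so treat them as placeholders), broker A's activemq.xml might contain:

```xml
<networkConnectors>
  <!-- Static discovery: list the transport URIs of the other brokers -->
  <networkConnector name="A-to-peers"
      uri="static:(tcp://B:61617,tcp://C:61616,tcp://D:61617)"/>
  <!-- Or let the brokers find each other via multicast instead of a static list: -->
  <!-- <networkConnector uri="multicast://default"/> -->
</networkConnectors>
```

          With multicast discovery each broker also needs a discoveryUri on its transportConnector so peers can find it.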

           

          In the scenario above A and C could be in a master/slave relationship and similarly B and D could be in a master/slave relationship.  In this case A and C would be using the same port number and so would B and D.  So when you start all the brokers, A and B would be active and C and D would be waiting for a lock on the DB.

           

          Then on your client side your failover list must include all the brokers.  It would look something like this:

           

          failover:(tcp://A:61616,tcp://B:61617,tcp://C:61616,tcp://D:61617)
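           If you want clients to spread their connections across the active brokers rather than all piling onto the first reachable URI, note that the failover transport picks a URI at random by default (the randomize option). A sketch with the option spelled out explicitly, plus a reconnect-delay tuning knob:

```
failover:(tcp://A:61616,tcp://B:61617,tcp://C:61616,tcp://D:61617)?randomize=true&initialReconnectDelay=100
```

           Setting randomize=false instead makes every client prefer the brokers in listed order, which is useful when you want a primary/backup connection pattern rather than load distribution.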

           

          In this scenario you should get scalability, load balancing and failover by virtue of the network of brokers, and reliability and failover by virtue of the master/slave topology.

           

          Let me know if this is what you were asking, or did I misunderstand your question?  I'm not too clear on your question regarding the ACTIVEMQ_MSGS and ACTIVEMQ_ACKS tables.  Could you please give me some more information?

           

          Regards,

           

          Pedro

          • 2. Re: How to achieve ActiveMQ load balancing and failover
            Joe Luo Novice

            You can configure multiple slaves but unfortunately, you can only have one master broker.

             

            For a dynamically created queue, all messages are stored in the message store, and the ACTIVEMQ_ACKS table is used for acknowledgments sent by durable subscribers.

             

            You might be able to use a network of brokers for load balancing, provided that you have at least one consumer connected to each broker, because messages will not be forwarded across to another broker unless there is at least one consumer connected to it.

             

            To achieve both load balancing and persistence, you might configure one master/slave cluster on one node and another master/slave cluster on a separate node, and then network them together using network connectors.

             

            For instance, you can configure brokerA and brokerB as master/slave on localhost1 and brokerC and brokerD as master/slave on localhost2, then use a networkConnector to connect the master/slave cluster on localhost1 with the master/slave cluster on localhost2.
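            A sketch of that bridge on the localhost1 side (ports are assumptions; since brokerC and brokerD share localhost2 they would need distinct transport ports, 61616 and 61617 here). Nesting failover: inside static: makes the bridge follow whichever broker of the remote pair currently holds the master lock:

```xml
<networkConnectors>
  <!-- Bridge from the localhost1 cluster to whichever of brokerC/brokerD
       is currently the master on localhost2 -->
  <networkConnector name="cluster1-to-cluster2"
      uri="static:(failover:(tcp://localhost2:61616,tcp://localhost2:61617))"/>
</networkConnectors>
```

            The mirror-image connector would go into the brokerC/brokerD configuration, pointing back at the brokerA/brokerB pair on localhost1.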