10 Replies Latest reply on Feb 8, 2016 3:05 AM by Miroslav Novak

    MDB queue consumption issue

    NA NA Newbie

      I have MDB queue consumption issues regarding consumption speed, and also an issue with the number of active MDB instances that service the queue, on both WildFly 8.1 and 9.0.2.

      The consumption issue is that it takes around 25 ms for an MDB to consume a new message from a JMS queue. To demonstrate, I've made a simple servlet that sends 100 messages to a queue, and on the other side a simple MDB that consumes those messages. The MDB simulates processing of some data with Thread.sleep(50) and moves on to the next message. To configure WildFly I've used standalone-full-ha.xml and added a queue/test. The code:

       

      Servlet's doGet:

      protected void doGet(HttpServletRequest request, HttpServletResponse response)
              throws ServletException, IOException {
          String destinationName = "java:/jms/queue/test";
          PrintWriter out = response.getWriter();
          Context ic = null;
          ConnectionFactory cf = null;
          Connection connection = null;
          try {
              ic = new InitialContext();
              cf = (ConnectionFactory) ic.lookup("/ConnectionFactory");
              Queue queue = (Queue) ic.lookup(destinationName);
              connection = cf.createConnection();
              Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
              MessageProducer publisher = session.createProducer(queue);
              connection.start();

              for (int i = 0; i < 100; i++) {
                  TextMessage message = session.createTextMessage("Hello " + i);
                  publisher.send(message);
                  LOGGER.info("Sent msg " + i);
              }
              out.println("Messages sent to the JMS provider");
          } catch (Exception exc) {
              exc.printStackTrace();
          } finally {
              if (connection != null) {
                  try {
                      connection.close();
                  } catch (JMSException e) {
                      e.printStackTrace();
                  }
              }
          }
      }
      

       

      MDB:

      @MessageDriven(
          activationConfig = {
              @ActivationConfigProperty(propertyName = "destination", propertyValue = "jms/queue/test"),
              @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
              @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge"),
              @ActivationConfigProperty(propertyName = "maxSession", propertyValue = "1")
          },
          name = "ConsumerMdb")
      public class ConsumerMdb implements MessageListener {

          private static final Logger LOGGER = Logger.getLogger(ConsumerMdb.class.getName());

          private static int globalId = 1;
          private int id;

          /**
           * Default constructor.
           */
          public ConsumerMdb() {
              id = ++globalId;
              LOGGER.warning("created mdb " + id);
          }

          /**
           * @see MessageListener#onMessage(Message)
           */
          public void onMessage(Message message) {
              LOGGER.info("IN ON MSG " + id);
              try {
                  Thread.sleep(50);
                  TextMessage msg = (TextMessage) message;
                  LOGGER.info("Processing: " + msg.getText());
              } catch (Exception e) {
                  e.printStackTrace();
              }
              LOGGER.info("OUT ON MSG");
          }
      }
      

       

      I've tested this on two PCs running Windows and on a CentOS virtual machine; they all show a time difference of around 25 ms between the "OUT ON MSG" log and the "IN ON MSG" log. I've tested the same code on Apache TomEE, and the consumption speed there is around 1 ms. I've checked everything I could think of and couldn't find the cause. Any ideas how to fix this issue?

       

      Another issue I have with WildFly is the number of active MDB instances servicing a single queue. E.g. when I remove the maxSession property, the number of active MDBs is around 2-3 on Windows, while on CentOS it is around 9. I've tried using the pool annotation's min and max properties, to no avail. When I increase the pool size to e.g. 4000, WildFly will initially create 4000 MDBs, and when the servlet starts sending messages it'll create 2 new MDBs and just use those. Any ideas how to fix this too?

        • 1. Re: MDB queue consumption issue
          Justin Bertram Master

          I'm not sure about the delay between processing messages, but the behavior you're observing with regard to the number of active MDBs is almost certainly caused by client-side buffering.  Each client (i.e. each session) has a buffer which holds messages so that the client doesn't have to do a network round-trip every time it consumes a message.  The buffer is controlled by the "consumerWindowSize" activation configuration property.  The buffer size is 1024 * 1024 bytes by default so if you have a bunch of small messages then just a few of the sessions will prefetch those messages into their buffers and other clients won't be able to consume them.  You can set consumerWindowSize to 0 to disable message buffering and that will allow more clients to process messages concurrently but will also eliminate any buffering optimization.  I recommend you benchmark and adjust the setting according to your needs.
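A minimal sketch of how that activation config change might look on the MDB from the original post (the class shape is abbreviated for illustration):

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(
    activationConfig = {
        @ActivationConfigProperty(propertyName = "destination", propertyValue = "jms/queue/test"),
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
        // 0 disables the client-side prefetch buffer so messages are spread
        // across all sessions; any positive value is a buffer size in bytes
        // (the default is 1024 * 1024).
        @ActivationConfigProperty(propertyName = "consumerWindowSize", propertyValue = "0")
    },
    name = "ConsumerMdb")
public class ConsumerMdb implements MessageListener {
    public void onMessage(Message message) { /* process as before */ }
}
```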

           

          I'll try to take a look at the delay between messages...

           

          Also, I'd encourage you to push more messages through during your tests.  One hundred messages is probably too small a sample size to get statistically significant data.  Maybe try 50 producers each pushing 10,000 messages.
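The producer fan-out above can be sketched in plain Java; here the JMS send is replaced by a counter increment so the pattern is runnable without a broker (in a real test each task would create its own Session/MessageProducer and call producer.send(message) instead):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the suggested load shape: 50 producers each pushing 10,000 messages.
public class LoadSketch {
    static final int PRODUCERS = 50;
    static final int MESSAGES_PER_PRODUCER = 10_000;

    public static long run() {
        AtomicLong sent = new AtomicLong();
        ExecutorService pool = Executors.newFixedThreadPool(PRODUCERS);
        for (int p = 0; p < PRODUCERS; p++) {
            pool.submit(() -> {
                for (int i = 0; i < MESSAGES_PER_PRODUCER; i++) {
                    sent.incrementAndGet(); // stand-in for producer.send(message)
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return sent.get();
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints 500000
    }
}
```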

           

          Lastly, if you're in the same JVM as the broker then I'd also encourage you to use the JmsXA connection factory for sending your messages.
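A sketch of that swap on the producer side (the servlet class name is hypothetical): instead of looking up the remote "/ConnectionFactory" via JNDI, inject the in-VM pooled factory.

```java
import javax.annotation.Resource;
import javax.jms.ConnectionFactory;
import javax.servlet.http.HttpServlet;

public class ProducerServlet extends HttpServlet {

    // Pooled, in-VM connection factory; avoids the remote connector used
    // by "/ConnectionFactory" when producer and broker share a JVM.
    @Resource(mappedName = "java:/JmsXA")
    private ConnectionFactory connectionFactory;

    // ... doGet would then call connectionFactory.createConnection()
    //     instead of the JNDI-looked-up factory, as in the original code.
}
```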

          • 2. Re: MDB queue consumption issue
            Justin Bertram Master

            I made a quick modification to the helloworld-mdb WildFly quickstart to mimic your use case, and I only see a delay of 2-4 milliseconds at the end of the 100-message block (with most being 2-3 ms). I pushed the commit to my fork so you can test it out yourself if you like.  Just follow the instructions in the quickstart; it's pretty straightforward.

            • 3. Re: MDB queue consumption issue
              NA NA Newbie

               I've downloaded WildFly 10.0.0.Final to test this on, and it's slower on some messages and about the same on others, screenshot:

               [screenshot attachment: mdb.PNG]

               I thought the issue might be my laptop's performance or JVM performance, but considering that TomEE does it with a 1 ms delay... I don't see why WildFly would be slower at 25 ms, or even at 2-3 ms as in your case. At this point I'm guessing there's a check-box that I didn't check.

              Thanks for the replies!

              • 4. Re: MDB queue consumption issue
                Justin Bertram Master

                There are lots of reasons why TomEE might be faster, depending on what the messaging implementation is doing behind the scenes and how it's configured by default (e.g. not syncing to disk, treating messages as non-durable by default).  It's a tough comparison to make without knowing that the underlying semantics are 100% equivalent.

                • 5. Re: MDB queue consumption issue
                  NA NA Newbie

                  Yeah, I know. I just used TomEE as a sort of benchmark for JMS because I couldn't find any actual benchmarks. Still, 25 ms is way too slow for me.

                  • 6. Re: MDB queue consumption issue
                    Justin Bertram Master

                    What exactly are your performance requirements?  Are you required to process messages serially?  If not, change "maxSession" to something greater than 1 so you'll be able to process messages concurrently.  If messages are piling up in the MDB's session's buffers then tune the "consumerWindowSize" or test with a larger volume of messages.  HornetQ (the broker on which Artemis is based) has been benchmarked handling over 8 million messages per second using SpecJMS.  I expect that Artemis could do even more than that with some optimizations that we've made since then.
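Both suggestions can be combined in the activation config; the values below are illustrative only and would need to be benchmarked for a real workload:

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(name = "ConsumerMdb", activationConfig = {
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "jms/queue/test"),
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    // More than one session -> concurrent message processing.
    @ActivationConfigProperty(propertyName = "maxSession", propertyValue = "15"),
    // Disable prefetch so messages don't pile up in one session's buffer.
    @ActivationConfigProperty(propertyName = "consumerWindowSize", propertyValue = "0")
})
public class ConsumerMdb implements MessageListener {
    public void onMessage(Message message) { /* process as before */ }
}
```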

                    • 7. Re: MDB queue consumption issue
                      NA NA Newbie

                      Well, yes and no. I'm using MDBs to exchange messages between "agents": multiple workers send messages to a single master and await a response. I used "consumerWindowSize" and it did improve the number of active MDBs (btw, thanks for that), but the master is a bottleneck which somehow slows the work down even more. So an algorithm that should run for 5 minutes runs for over an hour. The only difference between the machine where it runs in 5 minutes and the rest I tried is the MDB consumption speed. If only I could figure out which WildFly background process is the issue. E.g. for some reason, if the management service doesn't start, the MDB consumption speed slows down from 25 ms to 150 ms. So I'm guessing it's some banal thing like that, but finding it is the main problem.

                      • 8. Re: MDB queue consumption issue
                        Miroslav Novak Master

                        I agree with Justin, increasing maxSession and adjusting consumerWindowSize should reduce latencies.

                         

                        If you don't need transactions, you can gain some performance by disabling them. Can you remove transaction="xa" from the configuration of the pooled-connection-factory? Alternatively, just adding @TransactionAttribute(value = TransactionAttributeType.NOT_SUPPORTED) to your MDB will do the job. Transactions need additional disk writes, which might slow things down significantly.
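A sketch of that annotation applied to the MDB from the original post (activation config abbreviated):

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(name = "ConsumerMdb", activationConfig = {
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "jms/queue/test"),
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")
})
// Opt out of container-managed (XA) transactions: delivery then skips the
// transactional disk writes mentioned above.
@TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
public class ConsumerMdb implements MessageListener {
    public void onMessage(Message message) { /* process as before */ }
}
```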

                        • 9. Re: MDB queue consumption issue
                          NA NA Newbie

                          Lack of an SSD makes life hard. Removing transaction="xa" didn't work, but the annotation did, and it improved the speed from 25 ms to 1-2 ms. In the meantime I got my hands on a machine with an SSD and got about the same results as Justin: 2-4 ms with transactions enabled.

                          • 10. Re: MDB queue consumption issue
                            Miroslav Novak Master

                            Thanks for sharing this information! I hope it's ok for you to be without (XA) transactions.