2 Replies Latest reply on Jun 8, 2010 2:25 PM by Leo van den berg

    JMS Posting Performance Issue and Fix

    Richard Stanford Newbie

      I ran into a nasty concurrency issue with a Stateless bean that was publishing to a JMS queue.  Whenever I brought our load tester up to more than 10-15 concurrent threads, we saw the behavior described below.

      Actors:
      • CheckoutQueueSender - defined in components.xml as follows:

          <jms:managed-queue-sender name="checkoutQueueSender" auto-create="true" queue-jndi-name="/queue/checkoutQueue"/>

      • APIFramework - Stateless bean that processes requests from a URL reference, specified in pages.xml
      • APICallHandler - Stateless bean used by APIFramework; contains '@In QueueSender checkoutQueueSender'
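
      For reference, the original (slow) wiring in APICallHandler looked roughly like this. This is a minimal sketch: only the @In line comes from the setup above; the method name and body are illustrative assumptions.

      ```java
      // Sketch of the original APICallHandler wiring. Only the injected
      // checkoutQueueSender is from the actual setup; the rest is illustrative.
      @Name("apiCallHandler")
      @Scope(ScopeType.STATELESS)
      public class APICallHandler {

          // This injection (not the use of the sender) turned out to be
          // the source of the delay under load.
          @In
          QueueSender checkoutQueueSender;

          public void execute(Serializable payload) {
              // ... build and send the JMS message via checkoutQueueSender ...
          }
      }
      ```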



      Problem:



      The performance of calls through the ApiCallHandler would degrade badly after a while.  Detailed logging showed that while a call would process internally (from the top of the execute() method to the return) in about 100ms, the call itself (as logged with before/after measurements) was measuring at more like 2-3s.  Process of elimination showed that this odd gap went away when the queueSender injection was removed.


      Note again that the time delay was not in using the sender; it was in having it injected.  Moving the beans from Stateless POJOs to actual @Stateless EJBs (and back) made no difference.


      The eventual fix was to bypass the Seam JMS support and create a utility class, included here for reference.  This class is injected via normal Seam injection into ApiCallHandler, and performance is back up to where it should be, with no odd delays observed so far.


      import java.io.Serializable;

      import javax.jms.MessageProducer;
      import javax.jms.ObjectMessage;
      import javax.jms.Queue;
      import javax.jms.QueueConnection;
      import javax.jms.QueueConnectionFactory;
      import javax.jms.QueueSession;
      import javax.jms.Session;
      import javax.naming.InitialContext;

      import org.jboss.seam.ScopeType;
      import org.jboss.seam.annotations.AutoCreate;
      import org.jboss.seam.annotations.Logger;
      import org.jboss.seam.annotations.Name;
      import org.jboss.seam.annotations.Scope;
      import org.jboss.seam.log.Log;

      @AutoCreate
      @Name("messageSender")
      @Scope(ScopeType.STATELESS)
      public class MessageSender {

          @Logger
          Log log;

          public void sendObjectMessage(String queueName, Serializable payload) {
              QueueConnection queueConnection = null;
              QueueSession queueSession = null;
              MessageProducer messageProducer = null;
              InitialContext initialContext = null;
              try {
                  // Look up the connection factory and destination directly in JNDI,
                  // bypassing Seam's managed-queue-sender support entirely.
                  initialContext = new InitialContext();
                  QueueConnectionFactory queueConnectionFactory =
                          (QueueConnectionFactory) initialContext.lookup("/ConnectionFactory");
                  Queue queue = (Queue) initialContext.lookup(queueName);
                  queueConnection = queueConnectionFactory.createQueueConnection();
                  queueSession = queueConnection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                  messageProducer = queueSession.createProducer(queue);
                  ObjectMessage message = queueSession.createObjectMessage(payload);
                  messageProducer.send(message);
              } catch (Exception e) {
                  log.error("Error sending object message {0} to {1}", e, payload, queueName);
                  throw new RuntimeException(e);
              } finally {
                  // Close resources in reverse order of creation, ignoring close failures.
                  if (messageProducer != null) {
                      try { messageProducer.close(); } catch (Exception ignore) { }
                  }
                  if (queueSession != null) {
                      try { queueSession.close(); } catch (Exception ignore) { }
                  }
                  if (queueConnection != null) {
                      try { queueConnection.close(); } catch (Exception ignore) { }
                  }
                  if (initialContext != null) {
                      try { initialContext.close(); } catch (Exception ignore) { }
                  }
              }
          }
      }
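
      For completeness, injecting and using the utility from ApiCallHandler looks roughly like this. A minimal sketch only: the handler class shape and the queue JNDI name are assumed from the components.xml above, not copied from the real code.

      ```java
      // Illustrative only: the method signature is an assumption; the queue
      // JNDI name matches the managed-queue-sender definition shown earlier.
      @Name("apiCallHandler")
      @Scope(ScopeType.STATELESS)
      public class ApiCallHandler {

          // Normal Seam injection of the utility class; no managed
          // QueueSender involved, so the concurrency bottleneck is avoided.
          @In
          MessageSender messageSender;

          public void execute(Serializable payload) {
              messageSender.sendObjectMessage("/queue/checkoutQueue", payload);
          }
      }
      ```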



      Hopefully this helps someone else avoid having to track down this kind of error themselves.  It was pretty odd, since you'd never see it without load testing: everything worked, just really slowly under load.