8 Replies Latest reply on Mar 24, 2017 10:02 AM by Justin Bertram

    setting compress-large-messages=true on ActiveMQ connection factory results in scheduled messages sent to queue

    Robert Benkovitz Newbie

      Hello:

       

      I'm attempting to tune the sending performance of ActiveMQ Artemis in WildFly 10.1.0.  We are upgrading our application from JBoss 5.1, and have noticed that our sender clients are significantly slower in ActiveMQ than they are in JBoss Messaging.  Our main design pattern is to read through large files, create non-persistent messages, and send them to WildFly queues (using ActiveMQ broker) where they are consumed by various MDBs.

       

      One setting that seemed to greatly speed up the sender response was to set compress-large-messages=true on the ActiveMQ connection factory.  However, when monitoring the queues I notice that many messages arrive as "scheduled" messages and sit there unconsumed.  Setting the property back to the default (false) restores the proper behavior, but at the slower speed.
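
      For reference, this is the attribute I'm toggling; a minimal sketch of the connection-factory element in the messaging-activemq subsystem (the other attribute values here are illustrative, not our full config):

```xml
<!-- Connection factory with large-message compression enabled.
     Entries/connectors shown are placeholders for illustration. -->
<connection-factory name="RemoteConnectionFactory"
                    entries="java:jboss/exported/jms/RemoteConnectionFactory"
                    connectors="http-connector"
                    compress-large-messages="true"/>
```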

       

      I have searched the documentation for any mention of this "feature", to no avail.  Has anyone else seen this issue?  Is there a way around it?

       

      Thanks for any help you can provide.

        • 1. Re: setting compress-large-messages=true on ActiveMQ connection factory results in scheduled messages sent to queue
          Justin Bertram Master

          Can you provide a fuller explanation of your use-case?  For example:

          • Are all messages sent using the same connection or is a connection created every time a message is sent (which is a common anti-pattern)?
          • What is your server configuration?
          • Can you quantify "slow speeds"?  What kind of performance numbers were you getting before?
          • What specifically do you mean by "sender response"?
          • How large are the messages you're sending?

           

          If you have a reproducible test-case that would be ideal so I can see exactly what your code is doing.

          • 2. Re: setting compress-large-messages=true on ActiveMQ connection factory results in scheduled messages sent to queue
            Robert Benkovitz Newbie

            First, thanks for the prompt reply!!  Answers to your questions:

             

            • Are all messages sent using the same connection or is a connection created every time a message is sent (which is a common anti-pattern)?
              • All messages are being sent using the same connection
            • What is your server configuration?
              • For this test we're running a four-node managed domain using full-ha - essentially the default configuration that ships with WildFly.  Below is the messaging-activemq config from our domain.xml.  We are sending to the LotReconProcessorQueue and connecting via the RemoteConnectionFactory:

             

            <subsystem xmlns="urn:jboss:domain:messaging-activemq:1.0">
                <server name="default">
                    <security enabled="false"/>
                    <cluster password="${jboss.messaging.cluster.password:defaultPWD}"/>
                    <security-setting name="#">
                        <role name="guest" send="true" consume="true" create-non-durable-queue="true" delete-non-durable-queue="true"/>
                        <role name="bpc" send="true" consume="true" create-non-durable-queue="true" delete-non-durable-queue="true"/>
                    </security-setting>
                    <address-setting name="#" dead-letter-address="jms.queue.DLQ" expiry-address="jms.queue.ExpiryQueue" redelivery-delay="5000" max-delivery-attempts="5"/>
                    <address-setting name="jms.queue.BPCLoggingQueue" dead-letter-address="jms.queue.DLQ" expiry-address="jms.queue.ExpiryQueue" max-delivery-attempts="1"/>
                    <address-setting name="jms.queue.PASNettingProcessorQueue" max-delivery-attempts="1"/>
                    <http-connector name="http-connector" socket-binding="http" endpoint="http-acceptor"/>
                    <http-connector name="http-connector-throughput" socket-binding="http" endpoint="http-acceptor-throughput">
                        <param name="batch-delay" value="50"/>
                    </http-connector>
                    <in-vm-connector name="in-vm" server-id="0"/>
                    <http-acceptor name="http-acceptor" http-listener="default"/>
                    <http-acceptor name="http-acceptor-throughput" http-listener="default">
                        <param name="batch-delay" value="50"/>
                        <param name="direct-deliver" value="false"/>
                    </http-acceptor>
                    <in-vm-acceptor name="in-vm" server-id="0"/>
                    <broadcast-group name="bg-group1" jgroups-channel="activemq-cluster" connectors="http-connector"/>
                    <discovery-group name="dg-group1" jgroups-channel="activemq-cluster"/>
                    <cluster-connection name="my-cluster" address="jms" connector-name="http-connector" discovery-group="dg-group1"/>
                    <jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>
                    <jms-queue name="LotReconProcessorQueue" entries="java:jboss/exported/jms/queue/LotReconProcessorQueue" durable="false"/>
                    <connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/>
                    <connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector" compress-large-messages="true" block-on-acknowledge="true" block-on-durable-send="false" reconnect-attempts="-1"/>
                    <pooled-connection-factory name="activemq-ra" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="in-vm" transaction="xa"/>
                </server>
            </subsystem>

            • Can you quantify "slow speeds"?  What kind of performance numbers were you getting before?
              • We haven't quantified it exactly - we are running apples-to-apples comparisons using the same data on two equally provisioned systems: one running JBoss 5.1 (JBoss Messaging) on Java 6, the other WildFly 10.1 on Java 8.  For this particular job, the message processing takes very few resources on the server side (I've even tried "eliminating" server-side processing by having the MDB do nothing with the message - same level of performance).  The executed code is exactly the same between the two systems, yet overall job speed is significantly slower on WildFly 10.1.
            • What specifically do you mean by "sender response"?
              • Good question!!  I have a metric around the QueueSender.send() method which calculates the cumulative time this method takes throughout the job, and reports that number at the end.  For this particular job, which lasts a bit over 4 minutes using WildFly and about 2:30 using JBoss 5, the cumulative time spent in this method is roughly four times greater in WildFly than in JBoss (140 seconds vs 35 seconds).  When setting compress-large-messages=true this cumulative time is actually slightly _faster_ than what we see in JBoss 5 - however, a bunch of messages are getting sent with an unknown "schedule" and are NOT being processed by the consumer (i.e. the MDB).
            • How large are the messages you're sending?
              • Not 100% sure, but message sizes can vary from quite small (fewer than 10 objects) to fairly large (a package of several hundred objects).  At the current default threshold for what counts as a "large" message (10MB, I believe), I'm assuming about a third of the messages sent to the queue exceed that size.  So the messages are of fairly significant size.
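
             For what it's worth, the metric is just a cumulative wall-clock accumulator around the send call, along these lines (a simplified sketch, not our production code - the real version wraps QueueSender.send(), which I've stubbed out here as a Runnable):

```java
// Simplified sketch of the send-timing metric: accumulates the wall-clock
// time spent inside a wrapped call (in the real job, QueueSender.send()).
public class SendTimer {
    private long totalNanos = 0;
    private long calls = 0;

    // Times a single invocation and adds it to the running total.
    public void time(Runnable send) {
        long start = System.nanoTime();
        send.run();
        totalNanos += System.nanoTime() - start;
        calls++;
    }

    public long calls() { return calls; }

    // Cumulative time reported at the end of the job, in milliseconds.
    public long totalMillis() { return totalNanos / 1_000_000; }
}
```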

             

            As for a test case, unfortunately I don't have a concise one at this point - this is actually a current "production" job.  I am seeing similar performance behavior with several of our jobs of a similar paradigm - slower throughput using WildFly as compared to JBoss - although I haven't been able to examine these in detail.

             

            Let me know if you need anything else, and thank you for your help.

             

            - Rob

            • 3. Re: setting compress-large-messages=true on ActiveMQ connection factory results in scheduled messages sent to queue
              Robert Benkovitz Newbie

              Update to the issue:

              • First, I overstated the size of our messages - about a third of them are over 100k, not 10MB - also the default min-large-message-size in ActiveMQ is 100k.
              • Second, I just experimented with this setting.  I first set it to five times the default (around 500k), which GREATLY SLOWED the posting of messages to the queue - the job took about 80% longer to post the messages.  When I lowered min-large-message-size to 1k, it SPED UP the poster greatly.  During the job I observed ActiveMQ write several hundred files to disk in the <serverHome>/data/activemq/largemessages folder; after the job completed, the folder was empty.

              To me this is counter-intuitive - why would the sender perform BETTER when the receiver (the queue on the server) had to write files to disk?  Is it something about the acknowledgement of the messages that does it?  Does it have (wild guess here) anything to do with block-on-durable-send or block-on-non-durable-send?  Just seems strange.
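
              For the record, the experiment was just varying min-large-message-size on the remote connection factory, e.g. (value shown is the 1k case; attribute placement per my understanding of the subsystem schema, so treat this as a sketch):

```xml
<!-- min-large-message-size is in bytes; 1024 here is the 1k experiment. -->
<connection-factory name="RemoteConnectionFactory"
                    entries="java:jboss/exported/jms/RemoteConnectionFactory"
                    connectors="http-connector"
                    compress-large-messages="true"
                    min-large-message-size="1024"/>
```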

              • 4. Re: setting compress-large-messages=true on ActiveMQ connection factory results in scheduled messages sent to queue
                Justin Bertram Master

                If you're sending non-durable (i.e. non-persistent) messages, as you indicated in your initial comment, the sender should basically return immediately because the default value for block-on-non-durable-send is false.  In other words, the client shouldn't wait for the server to acknowledge receipt of the message.  It should essentially fire and forget, returning control to the sender almost immediately.  On a single instance of Artemis running on my laptop I can send thousands of messages per second.  Also, "large" messages (i.e. messages that exceed the min-large-message-size) should always be slower because of the way they are persisted to disk outside the normal journal.  I think this is the first thing we should investigate.
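
                To be explicit about the defaults in play, the relevant connection-factory attributes would look something like this (a sketch, not your exact config; block-on-non-durable-send defaults to false anyway, so stating it is redundant):

```xml
<!-- With these settings a non-persistent send should not block waiting
     for the broker to acknowledge receipt. -->
<connection-factory name="RemoteConnectionFactory"
                    entries="java:jboss/exported/jms/RemoteConnectionFactory"
                    connectors="http-connector"
                    block-on-durable-send="false"
                    block-on-non-durable-send="false"/>
```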

                 

                I recommend you simplify your test setup drastically.  Use a single client.  Send messages to a single server (i.e. not a "four node managed domain" as you're doing now).  Don't have any consumers.  This will give you a clearer picture of sending performance.  Once you're satisfied with that then we can start adding back other elements of your application environment and see how performance changes (if at all).

                 

                After you simplify your application run some tests and let me know the performance results.

                • 6. Re: setting compress-large-messages=true on ActiveMQ connection factory results in scheduled messages sent to queue
                  Robert Benkovitz Newbie

                  Hi Justin:

                   

                  Sorry, things got busy.  I haven't been able to get back to this testing.  I agree with your approach on simplifying the test a bit, and will get to it as time permits and will post the results.

                   

                  For now, back to my original question:  Have you or anyone else seen the behavior where utilizing the compress-large-messages=true setting causes larger messages (I'm assuming any that get compressed) to be "scheduled"?

                   

                  Thanks again for your answers - I hope to post more results soon.

                  • 7. Re: setting compress-large-messages=true on ActiveMQ connection factory results in scheduled messages sent to queue
                    Justin Bertram Master

                    Have you or anyone else seen the behavior where utilizing the compress-large-messages=true setting causes larger messages (I'm assuming any that get compressed) to be "scheduled"?

                    I have never seen a large message (compressed or otherwise) get implicitly scheduled on original send.  That's one thing I'm hoping to investigate with a simpler test.