5 Replies Latest reply on Dec 6, 2011 6:05 AM by rchallapalli

    ActiveMQ performance problem

    rchallapalli

      Hi folks,

       

      I am involved in developing a POC using Fuse/Camel/Blueprint with ActiveMQ as the messaging server. To put it simply, my app consumes HTTP messages using camel-jetty and writes each message to a queue. In parallel, I run a performance test as I develop the app bit by bit. Until today it gave me >2500 tps, before adding the Camel route that writes the message to a queue. Once I added that route, throughput and average processing times dropped drastically: current throughput is 50-55 tps and the average per message is 1 second or more. I followed the ActiveMQ material on apache.org and did some tuning as described in the Apache/Fuse tuning guide (http://fusesource.com/docs/broker/5.4/tuning/index.html):

       

      -Xmx1536M -Dorg.apache.activemq.UseDedicatedTaskRunner=true

       

      Made the KahaDB config changes mentioned in the Fuse broker guide above.

       

      Configured client-side JMS connection pool sizes from 32 up to 128, and even as high as 300.

       

      Test environment: ServiceMix and ActiveMQ each running on a different machine: Intel Xeon E5440 2x 2.83GHz, 12 GB RAM, 32-bit Windows 2003. Load (50, 100 & 300 concurrent threads) was generated with JMeter from a third machine of similar config.

       

      I suspected the network was the culprit, so I co-located ActiveMQ on the ServiceMix box, but still could not see any improvement; just a difference of 4-5 tps.

       

      Please help: what could be the bottleneck? What am I missing?

       

      Thanks in advance!

      ravi

       

      Edited by: rchallapalli on Nov 17, 2011 9:46 AM

        • 1. Re: ActiveMQ performance problem
          davsclaus

          Are you using persistent messages in your test? Camel uses persistent delivery by default.

           

          What does your Camel route look like?

           

          And what versions of AMQ, Camel and SMX are you using?
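
          As a quick A/B check, persistence can be switched off on the producer endpoint so its cost shows up in the numbers. A sketch, assuming the same "activemq" component id the route uses:

          ```java
          // Hypothetical variant of the send step: deliveryPersistent=false lets
          // the broker skip the per-message disk sync. Only useful to quantify
          // the cost of persistence in a load test, not for production if
          // persistence is a requirement.
          .inOnly("activemq:inboundQueue?deliveryPersistent=false")
          ```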

          • 2. Re: ActiveMQ performance problem
            rchallapalli

            Hi Claus,

             

            I am using apache-servicemix-4.4.1-fuse-01-06, camel 2.8.1, apache-activemq-5.5.0-fuse-00-27.

             

            My messages are persistent, and that is a key requirement. Are persistent messages really that slow? TPS dropped from >2k to 50, and the average time per message rose from around 80 ms to 900 ms after adding the route to the queue.

             

            Route:

            Java DSL

            from("jetty:http://10...../inbound").routeId("httpToInboundWithAck")
                .log("received xml message")
                .transform(body()).log("called transform")
                .inOnly("activemq:inboundQueue")
                .transform(simple("Message Ack"))
                .log("done!");

            Some bits from blueprint.xml

             

            Jetty:

            <bean id="jetty" class="org.apache.camel.component.jetty.JettyHttpComponent">

            <property name="minThreads" value="200"/>
            <property name="maxThreads" value="300"/>

            </bean>

             

            camelContext:

             

            <camelContext id="preack-camel-context" xmlns="http://camel.apache.org/schema/blueprint">

            <routeBuilder ref="preAck.routeBuilder" /> <!-- preAck.routeBuilder references the Java DSL RouteBuilder impl above -->
            <threadPoolProfile id="preackThreadPoolProfile" poolSize="128" defaultProfile="true" maxPoolSize="150" />

            </camelContext>

             

            Activemq connection pooling:

            <!--connection pooling start -->

                <bean id="preackConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">

                    <property name="brokerURL" value="${activemq.brokerUrl}" />

                </bean>

             

                <bean id="preackPooledConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory">

                    <property name="maxConnections" value="128" />

                    <property name="maximumActive" value="130" />

                    <property name="connectionFactory" ref="preackConnectionFactory" />

                </bean>

             

                <bean id="preackJMSConfig" class="org.apache.camel.component.jms.JmsConfiguration">

                    <property name="connectionFactory" ref="preackPooledConnectionFactory" />

                    <property name="transacted" value="false" />

                    <!-- <property name="concurrentConsumers" value="20" /> -->

                </bean>

             

                <!--connection pooling end -->

             

            Thanks in advance,

            ravi

             

            Edited by: rchallapalli on Nov 17, 2011 11:16 AM

            • 3. Re: ActiveMQ performance problem
              rchallapalli

              Tried all the optimizations and still get the same performance, so it seems this is what I can expect.

              • 4. Re: ActiveMQ performance problem
                garytully

                The limiting factor here is likely disk speed. On each JMS send transaction completion there is an fsync to disk, to ensure the message is safely on disk before the response to the commit goes back to the client.
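
                The per-commit fsync cost is easy to see outside the broker. A minimal JDK-only sketch, illustrating the effect rather than ActiveMQ's actual code path:

                ```java
                import java.io.File;
                import java.io.RandomAccessFile;

                public class FsyncBenchmark {

                    // Time n small appends to a temp file; optionally fsync after every
                    // write, which is roughly what each persistent JMS send forces the
                    // broker to do before acknowledging the commit.
                    static double writesPerSecond(int n, boolean syncEachWrite) {
                        try {
                            File f = File.createTempFile("fsync-demo", ".dat");
                            f.deleteOnExit();
                            byte[] oneKb = new byte[1024]; // stand-in for a small message
                            try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
                                long start = System.nanoTime();
                                for (int i = 0; i < n; i++) {
                                    raf.write(oneKb);
                                    if (syncEachWrite) {
                                        raf.getFD().sync(); // force the data to disk
                                    }
                                }
                                return n / ((System.nanoTime() - start) / 1e9);
                            }
                        } catch (Exception e) {
                            throw new RuntimeException(e);
                        }
                    }

                    public static void main(String[] args) {
                        System.out.printf("buffered writes/s: %.0f%n", writesPerSecond(200, false));
                        System.out.printf("fsync'd writes/s:  %.0f%n", writesPerSecond(200, true));
                    }
                }
                ```

                On a spinning disk the fsync'd rate typically collapses to the same order of magnitude as the tps numbers reported above, while buffered writes stay far faster.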

                 

                You can check your disk speed using the DiskBenchmark. From the base directory of an installation run: java -classpath lib/kahadb-<version>.jar org.apache.kahadb.util.DiskBenchmark.

                The tool will access a local file named disk-benchmark.dat with simple writes, writes followed by an fsync and reads, then report the results.

                The Sync Writes number represents the maximum transactions/second possible on your hardware. It is possible to experiment with the block size (--bs) to find an optimum value and configure KahaDB accordingly through journalMaxWriteBatchSize and indexWriteBatchSize.
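
                For reference, those knobs live on the kahaDB persistence adapter in the broker XML. A sketch; the batch-size values below are illustrative placeholders to be tuned against the DiskBenchmark results, not recommendations:

                ```xml
                <broker xmlns="http://activemq.apache.org/schema/core">
                    <persistenceAdapter>
                        <!-- example values only; tune against DiskBenchmark output -->
                        <kahaDB directory="${activemq.data}/kahadb"
                                journalMaxWriteBatchSize="62k"
                                indexWriteBatchSize="10000"/>
                    </persistenceAdapter>
                </broker>
                ```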

                 

                More important, though, is to introduce some parallel processing into the broker's transaction processing. That way, multiple transaction completions can share the overhead of a single disk sync. The simplest way to achieve this is to create a duplicate route that uses a second JMS endpoint (which ensures that the connection is not shared). You can continue to add duplicate routes to reach the desired level of parallel transactions, maximizing throughput by sharing the disk latency across multiple users.
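
                A duplicated route could look like the sketch below; "activemq2" is a hypothetical second component id bound to its own PooledConnectionFactory (so the connection is not shared), and the "/inbound2" path is likewise illustrative:

                ```java
                // Same processing as httpToInboundWithAck, but sending through a
                // second JMS component so its transactions commit on an independent
                // connection; the broker can then batch both routes' commits into a
                // shared disk sync.
                from("jetty:http://10...../inbound2").routeId("httpToInboundWithAck2")
                    .log("received xml message")
                    .inOnly("activemq2:inboundQueue")
                    .transform(simple("Message Ack"))
                    .log("done!");
                ```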

                • 5. Re: ActiveMQ performance problem
                  rchallapalli

                  This is awesome advice and it really worked out for me. The disk in our Windows environment was very poor. I ran the benchmark utility on the test env and on another Solaris machine as you advised, and the Solaris machine gave 20x better disk performance than the Windows hardware we used.

                  Elated to know you.

                   

                  Once again my sincere thanks to you.

                   

                  Cheers,

                  ravi.

                   

                  Edited by: rchallapalli on Dec 6, 2011 11:04 AM