9 Replies Latest reply on Jul 31, 2008 11:03 AM by timfox

    JMS Benchmark

    viniciuscarvalho

      In case someone missed it :


      http://www.c2b2.co.uk/iPoint/ipoint?SelectedPage=69&110ArticleID=17

      JBM is the best choice of those tested. Can't wait for 2.0 :D

        • 1. Re: JMS Benchmark
          clebert.suconic

          One thing I didn't understand in Matt Brasier's article was the weight he put on opening a connection.


          All the benchmarks I have ever dealt with (including SPECj and SPECjms) don't take setup and ramp-up times into account; they only consider the time after all the classes are loaded and everything is established and steady (defined as "steady state" in those benchmarks' documentation).

          The concept of measuring connection time is broken, IMO. The benchmark is still favorable to us... but Mr. Brasier should consider only steady state for any provider he chooses. (I'm saying that from a point of view independent of JBoss Messaging.)

          • 2. Re: JMS Benchmark
            clebert.suconic

            Another reason for measuring only at steady state is JIT compilation and other JRE optimizations.

            • 3. Re: JMS Benchmark
              mbrasier

              Hi,
              One of the reasons I included the time to open a connection is that I have found that developers often open a connection, send a single message and then close the connection.

              All the test results are average figures, based on a large number of consecutive runs for a given scenario. For example, for sending 1000 messages, the client creates 5 threads, and each thread connects, sends 1000 messages, and disconnects, 1000 times.

              I also ran each test 3 times (clearing the queue between each run) and used the third set of results, although this did not make a large difference.

              So while I cannot be certain that the tests have achieved 'steady state', I hope that any overheads factor out to some extent. Also, I am really performing these tests in order to provide a baseline for later tuning benchmarks.
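
              A rough sketch of the shape of such a harness, for illustration only (the names and structure here are hypothetical, not the actual test code):

              import javax.jms.*;

              // Sketch of the scenario described above: several threads, each
              // repeatedly connecting, sending a batch of messages, and disconnecting.
              public class ScenarioSketch {
                  public static void runScenario(final ConnectionFactory cf, final Queue queue,
                                                 int threads, final int batchSize,
                                                 final int iterations) throws InterruptedException {
                      Thread[] workers = new Thread[threads];
                      for (int t = 0; t < threads; t++) {
                          workers[t] = new Thread(new Runnable() {
                              public void run() {
                                  try {
                                      for (int i = 0; i < iterations; i++) {
                                          Connection conn = cf.createConnection(); // connect
                                          Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                                          MessageProducer producer = session.createProducer(queue);
                                          for (int m = 0; m < batchSize; m++) {
                                              producer.send(session.createTextMessage("payload"));
                                          }
                                          conn.close(); // disconnect
                                      }
                                  } catch (JMSException e) {
                                      e.printStackTrace();
                                  }
                              }
                          });
                          workers[t].start();
                      }
                      for (Thread w : workers) {
                          w.join(); // wait for every worker to finish
                      }
                  }
              }

              The "sends 1000 messages, 1000 times, across 5 threads" scenario described above would correspond to runScenario(cf, queue, 5, 1000, 1000).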


              Matt

              • 4. Re: JMS Benchmark
                ataylor


                "mbrasier" wrote:
                One of the reasons I included the time to open a connection is that I have found that developers often open a connection, send a single message and then close the connection.


                You're right; Spring's JmsTemplate does this. It is, however, a JMS anti-pattern and shouldn't be done.
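
                For illustration, a minimal sketch of the two patterns (hypothetical names; a real application would typically reuse a pooled or cached connection):

                import javax.jms.*;

                public class SendPatterns {
                    // Anti-pattern: open a connection, send one message, close the
                    // connection. Connection setup/teardown dominates the cost of each send.
                    static void sendOneShot(ConnectionFactory cf, Queue queue, String text)
                            throws JMSException {
                        Connection conn = cf.createConnection();
                        try {
                            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                            session.createProducer(queue).send(session.createTextMessage(text));
                        } finally {
                            conn.close();
                        }
                    }

                    // Preferred: create the connection once and reuse it for many sends.
                    static void sendMany(ConnectionFactory cf, Queue queue, int count)
                            throws JMSException {
                        Connection conn = cf.createConnection();
                        try {
                            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                            MessageProducer producer = session.createProducer(queue);
                            for (int i = 0; i < count; i++) {
                                producer.send(session.createTextMessage("msg " + i));
                            }
                        } finally {
                            conn.close();
                        }
                    }
                }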

                • 5. Re: JMS Benchmark
                  ataylor

                  You should also try JBM 2.0 alpha

                  • 6. Re: JMS Benchmark
                    shimi

                    How can someone use these results if all the tests failed after 100000 - 1000000 messages?

                    MQ 1000000 - server stops responding to connections.
                    MS 100000 - server runs out of heap space.
                    GF 100000 - client receives errors indicating queue is full, server continues to function correctly.

                    The configurations could easily be tuned to prevent these errors, but this would not be compatible with our goal of using an out-of-the-box configuration.


                    Changing the configuration might change the test results.

                    • 7. Re: JMS Benchmark
                      shimi

                      I would have wanted to see results for different message sizes (30 characters is too small), and to see ActiveMQ as part of the test as well.

                      • 8. Re: JMS Benchmark
                        timfox

                        I have to agree with Clebert.

                        A warm-up period is critical before taking any measurements, since JIT warm-up can make a huge difference (I've noticed about a 300% performance difference).
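
                        A minimal sketch of what that looks like (sendBatch is a hypothetical helper that sends n messages over an already-open connection):

                        public class WarmupSketch {
                            public static void main(String[] args) throws Exception {
                                sendBatch(10000); // warm-up: let the JIT compile the hot paths; timing discarded
                                long start = System.nanoTime();
                                sendBatch(100000); // steady state: only this run is measured
                                double seconds = (System.nanoTime() - start) / 1e9;
                                System.out.println("msgs/sec: " + (100000 / seconds));
                            }

                            // Hypothetical helper: send n messages over an already-open connection.
                            static void sendBatch(int n) { /* ... */ }
                        }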

                        Secondly, just sending messages is not really a good measurement. For one thing, the call to send() returning doesn't mean the message has actually reached the queue; in all likelihood it's still sitting in the client buffer waiting to be sent, or in transit. Just measuring send speed doesn't really tell you anything.

                        For non-persistent messages (and, with some providers, for persistent messages in some cases too), sends are typically asynchronous.
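
                        One way to measure the whole path instead is to time until the last message has actually been received, not until send() returns. A sketch, with hypothetical names:

                        import javax.jms.*;

                        public class EndToEndSketch {
                            // Returns messages/second for the full producer-to-consumer round trip.
                            static double throughput(Connection conn, Queue queue, int count)
                                    throws JMSException {
                                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                                MessageProducer producer = session.createProducer(queue);
                                MessageConsumer consumer = session.createConsumer(queue);
                                conn.start(); // start message delivery

                                long start = System.nanoTime();
                                for (int i = 0; i < count; i++) {
                                    producer.send(session.createTextMessage("payload"));
                                }
                                // send() may return before the messages reach the queue;
                                // draining the queue guarantees every message has actually
                                // completed the full round trip before we stop the clock.
                                for (int i = 0; i < count; i++) {
                                    consumer.receive();
                                }
                                double seconds = (System.nanoTime() - start) / 1e9;
                                return count / seconds;
                            }
                        }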

                        • 9. Re: JMS Benchmark
                          timfox


                          "shimi" wrote:
                          and to see ActiveMQ as part of the test as well.


                          http://www.jboss.org/file-access/default/members/jbossmessaging/freezone/docs/userguide-2.0.0.alpha1/html/performance.html#performance.results

                          What's more, JBM 2.0 is even faster now than when those results were taken :)