
    High message volumes result in consistent out of memory errors

    dave.stubbs

      I've been using Apache ActiveMQ for a while and have now switched to Fuse 5.1.0.0, as I'd been having many problems with running out of memory.

       

      The behaviour seems directly tied to the memory settings in the systemUsage section of the ActiveMQ config, but no matter what values I use I always end up with out-of-memory errors.

       

      I should stress that we are heavily stress-testing the server. We started by loading in 4 million messages (and yes, this is a requirement from a client, not just an arbitrarily big number) and then started up a consumer application to process them.
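
      The preload itself is essentially a tight producer loop along the lines of the sketch below (the queueName and payload variables are placeholders, not our real values):

      // javax.jms.* types assumed; connectionFactory, queueName and payload set up elsewhere
      Connection connection = connectionFactory.createConnection();
      connection.start();
      Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
      MessageProducer producer = session.createProducer(session.createQueue(queueName));
      producer.setDeliveryMode(DeliveryMode.PERSISTENT); // messages must survive a restart
      for (int i = 0; i < 4000000; i++) {
          producer.send(session.createTextMessage(payload)); // ~1K payload each
      }
      connection.close();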

       

      We are using persistent queues; I've tried with and without the fileQueueCursor to get stability, but with no improvement. We are also using commitment control on the messages to guarantee delivery and hand-off to a journalled server application.
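
      For reference, the cursor setting I tried is the usual destination-policy entry in activemq.xml; a sketch (the catch-all ">" queue pattern is illustrative):

      <destinationPolicy>
        <policyMap>
          <policyEntries>
            <policyEntry queue=">">
              <pendingQueuePolicy>
                <fileQueueCursor/>
              </pendingQueuePolicy>
            </policyEntry>
          </policyEntries>
        </policyMap>
      </destinationPolicy>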

       

      The messages are about 1K in size each, and the broker usually crashes at around 300,000 messages processed.

       

      Our memory usage limit is set at 20MB, the store usage limit at 1GB, and the temp usage limit at 100MB, though I have yet to find any documentation that helps fine-tune these values.
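
      For context, those limits correspond to a systemUsage section along these lines (a sketch of the relevant activemq.xml fragment, using the values above):

      <systemUsage>
        <systemUsage>
          <memoryUsage>
            <memoryUsage limit="20 mb"/>
          </memoryUsage>
          <storeUsage>
            <storeUsage limit="1 gb"/>
          </storeUsage>
          <tempUsage>
            <tempUsage limit="100 mb"/>
          </tempUsage>
        </systemUsage>
      </systemUsage>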

       

      What's just as bad is that after a restart it takes 30 minutes or so before the queue is available again to continue processing.

       

      I usually have no choice but to kill the MQ server, as it can't be shut down any other way (hence the abrupt endings in the log).

       

      I've attached the log file containing the stack traces.

       

      This is a high-profile installation for a major company. They want to embrace ActiveMQ, but at the moment I'm having to warn them off it until I can fully get these memory issues under control. Does anyone have any ideas on how to overcome these problems?

       

      Cheers

      Dave

        • 1. Re: High message volumes result in consistent out of memory errors
          dave.stubbs

          I found out through some experimentation that the memory leak occurs only when commitment control is turned on when creating the session (using JMS).

           

          So the following works OK:

           

          connection = connectionFactory.createConnection();
          connection.start();
          session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE); // non-transacted
          Destination destination = session.createQueue(queueName);
          MessageConsumer consumer = session.createConsumer(destination);
          message = consumer.receive();

           

          But the following causes the memory leak:

           

          connection = connectionFactory.createConnection();
          connection.start();
          // transacted session; the acknowledge-mode argument is ignored when the first argument is true
          session = connection.createSession(true, Session.AUTO_ACKNOWLEDGE);
          Destination destination = session.createQueue(queueName);
          MessageConsumer consumer = session.createConsumer(destination);
          message = consumer.receive();
          // do some other stuff
          session.commit();

           

          What is really quite interesting is that if the leak has been building for ages and I then connect in non-transactional mode, all that held memory suddenly gets released.

           


          • 2. Re: High message volumes result in consistent out of memory errors
            garytully

            Hi Dave,

            This does seem like strange behavior. In essence, it seems that you cannot pre-load a queue and subsequently consume transactionally from it without an "out of memory" error. Is that accurate?

             

            If this is the case, it would be great if you could create an issue and submit a test case with your code and configuration.

             

            One thing to check: after the preload and before you start your consumer, do the queue stats look fine in the JMX console?

             

            Do you see the message consumption reflected in the JMX stats (in-flight, dequeue count, etc.)?
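
            If it helps, the same stats can be read programmatically. A minimal sketch, assuming the broker's default JMX connector on port 1099 and the 5.x object-name layout (the broker name "localhost" and queue "TEST.QUEUE" are illustrative):

            import javax.management.MBeanServerConnection;
            import javax.management.ObjectName;
            import javax.management.remote.JMXConnector;
            import javax.management.remote.JMXConnectorFactory;
            import javax.management.remote.JMXServiceURL;

            public class QueueStats {
                public static void main(String[] args) throws Exception {
                    // default ActiveMQ JMX endpoint; adjust host/port for your broker
                    JMXServiceURL url = new JMXServiceURL(
                        "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
                    JMXConnector jmxc = JMXConnectorFactory.connect(url);
                    try {
                        MBeanServerConnection conn = jmxc.getMBeanServerConnection();
                        ObjectName queue = new ObjectName(
                            "org.apache.activemq:BrokerName=localhost,Type=Queue,Destination=TEST.QUEUE");
                        System.out.println("QueueSize     = " + conn.getAttribute(queue, "QueueSize"));
                        System.out.println("EnqueueCount  = " + conn.getAttribute(queue, "EnqueueCount"));
                        System.out.println("DequeueCount  = " + conn.getAttribute(queue, "DequeueCount"));
                        System.out.println("InFlightCount = " + conn.getAttribute(queue, "InFlightCount"));
                    } finally {
                        jmxc.close();
                    }
                }
            }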

             

            On the slow restart: yeah, this is a known issue that is being worked on.

            • 3. Re: High message volumes result in consistent out of memory errors
              dave.stubbs

              After long testing over many months I've come to the conclusion that the persistence mechanism is a little flaky at best.

               

              I always run with a 4GB memory allocation for my queue policy, but sometimes I can only preload 70,000 or so messages; other times I can preload 500,000 or more.

               

              There seems to be no consistent, reproducible behaviour around this, and it's worrying to say the least.

               

              The clients all block when MQ refuses to accept more messages. I don't like this behaviour, as we can't perform diagnostics if we never get an error on the write.
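
              From what I can tell there is a sendFailIfNoSpace flag on systemUsage that is supposed to make the broker throw an exception to the producer instead of blocking it; whether it is available in this release is an assumption on my part. A sketch:

              <systemUsage>
                <!-- if supported, producers should get a ResourceAllocationException instead of blocking -->
                <systemUsage sendFailIfNoSpace="true">
                  <memoryUsage>
                    <memoryUsage limit="20 mb"/>
                  </memoryUsage>
                </systemUsage>
              </systemUsage>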

               

              I have to admit I'm not very confident about MQ's reliability, given all the strange issues I've seen (and still see).