-
1. Re: High message volumes result in consistent out of memory errors
dave.stubbs Aug 7, 2008 4:17 PM (in response to dave.stubbs)
Found out through some experimentation that the memory leak occurs only when commitment control (i.e. a transacted session) is turned on when creating the JMS session.
So the following works ok.....
connection = connectionFactory.createConnection();
connection.start();
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Destination destination = session.createQueue(queueName);
MessageConsumer consumer = session.createConsumer(destination);
message = consumer.receive();
But the following causes the memory leak
connection = connectionFactory.createConnection();
connection.start();
// transacted session; the acknowledge mode is ignored when the first argument is true
session = connection.createSession(true, Session.AUTO_ACKNOWLEDGE);
Destination destination = session.createQueue(queueName);
MessageConsumer consumer = session.createConsumer(destination);
message = consumer.receive();
// do some other stuff
session.commit();
What is really quite interesting is that if the leak has been building for ages and I then connect in non-transactional mode, all that held memory suddenly gets released.
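Editor's note: the symptom above is consistent with the broker holding per-transaction state until commit(), so one common mitigation is to commit in small batches rather than holding one transaction open across a long run of messages. Purely as a sketch (BatchCommitter is a hypothetical helper, not part of JMS or the code above), the batching policy separated from the messaging calls:

```java
// Hypothetical helper: decides when a transacted JMS consumer should call
// session.commit(), so uncommitted transaction state never grows without bound.
class BatchCommitter {
    private final int batchSize;
    private int sinceLastCommit = 0;
    private int commits = 0;

    BatchCommitter(int batchSize) {
        this.batchSize = batchSize;
    }

    // Call once per consumed message; returns true when the caller should
    // commit the session, releasing the broker-side transaction state.
    boolean onMessage() {
        sinceLastCommit++;
        if (sinceLastCommit >= batchSize) {
            sinceLastCommit = 0;
            commits++;
            return true;
        }
        return false;
    }

    // Call at shutdown to commit any trailing partial batch.
    boolean flush() {
        if (sinceLastCommit == 0) {
            return false;
        }
        sinceLastCommit = 0;
        commits++;
        return true;
    }

    int commitCount() {
        return commits;
    }

    public static void main(String[] args) {
        BatchCommitter bc = new BatchCommitter(100);
        for (int i = 0; i < 250; i++) {
            if (bc.onMessage()) {
                // a real consumer would call session.commit() here
            }
        }
        bc.flush();
        System.out.println("commits: " + bc.commitCount());
    }
}
```

The batch size trades throughput against how many messages are redelivered after a failure; the counting logic itself is independent of the JMS API.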
Edited by: dave.stubbs on Aug 7, 2008 4:16 PM
-
2. Re: High message volumes result in consistent out of memory errors
garytully Aug 12, 2008 1:22 PM (in response to dave.stubbs)
hi Dave,
this does seem like strange behavior. In essence, it seems that you cannot pre-load a queue and subsequently consume transactionally from that queue without an "out of memory" error. Is that accurate?
If this is the case, it would be great if you could create an issue and submit a test case with your code and configuration.
One thing to check: after the preload and before you start your consumer, do the queue stats look fine in the JMX console?
Do you see the message consumption reflected in the JMX stats (inflight, dequeue, etc.)?
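Editor's note: for readers unfamiliar with checking these stats programmatically, the same MBeans the JMX console shows can be read in a few lines of Java. The ActiveMQ queue ObjectName mentioned in the comment below follows the 5.x naming convention and should be treated as an assumption to verify against your broker version; the runnable part reads a platform MBean so the snippet works without a broker:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;

// Sketch: reading MBean attributes the same way the JMX console does.
// For an ActiveMQ 5.x queue the ObjectName would look roughly like
//   org.apache.activemq:BrokerName=localhost,Type=Queue,Destination=MY.QUEUE
// with attributes such as EnqueueCount, DequeueCount and InFlightCount
// (exact naming varies by version). Here we read a platform MBean so the
// example runs anywhere with a plain JDK.
class JmxStatCheck {
    static long heapUsed() {
        try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            ObjectName mem = new ObjectName("java.lang:type=Memory");
            CompositeData usage =
                (CompositeData) server.getAttribute(mem, "HeapMemoryUsage");
            return (Long) usage.get("used");
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("heap used (bytes): " + heapUsed());
    }
}
```

To reach a remote broker's MBeans the same calls work against a JMXConnector-obtained MBeanServerConnection instead of the platform server.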
On the slow restart: yes, this is a known issue that is being worked on.
-
3. Re: High message volumes result in consistent out of memory errors
dave.stubbs Apr 17, 2009 9:21 PM (in response to dave.stubbs)
After long testing over many months I've come to the conclusion that the persistence mechanism is a little flaky at best.
I always run with a 4 GB memory allocation in my queue policy, but sometimes I can only preload 70,000 or so messages; other times I can preload 500,000 or more.
There seems to be no consistent, reproducible behaviour around this, and it's worrying to say the least.
The clients all block when MQ refuses to accept more messages. I don't like this behaviour, as we can't perform diagnostics if we don't get an error on the write.
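Editor's note: on the blocking producers specifically, ActiveMQ's producer flow control can be told to fail the send instead of blocking via the sendFailIfNoSpace system-usage attribute (added in later 5.x releases, so worth checking against the version in use). A sketch of the relevant activemq.xml fragment:

```xml
<broker xmlns="http://activemq.apache.org/schema/core">
  <systemUsage>
    <!-- sendFailIfNoSpace="true": producers get an exception instead of
         blocking when the usage limits below are reached -->
    <systemUsage sendFailIfNoSpace="true">
      <memoryUsage>
        <memoryUsage limit="4 gb"/>
      </memoryUsage>
    </systemUsage>
  </systemUsage>
</broker>
```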
I have to admit I'm not very confident about MQ's reliability, given all the strange issues I've seen (and still get).