1 Reply Latest reply on Apr 30, 2014 10:44 AM by Clebert Suconic

    Java/native memory leak with HornetQ JMS MDB rollback with 5000 messages re-tried over many days.

    georgemat Newbie

      Need to solve a JMS anti-pattern situation.

      Platform: Windows 2008 R2, 64-bit. JDK 1.7.0_21, 64-bit.

      AppServer versions: JBoss 5.1.0, WildFly Alpha3, WildFly 8.0.0.Final

      Unfortunately, we cannot move to another WildFly build sooner.


      Our application functions as a message router.


      A JMS consumer (an MDB listener) on a standard HornetQ (JMS) queue, using an XA transaction with a retry interval of 90 seconds, kept retrying for multiple days.

      The JMS consumer forwards each message to another destination (say, a JMS queue on an MQ server, or a different HornetQ JMS queue on the same AppServer).

      The message is set to expire after a few days (say 2).

      The message input rate is about 2000 messages per day.

      setRollbackOnly() is called on the MessageDrivenContext, and each message is retried every 90 seconds. The rollback is usually because the destination is unavailable (a domain requirement), sometimes for days at a time.
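To put rough numbers on the redelivery pressure this creates, here is a back-of-the-envelope calculation using only the figures quoted above (90-second retry, 2-day expiry, 2000 messages/day — all assumptions from this post, not measurements):

```java
// Rough sizing of the redelivery load while the destination is down,
// using the figures quoted in this post (assumptions, not measurements):
//   - retry interval: 90 seconds
//   - message expiry: 2 days
//   - input rate:     2000 messages/day
public class RedeliveryLoad {
    public static void main(String[] args) {
        long retryIntervalSec = 90;
        long expirySec = 2L * 24 * 3600;     // 2 days
        long inputPerDay = 2000;

        // How many times a single message is redelivered before it expires
        long attemptsPerMessage = expirySec / retryIntervalSec;          // 1920

        // Messages alive at once (2 days' worth of input in flight)
        long inFlight = 2 * inputPerDay;                                 // 4000

        // Redelivery attempts per day across the whole backlog
        long attemptsPerDay = inFlight * (24 * 3600 / retryIntervalSec); // 3,840,000

        System.out.println(attemptsPerMessage + " " + inFlight + " " + attemptsPerDay);
    }
}
```

So each message is rolled back and redelivered roughly 1,900 times before it expires, and the backlog generates millions of delivery attempts per day — each attempt allocating message copies, transaction state, and (with a native transport) native buffers, which is consistent with the growing working set described below.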


      Heap memory errors are observed at times. Adding more heap memory at startup delayed the problem but did not prevent it.

      I always see the Working Set (memory) reported by Windows 2008 R2 increase. This memory usage does not come down even after messages are successfully delivered to their destination (the MQ server or another JMS queue on the same AppServer).


      Is there a way to fix this in the current design using standard JMS infrastructure? Is there quick-start code that does this correctly?
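For reference, here is a minimal sketch of the kind of MDB described above, in the rollback-on-failure style this post is about (assuming WildFly 8 / JMS 2.0; the queue names, activation properties, and forwarding logic are illustrative assumptions, not the actual application code):

```java
import javax.annotation.Resource;
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.ejb.MessageDrivenContext;
import javax.inject.Inject;
import javax.jms.JMSContext;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.Queue;

// Sketch only: queue names and properties are hypothetical.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination",
                              propertyValue = "queue/InboundQueue")
})
public class RouterMDB implements MessageListener {

    @Resource
    private MessageDrivenContext mdbContext;

    @Inject
    private JMSContext jmsContext;  // container-managed, joins the XA transaction

    @Resource(lookup = "java:/jms/queue/OutboundQueue")
    private Queue outbound;

    @Override
    public void onMessage(Message message) {
        try {
            // Forward to the next destination inside the same XA transaction
            jmsContext.createProducer().send(outbound, message);
        } catch (RuntimeException e) {
            // Destination unavailable: mark the XA transaction rollback-only so
            // the container redelivers after the configured retry interval.
            mdbContext.setRollbackOnly();
        }
    }
}
```

The pattern itself is standard; the memory growth comes from how often it cycles, not from the code shape.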

      I am also looking for a way to suspend redelivery and resume message delivery only after the JMS destination becomes available again.
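There is no portable JMS API for pausing redelivery, but the HornetQ subsystem can at least cap it so that undeliverable messages are parked in a DLQ instead of cycling every 90 seconds. A sketch of the relevant address-settings in standalone.xml (element names are from the HornetQ subsystem schema; the match pattern, attempt count, and queue names are illustrative):

```xml
<address-setting match="jms.queue.InboundQueue">
    <!-- wait 90s between redelivery attempts (the current retry interval) -->
    <redelivery-delay>90000</redelivery-delay>
    <!-- after 10 failed attempts, stop retrying and park the message -->
    <max-delivery-attempts>10</max-delivery-attempts>
    <dead-letter-address>jms.queue.DLQ</dead-letter-address>
    <expiry-address>jms.queue.ExpiryQueue</expiry-address>
</address-setting>
```

Messages in the DLQ can then be moved back to the source queue (for example via the queue's JMX/management operations) once the destination is known to be up. Separately, WildFly's management model exposes delivery control on deployed MDBs (stop/start-delivery style operations), which could be scripted to suspend consumption while the destination is down — worth verifying whether that is available in 8.0.0.Final specifically.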


      Thank you in advance,