7 Replies Latest reply on Feb 1, 2012 11:20 PM by Muthukumaran Manickavasagam

    HornetQ FullGC Issue

    Muthukumaran Manickavasagam Newbie

      We have been trying to switch over from our existing messaging provider to HornetQ. So far we have been able to adapt our functional flow to work with HornetQ using the JMS APIs, and in our performance environment we are now getting the expected average timings for sending and receiving messages. However, during our load test (multiple producers and multiple consumers, basically a thread pool), we observe frequent Full GC cycles. At one point the HornetQ JVM log shows continuous Full GC activity for about five minutes. I have attached jvm_HornetQ.log covering the entire duration (7 days); note the following excerpt, after which I believe it recovered.

      2011-12-11T22:02:07.984-0800: 414952.248: [Full GC [PSYoungGen: 3046K->0K(232256K)] [PSOldGen: 676546K->645209K(699072K)] 679592K->645209K(931328K) [PSPermGen: 21998K->21998K(22016K)], 1.5113700 secs] [Times: user=1.50 sys=0.00, real=1.51 secs]
      2011-12-11T22:02:10.757-0800: 414955.021: [Full GC [PSYoungGen: 115749K->0K(232256K)] [PSOldGen: 645209K->645344K(699072K)] 760959K->645344K(931328K) [PSPermGen: 22000K->22000K(22016K)], 1.2823130 secs] [Times: user=1.28 sys=0.00, real=1.28 secs]
      2011-12-11T22:02:14.754-0800: 414959.018: [Full GC [PSYoungGen: 115776K->0K(232256K)] [PSOldGen: 645344K->648324K(699072K)] 761120K->648324K(931328K) [PSPermGen: 22001K->22001K(22016K)], 1.2068470 secs] [Times: user=1.21 sys=0.00, real=1.20 secs]

      At that time, the combined depth across all queues was around 700K messages. HornetQ is started with the following JVM parameters:


      /usr/bin/java/jre1.6.0_24_x64/bin/java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xloggc:/var/log/jvm_HornetQ.log -XX:+UseParallelGC -XX:+AggressiveOpts -XX:+UseFastAccessorMethods -Xms512M -Xmx1024M -Dhornetq.config.dir=../config/stand-alone/non-clustered -Djava.util.logging.config.file=../config/stand-alone/non-clustered/logging.properties -Djava.library.path=. -classpath ../lib/twitter4j-core.jar:../lib/netty.jar:../lib/jnpserver.jar:../lib/jnp-client.jar:../lib/jboss-mc.jar:../lib/jboss-jms-api.jar:../lib/hornetq-twitter-integration.jar:../lib/hornetq-spring-integration.jar:../lib/hornetq-logging.jar:../lib/hornetq-jms.jar:../lib/hornetq-jms-client-java5.jar:../lib/hornetq-jms-client.jar:../lib/hornetq-jboss-as-integration.jar:../lib/hornetq-core.jar:../lib/hornetq-core-client-java5.jar:../lib/hornetq-core-client.jar:../lib/hornetq-bootstrap.jar:../config/stand-alone/non-clustered:../schemas/ -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=3000
      -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false org.hornetq.integration.bootstrap.HornetQBootstrapServer hornetq-beans.xml
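      For what it is worth, one alternative we have been considering (a sketch only, not something we have tested yet) is switching from the throughput-oriented parallel collector to CMS, which usually trades some throughput for shorter pauses; the occupancy fraction below is a guess we would still have to tune:

```shell
# Candidate flag changes (untested by us): replace -XX:+UseParallelGC
# with the concurrent mark-sweep collector to favour shorter pauses.
-XX:+UseConcMarkSweepGC                  # concurrent old-gen collector
-XX:+CMSParallelRemarkEnabled            # parallelise the remark pause
-XX:CMSInitiatingOccupancyFraction=70    # start CMS before the old gen fills (value to tune)
-XX:+UseCMSInitiatingOccupancyOnly       # honour the threshold above consistently
```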


      The HornetQ version reported in the log is "HornetQ Server version 2.2.8.CR2 (HQ_2_2_8_EAP_CR2, 122) [acf92a94-de5d-11e0-912d-68b5996f7714]) started".


      I have attached the hornetq-configuration.xml, which shows the configuration we are using.
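      For context, our load test has roughly the following shape (illustrative only: the class and method names are made up for this post, and an in-memory BlockingQueue stands in for the HornetQ queue so the sketch is self-contained; the real harness uses JMS MessageProducer.send() and MessageConsumer.receive() against the broker):

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Simplified shape of the load harness: N producer threads and M consumer
// threads sharing one destination via a thread pool.
public class LoadSketch {

    static int run(int producers, int consumers, int msgsPerProducer)
            throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        AtomicInteger received = new AtomicInteger();
        int total = producers * msgsPerProducer;
        ExecutorService pool = Executors.newFixedThreadPool(producers + consumers);
        CountDownLatch done = new CountDownLatch(total);

        for (int p = 0; p < producers; p++) {
            pool.submit(() -> {
                for (int i = 0; i < msgsPerProducer; i++) {
                    queue.offer("payload-" + i);      // real harness: producer.send(message)
                }
            });
        }
        for (int c = 0; c < consumers; c++) {
            pool.submit(() -> {
                try {
                    while (received.get() < total) {
                        // real harness: consumer.receive(timeout)
                        String m = queue.poll(100, TimeUnit.MILLISECONDS);
                        if (m != null) {
                            received.incrementAndGet();
                            done.countDown();
                        }
                    }
                } catch (InterruptedException ignored) {
                }
            });
        }
        done.await(30, TimeUnit.SECONDS);
        pool.shutdownNow();
        return received.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("messages consumed: " + run(4, 4, 250));
    }
}
```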


      If you could help us fix this Full GC issue, we would be ready to move to production. Please note that the average heap usage is around 600 MB, and I do not believe simply increasing the heap size will prevent long Full GC pauses like the roughly five-minute stretch we observed.


      -- Muthu