8 Replies Latest reply on Dec 10, 2010 9:20 AM by Robert Lee

    How to configure HornetQ to be faster at loading the journal?

    Robert Lee Newbie

      Could we please ask for some advice on how to get HornetQ to start up more quickly? Currently it can take several hours for our server to start.


      We have a HornetQ server with a single topic, a typical message size of around 3-5K, and the following address settings:
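      Roughly, the address settings look like this (the element names are standard HornetQ address-settings; the values shown are only illustrative of a ~4 GiB address, not our exact configuration):

      ```xml
      <!-- Illustrative only: standard HornetQ address-settings elements,
           with example values for a ~4 GiB address -->
      <address-settings>
         <address-setting match="jms.topic.#">
            <max-size-bytes>4294967296</max-size-bytes>      <!-- 4096 MiB -->
            <page-size-bytes>10485760</page-size-bytes>      <!-- 10 MiB pages -->
            <address-full-policy>PAGE</address-full-policy>
         </address-setting>
      </address-settings>
      ```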




      The Java -Xmx setting is 10240m, which we thought would be plenty to cover running HornetQ with a 4096 MiB topic.


      When we start HornetQ, it slows down very quickly as it reads each file in the journal. The slowdown appears to be in the checkDeleteSize() anonymous class in JournalImpl.load().


      For most of the journal files, free memory stays above 20% of the maximum, so this method does nothing. But when loading gets to about 80% of the way through, it slows right down. We have seen total start-up times of several hours.


      Some more observations:

      - most of the files have a deleteCount of 20001, exactly one more than the threshold of 20000.

      - the memory freed by this loop makes no significant difference to the heap usage shown in the JMX console (which also shows the Eden and Old Gen spaces as full)

      - this method loops over all messages in memory for each journal file past a certain threshold, so it is O(M*N), where M is the number of messages and N the number of journal files.

      - this method is single-threaded, so we are limited by the speed of a single CPU core; a multi-threaded approach (even just to find the records) would give us a significant speed boost.
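      As a rough illustration of the cost we think we are seeing (this is our own toy model, not the actual JournalImpl code; the class name, 20% threshold and loop structure are assumptions based on what we observed):

      ```java
      // Toy model only (NOT HornetQ source): approximates why a per-file sweep
      // over all in-memory records makes journal loading O(M*N).
      public class JournalLoadSketch {

          // Hypothetical stand-in for the checkDeleteSize() behaviour: once free
          // memory drops below 20% of max, every remaining journal file triggers
          // a full pass over all live records in memory.
          static long simulateLoad(int journalFiles, int recordsInMemory,
                                   double freeMemoryFraction) {
              long recordsScanned = 0;
              for (int file = 0; file < journalFiles; file++) {
                  if (freeMemoryFraction >= 0.20) {
                      continue; // plenty of headroom: the sweep does nothing
                  }
                  for (int record = 0; record < recordsInMemory; record++) {
                      recordsScanned++; // one look at every live record, per file
                  }
              }
              return recordsScanned;
          }

          public static void main(String[] args) {
              // With ample memory the sweep is free; below the threshold,
              // work grows as (number of files) * (number of records).
              System.out.println(simulateLoad(100, 1_000_000, 0.50));
              System.out.println(simulateLoad(100, 1_000_000, 0.10));
          }
      }
      ```

      With 100 files and a million records, dropping below the memory threshold turns a no-op into a hundred million record visits, which matches the cliff we see at around 80% of the way through loading.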


      The main problem this causes is that we have to artificially inflate the Java -Xmx heap setting just to start HornetQ in a sensible time-frame, and there is no way to reduce that setting again without restarting HornetQ, which means waiting for the journal directory to shrink. (We think we need between 12 and 14 GB of heap to start up while avoiding this loop, for a single 4 GB address.)


      Would it be better to have fewer, larger journal files, or would this cause a performance problem during normal use?

      Would it be better to have more, smaller journal files, to try to avoid having 20000 deletes in each file?

      Is there any way we can "compact" the journal (i.e. remove deleted records) while HornetQ is stopped?


      Many thanks for any help or advice.