You can't make the files really large, as we read the entire file into memory (loading would be much slower). We read each file in a loop along the lines of:
buffer = readFile();
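A minimal sketch of what "read the entire file into memory" implies, assuming a hypothetical loader (the class and file names here are illustrative, not HornetQ's actual code): the buffer is allocated at the full file size, so memory use during load and compacting grows with journal-file-size.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class JournalLoadSketch {
    // Read one whole file into a single in-memory buffer; the buffer
    // is sized to the file, so memory cost is proportional to file size.
    static ByteBuffer readWholeFile(Path file) throws IOException {
        try (FileChannel channel = FileChannel.open(file, StandardOpenOption.READ)) {
            ByteBuffer buffer = ByteBuffer.allocate((int) channel.size());
            while (buffer.hasRemaining() && channel.read(buffer) != -1) {
                // keep reading until the whole file is in memory
            }
            buffer.flip();
            return buffer;
        }
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for a journal file; real files live in the journal directory.
        Path tmp = Files.createTempFile("journal-demo", ".hq");
        Files.write(tmp, new byte[1024 * 1024]); // 1 MiB dummy "journal file"
        ByteBuffer buffer = readWholeFile(tmp);
        System.out.println(buffer.limit()); // prints 1048576
        Files.delete(tmp);
    }
}
```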
Second: in theory the file should be the size of a disk cylinder, to minimize head movement during writes. I was told this is about 10M, but on more advanced disks it won't really make a difference.
In the tests I have done, 100M behaved a bit better, but my test case wasn't doing any compacting. If you have paging or messages surviving, you may want to use 10M so you won't need as much memory to process the files during compacting.
Anything beyond that probably won't be very fast, as you will have to allocate large buffers during compacting.
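For reference, the file size being discussed is the `journal-file-size` element in hornetq-configuration.xml; the value is in bytes (the number below is the 10M figure from the discussion, not a recommendation):

```xml
<!-- hornetq-configuration.xml -->
<journal-file-size>10485760</journal-file-size> <!-- 10 MiB, the default -->
```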
After the talk I had over IRC with Clebert, I came to the following conclusions:
a.) By default HornetQ will store messages larger than 100K outside the journal (the journal then only holds references to the message files, and syncs happen on those messages), which may lead to performance drops. If the average size of your messages is larger than the default value, it is recommended to raise it to a value suitable for your specific needs.
b.) If you are using standard mechanical HDDs rather than SSDs, it is worth the effort to put the journal on a separate disk, as mentioned in the HornetQ Performance Tuning Guide.
c.) In my particular case, using a larger number of journal files at a smaller size seemed to perform better than a smaller number of journal files at a larger size.
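The knobs from points (a) and (c) map to configuration roughly as follows; the values below are examples for illustration, not tuned recommendations. The large-message threshold is the `min-large-message-size` parameter on the connection factory, while the journal file count is `journal-min-files` in hornetq-configuration.xml:

```xml
<!-- hornetq-configuration.xml: more, smaller journal files (example value) -->
<journal-min-files>20</journal-min-files>

<!-- hornetq-jms.xml, inside a <connection-factory>: raise the large-message
     threshold above the 100 KiB default if your average message is bigger -->
<min-large-message-size>262144</min-large-message-size> <!-- 256 KiB, example -->
```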