23 Replies · Latest reply on Mar 24, 2010 3:23 PM by kreide
      • 15. Re: Journal compaction

        The reason I create a new session is to more accurately model how this will be used in practice. In a production setting the producer and consumer will in fact be running on different physical servers, thus this must work across sessions. But in fact it does not seem to make any difference what I do, the server never compacts its files.

        • 16. Re: Journal compaction

          OK, have you had a look at what is happening on the HornetQ server using jconsole? [screenshot: jconsole attached to HornetQ, showing the queue MBean attributes]

          It is really handy for seeing how many sessions are being created and how many messages are on the queue. It may give some clues as to what is going on here. If you go to the bin directory of your JDK installation you will find jconsole there (jconsole.exe on Windows). Start jconsole after HornetQ, select the HornetQ connection, go to the tab shown above, and then run your app. Refresh to see what is happening. What does the 'bar' queue's ConsumerCount do (does it increase)? What does the MessageCount do (does it cap at 1000)?

          • 17. Re: Journal compaction

            Also, can you please post your config where you set the address settings, BLOCK and max-size-bytes?

            • 18. Re: Journal compaction

              I tried to replicate this issue, to no avail.


              To clarify - I used the code you pasted; here it is again:



              ClientSessionFactory sf = HornetQClient.createClientSessionFactory(tc);
              int count = 0;
              while (true)
              {
                 log.info("*** ITERATION " + count++ + "\n\n\n\n");
                 ClientSession session = sf.createTransactedSession();
                 ClientProducer producer = session.createProducer("bar");
                 for (int i = 0; i < 1000; i++)
                 {
                    ClientMessage message = session.createMessage(true);
                    producer.send(message);
                    // log.info("sent " + i);
                 }
                 session.commit();
                 session.close();
                 session = sf.createSession();
                 ClientConsumer consumer = session.createConsumer("bar");
                 session.start();
                 for (int i = 0; i < 1000; i++)
                 {
                    ClientMessage msgReceived = consumer.receive();
                    msgReceived.acknowledge();
                    // log.info("read " + i);
                 }
                 session.close();
              }


              I set policy to BLOCK and max-size-bytes to 300 MiB, and added your queue in hornetq-configuration.xml:



                    <!--default for catch all-->
                    <address-setting match="#">
                       <max-size-bytes>314572800</max-size-bytes>
                       <address-full-policy>BLOCK</address-full-policy>
                    </address-setting>

                    <queue name="bar">
                       <address>bar</address>
                    </queue>


              I then ran the test, first against HornetQ TRUNK - it ran through millions of messages without issues.


              I repeated this against HornetQ-2.0.0.GA - the distro from the download page, again it ran through without issues, and the number of journal files never rose above 3, as would be expected if dead space is being reclaimed.

              • 19. Re: Journal compaction

                I also tried with both NIO and AIO journals - makes no difference.


                One other observation - you're sending 1000 messages, then consuming 1000 in a loop. So there are never more than 1000 messages in the queue.


                Each message is probably about 250 bytes, which means no more than around 250 KB of data in the queue at any one time - so setting the BLOCK limit to 300 MiB won't do anything, since the address can never get near that amount of data in memory.
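                To make that back-of-the-envelope arithmetic concrete, here is a small sketch (the ~250 bytes per message is just the estimate above, not a measured figure):

```java
public class QueueSizeEstimate {
    public static void main(String[] args) {
        long messagesInQueue = 1000;            // batch size from the test loop
        long bytesPerMessage = 250;             // rough estimate, not measured
        long maxSizeBytes = 300L * 1024 * 1024; // the 300 MiB BLOCK threshold

        long queueBytes = messagesInQueue * bytesPerMessage;
        System.out.println(queueBytes);                       // 250000 (~244 KiB)
        System.out.println(queueBytes < maxSizeBytes / 1000); // true: over 1000x below the limit
    }
}
```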

                • 20. Re: Journal compaction

                  I suspect either:


                  a) Something else has been changed or broken in your config/installation which I'm not aware of


                  b) The queue is actually full of old data from previous experiments, where you weren't acknowledging messages. If the queue is very near its upper limit of 300 MiB and you try to send a further 1000 messages (i.e. ~250 KiB), the producer will block if there's not enough space.
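                  Scenario (b) can be sketched with a rough model of the BLOCK policy - this is a simplification of the logic, not HornetQ's actual implementation:

```java
public class BlockCheck {
    static final long MAX_SIZE_BYTES = 300L * 1024 * 1024; // 300 MiB limit

    // Simplified model: a batch can be sent only while the address's
    // byte count plus the batch stays within max-size-bytes.
    static boolean wouldBlock(long bytesAlreadyInQueue, long batchBytes) {
        return bytesAlreadyInQueue + batchBytes > MAX_SIZE_BYTES;
    }

    public static void main(String[] args) {
        long batch = 1000L * 250; // 1000 messages * ~250 bytes each

        // Fresh install: the queue is empty, so the batch fits easily.
        System.out.println(wouldBlock(0, batch));

        // Queue stuffed with old unacknowledged data, sitting ~100 KB
        // below the limit: the next 250 KiB batch pushes it over and blocks.
        System.out.println(wouldBlock(MAX_SIZE_BYTES - 100_000, batch));
    }
}
```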


                  Try re-installing HornetQ and/or wiping the data directory and try again.

                  • 21. Re: Journal compaction

                    This is the darndest thing. I was actually only wiping the journal directory, not the entire data directory; following your advice I wiped everything there and that did the trick. The only other files in there were "bindings/hornetq-bindings-1.bindings" and "bindings/hornetq-bindings-2.bindings". Is it possible that these files become corrupt or out of sync with the journal in some way that caused this issue to happen? In any case my issue is resolved, but I still think the behavior I observed was quite disturbing.


                    Thank you so much for the help. One more question, though, if I may: how scalable is the "delayed message" feature, i.e. using the setExpiration method? Will that scale to several million messages without using up too much of the server's resources?


                    Thanks for all the help so far.

                    • 22. Re: Journal compaction

                      Not sure what you mean by the "delayed message" feature? Do you mean scheduled deliveries?


                      If so, that's not related to setExpiration(). Can you clarify?

                      • 23. Re: Journal compaction

                        Yes, scheduled deliveries, sorry for my confusion. I am using:


                        message.putLongProperty("_HQ_SCHED_DELIVERY", date.getTime());


                        to schedule the messages for delivery. I just want to know if there are any performance concerns with a large number of messages scheduled for delivery, say a few days into the future.
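                        For reference, the property value is an absolute epoch-millisecond timestamp, so scheduling a few days out is just arithmetic on the clock. A minimal sketch (the deliveryTime helper below is mine, not a HornetQ API):

```java
import java.util.concurrent.TimeUnit;

public class ScheduleTime {
    // Computes the absolute delivery time (epoch millis) to put in the
    // _HQ_SCHED_DELIVERY long property. Helper name is illustrative only.
    static long deliveryTime(long nowMillis, long days) {
        return nowMillis + TimeUnit.DAYS.toMillis(days);
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        long scheduled = deliveryTime(now, 2); // two days from now
        System.out.println(scheduled - now);   // 172800000 ms
        // Then, on the message (as in the thread):
        // message.putLongProperty("_HQ_SCHED_DELIVERY", scheduled);
    }
}
```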
