minLargeMessageSize effectively ignored in 2.0.0 GA?
stevere Feb 15, 2010 6:58 AM

Hi,
I've been experiencing a similar issue to the one in a previous discussion: http://community.jboss.org/message/521854#521854.
However, I believe there are some issues in this area that mean minLargeMessageSize is effectively ignored.
Firstly, I assume minLargeMessageSize is used to determine the cutover point at which a message is deemed large on the client and is therefore chunked on sending? If not, then this post can probably be ignored as user error.
So, this is a shortened stack trace for my error, recorded in my jboss-server.log:
java.lang.IllegalStateException: Can't write records bigger than the bufferSize(501760) on the journal
at org.hornetq.core.journal.impl.TimedBuffer.checkSize(TimedBuffer.java:208)
at org.hornetq.core.journal.impl.AbstractSequentialFile.fits(AbstractSequentialFile.java:162)
at org.hornetq.core.journal.impl.JournalImpl.appendRecord(JournalImpl.java:2812)
at org.hornetq.core.journal.impl.JournalImpl.appendAddRecord(JournalImpl.java:755)
at org.hornetq.core.persistence.impl.journal.JournalStorageManager.storeMessage(JournalStorageManager.java:489)
at org.hornetq.core.postoffice.impl.PostOfficeImpl.processRoute(PostOfficeImpl.java:904)
at org.hornetq.core.postoffice.impl.PostOfficeImpl.route(PostOfficeImpl.java:665)
at org.hornetq.core.server.impl.ServerSessionImpl.send(ServerSessionImpl.java:1995)
at org.hornetq.core.server.impl.ServerSessionImpl.handleSend(ServerSessionImpl.java:1426)
at org.hornetq.core.server.impl.ServerSessionPacketHandler.handlePacket(ServerSessionPacketHandler.java:275)
I am running JBoss 5 and HornetQ 2.0.0 GA on Windows XP Pro. I have a JUnit JMS test client which tries to submit a simple JMS BytesMessage with a payload of 600000 bytes. Note that the payload is not set as a stream on the message.
I have studied the GA documentation and set:
1. <min-large-message-size>250000</min-large-message-size> for my connection factory in hornetq-jms.xml.
2. <journal-buffer-size>300000</journal-buffer-size> for HornetQ Server in hornetq-configuration.xml. (Note that this setting seems to be ignored by Journalling)
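Put together, those two settings sit roughly like this in context (a sketch only; the connection factory name and the surrounding elements are placeholders, elided as "..."):

```xml
<!-- hornetq-jms.xml: on the connection factory -->
<connection-factory name="ConnectionFactory">
   ...
   <min-large-message-size>250000</min-large-message-size>
</connection-factory>

<!-- hornetq-configuration.xml: on the server -->
<journal-buffer-size>300000</journal-buffer-size>
```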
The first issue is that if you are running on a non-Linux platform you'll be using NIO for journalling. However, unless you explicitly set <journal-type>NIO</journal-type> in hornetq-configuration.xml, the journal type defaults to ASYNCIO, and so the ASYNCIO journal buffer size is taken from configuration (300000 in my case). When the time comes to actually create the SequentialFileFactory, AIO is found to be unsupported and the type is switched to NIO. That's fine, except you then pick up the built-in default NIO settings; any NIO settings you've added to the configuration file are ignored. That explains why the error reports a buffer size of 501760 even though I configured 300000. It also explains why setting <journal-type>NIO</journal-type> changed the behaviour in the previous discussion.
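To make the fallback concrete, here is a minimal sketch of the behaviour as I understand it (my own simplification, not HornetQ source; the default of 490 * 1024 = 501760 matches the size in the error):

```java
// Sketch of the AIO -> NIO fallback described above (hypothetical, not HornetQ code).
public class JournalFallbackSketch {

    // Default NIO buffer size: 490 KiB = 501760 bytes, as reported in the error.
    static final int DEFAULT_NIO_BUFFER = 490 * 1024;

    enum JournalType { ASYNCIO, NIO }

    // If ASYNCIO is configured but AIO isn't supported on this platform,
    // the type silently flips to NIO and the *default* NIO buffer size is
    // used; the configured size (e.g. 300000) is ignored.
    static int effectiveBufferSize(JournalType configured, int configuredSize, boolean aioSupported) {
        if (configured == JournalType.ASYNCIO && !aioSupported) {
            return DEFAULT_NIO_BUFFER;
        }
        return configuredSize;
    }

    public static void main(String[] args) {
        // Windows: AIO unsupported, so my 300000 never takes effect.
        System.out.println(effectiveBufferSize(JournalType.ASYNCIO, 300000, false)); // prints 501760
    }
}
```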
The second issue is that unless you explicitly set an input stream on your message, it will never be treated as a large message and chunked across the comms. Instead it will be sent, received and journalled as a normal message, hence the error when the message size exceeds the journal buffer size. The reason, I think, is down to ClientProducerImpl in its doSend() method:
if (msgI.getBodyInputStream() != null || msgI.isLargeMessage())
{
    isLarge = true;
}
As far as I can tell the isLargeMessage() flag is only set by a consumer, so it doesn't really apply to the submission of a new message.
In BETA 5 the same test looked like:
if (msg.getBodyInputStream() != null || msg.getEncodeSize() >= minLargeMessageSize || msg.isLargeMessage())
{
    sendMessageInChunks(sendBlocking, msg);
}
However, I can see that minLargeMessageSize does seem to be used to determine the chunk/buffer size when sending a large message.
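In other words, once a message does go down the large-message path, the payload is split into pieces of at most minLargeMessageSize bytes. A minimal illustration of that splitting (my own sketch of the idea, not the HornetQ implementation):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: split a payload into chunks of at most minLargeMessageSize bytes,
// as happens when the client does treat a message as large.
public class ChunkSketch {

    static List<byte[]> chunk(byte[] payload, int minLargeMessageSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < payload.length; off += minLargeMessageSize) {
            int len = Math.min(minLargeMessageSize, payload.length - off);
            byte[] piece = new byte[len];
            System.arraycopy(payload, off, piece, 0, len);
            chunks.add(piece);
        }
        return chunks;
    }

    public static void main(String[] args) {
        // My 600000-byte payload with minLargeMessageSize=250000:
        // 250000 + 250000 + 100000 = three chunks.
        System.out.println(chunk(new byte[600000], 250000).size()); // prints 3
    }
}
```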
I'm happy to raise these as issues in JIRA; I just wanted to run it past the forum first, mainly to make sure I hadn't got the wrong end of the stick and this was actually the expected behaviour.