In http://community.jboss.org/thread/158813 I described how compressed large messages are sent. This post discusses how compressed messages are handled at the receiving end.
When the HornetQ server delivers compressed large messages, a receiver has two ways to get the message content.
1. First way:
Read the message buffer directly. To support this, we only need to wrap the internal buffer in a GZIPInputStream.
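The first way can be sketched as below. This is a standalone illustration, not HornetQ code: the `compressedBytes` array stands in for the message's internal buffer.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class ReadCompressedBuffer {
    public static void main(String[] args) throws IOException {
        // Simulate the compressed bytes held in the message's internal buffer.
        byte[] original = "hello large message".getBytes("UTF-8");
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        GZIPOutputStream gzOut = new GZIPOutputStream(bos);
        gzOut.write(original);
        gzOut.close();
        byte[] compressedBytes = bos.toByteArray();

        // Wrap the internal buffer in a GZIPInputStream and read plain bytes.
        GZIPInputStream gzIn =
            new GZIPInputStream(new ByteArrayInputStream(compressedBytes));
        ByteArrayOutputStream result = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        int n;
        while ((n = gzIn.read(buf)) != -1) {
            result.write(buf, 0, n);
        }
        gzIn.close();
        System.out.println(new String(result.toByteArray(), "UTF-8"));
    }
}
```

The reader never sees the gzip framing; the wrapping stream decompresses transparently.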
2. Second way:
Supply an OutputStream to receive the message asynchronously. To support this, we need to turn the incoming compressed stream into a decompressed stream before writing to the supplied OutputStream, i.e.
--->incoming packets from HornetQ server (compressed bytes)---(write to)-->decompressed stream---(write to)-->user's output stream.
To achieve this, a special class, GZipOutput, wraps the user's output stream and accepts the incoming packets, as detailed in the following:
a) the client prepares a normal OutputStream and calls msg.setOutputStream(out);
b) inside setOutputStream, a GZipOutput is created, wrapping the user's output stream;
c) the client calls msg.waitOutputStreamCompletion(0) to start receiving the compressed message;
d) when a packet arrives from the wire and is written to the GZipOutput, the GZipOutput holds the bytes and feeds them into a GZIPInputStream;
e) it then reads the decompressed bytes from the GZIPInputStream and writes them into the user's output stream.
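The steps above can be sketched as follows. This is my own simplified version, not the actual HornetQ class: it collects all compressed packets in memory and decompresses them into the user's stream when the input ends (on close), which matches the current behaviour described below.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Sketch of the GZipOutput idea: hold the compressed packets in memory,
// then decompress them into the user's stream once the input ends.
class GZipOutput extends OutputStream {
    private final OutputStream userOut;                 // user's stream (step b)
    private final ByteArrayOutputStream compressed =
        new ByteArrayOutputStream();                    // bytes kept in memory

    GZipOutput(OutputStream userOut) { this.userOut = userOut; }

    @Override public void write(int b) { compressed.write(b); }

    @Override public void write(byte[] b, int off, int len) {
        compressed.write(b, off, len);                  // step d): hold the bytes
    }

    @Override public void close() throws IOException {
        // step e): decompress everything and forward to the user's stream
        GZIPInputStream gzIn = new GZIPInputStream(
            new ByteArrayInputStream(compressed.toByteArray()));
        byte[] buf = new byte[4096];
        int n;
        while ((n = gzIn.read(buf)) != -1) userOut.write(buf, 0, n);
        gzIn.close();
        userOut.close();
    }
}

public class GZipOutputDemo {
    public static void main(String[] args) throws IOException {
        // Prepare some compressed data standing in for the wire packets.
        byte[] original = "large message body".getBytes("UTF-8");
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        GZIPOutputStream gz = new GZIPOutputStream(bos);
        gz.write(original);
        gz.close();

        ByteArrayOutputStream user = new ByteArrayOutputStream();
        GZipOutput out = new GZipOutput(user);
        out.write(bos.toByteArray());   // simulate incoming packets
        out.close();                    // end of stream triggers decompression
        System.out.println(new String(user.toByteArray(), "UTF-8"));
    }
}
```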
Because reading from a GZIPInputStream is a blocking call (it blocks until some compressed bytes are available), and we cannot tell in advance whether a GZIPInputStream.read() will return bytes without blocking, I keep all the incoming compressed bytes in memory until the incoming stream ends. Only then do we start reading from the GZIPInputStream and writing to the user's output stream.
So if the compressed message is very large and the client does not have that much memory, there will be a problem.
I can think of two possible approaches:
1. Have a thread read (with a timeout) from the GZIPInputStream whenever new bytes arrive; any bytes read are immediately written to the user's output stream.
2. Keep a fixed-size buffer for the compressed bytes and spill any overflow to a temporary local file. When decompression starts, load the spilled bytes back into the buffer, like paging/depaging.
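For the first approach, one possible shape (a sketch of the idea only, not HornetQ code) is to use java.io piped streams instead of a timed read: a dedicated thread blocks on a GZIPInputStream wrapped around a PipedInputStream, and forwards plain bytes to the user's stream as soon as they can be decompressed, so nothing accumulates in memory.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class StreamingDecompress {
    public static void main(String[] args) throws Exception {
        // Prepare compressed data standing in for the wire packets.
        byte[] original = "streamed without buffering it all".getBytes("UTF-8");
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        GZIPOutputStream gz = new GZIPOutputStream(bos);
        gz.write(original);
        gz.close();
        byte[] compressed = bos.toByteArray();

        final ByteArrayOutputStream userOut = new ByteArrayOutputStream();
        PipedOutputStream fromWire = new PipedOutputStream();
        final PipedInputStream toDecompressor = new PipedInputStream(fromWire);

        // The decompressor thread blocks on read() and forwards plain bytes
        // to the user's stream as soon as they become available.
        Thread pump = new Thread(new Runnable() {
            public void run() {
                try {
                    GZIPInputStream gzIn = new GZIPInputStream(toDecompressor);
                    byte[] buf = new byte[4096];
                    int n;
                    while ((n = gzIn.read(buf)) != -1) userOut.write(buf, 0, n);
                    gzIn.close();
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            }
        });
        pump.start();

        // Simulate packets arriving one byte at a time.
        for (byte b : compressed) fromWire.write(b);
        fromWire.close();   // end of the incoming stream
        pump.join();
        System.out.println(new String(userOut.toByteArray(), "UTF-8"));
    }
}
```

The blocking read then lives on its own thread, so the receiving side never needs to guess when read() would block.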
What do you think of it?