Version 5

    This section contains comments on Remoting API Extensions for Messaging (


    Client-side callback API


    "For the sake of intuitiveness, I would suggest either addListener(new CallbackListener()) or addHandler(new CallbackHandler()). However, this is really a minor issue."  I agree, and will create a new interface called CallbackListener that extends the InvokerCallbackHandler interface, and deprecate InvokerCallbackHandler.
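A minimal sketch of what the proposed interface might look like (the Callback and HandlerCallbackException classes below are simplified stand-ins for the existing remoting types, included only to make the sketch self-contained):

```java
// Stand-ins for the existing remoting types, for illustration only.
class Callback {
    private final Object payload;
    Callback(Object payload) { this.payload = payload; }
    Object getPayload() { return payload; }
}

class HandlerCallbackException extends Exception {
    HandlerCallbackException(String message) { super(message); }
}

/** The existing handler interface, which would be deprecated. */
interface InvokerCallbackHandler {
    void handleCallback(Callback callback) throws HandlerCallbackException;
}

/** The proposed replacement: same contract, a more intuitive name. */
interface CallbackListener extends InvokerCallbackHandler {
}
```

Existing InvokerCallbackHandler implementations would keep working unchanged, since the new interface adds no methods.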


    ServerInvocationHandler API


    Allowing the handler to know when an invocation is a oneway call is good, since it explicitly declares that no return is expected (both to the user implementing the ServerInvocationHandler interface and internally within remoting) and lets the handler know that any response message must be sent asynchronously.


    public void onewayInvoke(InvocationRequest invocation);


    Note: the only change was renaming the method to 'onewayInvoke' instead of 'asyncInvoke' so it matches the method name within Client.
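An illustrative sketch (simplified stand-ins, not remoting source) of how a handler might treat the proposed onewayInvoke(), where nothing can be returned synchronously and any response must be delivered asynchronously, e.g. as a callback:

```java
class InvocationRequest {
    private final Object parameter;
    InvocationRequest(Object parameter) { this.parameter = parameter; }
    Object getParameter() { return parameter; }
}

interface ServerInvocationHandler {
    /** Two-way call: the return value is sent back to the client. */
    Object invoke(InvocationRequest invocation);

    /** Proposed one-way call: the client expects no return value. */
    void onewayInvoke(InvocationRequest invocation);
}

class EchoHandler implements ServerInvocationHandler {
    // Stand-in for responses that would be pushed back via a callback handler.
    final java.util.List<Object> asyncResponses = new java.util.ArrayList<Object>();

    public Object invoke(InvocationRequest invocation) {
        return invocation.getParameter();   // returned synchronously
    }

    public void onewayInvoke(InvocationRequest invocation) {
        // No synchronous return path here; queue a response for async delivery.
        asyncResponses.add(invocation.getParameter());
    }
}
```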


    Server-Side Callback API


    The exact behavior of the callback API is not always known to the callback sender.  This is because when a callback handler (i.e. an InvokerCallbackHandler previously registered with the ServerInvocationHandler via addListener(InvokerCallbackHandler callbackHandler)) is called to handle the callback, it is really a server-side proxy instance to the InvokerCallbackHandler implementation within the client VM that is being called.  The behavior of the server-side proxy depends on how the callback listener was registered:


    a) callback listener registered for push callbacks (meaning that within the implementation there is a callback server within the client VM waiting to receive callbacks from the server side).  In this case, the call to InvokerCallbackHandler.handleCallback(Callback) on the server side will make an invocation to the client VM, so the caller knows the callback was delivered to the client.  However, there is no verification that the callback was consumed by the original InvokerCallbackHandler on the client VM.  If an error does occur while sending the callback to the client, the server-side caller of InvokerCallbackHandler.handleCallback(Callback) will receive a HandlerCallbackException.


    b) callback listener registered for pull callbacks (meaning that within the implementation there is no callback server and callbacks will be polled for by the client side).  In this case, the call to InvokerCallbackHandler.handleCallback(Callback) on the server side will cause the server-side proxy for the InvokerCallbackHandler to queue the callback to be picked up by the client at a later time (how the callback message is "queued" depends on the callback store being used and can be simply in-memory or persisted; the remoting docs cover this in more detail).  In most cases, the call to handleCallback() will return as soon as the callback is queued (the only exception is the blocking callback store, which I don't think is in the remoting docs yet, and which will block the caller if the queue has reached its maximum in-memory capacity).  At a later, undetermined point in time, the client will call the server asking for all the queued callbacks that have been stored and will receive all waiting callbacks in a batch.  The server-side caller of handleCallback() has no visibility into whether the callbacks were consumed by the client InvokerCallbackHandler, or whether the callbacks even made it to the client at all.
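An illustrative sketch (not remoting source) of the pull-callback case: the server-side proxy just queues each callback, and the client later drains the queue in a batch. The caller of handleCallback() returns as soon as the callback is queued and never learns whether the client consumed it.

```java
class Callback {
    private final Object payload;
    Callback(Object payload) { this.payload = payload; }
    Object getPayload() { return payload; }
}

class PullCallbackProxy {
    // Simple in-memory store; a persistent callback store would go here instead.
    private final java.util.Queue<Callback> queue =
            new java.util.concurrent.ConcurrentLinkedQueue<Callback>();

    /** Called on the server side; returns as soon as the callback is queued. */
    public void handleCallback(Callback callback) {
        queue.add(callback);
    }

    /** Called when the client polls: returns all waiting callbacks in a batch. */
    public java.util.List<Callback> drainCallbacks() {
        java.util.List<Callback> batch = new java.util.ArrayList<Callback>();
        Callback cb;
        while ((cb = queue.poll()) != null) {
            batch.add(cb);
        }
        return batch;
    }
}
```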


    With all this in mind, I think there is possibly a need for two additional features:


    1. Sending of callbacks using oneway (async) invocation.  This would only be useful in the case of push callbacks, where instead of the current synchronous invocation, a oneway (async) invocation could be made.  This would prevent the caller from receiving any exception due to an error on the far side (within the callback server during its delivery of the callback), but would be faster.  For pull callbacks, the behavior would be unchanged.


    2. Acknowledgement of callback consumption by the InvokerCallbackHandler on the client side.  Ron has been working on this and has implemented it so that a callback sender can ask for (a) an acknowledgement after the callback has been sent, and/or (b) an acknowledgement after the callback has been received on the client side.  These work differently for push and pull callbacks (and adding asynchronous push callbacks would add another little wrinkle).  Ron is still doing work in this area.


    Sending of raw byte[] (or Streams or ByteBuffer)


    I would prefer not to include any additional methods in the remoting high-level API that directly expose the lower-level transport and marshalling.  One reason is that this will be very difficult to support transparently over all the different types of transports.  I also don't think it will provide much in regards to performance gains (a quick run with a profiler showed less than a 0.1% performance gain).  Finally, I feel this would be better exposed at a lower level where it can be customized/optimized per transport (more on this in a moment).




    A "bidirectional" transport


    This seems to be a bit of a gooey topic.  I feel that the current multiplex transport meets the functional requirements that I am aware of.  However, it seems that it does not meet performance requirements, although I am not exactly sure what those are (maybe just as fast as or faster than other transports?).  There are possibly other problems with multiplex that I am not fully understanding, but it does seem clear there is a desire for another bidirectional, multiplex transport to be implemented.  The only question is who/where/when should do this (maybe the following sections will help in answering this).




    Transport layered model


    I am just using this term as it was put out previously on a former forum thread, but would like to propose some ideas on remoting API changes that might help with some of the previous issues.  To start, it seems to me that a lot of the issues raised recently mostly have to do with having direct access to the I/O layer within remoting and being able to customize it to particular needs (such as framing packets on the wire, dealing with raw data not in Object form, etc.).  For framing of packets, it would be nice to be able to segment the components doing the framing, since each component would likely have different responsibilities (i.e. adding header info, adding routing info, adding payload, etc.) which are independent of one another; the only thing that matters is the order in which they write out on the wire.


    To address this, I would like to add the concept of chaining marshallers/unmarshallers within remoting.  This would allow each marshaller to write out its particular data on the wire and then call on the next marshaller to write out its data, and so on.  So to do packet framing, one might have:


    HeaderMarshaller.write() --> RoutingMarshaller.write() --> PayloadMarshaller.write()

    HeaderUnMarshaller.read() --> RoutingUnMarshaller.read() --> PayloadUnMarshaller.read()
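A minimal sketch of the chaining idea (the names and shapes here are illustrative, not the actual remoting API): each marshaller writes its own bytes to the wire and then delegates to the next marshaller in the chain.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

abstract class ChainedMarshaller {
    private final ChainedMarshaller next;

    ChainedMarshaller(ChainedMarshaller next) { this.next = next; }

    final void write(Object payload, OutputStream out) throws IOException {
        writeOwnData(payload, out);                   // this link's bytes first
        if (next != null) next.write(payload, out);   // then the rest of the chain
    }

    abstract void writeOwnData(Object payload, OutputStream out) throws IOException;
}

class HeaderMarshaller extends ChainedMarshaller {
    HeaderMarshaller(ChainedMarshaller next) { super(next); }
    void writeOwnData(Object payload, OutputStream out) throws IOException {
        out.write(0x77);   // e.g. a magic byte identifying the wire format
    }
}

class PayloadMarshaller extends ChainedMarshaller {
    PayloadMarshaller() { super(null); }   // end of the chain
    void writeOwnData(Object payload, OutputStream out) throws IOException {
        byte[] bytes = payload.toString().getBytes(StandardCharsets.UTF_8);
        out.write(bytes.length);           // naive length prefix (< 256 bytes)
        out.write(bytes);
    }
}
```

A RoutingMarshaller would slot in between the two in the same way; only the chain order determines the wire layout.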


    It would also be possible to dynamically alter the chain based on what is to be written out.  For example, if the object to be written is a byte array, the first marshaller in the chain could choose a marshaller implementation that specifically handles byte[] (which may just write directly to the output stream), but if the object to be written is a FooPacket object, then use a marshaller implementation specifically for that (which may itself have other marshallers in the chain); otherwise, just use the default serialization marshaller.  Note, it is perfectly acceptable for a user to call Client.invoke(new byte[1024 * 1024]) if so desired.


    Obviously the biggest problem with switching out marshallers that could potentially change the wire format is being able to detect the proper wire format on the other side (when unmarshalling), but that is what framing is all about, so we will have to make sure this is accounted for within the marshalling implementation.  Remoting will be able to provide some default marshallers out of the box that can help with this for generic cases where heavy customization is not needed.


    Another capability of chained marshalling would be to change the type of stream being used at the top end of the chain.  For example, one could include a compression marshaller that wraps the default stream in a compression stream, so that marshallers down the chain would use that compression stream when writing out their data (the same applies for wrapping in an Object stream, encrypted stream, etc.).
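A sketch of the stream-wrapping idea (GZIP is just one example of a wrapping stream, and the names here are illustrative): the compression marshaller at the top of the chain wraps the raw transport stream, and everything written further down the chain passes through it.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

class CompressionMarshaller {
    /** Wraps the transport stream; downstream marshallers write through it. */
    static void write(byte[] payload, OutputStream rawOut) throws IOException {
        GZIPOutputStream gzip = new GZIPOutputStream(rawOut);
        gzip.write(payload);   // a downstream marshaller would write here
        gzip.finish();         // flush the compressed data to the raw stream
    }

    /** Mirror image on the unmarshalling side of the chain. */
    static byte[] read(InputStream rawIn) throws IOException {
        GZIPInputStream gzip = new GZIPInputStream(rawIn);
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[256];
        int n;
        while ((n = gzip.read(chunk)) != -1) {
            buf.write(chunk, 0, n);
        }
        return buf.toByteArray();
    }
}
```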


    I have not worked out exactly how to expose configuration for this (but am thinking it will be very similar to how we do interceptor chains elsewhere).


    Also, users can currently send raw data on the wire without it being wrapped in remoting's InvocationRequest object, but this requires using a config entry in the metadata Map parameter to indicate it is raw.  Adding a new method to explicitly indicate a raw invocation might be worthwhile (basically asking if you think this would be beneficial?), which would look like:


    public Object rawInvoke(Object payload) throws Throwable 


    public Object rawInvoke(Object payload, Map metadata) throws Throwable 



    NIO (socket channel, selector, and ByteBuffer wrinkle)


    Up to this point, I have only been talking about streams within remoting, since all current transports only deal with OutputStream and InputStream.  In order to support an NIO transport, we will also need to support socket channels and ByteBuffers at some level (which can provide a lot of benefit for blocking socket transports as well).  However, having the marshalling layer manage channels, selectors, and ByteBuffers directly is not practical (as most of this should be handled by the transport itself).  I would prefer the NIO transport to abstract the NIO internals and provide a facade where needed to the marshalling layer.  For the marshaller, this isn't really necessary, as we can have:


    public interface BufferMarshaller extends Marshaller
    {
       public void write(Object dataObject, java.nio.ByteBuffer outputBuffer) throws IOException;
    }


    but for the unmarshaller, I would prefer not to directly expose the channel, which will be needed to do the actual read into a ByteBuffer.  Instead, I would like to provide a facade which would do the real read and expose the ByteBuffer where the data sits.  So the unmarshaller interface would look something like:


    public interface BufferUnMarshaller extends UnMarshaller
    {
       public Object read(InputBuffer inputBuffer, Status status, Map metadata) throws IOException, ClassNotFoundException;
    }

    where InputBuffer has the following methods (among others):


    public void limit(int size)
    public void flip()
    public void clear()
    public java.nio.ByteBuffer getByteBuffer()
    public InputStream getInputStream()


    The Status will indicate state based on events from the selector, and is basically just used to determine if reading should continue.
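A sketch of what the InputBuffer facade might look like (method names taken from the list above; the implementation here is illustrative): the transport does the actual channel read, and the unmarshaller only sees the filled ByteBuffer, or a stream view of it for stream-oriented unmarshallers.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.ByteBuffer;

class InputBuffer {
    private final ByteBuffer buffer;

    InputBuffer(ByteBuffer buffer) { this.buffer = buffer; }

    public void limit(int size) { buffer.limit(size); }
    public void flip() { buffer.flip(); }
    public void clear() { buffer.clear(); }
    public ByteBuffer getByteBuffer() { return buffer; }

    /** Stream view over the remaining bytes, for stream-based unmarshallers. */
    public InputStream getInputStream() {
        byte[] remaining = new byte[buffer.remaining()];
        buffer.duplicate().get(remaining);   // leave the buffer position alone
        return new ByteArrayInputStream(remaining);
    }
}
```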


    I have BufferMarshaller/UnMarshaller extend their stream counterparts in case they could be used with traditional stream-based transports, and was thinking they would throw an exception if they do not support streams.


    These BufferMarshaller/UnMarshaller implementations would also support chaining, as well as the same framing concepts mentioned earlier.


    All this is based on some prototyping I have been doing using an NIO framework called EmberIO ( and I have some basic code working for:


    1. Object serialization - uses one marshaller/unmarshaller and will put/get: a magic byte, the length of the object, and the serialized bytes for the object


    2. Chained raw reads - uses two marshallers/unmarshallers that are chained.  The first one puts/gets a magic byte; the second one puts/gets two ints representing the client number and call number.


    3. Streams - uses one marshaller and two unmarshallers (the first unmarshaller is used to find the end of the bytes for the serialized object and re-position the ByteBuffer accordingly for the second unmarshaller, which simply uses ObjectInputStream.readObject()).  This one is only partially working, as it fails under load.
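An illustrative round trip of the framing in case 1 above: a magic byte, then the length of the serialized object, then the serialized bytes. This is written against a plain ByteBuffer (the EmberIO integration is not shown), and the magic byte value is made up for the example.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.StreamCorruptedException;
import java.nio.ByteBuffer;

class ObjectFraming {
    static final byte MAGIC = 0x6A;   // made-up wire-format marker

    static ByteBuffer frame(Serializable obj) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bytes);
        oos.writeObject(obj);
        oos.flush();
        byte[] body = bytes.toByteArray();
        ByteBuffer buf = ByteBuffer.allocate(1 + 4 + body.length);
        buf.put(MAGIC).putInt(body.length).put(body);
        buf.flip();   // ready to be read on the other side
        return buf;
    }

    static Object unframe(ByteBuffer buf) throws IOException, ClassNotFoundException {
        if (buf.get() != MAGIC) {
            throw new StreamCorruptedException("bad magic byte");
        }
        byte[] body = new byte[buf.getInt()];
        buf.get(body);
        return new ObjectInputStream(new ByteArrayInputStream(body)).readObject();
    }
}
```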


    However, these have been coded outside of remoting (just using EmberIO and extra custom code, including the marshalling code above).  Integration with remoting would still be needed.


    There are still a ton of issues to be worked out before I feel totally confident with all this, or even with using EmberIO.  However, I think this is a decent first step and also feel that EmberIO is a good framework to consider using (see for more info on why).


    I am sure we'll have plenty to discuss in regards to the direction to go for the NIO implementation.



    Exposing the transport layer


    I have not spent any time looking into how to better expose just the remoting transport layer (i.e. stripping away the high-level remoting API layer).  It is possible to get the transport invoker directly, but the API for that is probably not where we would need it to be, so I can look into that further as well.