Why is serverID needed there?
I haven't looked at Ron's branch, but Callback Managers do need to use the serverID. If a client is connected to several different servers, the serverID must also be part of the callback key, so that a callback is matched to the right server's connection.
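To illustrate the point (this is a hypothetical sketch, not the actual JBoss Messaging key format or classes): if the callback key were only the session ID, callbacks from two servers could collide in the client's handler map. Qualifying the key with the serverID keeps them distinct.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration: a client connected to several servers keeps one
// callback handler per (serverID, sessionID) pair. Keying only by sessionID
// could route a callback from server 1 to a handler registered for server 0.
public class CallbackKeyDemo {

    static String callbackKey(int serverID, String sessionID) {
        // serverID qualifies the key so handlers never collide across servers
        return serverID + "/" + sessionID;
    }

    public static void main(String[] args) {
        Map<String, Runnable> handlers = new HashMap<>();
        handlers.put(callbackKey(0, "session-1"), () -> System.out.println("callback from server 0"));
        handlers.put(callbackKey(1, "session-1"), () -> System.out.println("callback from server 1"));
        // Same session id, different servers -> two distinct entries
        System.out.println(handlers.size());
    }
}
```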
I have merged Ron's HTTP branch into the trunk (http://wiki.jboss.org/wiki/Wiki.jsp?page=JBossMessagingMergingActivity). All functional tests were passing on the branch prior to merging, and the smoke tests (including the HTTP test) were also passing. The branch should be considered dead; no further changes should be applied to it. I will delete it soon.
After the merge, the trunk uses a new Remoting version (2.2.0.Alpha4), and the Remoting Callback API to send messages from server to client.
The latest trunk version uses asynchronous push callbacks (with polling under the hood for HTTP). The pre-merge trunk relied on synchronous calls to send messages from server to client and expected synchronous delivery confirmation (not to be confused with client acknowledgment). Such behavior cannot be expected from asynchronous callbacks, so I modified MessageCallbackHandler to asynchronously send delivery confirmations back to the server, and ServerConsumerEndpoint to manage the delivery confirmations sent by the client.
One case in which delivery confirmation management is necessary is closing the consumer. The ServerConsumerEndpoint cannot be closed while there are messages in transit from server to client, and the delivery confirmation count mechanism is used to verify that there are no in-flight messages.
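A minimal sketch of what such a delivery-confirmation count mechanism could look like (this is an illustration in plain Java, not the actual ServerConsumerEndpoint code): the server increments a counter when it hands a message to the transport, the client's asynchronous confirmation decrements it, and close blocks until nothing is in flight.

```java
// Hypothetical sketch of an in-flight message counter. The real class names,
// locking, and close semantics in JBoss Messaging may differ.
public class InFlightTracker {

    private int inFlight;

    // Called on the server thread when a message is handed to the transport.
    public synchronized void messageSent() {
        inFlight++;
    }

    // Called when the client's asynchronous delivery confirmation arrives.
    public synchronized void deliveryConfirmed() {
        inFlight--;
        if (inFlight == 0) {
            notifyAll(); // wake up a pending close
        }
    }

    // Called when the consumer is closed: block until nothing is in transit.
    public synchronized void awaitNoneInFlight() throws InterruptedException {
        while (inFlight > 0) {
            wait();
        }
    }
}
```

On close, the endpoint would call awaitNoneInFlight() before releasing its resources, so it never shuts down with messages still on the wire.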
New configuration parameters for HTTP
The polling period (in ms) can be configured, in this order:
1. By setting "jboss.messaging.callback.pollPeriod" system property on the client. This overrides any other configuration.
2. By specifying it as the value of the "callbackPollPeriod" attribute in the HTTP Connector configuration on the server (remoting-service.xml).
3. By relying on the hardcoded JMSRemotingConnection.CALLBACK_POLL_PERIOD_DEFAULT value (100 ms), if no value is explicitly specified.
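The resolution order above can be sketched as follows (a hypothetical illustration; the real logic lives in the client-side connection setup, and the method name here is invented):

```java
// Hypothetical sketch of the poll-period resolution order.
public class PollPeriodResolver {

    // Fallback, mirroring JMSRemotingConnection.CALLBACK_POLL_PERIOD_DEFAULT
    static final long CALLBACK_POLL_PERIOD_DEFAULT = 100; // ms

    // connectorConfigValue is the "callbackPollPeriod" attribute from
    // remoting-service.xml, or null if it is not configured.
    static long resolvePollPeriod(String connectorConfigValue) {
        // 1. The client-side system property overrides everything
        String prop = System.getProperty("jboss.messaging.callback.pollPeriod");
        if (prop != null) {
            return Long.parseLong(prop);
        }
        // 2. Then the server-side connector attribute
        if (connectorConfigValue != null) {
            return Long.parseLong(connectorConfigValue);
        }
        // 3. Finally the hardcoded default
        return CALLBACK_POLL_PERIOD_DEFAULT;
    }
}
```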
Ok good. :)
(I would have liked a chance to look at the changes before they were committed to TRUNK, though.)
Regarding the extra calls from client to server - how does this affect throughput?
My understanding is that this should be necessary only for the HTTP transport, but it seems to be enabled currently for all transports.
My worry is that throughput can be massively sensitive to changes that affect flow control (like this one).
If this is only for the HTTP transport, I don't have such an issue with it.
Also, these issues will become moot when we streamline the whole system while implementing the NIO stuff in 1.2.1.
OK, so let's think about this a little bit.
In the old version, the server-side executor thread would put the message(s) on the wire and WAIT until the messages made it to the MessageCallbackHandler and a reply to that RPC (in the form of a MessagingMarshallable instance) was sent back to the server.
In the new version, the server-side executor thread just hands the messages over to Remoting, in the form of a Callback instance, and returns immediately. A long time after that, when the messages make it to the client, the client asynchronously sends a delivery confirmation, indicating that it actually received the messages (this is, again, NOT the client acknowledgment).
The server actually uses that information (the number of in-flight messages) only in a special use case, when the consumer is stopped, so contending on the lock that protects it happens only in the context of a life-cycle (i.e. very slow) event.
So, no, the modification doesn't negatively affect the throughput, it actually improves it, if anything.
As for the argument that this behavior is needed only for the HTTP transport, that is not true. It is needed for any transport. Remember that even for TCP, we want "truly" asynchronous "invocations". The combination of new Remoting API and changed ServerConsumerEndpoint/MessageCallbackHandler behavior makes it transparent to the transport.
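The difference between the two behaviors can be sketched in plain Java (a hypothetical illustration, not the actual Remoting API): instead of blocking on a round-trip RPC, the executor thread just enqueues the message for the transport and returns.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch of the new server-side behavior: deliver() performs a
// non-blocking hand-off to the transport and returns immediately, so the
// executor thread never waits for the client's reply.
public class AsyncHandoffDemo {

    private final BlockingQueue<String> wire = new LinkedBlockingQueue<>();

    // New style: hand the message over and return; no reply is awaited here.
    public void deliver(String message) {
        wire.offer(message);
    }

    // The transport layer drains the queue asynchronously, on its own thread.
    public String takeFromWire() throws InterruptedException {
        return wire.take();
    }
}
```

The delivery confirmation then arrives later, on a different path, which is what makes the scheme work identically over TCP and HTTP.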
Well, I had no idea you were going to do this.
It was my understanding that we were going to keep the current way of doing things in Remoting until 1.2.1, when I will make all of this moot by replacing it with an NIO-based system, which will make most of this obsolete.
Can you explain in more detail how this works?
If "The server actually uses that information (the number of in-flight messages) only in a special use case", then what's the point of sending it in other cases?
Are these true asynchronous invocations, or are they Remoting-style async, where a normal RPC is done and the result is discarded?
Have you done any tests to support your statement that the throughput has actually improved?
I would really like to be kept in the loop in the future if such major technical changes are going to be made and committed into trunk. Is that reasonable?
By reading this thread, you are in the loop, aren't you?