24 Replies Latest reply on Jan 23, 2006 3:13 AM by ron_sigal

    remaining issues with multiplex transport for 1.4.0 final re

      As I understand it, there is one (and only one) remaining issue with the multiplex transport in regards to its use within the JBoss Messaging project. The issue is that the local invoker is not used to make direct, by-reference calls to the multiplex server invoker from a remoting client in the same JVM. Further, this seems to be due to the client locator url and server locator url having to be different.

      As far as I know, there is no jira issue or test case for this within the remoting project. There needs to be both. Ron or Tim, can you do this and post back to this thread with the information?

        • 1. Re: remaining issues with multiplex transport for 1.4.0 fina
          ron_sigal

          I've done the following:

          1. Created an issue for this problem: JBREM-292. It's a sub-issue of JBREM-91.

          2. Made some last minute (provisional) changes, which fix the problem. In particular, multiplex invoker (or socket invoker) parameters can be passed into Connector and Client by way of a configuration Map instead of the InvokerLocator. This keeps the offending parameters from confusing InvokerRegistry.createClientInvoker().

          3. Checked in two test cases, org.jboss.test.remoting.transport.multiplex.LocalInvokerTestCase, which specifically tests for this problem, and org.jboss.test.remoting.transport.multiplex.MultiplexInvokerConfigTestCase, which exercises the use of the configuration Map more extensively.

          4. Tagged CVS "jboss-remoting-before-client-param-map" before committing the above.

          To elaborate on the configuration Map: LocalInvokerTestCase has the following code:

          // Create Connector.
          String connectorURI = "multiplex://localhost:5757";
          InvokerLocator connectorLocator = new InvokerLocator(connectorURI);
          System.out.println("Starting remoting server with locator uri of: " + connectorURI);
          log.info("Starting remoting server with locator uri of: " + connectorURI);
          Connector connector = new Connector();
          connector.setInvokerLocator(connectorLocator.getLocatorURI());
          connector.create();
          SimpleServerInvocationHandler invocationHandler = new SimpleServerInvocationHandler();
          connector.addInvocationHandler("test", invocationHandler);
          connector.start();

          // Create Client.
          Map configuration = new HashMap();
          configuration.put(MultiplexServerInvoker.MULTIPLEX_BIND_HOST_KEY, "localhost");
          configuration.put(MultiplexServerInvoker.MULTIPLEX_BIND_PORT_KEY, "6565");
          Client client = new Client(connectorLocator, configuration);
          client.connect();

          Note that the Client and the Connector get the same URI.

          • 2. Re: remaining issues with multiplex transport for 1.4.0 fina
            timfox

            What about the other multiplex config params: connect host, connect port, client multiplex id, server multiplex id?

            Can they go in the param map too?

            The test only tests bind host and bind port?

            • 3. Re: remaining issues with multiplex transport for 1.4.0 fina
              timfox

              When callbacks are being sent from the server to the client, does remoting internally use the configuration map to ensure that the LocalInvoker is used for communication in that direction too?

              • 4. Re: remaining issues with multiplex transport for 1.4.0 fina

                I checked in Tim's test case class under src/tests/org/jboss/test/remoting/transport/multiplex/config/MultiplexTest so it will be there for everyone to reference. The following is a response to the issues raised with this test.

                I think I found the root cause of the IOException. It looks like it is caused by the repeated use of serverMultiplexId=mytestid.

                The MultiplexServerInvoker maintains a static Map (socketGroupMap) for the different socket groups. On the first loop iteration in the test, a group is created for the serverMultiplexId value mytestid, and the group is bound to a particular port (i.e. 2747). Then on the next iteration in the test, a lookup is done for the socket group using the same serverMultiplexId value of mytestid (to see if one has already been created and can be reused). This second time around there is already a socket group, but the group returned is bound to the old port (i.e. 2747), while the new loop iteration wants a different port (i.e. 2748).

                So I am still trying to get my mind around the configuration for all this. The serverMultiplexId attribute is used as the global identifier for mapping to a specific socket connection. It also looks like the host and port are included in the identifier for that mapping (in addition to the serverMultiplexId). So it would be impossible for multiple clients and callback servers to use the same actual socket connection (meaning a real network socket connection here), unless they all had the same port, host, and serverMultiplexId?

                I think for this particular test case, we could just clean up the socketGroupMap when there are no more users (the client disconnects and the callback server stops). That would solve the problem, since the next loop iteration would then start fresh with a new socketGroupMap entry (instead of throwing the exception caused by the conflicting port). However, this brings up the previous concern about multiple clients and callback servers using the same physical connection to the target remoting server. I can't remember if we said this would be acceptable?
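                For what it's worth, here is a rough sketch of the kind of cleanup I have in mind. The entry class, field names, and helper method are illustrative stand-ins, not the actual MultiplexServerInvoker internals:

                // Illustrative stand-in for the static socketGroupMap keyed by serverMultiplexId.
                private static final Map socketGroupMap = new HashMap();

                // Hypothetical entry holding the group's binding and a count of current users.
                static class SocketGroupEntry
                {
                   String bindHost;
                   int bindPort;
                   int userCount;
                }

                // Called when a client invoker disconnects or a callback server stops.
                static synchronized void releaseSocketGroup(String serverMultiplexId)
                {
                   SocketGroupEntry entry = (SocketGroupEntry) socketGroupMap.get(serverMultiplexId);
                   if (entry != null && --entry.userCount == 0)
                   {
                      // Last user is gone: drop the entry so a later client/callback server pair
                      // can reuse the same serverMultiplexId with a different port.
                      socketGroupMap.remove(serverMultiplexId);
                   }
                }

                Something like releaseSocketGroup() would be called from both the client disconnect path and the callback server stop path.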

                • 6. Re: remaining issues with multiplex transport for 1.4.0 fina
                  timfox

                  I have amended the test case not to use the same multiplexid and now it seems to work :)

                  Prior to using the multiplex transport we were using the socket transport.

                  With this transport we would create a new Client using the socket transport per JMS connection.

                  It's my understanding that the underlying socket connections are pooled so that when we closed our Clients and reopened new ones the underlying connections would be re-used.

                  This meant subsequent creation of JMS connections (when the underlying tcp connection was re-used) was pretty fast.

                  If we are going to follow the pattern of having one multiplex connection (one Client instance and one corresponding callbackserver Connector instance) per JMS connection as we did with the socket transport, then with the multiplex transport it doesn't appear to re-use the underlying sockets or pool them.

                  This results in a very much slower connection setup time (since I guess it's creating new TCP connections for each multiplex connection).

                  Is there any way of getting these to pool, like in the socket transport?



                  • 7. Re: remaining issues with multiplex transport for 1.4.0 fina
                    timfox

                    Tom-

                    One way around this, which I mentioned some time ago, would be for me to pool the multiplex connections at the application level, although at the time you suggested this should be handled in remoting, which I agree is the best place for it.

                    What are your views on this now?

                    • 8. Re: remaining issues with multiplex transport for 1.4.0 fina
                      ron_sigal

                      Quite a flurry. I'll try to catch up:

                      Q1. What about the other multiplex config params: connect host, connect port, client multiplex id, server multiplex id?

                      A1. Any multiplex and socket invoker parameters can go in the configuration map. E.g., this is from org.jboss.test.remoting.transport.multiplex.MultiplexInvokerConfigTestClient:

                      Map configuration = new HashMap();
                      configuration.put("backlog", "2");
                      configuration.put("numAcceptThreads", "5");
                      configuration.put("socketTimeout", "300000");
                      configuration.put(MultiplexServerInvoker.SERVER_MULTIPLEX_ID_KEY, "testMultiplexId");
                      configuration.put(MultiplexServerInvoker.MULTIPLEX_CONNECT_HOST_KEY, serverHost);
                      configuration.put(MultiplexServerInvoker.MULTIPLEX_CONNECT_PORT_KEY, Integer.toString(serverPort));

                      The first three parameters are used by the socket invoker, and the last three are used by the multiplex invoker.

                      Q2. Does remoting internally use the configuration map to ensure that the LocalInvoker is used for communication in that direction too?

                      A2. No - no changes there - the parameters come from the InvokerLocator sent from the Client. However, the use of the configuration map allows the multiplex parameters that were mucking things up to be removed from the InvokerLocator, so that InvokerRegistry.createClientInvoker() should be able to match the InvokerLocator sent with the "add listener" request to the callback server's InvokerLocator and then create a local invoker. And now Tom has verified that it works.
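                      For concreteness, here's a sketch of the callback registration in the in-JVM case. The callback handler class is just a placeholder, and the particular addListener() signature is from memory, so treat this as pseudocode rather than the committed test code:

                      // Callback Connector in the same JVM. Its locator carries no multiplex
                      // parameters; anything multiplex/socket specific would go in a config Map.
                      String callbackURI = "multiplex://localhost:6565";
                      Connector callbackConnector = new Connector();
                      callbackConnector.setInvokerLocator(callbackURI);
                      callbackConnector.create();
                      callbackConnector.addInvocationHandler("callback", new SimpleServerInvocationHandler());
                      callbackConnector.start();

                      // Because the locator sent with the "add listener" request now matches the
                      // callback Connector's locator, InvokerRegistry.createClientInvoker() can
                      // return a LocalClientInvoker for the server-to-client callback traffic too.
                      InvokerCallbackHandler callbackHandler = new MyCallbackHandler(); // placeholder class
                      client.addListener(callbackHandler, new InvokerLocator(callbackURI));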

                      • 9. Re: remaining issues with multiplex transport for 1.4.0 fina
                        ron_sigal

                        Thanks to Tom for looking into the IOException problem.

                        1. MultiplexClientInvoker.handleDisconnect() and MultiplexServerInvoker.cleanup() both should delete the entry in socketGroupMap. I'll have to look at this.

                        2. Re Tom's concern about multiple clients and callback servers using the same physical connection. Client and server multiplex invokers don't actually need the multiplex parameter apparatus to share a physical socket. Those parameters are there to give the Client or Connector that starts first either enough information to start up a connection that will work for subsequent partners (e.g., tell a Client to bind to the same port that the Connector wants to bind to, or tell a Connector which host/port to connect to) or to tell it that a partner is coming along later (with the same multiplexId so that they recognize each other) to supply that information. Once the connection is made, multiple Clients should be able to use it, simply by connecting to the host and port at the remote end. The only restriction is that a multiplex connection supports at most one VirtualServerSocket, which implies that at most one MultiplexServerInvoker, and therefore at most one callback Connector, can use a given multiplex connection.
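                        To put rough code to that description, using the keys already quoted earlier in this thread (where each parameter lands is my reading of the above, so take it as a sketch rather than a reference configuration):

                        // Callback Connector starts first: tell it which remote host/port to connect to,
                        // and give it a serverMultiplexId so the Client invoker that comes along later
                        // can recognize it and share the same physical connection.
                        Map connectorConfig = new HashMap();
                        connectorConfig.put(MultiplexServerInvoker.SERVER_MULTIPLEX_ID_KEY, "sharedConnection");
                        connectorConfig.put(MultiplexServerInvoker.MULTIPLEX_CONNECT_HOST_KEY, "serverhost");
                        connectorConfig.put(MultiplexServerInvoker.MULTIPLEX_CONNECT_PORT_KEY, "5757");

                        // Client starts first: tell it which local host/port the callback Connector will
                        // later bind to, so the connection it opens can serve both partners.
                        Map clientConfig = new HashMap();
                        clientConfig.put(MultiplexServerInvoker.MULTIPLEX_BIND_HOST_KEY, "localhost");
                        clientConfig.put(MultiplexServerInvoker.MULTIPLEX_BIND_PORT_KEY, "6565");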

                        • 10. Re: remaining issues with multiplex transport for 1.4.0 fina
                          ron_sigal

                          "the multiplex transport it doesn't appear to re-use the underlying sockets or pool them"

                          Actually, the current version of MultiplexServerInvoker is subclassed from SocketServerInvoker, and inherits from it the pooling mechanism (which made it considerably faster than the previous version). Tim, are you seeing something that indicates socket pooling isn't working for multiplex invoker?

                          • 11. Re: remaining issues with multiplex transport for 1.4.0 fina
                            timfox

                            I'm probably not explaining myself very well. :(

                            I'll put together some test cases to try and explain what I mean.

                            I'm probably doing something wrong, but it would be good if one of you guys could look over it and advise.

                            It's going to be tomorrow since it's getting late here.

                            Thx

                            • 12. Re: remaining issues with multiplex transport for 1.4.0 fina

                              With the SocketClientInvoker, when disconnect() is called on it (which the Client's disconnect() will do), it will drain all the socket connections from the pool. So the next client that is created will have to re-fill the pool with socket connections as it makes calls. The pooling of socket connections is really most beneficial when making continuous invocations on the server, since a new client socket does not have to be created for each request; a previous one can be reused.

                              However, if you have multiple Client instances with the same locator url, each of the Client instances will be using the same SocketClientInvoker instance (and thus the same client socket connection pool).
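                              In code, the sharing looks something like this (a sketch, using the same API as the snippets earlier in the thread):

                              // Two Client instances built from the identical locator url share one
                              // SocketClientInvoker, and therefore one client socket connection pool.
                              InvokerLocator locator = new InvokerLocator("socket://serverhost:5757");
                              Client client1 = new Client(locator);
                              Client client2 = new Client(locator);
                              client1.connect();
                              client2.connect();

                              // Per the above, disconnect() drains the shared pool, so the next Client
                              // created against this locator has to re-fill it as it makes invocations.
                              client1.disconnect();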

                              The MultiplexClientInvoker extends the SocketClientInvoker, so the behavior will be the same in this regard. However, the real problem is that you can NOT have multiple callback connectors on the same virtual connection, and therefore can not have multiple client/callback server pairs on the same virtual connection at the same time.

                              I guess I am still struggling with what your requirement is. As I see it, it can be one of the following:

                              1. Want client/callback server pairs connecting to the same remoting target server to use the same virtual connection (hence same physical socket connection).
                              2. Want client/callback server pairs connecting to the same remoting target server to always use a different virtual connection (hence different physical connection).
                              3. Will never have more than one client/callback server pair connected to the same remoting target server at any one point in time.

                              I am in the weeds again. Sorry for having to ask for clarification again.

                              • 13. Re: remaining issues with multiplex transport for 1.4.0 fina
                                ron_sigal

                                 

                                "tom.elrod@jboss.com" wrote:

                                I think I found the root cause of the IOException. It looks like it is caused by the repeated use of serverMultiplexId=mytestid.

                                The MultiplexServerInvoker maintains a static Map (socketGroupMap) for the different socket groups. On the first loop iteration in the test, a group is created for the serverMultiplexId value mytestid, and the group is bound to a particular port (i.e. 2747). Then on the next iteration in the test, a lookup is done for the socket group using the same serverMultiplexId value of mytestid (to see if one has already been created and can be reused). This second time around there is already a socket group, but the group returned is bound to the old port (i.e. 2747), while the new loop iteration wants a different port (i.e. 2748).


                                OK, I see what's happening - not a bug, really, but a first-rate gotcha. Passing the multiplex parameters to Client via configuration Map allows InvokerRegistry.createClientInvoker() to make a LocalClientInvoker, which proceeds to completely ignore those parameters. On the other hand, when the callback MultiplexServerInvoker sees the serverMultiplexId parameter, it expects to be partnered up with a MultiplexClientInvoker which will come along and tell it what host/port to connect to. Meanwhile, it will not start, and, in particular, it will not call SocketServerInvoker.start(), which would set running = true. Eventually, the call to callbackServerConnector.stop() leads to a call to SocketServerInvoker.stop(), which has the test

                                 if (running)
                                 {
                                    cleanup();
                                 }


                                So MultiplexServerInvoker.cleanup() never gets called, and the callback MultiplexServerInvoker's entry never gets removed from socketGroupMap.

                                If serverMultiplexId is removed from the callback Connector's InvokerLocator, everything works. The implication is that the multiplex version of the in-JVM {Connector, Client, callback Connector} structure needs to be configured differently than the networked version (i.e., without the multiplex parameters, which aren't even used). It would be prettier to not have to make the distinction, but I don't see how right off. Which leads me to two questions.

                                1. How objectionable is this distinction?
                                2. Tim, now that I think of it, why use the multiplex invoker when the client and server are in the same JVM and there are no sockets anyway?

                                • 14. Re: remaining issues with multiplex transport for 1.4.0 fina
                                  ron_sigal

                                   

                                  "ron_sigal" wrote:
                                  not a bug, really, but a first-rate gotcha.


                                  OK, it's a bug and a gotcha. It's a bug because MultiplexServerInvoker should clean up the stale entries in the static tables. So, to that end, I've overridden SocketServerInvoker.stop() in MultiplexServerInvoker.
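                                  Roughly what the override amounts to (a sketch only; the helper name is a stand-in for whatever actually removes this invoker's entries from the static tables):

                                  // In MultiplexServerInvoker (sketch): clear static state even when the invoker
                                  // never started, i.e. when running is still false and cleanup() is skipped.
                                  public void stop()
                                  {
                                     super.stop();               // calls cleanup() only if running == true
                                     removeStaleStaticEntries(); // hypothetical helper: always remove this
                                                                 // invoker's serverMultiplexId entries
                                  }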

                                  But this doesn't make the in-JVM problem go away, because once it sees the serverMultiplexId parameter, MultiplexServerInvoker will not start until it hears from a MultiplexClientInvoker with the same multiplexId, which won't happen if a LocalClientInvoker is created instead.

                                  Of course, multiplexIds aren't necessary if there are no sockets.
