      • 15. Re: remaining issues with multiplex transport for 1.4.0 final
        tom.elrod@jboss.com

        The answer to #2 above is that the caller may not actually know whether the target server is in the same JVM or not, which is why remoting automatically takes care of making the by-reference call for you, using the local invoker. This is a requirement of remoting. The behavior of making a local or remote call should be exactly the same from the client's perspective (meaning no code or configuration changes are needed).

        I think as long as JMS Messaging can accept using a different serverMultiplexId value for every client/callback server pair on the client side, then we are set (please correct me if I'm wrong here). The performance may not be what you want, but we can address that after the 1.4.0 final release.
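
        To make "a different serverMultiplexId value for every client/callback server pair" concrete, here is a minimal sketch. The hosts, ports, and id values are placeholders, and the full set of multiplex parameters each locator needs should be taken from the multiplex transport docs; the only point illustrated is that each pair gets its own id:

        import org.jboss.remoting.Client;
        import org.jboss.remoting.InvokerLocator;
        import org.jboss.remoting.transport.Connector;

        public class PerPairMultiplexIdSketch
        {
           public static void main(String[] args) throws Throwable
           {
              // First client/callback pair: callback server tagged with "pair-1".
              Client client1 = new Client(new InvokerLocator("multiplex://serverhost:9090"));
              Connector callback1 = new Connector();
              callback1.setInvokerLocator("multiplex://clienthost:5001/?serverMultiplexId=pair-1");
              callback1.create();
              callback1.start();
              client1.connect();

              // Second pair: a different serverMultiplexId, so the two pairs
              // do not share multiplex bookkeeping on the client side.
              Client client2 = new Client(new InvokerLocator("multiplex://serverhost:9090"));
              Connector callback2 = new Connector();
              callback2.setInvokerLocator("multiplex://clienthost:5002/?serverMultiplexId=pair-2");
              callback2.create();
              callback2.start();
              client2.connect();
           }
        }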

        • 16. Re: remaining issues with multiplex transport for 1.4.0 final
          ron_sigal

           

          "tom.elrod@jboss.com" wrote:

          The answer to #2 above is that the caller may not actually know whether the target server is in the same JVM or not, which is why remoting automatically takes care of making the by-reference call for you, using the local invoker. This is a requirement of remoting. The behavior of making a local or remote call should be exactly the same from the client's perspective (meaning no code or configuration changes are needed).


          I see. Thanks.

          "tom.elrod@jboss.com" wrote:

          I think as long as JMS Messaging can accept using a different serverMultiplexId value for every client/callback server pair on the client side, then we are set (please correct me if I'm wrong here). The performance may not be what you want, but we can address that after the 1.4.0 final release.


          The need for different serverMultiplexIds should now be eliminated: MultiplexServerInvoker should behave and clean up its static tables so that outdated entries aren't encountered.

          I just wrote a long note about why there was still a problem, until I realized it probably wasn't a problem. What I thought was a problem was that if a LocalClientInvoker is created instead of a MultiplexClientInvoker, and if the MultiplexServerInvoker gets a serverMultiplexId parameter, then the MultiplexServerInvoker will wait for a MultiplexClientInvoker to come along to start it up. This will never happen, which I thought was a problem. But it looks like it doesn't matter, since start() just does transport-level stuff (starting accept threads and so forth) that never gets used.

          Tom, am I right that it shouldn't be a problem?



          • 17. Re: remaining issues with multiplex transport for 1.4.0 final
            timfox

             

            "ron_sigal" wrote:
            "the multiplex transport it doesn't appear to re-use the underlying sockets or pool them"

            Actually, the current version of MultiplexServerInvoker is subclassed from SocketServerInvoker, and inherits from it the pooling mechanism (which made it considerably faster than the previous version). Tim, are you seeing something that indicates socket pooling isn't working for multiplex invoker?


            I have put together some test cases and sent them to you and Tom offline.

            In the test cases I do the following in a loop (a rough code sketch follows the list):

            Create a Client instance to a remote server.
            Connect the client.
            Create a callback server.
            Start the callback server.
            Disconnect the client.
            Stop the callback server.
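
            In JBoss Remoting terms, the loop looks roughly like the sketch below. The locator URIs and ports are placeholders, and the callback handler registration and the multiplex-specific bind parameters the real test passes to the Client are omitted, so treat it as illustrative rather than a copy of the actual test:

            import org.jboss.remoting.Client;
            import org.jboss.remoting.InvokerLocator;
            import org.jboss.remoting.transport.Connector;

            public class SetupTeardownLoopSketch
            {
               public static void main(String[] args) throws Throwable
               {
                  // Swap "multiplex" for "socket" to compare the two transports.
                  String serverLocator = "multiplex://serverhost:9090";
                  String callbackLocator = "multiplex://clienthost:5001";

                  int iterations = 500;
                  long start = System.currentTimeMillis();

                  for (int i = 0; i < iterations; i++)
                  {
                     // Create a Client instance to a remote server and connect it.
                     Client client = new Client(new InvokerLocator(serverLocator));
                     client.connect();

                     // Create and start a callback server for this connection.
                     Connector callbackServer = new Connector();
                     callbackServer.setInvokerLocator(callbackLocator);
                     callbackServer.create();
                     callbackServer.start();

                     // Tear down: disconnect the client and stop the callback server.
                     client.disconnect();
                     callbackServer.stop();
                  }

                  long elapsed = System.currentTimeMillis() - start;
                  System.out.println((iterations * 1000.0 / elapsed) + " connections/sec");
               }
            }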

            This is what we currently do when we create and destroy a JMS connection. Each JMS connection has its own callback server. Currently we are using the socket transport.

            I repeat the above test with a) the socket transport, and b) the multiplex transport.

            The results are as follows:

            Multiplex transport: Setup/teardown 4.3 connections/sec
            Socket transport: Setup/teardown 85 connections/sec

            I.e. the multiplex transport is approx 20 times slower in setting up/tearing down connections.

            Considering that the creation of a connection in both transports should end up with the same result (the creation of a single TCP connection, which should be the slow part), I can only assume the socket transport is quicker because it is using pooled connections and multiplex is not.

            Perhaps this is an incorrect assumption? Or maybe I have made a usage error in the tests.
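
            For context, the intuition behind that assumption is simply that reusing an already-established connection skips the TCP handshake. A generic sketch of pooling (not the actual SocketClientInvoker code):

            import java.io.IOException;
            import java.net.Socket;
            import java.util.LinkedList;

            // Generic illustration only: getConnection() hands back an already-open
            // socket when one is available, so the TCP setup cost is paid only on a
            // pool miss; returnConnection() puts the socket back for reuse.
            public class SimpleSocketPool
            {
               private final LinkedList pool = new LinkedList();
               private final String host;
               private final int port;

               public SimpleSocketPool(String host, int port)
               {
                  this.host = host;
                  this.port = port;
               }

               public synchronized Socket getConnection() throws IOException
               {
                  if (!pool.isEmpty())
                  {
                     return (Socket) pool.removeFirst(); // reuse: no new handshake
                  }
                  return new Socket(host, port);          // miss: pay the setup cost
               }

               public synchronized void returnConnection(Socket socket)
               {
                  pool.addLast(socket);
               }
            }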

            I have sent the tests to you so you can verify the results. (I have also put some tests for relative invocation speed in there, but we can come to them later).

            I have also profiled the tests using JProfiler, to see where all the time is going. I have also sent these to you in html format.

            In summary for the socket transport, most of the time is spent in:
            org.jboss.remoting.transport.socket.SocketClientInvoker.getConnection

            but for the multiplex transport, most of the time is spent in:
            org.jboss.remoting.transport.multiplex.MultiplexClientInvoker.setup

            • 18. Re: remaining issues with multiplex transport for 1.4.0 final
              timfox

              Correction to the above.

              In the socket transport the end result of setting up the JMS connection should be the creation of 2 TCP connections (one for client-to-server invocations and one for callbacks), but for the multiplex transport the end result is just one actual TCP connection.

              So in a way I would expect it to actually be faster than the socket transport on setup (assuming that actually creating the physical connection is the slow bit)?

              • 19. Re: remaining issues with multiplex transport for 1.4.0 final
                timfox

                Noticed this in the code:

                (MultiplexServerInvoker.createPrimingSocket)

                 while (!manager.isRemoteServerSocketRegistered())
                 {
                    try
                    {
                       Thread.sleep(500);
                    }
                    catch (InterruptedException ignored)
                    {
                    }
                 }

                Perhaps this is related to the connection startup time which is often around the 0.5 second mark?

                • 20. Re: remaining issues with multiplex transport for 1.4.0 final
                  timfox

                  On my local copy of remoting I have replaced the Thread.sleep(500) "anti-pattern" line with Thread.yield(), which is the proper way of waiting in a loop for a condition without hogging the CPU, IMHO.

                  Now I am getting over 30 connections/sec created, which is of the same order of magnitude as the socket transport :)

                  • 21. Re: remaining issues with multiplex transport for 1.4.0 final
                    adrian@jboss.org

                     

                    "timfox" wrote:
                    On my local copy of remoting I have replaced the Thread.sleep(500) "anti-pattern" line with Thread.yield(), which is the proper way of waiting in a loop for a condition without hogging the CPU, IMHO.


                    Both are wrong. Thread.yield() is very non-portable. It is little more than a hint to the OS scheduler which isn't guaranteed to be honoured (especially on SMP).

                    The correct mechanism is something like this pseudo code:

                    synchronized (manager)
                    {
                       boolean interrupted = false;
                       while (manager.isRemoteServerSocketRegistered() == false)
                       {
                          try
                          {
                             manager.wait(500);
                          }
                          catch (InterruptedException notIgnored)
                          {
                             interrupted = true;
                          }
                       }
                       // restore the interrupt status rather than swallowing it
                       if (interrupted)
                          Thread.currentThread().interrupt();
                    }


                    with the manager doing something like the following in its registration:

                    Manager::register()
                    {
                       synchronized (this)
                       {
                          doRegistration();
                          this.notifyAll();
                       }
                    }


                    Additionally, infinite loops like that isRegistered() wait are bad. If something goes wrong while the manager is being registered, you now have an orphan thread that will never complete (because the manager is never registered).
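
                    Putting the two halves together, a self-contained sketch of the pattern might look like this (the class and method names here are invented for illustration, and the overall deadline is an addition to address the orphan-thread concern above):

                    // Illustrative stand-in for the real manager class.
                    public class RegistrationManager
                    {
                       private boolean remoteServerSocketRegistered;

                       public synchronized void register()
                       {
                          // doRegistration() would go here in the real code.
                          remoteServerSocketRegistered = true;
                          notifyAll();
                       }

                       /**
                        * Waits until registration happens or the deadline passes,
                        * whichever comes first, preserving the interrupt status.
                        */
                       public synchronized boolean waitForRegistration(long maxWaitMillis)
                       {
                          long deadline = System.currentTimeMillis() + maxWaitMillis;
                          boolean interrupted = false;
                          try
                          {
                             while (!remoteServerSocketRegistered)
                             {
                                long remaining = deadline - System.currentTimeMillis();
                                if (remaining <= 0)
                                {
                                   break; // bounded wait: no orphan thread if registration never happens
                                }
                                try
                                {
                                   wait(remaining);
                                }
                                catch (InterruptedException e)
                                {
                                   interrupted = true;
                                }
                             }
                             return remoteServerSocketRegistered;
                          }
                          finally
                          {
                             if (interrupted)
                             {
                                Thread.currentThread().interrupt();
                             }
                          }
                       }
                    }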

                    • 22. Re: remaining issues with multiplex transport for 1.4.0 final
                      timfox

                      There also seem to be a couple of other places in the code where Thread.sleep(...) is being used.

                      My guess is that these could be replaced with Thread.yield() too, although I am no expert on the codebase.

                      Anyway, I have done this locally and this brings me to about 55 connections/sec.

                      OutputMultiplexor.doShutdown()
                      VirtualServerSocket.close()
                      VirtualServerSocket.doClose()

                      However, after creating and closing several hundred connections I'm getting OutOfMemory errors - it looks like resources aren't being cleaned up on close:

                      There was 1 error:
                      1) testConnectionSetupTeardownSpeedMultiplex(org.jboss.test.messaging.jms.MultiplexPerfTest)java.lang.OutOfMemoryError: unable to create new native thread
                         at java.lang.Thread.start0(Native Method)
                         at java.lang.Thread.start(Thread.java:574)
                         at org.jboss.remoting.transport.socket.SocketServerInvoker.start(SocketServerInvoker.java:182)
                         at org.jboss.remoting.transport.multiplex.MultiplexServerInvoker.finishStart(MultiplexServerInvoker.java:261)
                         at org.jboss.remoting.transport.multiplex.MultiplexServerInvoker.start(MultiplexServerInvoker.java:185)
                         at org.jboss.remoting.transport.Connector.start(Connector.java:286)
                         at org.jboss.test.messaging.jms.MultiplexPerfTest.testConnectionSetupTeardownSpeedMultiplex(MultiplexPerfTest.java:304)
                         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
                         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
                         at org.jboss.test.messaging.tools.junit.SelectiveTestRunner.main(SelectiveTestRunner.java:58)


                      One thing I noticed in SocketServerInvoker.cleanup(): threads are being interrupted, but the method exits without waiting for the threads to end (i.e. without doing a thread.join() for each thread).

                      I don't know if this is related to the above problem, but I guess this could result in running out of threads if many instances were created and closed in quick succession.
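
                      For illustration, a bounded interrupt-then-join cleanup could look like the sketch below (the class and method names are invented, not taken from SocketServerInvoker):

                      // Hypothetical helper: ask every worker thread to stop, then wait
                      // (with a bound) for each one to actually die before returning.
                      public class ThreadCleanup
                      {
                         public static void stopWorkerThreads(Thread[] workers, long joinTimeoutMillis)
                         {
                            for (int i = 0; i < workers.length; i++)
                            {
                               workers[i].interrupt();
                            }
                            for (int i = 0; i < workers.length; i++)
                            {
                               try
                               {
                                  workers[i].join(joinTimeoutMillis);
                               }
                               catch (InterruptedException e)
                               {
                                  Thread.currentThread().interrupt();
                                  break;
                               }
                            }
                         }
                      }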



                      • 23. Re: remaining issues with multiplex transport for 1.4.0 final
                        timfox

                         

                        "adrian@jboss.org" wrote:
                        "timfox" wrote:
                        On my local copy of remoting I have replaced the Thread.sleep(500) "anti-pattern" line with Thread.yield(), which is the proper way of waiting in a loop for a condition without hogging the CPU, IMHO.


                        Both are wrong.


                        As usual, you're right :)

                        I think there's something about this in Josh Bloch's Effective Java book?

                        • 24. Re: remaining issues with multiplex transport for 1.4.0 final
                          ron_sigal

                          When the blues singer Mississippi John Hurt invited the audience to sing along, he said "I know I'm asking y'all to do my work for me." Anyway, thanks to Tim for his test cases and eagle eye.

                          1. I've corrected the Thread.sleep()'s. One was pentimento - should have disappeared months ago. One runs in a separate thread where it shouldn't impact performance. And the other one is now replaced by a callback.

                          2.

                          "timfox" wrote:

                          after creating and closing several hundred connections I'm getting OutOfMemory errors - it looks like resources aren't being cleaned up on close


                          Right now virtual socket groups, and the invoker groups that live on top of them, are good at not shutting down too soon (so we can't have a virtual socket joining a socket group that is already shutting down at the remote end), but they're not as good at shutting down in a timely manner. An invoker group on the client side gets things going on the server side by opening up a virtual "priming socket", which results in the creation of a virtual MultiplexServerInvoker. When the last member in the group shuts down, it closes the priming socket, and when the virtual server socket in the virtual MultiplexServerInvoker times out and sees that the priming socket has been closed, it shuts itself down.

                          The default timeout period for the virtual server socket is 60 seconds, but it can be set with the socketTimeout parameter. I suspect that Tim is seeing a lot of threads, created for the virtual socket infrastructure, just not dying soon enough. In general, there should be better shut down handling of invoker groups.

                          3. In playing with Tim's sample code, I discovered a way in which the multiplex invokers are not completely compatible with other invokers. If a Client is created without a binding host and port (multiplexBindHost, multiplexBindPort parameters), it will not be usable until a callback Connector comes along to supply that information. This interacts negatively with Tom's new lease mechanism, which starts in Client.connect() and which expects a usable client invoker. The implication is that in this particular case, Client.connect() may not be called until the callback Connector has been started. The temporary fix is to either (1) start the callback Connector first, or (2) supply the bind information to the Client when it is created (which is what happens in Tim's test case).
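
                          A rough sketch of option (2), supplying the bind information when the Client is created (the hosts, ports, and the socketTimeout value are placeholders, and whether socketTimeout belongs on this callback locator or elsewhere in the Connector configuration should be checked against the multiplex docs):

                          import org.jboss.remoting.Client;
                          import org.jboss.remoting.InvokerLocator;
                          import org.jboss.remoting.transport.Connector;

                          public class MultiplexBindSketch
                          {
                             public static void main(String[] args) throws Throwable
                             {
                                // Passing multiplexBindHost/multiplexBindPort up front means the
                                // client invoker is usable immediately, so Client.connect() (and
                                // the lease mechanism it starts) need not wait for a callback
                                // Connector to supply that information.
                                Client client = new Client(new InvokerLocator(
                                      "multiplex://serverhost:9090/?multiplexBindHost=clienthost"
                                      + "&multiplexBindPort=5001"));
                                client.connect();

                                // The callback Connector can then be started afterwards; a lower
                                // socketTimeout (default 60000 ms per the note above) makes the
                                // virtual server socket notice a closed priming socket sooner.
                                Connector callbackServer = new Connector();
                                callbackServer.setInvokerLocator(
                                      "multiplex://clienthost:5001/?socketTimeout=5000");
                                callbackServer.create();
                                callbackServer.start();
                             }
                          }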
