22 Replies. Latest reply on Mar 14, 2006 12:06 PM by clebert.suconic

    multiplex transport for release 1.4.1

    ron_sigal

      I'm starting this thread to follow up "remaining issues with multiplex transport for 1.4.0 final" (http://www.jboss.org/index.html?module=bb&op=viewtopic&t=75672) so that Tim Fox will have a platform to beat on multiplex. ;-)

      At the end of the aforementioned thread I wrote:


      "timfox" wrote:

      after creating and closing several hundred connections I'm getting OutOfMemory errors - it looks like resources aren't being cleaned up on close


      Right now virtual socket groups, and the invoker groups that live on top of them, are good at not shutting down too soon (so we can't have a virtual socket joining a socket group that is already shutting down at the remote end), but they're not as good at shutting down in a timely manner. An invoker group on the client side gets things going on the server side by opening up a virtual "priming socket", which results in the creation of a virtual MultiplexServerInvoker. When the last member in the group shuts down, it closes the priming socket, and when the virtual server socket in the virtual MultiplexServerInvoker times out and sees that the priming socket has been closed, it shuts itself down.

      The default timeout period for the virtual server socket is 60 seconds, but it can be set with the socketTimeout parameter. I suspect that Tim is seeing a lot of threads, created for the virtual socket infrastructure, just not dying soon enough. In general, there should be better shutdown handling of invoker groups.
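      For reference, the timeout is set as a parameter on the locator URI, e.g.:

         // socketTimeout is in milliseconds; the default is 60000 (60 seconds).
         InvokerLocator locator =
               new InvokerLocator("multiplex://localhost:9099/?socketTimeout=10000");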


      I added a mechanism to VirtualSocket which lets it inform listeners when its remote peer has closed. Now, instead of relying on a timeout, a virtual MultiplexServerInvoker registers as a listener on its "priming socket", and when that shuts down, the invoker shuts down.
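      Conceptually, the new mechanism looks something like the sketch below. (The class and method names here are simplified placeholders, not the exact Remoting API.)

         import java.util.ArrayList;
         import java.util.Iterator;
         import java.util.List;

         // Sketch only: these names are placeholders, not the actual
         // Remoting multiplex API.
         interface ClosedListener
         {
            // Fired when the remote peer of the virtual socket has closed.
            void remotePeerClosed();
         }

         class PrimingSocketSketch
         {
            private final List listeners = new ArrayList();

            synchronized void addClosedListener(ClosedListener listener)
            {
               listeners.add(listener);
            }

            // Called by the multiplexing layer when it detects that the
            // remote peer has closed.
            synchronized void notifyPeerClosed()
            {
               for (Iterator it = listeners.iterator(); it.hasNext(); )
               {
                  ((ClosedListener) it.next()).remotePeerClosed();
               }
            }
         }

         // A virtual server invoker registers on its priming socket and shuts
         // itself down as soon as the remote end closes, instead of waiting
         // for a socketTimeout to expire.
         class VirtualInvokerSketch implements ClosedListener
         {
            void listenOn(PrimingSocketSketch primingSocket)
            {
               primingSocket.addClosedListener(this);
            }

            public void remotePeerClosed()
            {
               shutdown();
            }

            private void shutdown()
            {
               // Release the virtual socket group's resources promptly.
            }
         }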

      Running Tim's test case that iteratively opens and closes 100 (Client, callback Connector) pairs, I see the following number of open (real) sockets at 2 second intervals: 2, 12, 12, 16, 11, 0. I.e., they seem to be closing pretty quickly.

        • 1. Re: multiplex transport for release 1.4.1
          timfox

          I've had some time today to try and re-integrate multiplex into messaging to see if the previous problems are resolved.

          Amongst other issues, I'm still seeing connections taking a long time to close.

          Each connection typically takes 2 to 3 minutes (!) to close after the client has been disconnected and the callback server closed.

          This results in us running out of connections very quickly in the test suite.

          It also means that any JMS application opening more than one connection every 2 to 3 minutes will end up with open connections building up unsustainably, which clearly is not going to work in the real world.

          I think connection closing really *must* be synchronous with closing of the JMS connection; otherwise I don't see how we can use multiplex as the UIL2 transport for messaging.

          How easy would it be to make it synchronous?

          JBossMessaging 1.0 is being released at the end of the month, and as you know we are reliant on multiplex for the UIL2 replacement.

          The other option would be to simply (famous last words) port UIL2 to Remoting as a new transport. We do not require all the bells and whistles of full-blown multiplex; we just need a simple UIL2-style transport (with the same or better performance than UIL2) that uses the same socket for traffic in both directions.

          I guess this could be done by extending and modifying the socket transport.

          I doubt this could be done by the end of the month, though.




          • 2. Re: multiplex transport for release 1.4.1
            ron_sigal

            Tim,

            I've been running a version of the performance test you once wrote (listing below), which just does this:

            for i = 0; i < N; i++
             open client
             invoke()
             close client
            


            and what I'm seeing is that the actual sockets are closing and then remaining in a TIME_WAIT state, which is what they're supposed to do. (See, e.g.,
            http://www.developerweb.net/forum/showthread.php?t=2941.) Open sockets are shown by netstat as being in an ESTABLISHED state. I ran the following script while running the test program to watch for open sockets:

            #!/bin/tcsh

            set seconds = 0

            while (1)
               echo -n ${seconds}": "
               netstat -n | grep tcp | grep ESTABLISHED | wc -l
               sleep 5
               @ seconds += 5
            end
            


            and for N=500 I got


            0: 1
            5: 11
            10: 15
            15: 11
            20: 14
            25: 15
            30: 13
            35: 15
            40: 15
            45: 11
            50: 1


            (That last socket was held by Eclipse, for some reason.) In other words, the number of open sockets settles into a steady state. (The performance isn't good, but that's another matter.)

            So, are we looking at two different things?

            Here's the test program:

            package org.jboss.remoting.transport.multiplex.experimental;

            import javax.management.MBeanServer;

            import junit.framework.TestCase;

            import org.jboss.remoting.Client;
            import org.jboss.remoting.InvocationRequest;
            import org.jboss.remoting.InvokerLocator;
            import org.jboss.remoting.ServerInvocationHandler;
            import org.jboss.remoting.ServerInvoker;
            import org.jboss.remoting.callback.Callback;
            import org.jboss.remoting.callback.HandleCallbackException;
            import org.jboss.remoting.callback.InvokerCallbackHandler;
            import org.jboss.remoting.transport.Connector;
            import org.jboss.remoting.transport.PortUtil;

            /**
             * @author <a href="mailto:tim.fox@jboss.com">Tim Fox</a>
             */
            public class TimsTest extends TestCase
            {
               private String transport;

               public TimsTest(String name, String transport)
               {
                  super(name);
                  this.transport = transport;
               }

               public void testConnectionsRemainOpen() throws Throwable
               {
                  // Start a server
                  Connector serverConnector = new Connector();
                  InvokerLocator serverLocator = new InvokerLocator(transport + "://localhost:9099/?socketTimeout=10000");
                  serverConnector.setInvokerLocator(serverLocator.getLocatorURI());
                  serverConnector.create();
                  SimpleServerInvocationHandler invocationHandler = new SimpleServerInvocationHandler();
                  serverConnector.addInvocationHandler("JMS", invocationHandler);
                  serverConnector.start();

                  for (int i = 0; i < 500; i++)
                  {
                     int localPort = PortUtil.findFreePort("localhost");
                     String locatorURI = transport +
                        "://localhost:9099/?clientMultiplexId=mytestid&multiplexBindHost=localhost&multiplexBindPort=" + localPort;
                     InvokerLocator locatorClient = new InvokerLocator(locatorURI);
                     Client client = new Client(locatorClient);
                     System.out.println("created Client: " + i + " (" + localPort + ")");
                     client.connect();
                     client.invoke("Cheese");
                     client.disconnect();
                  }

                  serverConnector.stop();
               }

               class SimpleCallbackHandler implements InvokerCallbackHandler
               {
                  Callback callback;

                  public void handleCallback(Callback callback) throws HandleCallbackException
                  {
                     this.callback = callback;
                  }
               }

               class SimpleServerInvocationHandler implements ServerInvocationHandler
               {
                  InvokerCallbackHandler handler;

                  public void addListener(InvokerCallbackHandler callbackHandler)
                  {
                     this.handler = callbackHandler;
                  }

                  public Object invoke(InvocationRequest invocation) throws Throwable
                  {
                     return "Sausages";
                  }

                  public void removeListener(InvokerCallbackHandler callbackHandler)
                  {
                  }

                  public void setInvoker(ServerInvoker invoker)
                  {
                  }

                  public void setMBeanServer(MBeanServer server)
                  {
                  }
               }

               public static void main(String[] args)
               {
                  try
                  {
                     String transport = "multiplex";

                     if (args.length > 0)
                        transport = args[0];

                     System.out.println("using transport: " + transport);

                     new TimsTest("main", transport).testConnectionsRemainOpen();
                  }
                  catch (Throwable t)
                  {
                     t.printStackTrace();
                  }
               }
            }


            • 3. Re: multiplex transport for release 1.4.1
              timfox

              Ron-

              I am running the test I gave you some time ago, which creates and "closes" multiplex connections in a loop, and after the connections have been closed I am seeing TCP connections in an ESTABLISHED state, not a TIME_WAIT state.

              Here's a snip from my netstat output.

              I seem to get one established connection for each multiplex connection I create and close in the loop:

               TCP MRLAPPY:2780 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2781 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2782 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2783 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2784 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2785 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2786 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2787 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2788 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2789 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2790 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2791 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2792 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2793 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2794 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2795 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2796 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2797 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2798 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2799 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2800 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2801 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2802 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2803 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2804 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2805 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2806 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2807 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2808 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2809 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2810 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2811 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2812 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2813 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2814 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2815 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2816 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2817 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2818 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2819 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2820 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2821 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2822 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2823 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2824 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2826 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2827 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2828 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2829 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2830 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2831 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2832 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2833 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2834 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2835 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2836 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2837 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2838 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2839 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2840 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2841 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2842 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2843 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2844 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2845 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2846 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2847 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2848 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2849 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2850 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2851 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2852 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2853 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2854 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2855 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2856 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2857 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2858 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2859 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2860 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2861 localhost:9099 ESTABLISHED
               TCP MRLAPPY:2862 localhost:9099 ESTABLISHED
              


              For continuous operation, I should be able to open and close connections ad infinitum in a loop - this is what JMS clients would do.

              When I make the loop infinite, it dies after a few seconds with an OutOfMemoryError:

              1) testMultiplex(org.jboss.test.messaging.jms.MultiplexPerfTest)java.lang.reflect.InvocationTargetException
                 at sun.reflect.GeneratedConstructorAccessor9.newInstance(Unknown Source)
                 at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
                 at java.lang.reflect.Constructor.newInstance(Constructor.java:274)
                 at org.jboss.remoting.InvokerRegistry.createClientInvoker(InvokerRegistry.java:339)
                 at org.jboss.remoting.Client.<init>(Client.java:177)
                 at org.jboss.remoting.Client.<init>(Client.java:156)
                 at org.jboss.remoting.Client.<init>(Client.java:125)
                 at org.jboss.test.messaging.jms.MultiplexPerfTest.testMultiplex(MultiplexPerfTest.java:136)
                 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
                 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
                 at org.jboss.test.messaging.tools.junit.SelectiveTestRunner.main(SelectiveTestRunner.java:58)
              Caused by: java.lang.OutOfMemoryError: unable to create new native thread
                 at java.lang.Thread.start(Native Method)
                 at org.jboss.remoting.transport.multiplex.MultiplexingManager.setup(MultiplexingManager.java:268)
                 at org.jboss.remoting.transport.multiplex.MultiplexingManager.connect(MultiplexingManager.java:653)
                 at org.jboss.remoting.transport.multiplex.VirtualSocket.connect(VirtualSocket.java:865)
                 at org.jboss.remoting.transport.multiplex.VirtualSocket.<init>(VirtualSocket.java:210)
                 at org.jboss.remoting.transport.multiplex.VirtualSocket.<init>(VirtualSocket.java:196)
                 at org.jboss.remoting.transport.multiplex.MultiplexServerInvoker.createPrimingSocket(MultiplexServerInvoker.java:524)
                 at org.jboss.remoting.transport.multiplex.MultiplexClientInvoker.configureSocketGroupParameters(MultiplexClientInvoker.java:281)
                 at org.jboss.remoting.transport.multiplex.MultiplexClientInvoker.setup(MultiplexClientInvoker.java:105)
                 at org.jboss.remoting.transport.socket.SocketClientInvoker.<init>(SocketClientInvoker.java:120)
                 at org.jboss.remoting.transport.multiplex.MultiplexClientInvoker.<init>(MultiplexClientInvoker.java:82)
                 ... 23 more

              FAILURES!!!
              


              I have sent the test case to you. The test you want is "testMultiplex"

              I noticed that in the test you're using, you're not creating a callback server; in the test I am using, I create one.


              Cheers

              Tim

              • 4. Re: multiplex transport for release 1.4.1

                Is there a JIRA issue for this? If not, would one of you mind opening one and putting it against Remoting release 1.4.1?

                Thanks.

                -Tom

                • 5. Re: multiplex transport for release 1.4.1
                  ron_sigal

                  JBREM-331

                  • 6. Re: multiplex transport for release 1.4.1
                    ron_sigal

                    Tim,

                    Well, the situation is rather murky.

                    1. I discovered a problem, with a workaround, that probably isn't related to the socket close problem. Client invokers can be shared by multiple Clients connected to the same server. When one of those Clients closes, the invoker is destroyed, and it is recreated the next time another Client needs it. I didn't really appreciate until now that this causes a problem for MultiplexClientInvoker, since closing a MultiplexClientInvoker that doesn't have a companion callback MultiplexServerInvoker causes the "priming" socket to close, which can cause the server side MultiplexServerInvoker to close, even if it's still in use. The workaround is to use a different InvokerLocator for every Client so that they never share a MultiplexClientInvoker. E.g., I changed "multiplex://localhost:9099/?blah=123" to "multiplex://localhost:9099/?blah=" + localPort. Now, this isn't very nice, and I will create a JIRA issue about it.
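                    In code, the workaround amounts to something like this (using the local port as the distinguishing value; the parameter name itself is arbitrary):

                       // Give each Client a unique locator URI so that no two Clients
                       // ever share a MultiplexClientInvoker. "blah" is an arbitrary,
                       // otherwise-unused key.
                       int localPort = PortUtil.findFreePort("localhost");
                       String locatorURI = "multiplex://localhost:9099/?blah=" + localPort;
                       Client client = new Client(new InvokerLocator(locatorURI));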

                    2. Before I made that change, your program didn't always run to completion for me, but even so I couldn't duplicate your results. The sockets seemed to be closing in a reasonable way. After making the change, I ran the loop 2000 times and got the following live socket counts at 2 second intervals:


                    54: 4
                    56: 6
                    58: 38
                    60: 48
                    62: 48
                    64: 36
                    66: 48
                    68: 56
                    70: 62
                    72: 56
                    74: 66
                    76: 36
                    78: 68
                    80: 54
                    82: 56
                    84: 54
                    86: 58
                    88: 38
                    90: 46
                    92: 48
                    94: 24
                    96: 28
                    98: 18
                    100: 26
                    102: 20
                    104: 18
                    106: 16
                    108: 16
                    110: 12
                    112: 14
                    114: 12
                    116: 10
                    118: 8
                    120: 6
                    122: 8
                    124: 8
                    126: 4


                    This is your program with just the one change. It must be something about the way we're running the program, or the environment. I'm running inside of Eclipse, for example. Is there anything else going on when you're running? I'm mystified.

                    3. I found that after about 2000 iterations, I got an OutOfMemoryError. I will start looking for a memory leak.



                    • 7. Re: multiplex transport for release 1.4.1
                      timfox

                      Just to be absolutely explicit, here is the test you need to run. Don't run it in Eclipse:

                      public void testMultiplex() throws Throwable
                      {
                         // Start a server
                         Connector serverConnector = new Connector();
                         InvokerLocator serverLocator = new InvokerLocator("multiplex://localhost:9099");
                         serverConnector.setInvokerLocator(serverLocator.getLocatorURI());
                         serverConnector.create();
                         SimpleServerInvocationHandler invocationHandler = new SimpleServerInvocationHandler();
                         serverConnector.addInvocationHandler("JMS", invocationHandler);
                         serverConnector.start();

                         log.info("Server started");

                         for (int i = 0; i < 200; i++)
                         {
                            // Create a client for invoking on the server:
                            int localPort = PortUtil.findFreePort("localhost");

                            log.info("Free port is:" + localPort);

                            String locatorURI = "multiplex://localhost:9099/?blah=123";

                            InvokerLocator locatorClient = new InvokerLocator(locatorURI);
                            Map configuration = new HashMap();
                            configuration.put(MultiplexServerInvoker.CLIENT_MULTIPLEX_ID_KEY, "mytestid" + i);
                            configuration.put(MultiplexServerInvoker.MULTIPLEX_BIND_HOST_KEY, "localhost");
                            configuration.put(MultiplexServerInvoker.MULTIPLEX_BIND_PORT_KEY, String.valueOf(localPort));

                            Client client = new Client(locatorClient, configuration);

                            log.info("Created client");

                            client.connect();

                            log.info("Invoking");

                            String response = (String)client.invoke("Cheese");

                            log.info("Invocation completed");

                            assertNotNull(response);
                            assertEquals("Sausages", response);

                            Connector callbackServerConnector = new Connector();

                            InvokerLocator callbackServerLocator = new InvokerLocator("multiplex://localhost:" + localPort + "/?serverMultiplexId=mytestid" + i);

                            callbackServerConnector.setInvokerLocator(callbackServerLocator.getLocatorURI());
                            callbackServerConnector.create();
                            callbackServerConnector.start();

                            log.info("Created callback server");

                            log.info("Invoking");

                            response = (String)client.invoke("Cheese");

                            log.info("Invocation completed");

                            assertNotNull(response);
                            assertEquals("Sausages", response);

                            SimpleCallbackHandler callbackHandler = new SimpleCallbackHandler();

                            client.addListener(callbackHandler, callbackServerLocator);

                            assertNotNull(invocationHandler.handler);

                            Callback cb = new Callback("Squirrel");

                            invocationHandler.handler.handleCallback(cb);

                            assertNotNull(callbackHandler.callback);
                            assertEquals("Squirrel", callbackHandler.callback.getParameter());

                            client.disconnect();

                            callbackServerConnector.stop();

                            callbackServerConnector.destroy();
                         }

                         Thread.sleep(1000 * 60 * 30 * 10);

                         // Now run netstat -a and see how many TCP connections are open!
                      }
                      
                      


                      It is 100% reproducible for me.

                      BTW what do you mean by "sockets closing in a reasonable way"?

                      To be useful for us, sockets need to be closed (or returned to the pool)

                      a) synchronously
                      b) quickly




                      • 8. Re: multiplex transport for release 1.4.1
                        ron_sigal

                        Well, I don't get it. I copied and pasted your test case and ran it with a task I added to the JBossRemoting main build.xml file, and I'm still seeing the sockets leaving the ESTABLISHED state. I made two changes to your code: (1) I added a distinctive tag to locatorURI, and (2) I commented out the Thread.sleep() to get a better sense of the timing. But I get the same behavior with the Thread.sleep() in place.

                        I think I mean the same thing you do by "sockets closing in a reasonable way". Here's the ant output:


                        Buildfile: build.xml

                        get-jvm:

                        configure:

                        compile:
                        [javac] Compiling 1 source file to C:\Documents and Settings\rsigal\workspace\JBossRemoting\output\classes

                        manifest:

                        jars:

                        tests.compile:

                        tests.jars:

                        tests.tim:
                        [echo] 01:55:58
                        [junit] Running org.jboss.remoting.transport.multiplex.experimental.TimPerfTest
                        [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 5.718 sec
                        [echo] 01:56:05

                        BUILD SUCCESSFUL
                        Total time: 9 seconds


                        and here's the output of a script running at the same time, counting sockets in ESTABLISHED state (4 of which predated the test):


                        Fri Mar 10 13:55:58 EST 2006:
                        4
                        Fri Mar 10 13:55:59 EST 2006:
                        6
                        Fri Mar 10 13:56:00 EST 2006:
                        32
                        Fri Mar 10 13:56:01 EST 2006:
                        28
                        Fri Mar 10 13:56:03 EST 2006:
                        30
                        Fri Mar 10 13:56:04 EST 2006:
                        26
                        Fri Mar 10 13:56:05 EST 2006:
                        22
                        Fri Mar 10 13:56:06 EST 2006:
                        4
                        Fri Mar 10 13:56:07 EST 2006:


                        The program is creating about 35 sockets/second, and we never see more than 32 - 4 = 28 of them open at once, so it looks like the sockets are closing in less than a second.

                        Well, what else can you tell me about how you're running the test? There must be SOME difference!!!

                        -Ron

                        • 9. Re: multiplex transport for release 1.4.1
                          timfox

                          The Thread.sleep is crucial to prevent the process exiting (I guess this could be done another way though).
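                          For example (just a sketch), the test could block on console input instead of sleeping:

                             // Alternative to the long Thread.sleep: keep the JVM alive
                             // until a key is pressed, so netstat can be run against the
                             // live process.
                             System.out.println("Loop finished - run 'netstat -a' now, then press Enter");
                             System.in.read();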

                          The idea is we open and close N connections.

                          Then we hit the Thread.sleep - this prevents the process from exiting.

                          While the thread is sleeping, do netstat -a.

                          This is when I see established connections.

                          If I then wait (with the Thread still sleeping), then after some time the connections eventually disappear.

                          Can you try this?

                          BTW I am running Windows XP. Maybe it's a platform issue.

                          • 10. Re: multiplex transport for release 1.4.1
                            ron_sigal

                            Coincidentally, I just came back to the forum to tell you that I get the same results on Windows XP and Linux (kernel 2.6.8.1-10mdk).

                            I took the Thread.sleep() out just to get an idea about timing. I get the same results when I leave it in.

                            How are you starting the program? Ant? java on the command line?

                            • 11. Re: multiplex transport for release 1.4.1
                              ron_sigal

                              Hey Tim,

                              Which version of Remoting are you using?

                              -Ron

                              • 12. Re: multiplex transport for release 1.4.1
                                timfox

                                I am using the version of Remoting in the Remoting CVS repository.

                                I built it from CVS, maybe 3 days ago.

                                • 13. Re: multiplex transport for release 1.4.1
                                  timfox

                                  I am running the test using the junit ant task from an ant script.

                                  • 14. Re: multiplex transport for release 1.4.1
                                    timfox

                                    This doesn't make any sense.

                                    Ron, can you validate that you are doing *exactly* the following:

                                    Run exactly the test I sent you the other day, using ant from the command line.

                                    After the loop has run and the Thread.sleep line is hit, *immediately* run netstat -a (please don't use a script) in another console.

                                    You should see many connections in state "ESTABLISHED".

                                    After a minute or so, they change their state to TIME_WAIT.

                                    When you exit the process (CTRL-C) they disappear altogether.

                                    Can you validate this?

                                    If not, then I shall try and create a small program that does the above automatically and captures the results, and ask you to run it on your system.

                                    Unfortunately I am extremely busy right now with the Messaging 1.0 release, so I'm not sure when I will have time to do this.
