
    Lots of Sockets Opened by Default I/O Threads?

    jfisherdev

      We recently started using the File Leak Detector tool on our standalone WildFly 9.0.2.Final deployment.

       

      It has been quite helpful in finding resource leaks in applications and in reducing the "too many open files" issues we have seen over the years on our JBoss AS/WildFly servers, where the exact cause(s) had been difficult to identify.

       

      While I have a pretty good understanding of what most of the opened sockets correspond to, I am less certain about the sockets opened by threads named "default I/O-###".

       

      It is usually pretty easy to tell what application/process opened a socket based on the stack trace and/or thread name; in this case, however, the only stack trace information is this:

       

      Trace: socket channel by thread:default I/O-### on [Date/Time]
      at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:135)
      at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:266)
      at org.xnio.nio.NioTcpServer.accept(NioTcpServer.java:385)
      at org.xnio.nio.NioTcpServer.accept(NioTcpServer.java:52)
      at org.xnio.ChannelListeners$10.handleEvent(ChannelListeners.java:289)
      at org.xnio.ChannelListeners$10.handleEvent(ChannelListeners.java:286)
      at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92)
      at org.xnio.nio.NioTcpServerHandle.handleReady(NioTcpServerHandle.java:53)
      at org.xnio.nio.WorkerThread.run(WorkerThread.java:539)
      

       

      I do have a guess about this:

      This corresponds to an I/O thread for a worker named "default" that is associated with one or more subsystems. In our case, I think most of this would be related to Undertow servicing requests [remote EJB, remote JMS, and web services]. I believe client/request volume may affect the number of sockets opened.

       

      The thing that seems strange and concerns me is the combination of age and number of sockets opened by these threads. In particular, I find it odd when I see a lot of sockets that have remained open for an extended period of time [hours or days in some cases], especially when the client/request volume is low.

       

      I'm wondering if there is something in the IO subsystem or Undertow subsystem configuration that might explain what I am seeing. I have used the default IO subsystem configuration that ships with WildFly and have not done much configuration of Undertow. The IO subsystem configuration is an area I still don't fully understand; I asked about it previously in the post "Tuning IO Subsystem in WildFly 9.0.2.Final" but haven't gotten a response.
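
      For reference, here is roughly how I have been inspecting the relevant configuration from the CLI. This is only a sketch: the "default-server" and "default" names are the ones from the stock standalone.xml, so they may need adjusting for other setups. The first command shows which worker the HTTP listener uses; the second shows the listener attributes, including the ones currently left undefined:

        # which XNIO worker backs the HTTP listener (expected to be "default")
        /subsystem=undertow/server=default-server/http-listener=default:read-attribute(name=worker)
        # full listener configuration, including attributes left at their defaults
        /subsystem=undertow/server=default-server/http-listener=default:read-resource(include-defaults=true)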

       

      Any information that might help explain what I am seeing or recommendations about tuning the IO or Undertow subsystems to prevent open files issues would be very much appreciated.

        • 1. Re: Lots of Sockets Opened by Default I/O Threads?
          ctomc

          IO threads are the "listening/accepting" threads, so the sockets you see correspond to the IO threads that are listening on the server socket for new connections to come in.
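
          The number of those accepting threads comes from the io-threads attribute on the worker; I believe that when it is undefined a default is derived from the CPU count. Assuming the stock "default" worker name, something like this shows the effective values:

            # I/O thread count configured for the "default" worker
            /subsystem=io/worker=default:read-attribute(name=io-threads)
            # worker resource with runtime information included
            /subsystem=io/worker=default:read-resource(include-runtime=true)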

          • 2. Re: Lots of Sockets Opened by Default I/O Threads?
            jfisherdev

            Thank you for responding.

             

            I looked around in JMX and found the org.xnio.management.XnioServerMXBean in the org.xnio.Xnio domain that is associated with the "default" worker and the socket binding used by the Undertow HTTP listener, through which essentially all client requests are handled. While there was not an exact one-to-one relationship, the ConnectionCount attribute and the number of sockets opened by the "default" worker IO threads were pretty close.

             

            The issue I'm seeing is that there are lots of idle connections, and thus open file descriptors, being held by the server when there is no application activity. I think this is likely due to how I have the Undertow HTTP listener configured, and that settings I have left undefined may be causing these resources to be retained.

             

            Here are the settings that are currently undefined and I suspect are relevant based on the documentation I could find:

             

            - no-request-timeout

            - read-timeout

            - write-timeout

            - tcp-keep-alive [undefined, which I assume is the same as false]

             

            I'm guessing one or more of the timeout settings are the relevant ones here. I'm not sure what the difference between configuring no-request-timeout versus read-timeout/write-timeout would be, or whether setting tcp-keep-alive to true would have any impact. I'm also not sure what factors would need to be considered when choosing the timeout values.
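
            In case it helps anyone else reading along, the mechanics of setting these look like the following CLI commands. This is only a sketch: the "default-server"/"default" names assume the stock standalone.xml, the values are placeholders in milliseconds rather than recommendations, and a reload is needed for them to take effect. My real question is what values make sense:

              # placeholder values in milliseconds -- not recommendations
              /subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=tcp-keep-alive, value=true)
              /subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=read-timeout, value=300000)
              /subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=write-timeout, value=300000)
              /subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=no-request-timeout, value=60000)
              :reload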

             

            Some information about the application that might help:

            - About 75% of the application traffic comes from remote standalone clients via remote EJB/JMS calls and 25% from web service requests from other web clients.

            - There are about 700-1000 standalone client applications that may be open at any given time, but most aren't actively being used/making requests at the same time. It looks like a lot of these connections remain established, even though most of them are idle.

             

            I'm not sure if traffic for remote EJB/JMS calls needs to be considered differently, since it's making use of the HTTP upgrade capability and being routed through the Undertow HTTP port.

             

            If you or anyone else has any information or could direct me to some resources that would be helpful, I would appreciate that very much.

            • 3. Re: Lots of Sockets Opened by Default I/O Threads?
              jfisherdev

              Looking at those four Undertow HTTP listener settings I mentioned in the previous message, these three appear to be the most relevant to channel behavior:

               

              - tcp-keep-alive

              - read-timeout

              - write-timeout

               

              Setting tcp-keep-alive to true has helped with the case where dead/invalid connections were leading to "too many open files" issues over time.

               

              The read/write-timeout settings appear to trigger socket closure in a timely and predictable manner after a period of no I/O activity on the channel. I would like to use these, but I'm finding tuning them to be a challenge. In particular, I have observed that standalone remote EJB clients making a long-running call that exceeds the timeout [e.g. a 30 second read/write timeout with 40 seconds of processing time before the response is written] end up with the channel closed before the response can be written. Any guidelines or recommendations about tuning these would be appreciated if anyone would care to share them.
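
              As a concrete example of the tuning problem: with a 30000 ms read/write-timeout and roughly 40 seconds of server-side processing before the response is written, the channel is closed before the write happens, so the timeouts would need to sit comfortably above the longest expected call. Something like the following is what I have been trying, again assuming the stock listener names and with values that are only illustrative:

                # longest observed call is ~40s, so allow a wide margin (values in milliseconds)
                /subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=read-timeout, value=120000)
                /subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=write-timeout, value=120000)
                :reload

              I am not sure yet whether that is the right trade-off, since larger values also mean idle connections are held open longer.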

               

              One thing to note is that the management model documentation's definition of read/write-timeout doesn't exactly match the observed behavior. Both are defined the same way as in the org.xnio.Options javadoc; however, rather than XNIO throwing a Read/WriteTimeoutException on the next read/write after a period of no I/O activity, Undertow closes the channel. At a high level, where this is treated as an idle-channel timeout that results in an exception on the next read/write after it is exceeded, the effect is about the same, but I thought I would mention it in case the documentation should be clarified or corrected.

              • 4. Re: Lots of Sockets Opened by Default I/O Threads?
                crinaboitor

                Hi,

                I have an app running on WildFly 10, and under high concurrency it no longer accepts any HTTP requests.

                I am currently analyzing how to configure the read/write timeout to solve the issue.

                I wanted to know if you solved your connection issues with this configuration?

                 

                Thank you,

                Crina