
    Lots of Sockets Opened by Default I/O Threads?

    jfisherdev Novice

      We recently started using the File Leak Detector tool on our standalone WildFly 9.0.2.Final deployment.

       

      It has been quite helpful in finding resource leaks in applications and reducing the "too many open files" issues we have seen on our JBoss AS/WildFly servers over the years but had difficulty identifying the exact cause(s) of.
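
      For anyone else trying the same thing, we attach the agent with the standard -javaagent JVM flag. The path below is a placeholder and the http=/threshold= options are how I understand the tool's documentation, so treat it as a sketch rather than our exact setup:

      -javaagent:/path/to/file-leak-detector.jar=http=19999,threshold=1000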

       

      While I have a pretty good understanding of what most of the opened sockets correspond to, I am less certain about the sockets opened by threads named "default I/O-###".

       

      It is usually pretty easy to tell which application/process opened a socket based on the stack trace and/or thread name; however, in this case the only stack trace information is this:

       

      Trace: socket channel by thread:default I/O-### on [Date/Time]
      at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:135)
      at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:266)
      at org.xnio.nio.NioTcpServer.accept(NioTcpServer.java:385)
      at org.xnio.nio.NioTcpServer.accept(NioTcpServer.java:52)
      at org.xnio.ChannelListeners$10.handleEvent(ChannelListeners.java:289)
      at org.xnio.ChannelListeners$10.handleEvent(ChannelListeners.java:286)
      at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92)
      at org.xnio.nio.NioTcpServerHandle.handleReady(NioTcpServerHandle.java:53)
      at org.xnio.nio.WorkerThread.run(WorkerThread.java:539)
      

       

      I do have a guess about this:

      This corresponds to an I/O thread for a worker named "default" that is associated with one or more subsystems. In our case, I think most of this would be related to Undertow servicing requests [remote EJB, remote JMS, and web services]. I believe client/request volume may affect the number of sockets opened.
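
      For anyone wanting to verify that mapping, it can be read back from the CLI; the resource names below are the WildFly 9 defaults, so adjust them if your configuration differs:

      # Show the "default" XNIO worker (io-threads, task-max-threads, etc.)
      /subsystem=io/worker=default:read-resource(include-runtime=true)

      # Confirm which worker the Undertow HTTP listener is using
      /subsystem=undertow/server=default-server/http-listener=default:read-attribute(name=worker)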

       

      What seems strange and concerns me is the combination of the age and number of sockets opened by these threads. In particular, I find it odd to see a lot of sockets that have remained open for an extended period of time [hours or days in some cases], especially when the client/request volume is low.

       

      I'm wondering if there is something in either the IO subsystem or Undertow subsystem configuration that might explain what I am seeing, as I have used the default IO subsystem configuration that ships with WildFly and have not done much configuration of Undertow. The IO subsystem configuration is an area I still don't fully understand; I asked about it previously in the post Tuning IO Subsystem in WildFly 9.0.2.Final but have not gotten a response.
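
      For context, our IO subsystem really is just the out-of-the-box configuration, which the CLI shows as a single "default" worker and buffer pool; io-threads and task-max-threads are left undefined there, and as far as I understand XNIO then sizes them from the CPU count:

      # Dump the IO subsystem as shipped (just the "default" worker and buffer pool)
      /subsystem=io:read-resource(recursive=true)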

       

      Any information that might help explain what I am seeing, or recommendations on tuning the IO or Undertow subsystems to prevent "too many open files" issues, would be very much appreciated.

        • 1. Re: Lots of Sockets Opened by Default I/O Threads?
          Tomaz Cerar Master

          IO threads are the "listening/accepting" threads, so the sockets correspond to the IO threads that are listening on the server socket for new connections to come in.

          • 2. Re: Lots of Sockets Opened by Default I/O Threads?
            jfisherdev Novice

            Thank you for responding.

             

            I looked around in JMX and found the org.xnio.management.XnioServerMXBean in the org.xnio.Xnio domain that is associated with the "default" worker and the socket binding used by the Undertow HTTP listener, through which essentially all client requests are handled. While there was not an exact one-to-one relationship, the ConnectionCount attribute and the number of sockets opened by the "default" worker IO threads were pretty close.

             

            The issue I'm seeing is that there are lots of idle connections, and thus open file descriptors, being held by the server when there is no application activity. I think this is likely due to how I have the Undertow HTTP listener configured; there are settings I have not defined that may be causing these resources to be retained.

             

            Here are the settings that are currently undefined and I suspect are relevant based on the documentation I could find:

             

            - no-request-timeout

            - read-timeout

            - write-timeout

            - tcp-keep-alive [undefined, which I assume is the same as false]

             

            I'm guessing one or more of the timeout settings are probably the relevant setting(s) for this. I'm not sure what the difference between configuring no-request-timeout versus read-timeout/write-timeout would be, or whether setting tcp-keep-alive to true would have any impact on this. With the timeouts, I'm also not sure what factors would need to be considered when configuring them.
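
            For concreteness, setting these from the CLI would look something like the following; the values are placeholders I made up rather than recommendations, and I believe the timeouts are in milliseconds:

            # Close connections that sit idle without submitting a new request
            /subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=no-request-timeout,value=60000)

            # Enable TCP keep-alive on accepted sockets so dead peers eventually get cleaned up
            /subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=tcp-keep-alive,value=true)

            # Listener attribute changes appear to need a reload to take effect
            :reload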

             

            Some information about the application that might help:

            - About 75% of the application traffic comes from remote standalone clients via remote EJB/JMS calls and 25% from web service requests from other web clients.

            - There are about 700-1000 standalone client applications that may be open at any given time, but most aren't actively being used/making requests at the same time. It looks like a lot of these connections remain established, even though most of them are idle.

             

            I'm not sure if traffic for remote EJB/JMS calls needs to be considered differently, since it's making use of the HTTP upgrade capability and being routed through the Undertow HTTP port.
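
            For what it's worth, the routing I'm describing comes from the default remoting configuration, where the HTTP connector points at the same Undertow listener (I believe the messaging subsystem's HTTP acceptor does the same). This CLI read-back, using the WildFly 9 default names, is how I confirmed it:

            # The remoting HTTP connector is what remote EJB clients connect through;
            # it does an HTTP upgrade on the Undertow "default" listener
            /subsystem=remoting/http-connector=http-remoting-connector:read-attribute(name=connector-ref)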

             

            If you or anyone else has any information or could point me to some helpful resources, I would appreciate that very much.