0 Replies Latest reply on Aug 28, 2017 4:34 PM by jfisherdev

    Tuning IO Subsystem in WildFly 9.0.2.Final

    jfisherdev Novice

      I am working on some performance tuning for a standalone WildFly 9.0.2.Final server, and one area I don't fully understand is how to tune the IO subsystem. I would appreciate some clarification and/or advice on some of these items. I will explain what I have so far and what my understanding is.


      At this time, we are using the out-of-the-box configuration for the IO subsystem, which does seem to be fine, but I'm not sure that it's optimal and I am working to figure that out.


      I understand there are two types of resources to tune in this subsystem--workers and buffer pools--and of the two, the workers seem to be of more interest.


      The default configuration appears to be a single worker, named "default", with these attributes:


      ioThreads=[undefined/no explicit value]

      stackSize=0

      taskCoreThreads=2

      taskKeepalive=60

      taskMaxThreads=[undefined/no explicit value]


      The "task" attributes appear to configure a bounded-queue thread pool, or rather the thread pool this worker creates. I understand that a worker manages both IO threads and task threads. As I understand it, IO threads are for non-blocking requests, and task threads are for handling blocking tasks, such as Servlet requests. From what I have read--mainly the Undertow documentation and the article WildFly performance tuning--it sounds like the values for ioThreads and taskMaxThreads, or at least the recommendations for them, are influenced by the number of CPU cores on the system, while the task thread settings are more influenced by application requirements.
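To make the question concrete, this is the kind of explicit configuration I have in mind, via jboss-cli (the numbers here are purely illustrative, not recommendations):

```
# Purely illustrative values -- not recommendations
/subsystem=io/worker=default:write-attribute(name=io-threads, value=4)
/subsystem=io/worker=default:write-attribute(name=task-max-threads, value=32)
# A reload is needed for the changes to take effect
reload
```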


      Looking at this article Monitoring Undertow thread pools, it sounds like these subsystem values determine the settings for each worker that gets created, correct? One thing I'm not sure about is exactly how the subsystem worker configuration values map to the three settings that appear to carry over to each worker, and what happens when those values aren't defined in the worker configuration.
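For what it's worth, my working assumption (based on the WildFly documentation, not verified against the 9.0.2 source) is that the undefined values are derived from the CPU count, which would also explain the numbers I see for the default worker:

```
# My assumption about the defaults when the attributes are left undefined:
#   io-threads       = cpuCount * 2
#   task-max-threads = cpuCount * 16
# On a 2-core machine that would give 4 IO threads and 32 max task threads,
# which matches what I see for the "default" worker.
# The effective runtime values can be read back with:
/subsystem=io/worker=default:read-resource(include-runtime=true)
```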


      Given the worker configuration above, I see the following three workers at runtime:


      Name                          Core worker pool size   Io thread count   Max worker pool size
      default                       2                       4                 32
      Remoting "$host$:MANAGEMENT"  4                       1                 16
      XNIO-2                        5                       2                 10


      I did experiment with setting these values on the default worker, and the Io thread count and Max worker pool size changed to match, but the Core worker pool size did not, for some reason. The Remoting worker appears to be configured independently in the Remoting subsystem, but I have no idea where the XNIO-2 worker comes from.
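For reference, this is roughly what my experiment looked like in jboss-cli (attribute names as they appear in the management model):

```
# Set the task thread pool values on the default worker
/subsystem=io/worker=default:write-attribute(name=task-core-threads, value=4)
/subsystem=io/worker=default:write-attribute(name=task-max-threads, value=32)
reload
# Read back the worker resource to see whether the changes took effect;
# in my case the task-core-threads change did not appear to be reflected
/subsystem=io/worker=default:read-resource(include-runtime=true)
```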


      My main questions about tuning workers would be:

      - For both IO and task threads, is it better to set the values explicitly, or to leave them undefined and let them be determined automatically, as appears to happen by default? If they are going to be set explicitly, is there something that should be monitored to help determine the right values? My guess for task threads would be monitoring the WorkerQueueSize.

      - Why doesn't my task core threads setting seem to be honored?


      Any information or suggestions on these items would be appreciated.