Hello and good day,
I am experiencing a performance problem when making HTTP calls from one servlet to another within the same WildFly instance. In the scenario described below, WildFly 8.2 and 9.0 Beta 2 appear to near-deadlock under a load of about 120 concurrent users; it occurs earlier in WildFly 8.0. A thread dump (attached) does not reveal any blocked threads, so it may not be a literal deadlock, but the term is a useful description of the behavior. The JMeter output below also provides a graphical description.
I have reproduced the behavior on 8.0, 8.2, and 9.0 Beta 2. In all cases I used a 100% clean, freshly unzipped WildFly package with the standalone.xml configuration, unchanged. I have also reproduced the same results on Windows 2008 R2, Windows 2012 R2, and CentOS 6.2, all with JDK 1.7.0_75 x64.
The same scenario runs as I would expect under default configurations of Tomcat 7 or JBoss 6.2: performance gradually degrades as load is ramped above 400 users, rather than the hard failure seen in WildFly.
I have extracted the essence of our real-world architecture into an extremely simple spike (attached), one without any significant consumption of CPU, memory, etc. It consists of three servlets in two Maven projects:
- slow-service (SlowServlet)
- This servlet waits X milliseconds (50 ms in all scenarios) and then returns a timestamp.
- In all scenarios, this WAR ran in its own, isolated Tomcat container. I did not want the call to Thread.sleep(…) running in the same container/JRE as the two servlets below, as I felt it could artificially influence the test.
- chained-service (InnerServlet)
- This servlet does nothing but use Apache HttpComponents to call SlowServlet and echo its response.
- chained-service (ChainedServlet)
- This servlet does nothing but use Apache HttpComponents to call InnerServlet and echo its response.
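For context, the chained-call pattern above can be sketched in plain Java. This is not the attached spike: it uses the JDK's built-in com.sun.net.httpserver and HttpURLConnection in place of the real servlet containers and Apache HttpComponents, and all class names, ports, and paths here are illustrative. It only shows the request chain (caller -> inner -> slow-with-50ms-delay) that the three servlets implement:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class ChainDemo {

    // Stand-in for the HttpComponents call the real servlets make.
    static String fetch(String url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            return r.readLine();
        }
    }

    public static void main(String[] args) throws Exception {
        // "SlowServlet": sleep 50 ms, then return a timestamp.
        HttpServer slow = HttpServer.create(new InetSocketAddress(0), 0);
        slow.createContext("/slow", ex -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) { }
            byte[] body = String.valueOf(System.currentTimeMillis()).getBytes();
            ex.sendResponseHeaders(200, body.length);
            ex.getResponseBody().write(body);
            ex.close();
        });
        slow.start();
        int slowPort = slow.getAddress().getPort();

        // "InnerServlet": call SlowServlet over HTTP and echo its response.
        HttpServer inner = HttpServer.create(new InetSocketAddress(0), 0);
        inner.createContext("/inner", ex -> {
            byte[] body = fetch("http://localhost:" + slowPort + "/slow").getBytes();
            ex.sendResponseHeaders(200, body.length);
            ex.getResponseBody().write(body);
            ex.close();
        });
        inner.start();

        // "ChainedServlet" role: call InnerServlet and check the echoed timestamp.
        String result = fetch("http://localhost:" + inner.getAddress().getPort() + "/inner");
        System.out.println(result.matches("\\d+"));

        slow.stop(0);
        inner.stop(0);
    }
}
```

In the real spike, InnerServlet and ChainedServlet both run in the same WildFly instance, which is where the near-deadlock appears; this standalone sketch merely illustrates the call chain.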
I can't tell whether this is a bug in WildFly per se, or whether some configuration is needed on my part to enable this type of usage. If the latter, it's unclear what I would need to configure.
I look forward to any insight the community may have.
Thanks in advance,