0 Replies Latest reply on Sep 28, 2018 4:10 AM by Lucas Basquerotto

    Wildfly 13 - File Upload Leak (Undertow)

    Lucas Basquerotto Newbie

      I recently migrated from JBossAS 7.1.1 (Java 7) to WildFly 13 (Java 8).


      Almost everything was fine, but after less than 2 days running, the server stopped working with a lot of "Too many open files" errors.


      I had never seen that error before (as far as I remember).


      Restarting the server solved the issue for the moment, but I wanted to find out what caused it.


      I dug around a bit and found that the Docker container it was running in allowed up to 4096 open files, so, for now, I increased this limit in the WildFly container.
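For reference, this is roughly how the limit can be checked and raised; the container and image names are just placeholders, not the ones from my setup:

```shell
# Show the soft limit on open files for the current shell/container
ulimit -n

# When starting the container, the limit can be raised with Docker's
# --ulimit flag (container/image names below are placeholders):
#   docker run --ulimit nofile=65536:65536 --name wildfly jboss/wildfly
```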


      The error is not happening anymore, because I increased the limit a lot, but I also created a cron job to list the top 10 processes with the most open file descriptors.
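The cron job is essentially a sketch like the following (my own approximation, reading /proc directly; it needs root to see the file descriptors of other users' processes):

```shell
#!/bin/sh
# List the 10 processes with the most open file descriptors.
top_fds() {
    for fd_dir in /proc/[0-9]*/fd; do
        # Extract the pid from the /proc/<pid>/fd path
        pid=${fd_dir#/proc/}
        pid=${pid%/fd}
        # Count fd entries (0 if the directory is not readable)
        count=$(ls "$fd_dir" 2>/dev/null | wc -l)
        name=$(cat "/proc/$pid/comm" 2>/dev/null)
        printf 'pid = %5s with %4s fds: %s\n' "$pid" "$count" "$name"
    done | sort -rn -k5 | head -10
}

top_fds
```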


      Right after a restart, WildFly has about 600; as time goes on the number increases (and sometimes decreases), but comparing one day's results with the day before, it grows by about 1000 open file descriptors per day.


      This is an example of the results after WildFly had been running for 5 days:


      pid =  9570 with 6035 fds: java

      pid =  8766 with  241 fds: node

      pid =  8568 with  235 fds: mysqld

      pid =  9058 with   89 fds: java

      pid =  2033 with   59 fds: dockerd

      pid =  9100 with   34 fds: docker-proxy

      pid =  1953 with   22 fds: fail2ban-server

      pid =  5303 with   21 fds: httpd

      pid =  5270 with   21 fds: httpd

      pid =  5212 with   21 fds: httpd


      The java process related to WildFly has 6035 open file descriptors, and this was at a moment with very few users online (the other java process is Logstash).


      I ran the command ls -l /proc/9570/fd to list the open file descriptors and saw 6045 lines, of which 5312 are similar to the following:


      lr-x------ 1 1000 1000 64 Sep 27 03:20 999 -> /opt/jboss/wildfly/standalone/tmp/app.war/undertow6144806476267093537upload (deleted)
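To track just these leaked entries over time, a small helper like this can count them for a given pid (my own sketch; 9570 is the pid from the listing above):

```shell
#!/bin/sh
# Count file descriptors of a process that still point at deleted
# Undertow multipart upload temp files.
count_upload_leaks() {
    ls -l "/proc/$1/fd" 2>/dev/null | grep -c 'undertow.*upload.*(deleted)'
}

# Example: count the leaked upload fds of the WildFly java process.
count_upload_leaks 9570
```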


      So I think there is a leak in the file upload process (even though I don't get any errors).


      Maybe this is related to the Undertow bug [UNDERTOW-961] "File descriptors leak in MultiPartParserDefinition" in the JBoss Issue Tracker (created 19/Jan/17 7:36 AM, resolved 16/Feb/17 6:02 PM), but I'm not sure (the issue was fixed, but I don't know which Undertow version WildFly 13 uses).


      I don't remember having this issue in JBossAS 7.1.1 (or maybe it was there, but less intense, such that the service was always restarted before the number of open file descriptors reached the limit).


      I haven't changed the file upload process since the migration, and, as I said, I don't get errors during file upload.


      Actually, I did start receiving about 5 to 10 errors per day like:


      java.io.IOException: UT000128: Remote peer closed connection before all data could be read

      at io.undertow.conduits.FixedLengthStreamSourceConduit.exitRead(FixedLengthStreamSourceConduit.java:338)

      at io.undertow.conduits.FixedLengthStreamSourceConduit.read(FixedLengthStreamSourceConduit.java:255)

      at org.xnio.conduits.ConduitStreamSourceChannel.read(ConduitStreamSourceChannel.java:127)

      at io.undertow.channels.DetachableStreamSourceChannel.read(DetachableStreamSourceChannel.java:209)

      at io.undertow.server.HttpServerExchange$ReadDispatchChannel.read(HttpServerExchange.java:2337)

      at org.xnio.channels.Channels.readBlocking(Channels.java:294)

      at io.undertow.servlet.spec.ServletInputStreamImpl.readIntoBuffer(ServletInputStreamImpl.java:192)

      at io.undertow.servlet.spec.ServletInputStreamImpl.read(ServletInputStreamImpl.java:168)

      at io.undertow.server.handlers.form.MultiPartParserDefinition$MultiPartUploadHandler.parseBlocking(MultiPartParserDefinition.java:213)

      at io.undertow.servlet.spec.HttpServletRequestImpl.parseFormData(HttpServletRequestImpl.java:833)


      after migrating, but there are very few of those errors, while the number of open file descriptors increases by about 1000 per day. The error above happens when I call request.getParameter("some_param") in a servlet filter, but very rarely (and I couldn't reproduce it). I think it is not related to the "too many open files" problem.