Thank you, I'm looking at the other thread now.
I'm trying no-request-timeout="60000" to see whether established connections still grow anyway.
Before this new config I had seen established connections from IPs that requested pages (over HTTP) many hours earlier, so I suspect that these connections fill the workers and WildFly stops responding. But it is only a hypothesis.
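For reference, a setting like this can be applied with jboss-cli; the resource paths below come from the default standalone configuration, so adjust them to your listener name:

```shell
# jboss-cli.sh --connect, then:
/subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=no-request-timeout,value=60000)
:reload
```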
The no-request-timeout setting didn't solve the problem.
When WildFly froze I saw only 23 established connections on port 80 and 3 on port 443.
To be clear, the problem wasn't DNS, because a simple HTTP request directly to the IP got no response.
telnet to port 80 established a connection, but after pressing Ctrl+C telnet froze until I restarted WildFly.
I took a jstack thread dump while WildFly was frozen, but I can't see anything strange in it.
Is there a way to get the number of currently active Undertow worker I/O threads?
Is it possible to restart only Undertow?
Any other ideas for finding the problem?
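On the first question: the IO subsystem can be inspected at runtime via jboss-cli. Whether per-worker runtime thread statistics appear depends on the WildFly version; on 8.x this may only show the configured values:

```shell
# worker name "default" is from the default standalone config
/subsystem=io/worker=default:read-resource(include-runtime=true)
```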
What does your memory usage look like? If your server runs out of memory it can cause a freeze.
Thank you for your suggestion.
I will monitor the memory usage closely and then return to the forum with news.
But if the server ran out of memory, wouldn't I see an OutOfMemoryError in server.log?
If so, I have never seen an OutOfMemoryError there.
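Note that an OutOfMemoryError thrown on a worker thread is not always written to server.log. One way to be certain is to have the JVM dump the heap when it happens, e.g. in bin/standalone.conf (the dump path here is just an example):

```shell
# bin/standalone.conf -- write a heap dump if an OutOfMemoryError occurs
JAVA_OPTS="$JAVA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp"
```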
OK, now I have the JVM runtime memory details.
When the freeze happened:
Used memory: 133 MB
Free memory: 141 MB
Max memory: 494 MB
The total RAM of the server is 1 GB.
I could upgrade the server's memory to 2 GB or more, but first I need to identify the problem.
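A max of roughly 494 MB matches the -Xmx512m default shipped in bin/standalone.conf for WildFly 8, so the heap could also be raised without adding RAM. The values below are only an illustrative sketch, not a recommendation:

```shell
# bin/standalone.conf -- example heap sizing for a 1 GB box, adjust to your workload
JAVA_OPTS="-Xms256m -Xmx768m -XX:MaxPermSize=256m $JAVA_OPTS"
```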
Have you tried VisualVM? Maybe it can help you identify the problem.
We are also seeing the same issue on WildFly 8.2.0.Final. We have set the read-timeout as well as the no-request-timeout parameter. Is there any other parameter that needs to be configured on the HTTPS listener of Undertow? Do we need to tune any IO subsystem parameters?
Can we get any info from VisualVM for debugging this issue?
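If the server has no GUI for VisualVM, the JDK's own command-line tools expose similar data; replace <pid> with the WildFly process id:

```shell
jstat -gcutil <pid> 5000    # heap/GC occupancy, sampled every 5 seconds
jstack <pid> > threads.txt  # thread dump, the same data as the jstack dumps mentioned above
jmap -histo <pid> | head    # object counts on the heap, largest first
```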
Can someone please explain how the following scenario is expected to work?
Let's say an HttpServerConnection is created for an incoming HTTP request. The channel for this connection has a read timeout of 30 minutes, and the HttpReadListener has a no-request-timeout of 30 minutes. A response is created for the HTTP request and is read by the client. However, the client never sends a FIN packet, so the ExchangeCompletionListener is not triggered and the request limit counter is not decremented. In this case the read timeout will not fire, since there was a successful read (I hope my understanding is correct). Because the connection receives no further requests, after some time the HTTP server connection is closed due to no-request-timeout. However, when this happens the ExchangeCompletionListener is again not triggered, thereby leaving the request limit counter undecremented.
This eventually leads to queued requests and 513 errors.
Can someone please let me know whether this can be handled by an Undertow/XNIO configuration parameter? Why is the request limit not decremented in this case? Is this the expected behaviour?
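The suspected half-closed state can be reproduced by hand with a raw connection that sends one request and then never sends a FIN; hostname and port here are placeholders:

```shell
# send one keep-alive request, then hold the socket open so no FIN is ever sent
( printf 'GET / HTTP/1.1\r\nHost: myserver\r\nConnection: keep-alive\r\n\r\n'; sleep 3600 ) | nc myserver 80
```

If the request limit counter really is leaked, repeating this up to the configured limit should start producing the 513 errors you describe, even after no-request-timeout has closed the connections.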