The maxThreads is more than sufficient - probably even 100 would handle that load. (I assume by "concurrent users" you mean "signed-in users", in which case you can usually assume that at most 20% will be submitting requests at the same time.)
What is the database pool size in your *-ds.xml file? How many connections does your app require for each user's request? The answers to that will tell you what the database connection pool size should be.
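For reference, the pool size is set in the datasource descriptor. A minimal sketch of a *-ds.xml (the JNDI name, connection URL, driver, and pool values below are all illustrative, not taken from your setup):

```xml
<datasources>
  <local-tx-datasource>
    <jndi-name>MyDS</jndi-name>
    <connection-url>jdbc:oracle:thin:@dbhost:1521:ORCL</connection-url>
    <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
    <!-- the pool bounds are what matter for this discussion -->
    <min-pool-size>10</min-pool-size>
    <max-pool-size>100</max-pool-size>
  </local-tx-datasource>
</datasources>
```

As a rough rule of thumb, max-pool-size should be at least as large as the number of request threads that can need a connection at the same time.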
You can monitor the usage via the MBeans in the jmx-console; doing so should help you set the values correctly. You could even use Jopr to monitor this data.
Thanks for the reply.
In our -ds.xml we have the following DB pool size.
We have one connection per request.
Is that sufficient?
Another question I have is how many threads we should set in the BasicThreadPool.
Is it correlated with maxThreads in server.xml?
If each user request needs a database connection, then no, 25 connections will not be enough.
As far as I know, the BasicThreadPool is for use by services (for example, messaging might make use of it, though messaging could have its own thread pool, or EJBs invoked by remote clients could run on threads from that pool), not for web applications. The thread pool defined in server.xml is used for web apps (servlets/JSPs), and thus also for any EJBs that the web apps call.
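The web thread pool mentioned above is configured on the HTTP connector element in server.xml (on many 4.x installs it lives under deploy/jboss-web.deployer/server.xml, but the path varies by version). A sketch, with every value illustrative rather than recommended:

```xml
<!-- HTTP connector; all attribute values below are illustrative -->
<Connector port="8080" address="${jboss.bind.address}"
           protocol="HTTP/1.1" enableLookups="false"
           maxThreads="250" minSpareThreads="25" maxSpareThreads="75"
           acceptCount="100" connectionTimeout="20000" />
```

maxThreads caps how many requests are processed concurrently; acceptCount is the backlog of connections allowed to queue when all threads are busy.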
This clarifies a lot.
Another thing: how can we control CPU utilization?
Will increasing or decreasing the connector threads affect CPU utilization?
Because you are running Solaris, it might have a way to limit the amount of processor utilized by a particular process. I know that Windows has no such capability and I don't think that Linux does either. There is no way to limit this in the JVM itself.
Yes, you could decrease the available HTTP threads (and increase the HTTP queue size) and that might reduce the CPU utilization. But even a few threads can use large amounts of the CPU if a request is very processor intensive and takes a long time to run.
Do you mean by defining minSpareThreads/maxSpareThreads, or is it acceptCount in the HTTP connector?
Earlier acceptCount was 100; I have increased it to 150. However, CPU utilization was still around 90% during peak load.
It was not constant at 90%; it was going up and down between 60% and 90%.
Is that normal? Should we worry about it?
I do not know if that is normal; it depends on too many factors. You could have a bottleneck somewhere, in which case you need to profile the system to see where the time is being spent.
Increasing acceptCount helps only if your users are getting "server too busy" errors. Have you monitored the accept queue to see how many requests are waiting for threads? Or have you monitored the number of threads in use?
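If you want to script that monitoring instead of clicking through the jmx-console, something like the sketch below can read the connector thread-pool attributes over JMX. The MBean name (jboss.web:type=ThreadPool,name=http-0.0.0.0-8080) and the attribute names are assumptions; check the jmx-console on your install for the exact names. Run standalone (outside JBoss) it simply reports that the MBean is absent.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class ThreadPoolProbe {
    public static void main(String[] args) throws Exception {
        // In-process lookup; to reach a remote JBoss instance you would
        // connect through its JMX/RMI adaptor instead (not shown here).
        MBeanServerConnection mbs = ManagementFactory.getPlatformMBeanServer();

        // Assumed MBean name for the JBossWeb HTTP connector's pool;
        // verify it against your jmx-console before relying on it.
        ObjectName pool =
            new ObjectName("jboss.web:type=ThreadPool,name=http-0.0.0.0-8080");

        if (mbs.isRegistered(pool)) {
            System.out.println("busy=" + mbs.getAttribute(pool, "currentThreadsBusy")
                    + " max=" + mbs.getAttribute(pool, "maxThreads"));
        } else {
            System.out.println("ThreadPool MBean not registered in this JVM");
        }
    }
}
```

Polling currentThreadsBusy against maxThreads during a load test tells you whether requests are actually waiting for threads.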
Try monitoring with Jopr ([url]http://www.jboss.org/jopr[/url]); that might help with locating potential bottlenecks.
I have just downloaded Jopr, and I am in the process of installing it.
I will monitor it with Jopr as well. I also suspect the DB side.
Once I look at it in Jopr, I think I will be able to nail it down.
Thanks for your help.
I just got the following exception during a load test:
2009-04-02 16:55:01,520 ERROR [org.apache.tomcat.util.net.JIoEndpoint] Socket accept failed
java.net.SocketException: Too many open files
        at java.net.PlainSocketImpl.socketAccept(Native Method)
        at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:384)
        at java.net.ServerSocket.implAccept(ServerSocket.java:450)
        at java.net.ServerSocket.accept(ServerSocket.java:421)
        at org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:61)
        at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:310)
        at java.lang.Thread.run(Thread.java:595)
2009-04-02 16:55:01,520 ERROR [org.apache.tomcat.util.net.JIoEndpoint] Socket accept failed
What is wrong with JBoss here?
I found that it was an OS problem; we increased the FD limit and it is working now.
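For anyone hitting the same error: on Solaris the per-process file descriptor limits can be raised system-wide in /etc/system (a reboot is required for the change to take effect), or per-shell with `ulimit -n` before starting JBoss. A sketch; the values below are illustrative, not a recommendation:

```
* /etc/system -- illustrative file descriptor limits
set rlim_fd_cur=4096
set rlim_fd_max=8192
```

Under heavy load, each queued or in-flight HTTP connection, JDBC connection, and open file consumes a descriptor, so the default limit can be exhausted well before the thread pool is.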