You did not say which machine goes over 80% CPU usage. Both of them? If so, I suspect that you are missing an index on one of your tables. Examining the database statistics (using a tool such as ptop, or the query sketched below) should tell you which table is the problem.
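If you want to check that from code rather than a monitoring tool, here is a minimal sketch assuming a standard PostgreSQL JDBC setup (the connection URL, user and password are placeholders): it lists your tables ordered by sequential scan count, which is usually where a missing index shows up.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SeqScanCheck {
        public static void main(String[] args) throws Exception {
            Class.forName("org.postgresql.Driver"); // PostgreSQL JDBC driver must be on the classpath
            // Placeholder URL and credentials -- point this at your database.
            Connection con = DriverManager.getConnection(
                    "jdbc:postgresql://dbhost:5432/mydb", "user", "password");
            Statement st = con.createStatement();
            // Tables with many sequential scans and few index scans are the
            // usual suspects for a missing index.
            ResultSet rs = st.executeQuery(
                    "SELECT relname, seq_scan, idx_scan "
                    + "FROM pg_stat_user_tables ORDER BY seq_scan DESC");
            while (rs.next()) {
                System.out.println(rs.getString("relname")
                        + "  seq_scan=" + rs.getLong("seq_scan")
                        + "  idx_scan=" + rs.getLong("idx_scan"));
            }
            con.close();
        }
    }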
I'm sorry, it's the machine with JBoss; PostgreSQL is almost idle. I believe all the important indexes are already in place, and I have also reduced EJB usage. Earlier I had a problem with "no more connections available", but that was sorted out by replacing EJBs with DAOs in most places. Now this.
It's really surprising: one machine (4x Xeon Dual Core 2.6 GHz and 8 GB RAM, with JBoss using only 1.4 GB) handles JBoss and PostgreSQL together better than leaving PostgreSQL on that machine and moving JBoss to a 2x Xeon Dual Core 3 GHz box with 3.75 GB RAM.
If you have a profiler, you can attach it to the JBoss AS instance to see what it is doing. Or you could take multiple JVM thread dumps (for example with jstack, or kill -3 / Ctrl-Break depending on the platform, or programmatically as sketched below) and compare what the threads are doing; that will sometimes point out code hot spots.
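A minimal sketch of the programmatic route, assuming Java 6+ and that you can run a bit of code inside the JBoss JVM (for example from a small test servlet you add yourself): the standard java.lang.management API prints every thread's name, state and stack.

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class ThreadDumper {
        // Prints a thread dump of the current JVM to stdout.
        public static void dump() {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            // true, true = also report locked monitors and synchronizers
            for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
                System.out.println("\"" + info.getThreadName() + "\" " + info.getThreadState());
                for (StackTraceElement frame : info.getStackTrace()) {
                    System.out.println("    at " + frame);
                }
                System.out.println();
            }
        }
    }

Taking a few dumps several seconds apart while JMeter is running and comparing them is usually more telling than a single snapshot: the stacks that keep showing up are your hot spots.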
Also, have you ruled out garbage collection as the issue? Have you gathered GC data (using -verbose:gc is usually sufficient; an example set of options is below) and analyzed it?
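As a sketch, the usual HotSpot options could be added to the arguments JBoss is started with (for JBoss AS that typically means JAVA_OPTS in bin/run.conf or run.bat; the log file path below is only an example):

    JAVA_OPTS="$JAVA_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/var/log/jboss/gc.log"

Long or very frequent collections in that log would point at GC rather than your code.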
Hi, I have taken a thread dump when JBoss became unresponsive (I used JMeter to simulate the load) and I have plenty of these:
13:23:26"http-0.0.0.0-8100-9" daemon prio=6 tid=0x604a1800 nid=0x1db4 in Object.wait() [
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x1f9f96a8> (a org.apache.tomcat.util.net.MasterSlaveWorke
- locked <0x1f9f96a8> (a org.apache.tomcat.util.net.MasterSlaveWorkerThr
I guess I have to use a remote debugger to find out why the threads start to wait and possibly block each other?
These threads are expected: they are idle worker threads waiting for HTTP input to process. There will always be several of these waiting around.
What you need to look for is the threads that are actually doing work, typically in the RUNNABLE state somewhere down in your application code, or BLOCKED threads waiting on each other's locks, as in the sketch below.
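To pick those out programmatically, a small variation of the ThreadMXBean sketch above could filter for RUNNABLE threads (actually burning CPU) and BLOCKED threads (stuck on a monitor another thread holds):

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class BusyThreads {
        public static void main(String[] args) {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
                Thread.State state = info.getThreadState();
                if (state == Thread.State.RUNNABLE || state == Thread.State.BLOCKED) {
                    String lock = info.getLockName();
                    System.out.println("\"" + info.getThreadName() + "\" " + state
                            + (lock != null ? " on " + lock : ""));
                    for (StackTraceElement frame : info.getStackTrace()) {
                        System.out.println("    at " + frame);
                    }
                }
            }
        }
    }

In a jstack or kill -3 dump the same information is in the Thread.State line of each stanza; the idle Tomcat workers you pasted show WAITING, which is harmless.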