You can use the default web connector pool, which is generally the best pool to use; it does not queue requests or reclaim idle threads. This is the pool you get when you do not define any executor.
Thanks for the reply, Priyanka.
I changed standalone.xml in JBoss 7.1 to remove the custom thread pool executor and thus fall back to the default one.
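For context, the executor wiring we removed looked roughly like the sketch below (the names and counts are illustrative, not our exact configuration). In JBoss AS 7.1 an executor is defined in the threads subsystem and referenced from the web connector; deleting the `executor` attribute makes the connector fall back to the default pool:

```xml
<!-- threads subsystem: a bounded thread pool (illustrative values) -->
<subsystem xmlns="urn:jboss:domain:threads:1.1">
    <bounded-queue-thread-pool name="http-executor">
        <core-threads count="128"/>
        <queue-length count="1024"/>
        <max-threads count="512"/>
        <keepalive-time time="60" unit="seconds"/>
    </bounded-queue-thread-pool>
</subsystem>

<!-- web subsystem: the connector references the executor by name -->
<subsystem xmlns="urn:jboss:domain:web:1.1" default-virtual-server="default-host">
    <connector name="http" protocol="HTTP/1.1" scheme="http"
               socket-binding="http" executor="http-executor"/>
</subsystem>
```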
However, repeated tests showed that performance actually degraded and the Load Balancer rejected roughly twice as many requests.
On the other hand, raising the max thread count from 512 to a higher value such as 2000 gave better throughput and latency.
However, we are not sure whether 2000 is too high and might eventually crash JBoss because of too many open sockets.
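One way to express such a limit in standalone.xml, assuming the default connector pool is in use (i.e. no `executor` attribute), is the connector's `max-connections` attribute; this is a sketch, not our exact file:

```xml
<!-- For the default (blocking) HTTP connector, max-connections effectively
     caps the number of concurrent worker threads / open sockets. -->
<connector name="http" protocol="HTTP/1.1" scheme="http"
           socket-binding="http" max-connections="2000"/>
```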
Below are some additional details of our server:
1> Server size: c4.4xlarge with 30 GB of memory, hosted on Amazon EC2, behind an Amazon Elastic Load Balancer.
2> JBoss standalone.conf (we realized that PermGen is not used much, hence it is set low):
JAVA_OPTS="-server -Xms25600M -Xmx25600M \
    -XX:PermSize=1536M -XX:MaxPermSize=1536M -XX:NewSize=16384M \
    -XX:+AggressiveHeap -XX:+UseParallelGC \
    -XX:ReservedCodeCacheSize=256m -XX:+UseCodeCacheFlushing \
    -Djava.net.preferIPv4Stack=true -Dorg.jboss.resolver.warning=true \
    -Dsun.rmi.dgc.client.gcInterval=1800000 -Dsun.rmi.dgc.server.gcInterval=1800000"
3> OS-level settings (ulimit -a):
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 118880
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 65535
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 118880
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
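Regarding the "too many sockets" worry: each connection holds a file descriptor, and our limit above is 65535 open files. A small sketch to watch a process's descriptor usage against its own limit (assuming a Linux host; the script name and defaulting to the current shell's PID are illustrative):

```shell
#!/bin/sh
# fdcheck.sh: report open file descriptors vs the per-process limit.
# Usage: ./fdcheck.sh <pid>   (defaults to this shell's own PID)
PID="${1:-$$}"

# Every entry in /proc/<pid>/fd is one open descriptor (sockets included).
OPEN=$(ls "/proc/$PID/fd" | wc -l)

# Fourth field of the "Max open files" row is the soft limit.
LIMIT=$(awk '/Max open files/ {print $4}' "/proc/$PID/limits")

echo "pid $PID: $OPEN open descriptors (limit $LIMIT)"
```

Running this against the JBoss JVM's PID under peak load would show how close 2000 worker threads actually get to the 65535 ceiling.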
Can you please suggest any other options for JBoss to accept more requests per second (especially since, in our case, we get spikes at the 0th and 30th minute)?