Recently we had an OOM on JBoss 6. Thread dumps showed more than 6000 of the following threads:
"New I/O client worker #6463-1" daemon prio=10 tid=0x000000010345b000 nid=0x132a runnable [0x00007f71e4686000]
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
We double-checked the properties, and all max thread pool sizes were set to about 30 in hornetq-configuration.xml and hornetq-jms.xml. So we looked through the source of HornetQ 2.2.5, and it seems that most thread pools created in ServerLocatorImpl are either cached or scheduled thread pools, whose factory methods don't take a maxPoolSize argument. As a result, the configured maxPoolSize is in fact passed as the corePoolSize parameter, which effectively acts as a minimum pool size rather than a maximum.
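To illustrate what we mean (this is a minimal sketch with plain java.util.concurrent, not HornetQ code): a cached thread pool has core size 0 and max size Integer.MAX_VALUE, so a configured limit that is never passed to it cannot bound the thread count. Here 50 concurrently blocked tasks grow the pool well past a supposed limit of 30:

```java
import java.util.concurrent.*;

// Sketch: Executors.newCachedThreadPool() is unbounded -- it creates one
// thread per concurrently running task, regardless of any configured limit.
public class CachedPoolGrowth {
    public static int poolSizeAfterBlockingTasks(int tasks) throws InterruptedException {
        ThreadPoolExecutor pool = (ThreadPoolExecutor) Executors.newCachedThreadPool();
        CountDownLatch started = new CountDownLatch(tasks);
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < tasks; i++) {
            pool.execute(() -> {
                started.countDown();
                try { release.await(); } catch (InterruptedException ignored) {}
            });
        }
        started.await();               // all tasks are now blocked concurrently
        int size = pool.getPoolSize(); // one thread per blocked task
        release.countDown();
        pool.shutdown();
        return size;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(poolSizeAfterBlockingTasks(50)); // prints 50
    }
}
```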
We tried to work around this issue via reflection by calling setMaximumPoolSize on the already-created global thread pools, but this doesn't work: these pools use a SynchronousQueue as the task queue, which rejects a task whenever no thread is available. That caused exceptions, and Netty was not able to handle incoming packets.
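A small sketch of why that workaround fails (again plain java.util.concurrent, not the actual HornetQ pools): once the max size is capped, a task submitted while all threads are busy has nowhere to wait in a SynchronousQueue and is rejected immediately instead of being queued:

```java
import java.util.concurrent.*;

// Sketch: capping a cached pool's max size after creation. A SynchronousQueue
// has no capacity, so when the (capped) pool is saturated, execute() throws
// RejectedExecutionException rather than queueing the task.
public class CappedCachedPool {
    public static boolean isRejectedWhenCapped() throws InterruptedException {
        ThreadPoolExecutor pool = (ThreadPoolExecutor) Executors.newCachedThreadPool();
        pool.setMaximumPoolSize(1);   // what we effectively set via reflection
        CountDownLatch release = new CountDownLatch(1);
        pool.execute(() -> {          // occupies the single allowed thread
            try { release.await(); } catch (InterruptedException ignored) {}
        });
        boolean rejected = false;
        try {
            pool.execute(() -> {});   // no free thread, no queue capacity
        } catch (RejectedExecutionException e) {
            rejected = true;
        }
        release.countDown();
        pool.shutdown();
        return rejected;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(isRejectedWhenCapped()); // prints true
    }
}
```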
It seems that a fixed-size thread pool should simply be used whenever maxPoolSize is set.
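Something along these lines is what we have in mind (a sketch, not a patch against the HornetQ source): a bounded executor backed by an unbounded LinkedBlockingQueue, so excess tasks wait in the queue instead of being rejected, and the thread count can never exceed the configured limit:

```java
import java.util.concurrent.*;

// Sketch: a pool honoring maxPoolSize. Equivalent in thread count to
// Executors.newFixedThreadPool(maxPoolSize), but with idle-thread timeout.
public class BoundedPool {
    public static ThreadPoolExecutor boundedPool(int maxPoolSize) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                maxPoolSize, maxPoolSize,
                60, TimeUnit.SECONDS,              // let idle threads die off
                new LinkedBlockingQueue<Runnable>());
        pool.allowCoreThreadTimeOut(true);
        return pool;
    }

    // Submit more blocking tasks than maxPoolSize: the pool stays at the limit.
    public static int poolSizeUnderLoad(int maxPoolSize, int tasks) {
        ThreadPoolExecutor pool = boundedPool(maxPoolSize);
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < tasks; i++) {
            pool.execute(() -> {
                try { release.await(); } catch (InterruptedException ignored) {}
            });
        }
        int size = pool.getPoolSize(); // == maxPoolSize; extra tasks are queued
        release.countDown();
        pool.shutdown();
        return size;
    }

    public static void main(String[] args) {
        System.out.println(poolSizeUnderLoad(30, 50)); // prints 30
    }
}
```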
Meanwhile, we don't know what caused so many threads to be started. We don't have logs from that incident; perhaps too many events arrived at once (after a failover, maybe). Can you suggest some other way of avoiding this issue? Setting maxConsumerRate, maybe?