I'm not sure about the virtual memory. Can you just analyze -verbose:gc?
Yes, we have that enabled and are analyzing.
Strangely, the Java VM heap behaves quite normally, returning to expected levels at each full GC.
The virtual memory on the Linux server, however, steadily creeps up to very large levels.
Our production environment appears to behave correctly too, but I have only been able to run very simple tests alongside our live machines on it.
I think I have found it.
We are not setting message expiry explicitly for these messages, and our thread-pool-max-size is the default of -1 (unbounded).
This means that new threads are constantly being created.
Since each thread can have around 700 KB of (virtual) OS memory allocated, that accounts for the steady drip of virtual memory consumption.
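To put a rough number on it, here is a back-of-the-envelope sketch only; the 700 KB figure is the one quoted above, the churn rate is a made-up assumption, and the real per-thread reservation depends on -Xss and the OS:

```java
public class ThreadMemoryEstimate {
    public static void main(String[] args) {
        long perThreadKb = 700;       // per-thread virtual reservation quoted above
        long threadsPerDay = 10_000;  // hypothetical churn rate, purely illustrative
        long dailyGrowthMb = perThreadKb * threadsPerDay / 1024;
        // ~6835 MB/day at these numbers: enough to explain a steady climb
        System.out.printf("~%d MB of virtual memory per day at %d new threads/day%n",
                dailyGrowthMb, threadsPerDay);
    }
}
```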
We are also looking at possible leaks on STOMP. You are probably good to go now, but we will keep investigating.
I would appreciate any extra info you may have.
I can confirm Paul's observation. We have brokers eating up all of a server's (virtual) memory over time.
I've just started a test setup with one broker and 10 clients connected. Every time I start and stop a client, roughly one new thread is created at the broker.
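To quantify that, one can run jstack against the broker PID and count thread names, or poll the JVM's thread counters over JMX. A minimal sketch of the latter (the polling loop and printout are my own; ThreadMXBean itself is standard JDK):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadWatch {
    public static void main(String[] args) throws InterruptedException {
        // Note: run inside (or attached to) the broker JVM; standalone it
        // only measures its own process.
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        while (true) {
            // If the broker never reclaims per-connection threads, the live
            // count rises by roughly one per client start/stop cycle.
            System.out.printf("live=%d peak=%d totalStarted=%d%n",
                    mx.getThreadCount(), mx.getPeakThreadCount(),
                    mx.getTotalStartedThreadCount());
            Thread.sleep(10_000);
        }
    }
}
```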
So far I have the "HornetQ-server-threads" under control by setting the server's thread-pool-max-size, though the documentation does not suggest doing so. (Yes, I know: internally, an Executors.newCachedThreadPool with default core size, unbounded maximum, and a 60-second keep-alive is used when the max-size is set to -1. Here, the pool was bounded for testing purposes.)
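For reference, here is what the two settings amount to in plain java.util.concurrent terms (a sketch based on the JDK documentation, not HornetQ's actual wiring; the bounded variant is just one reasonable way to cap a pool):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolBounds {
    // thread-pool-max-size = -1: a cached pool, i.e. zero core threads,
    // Integer.MAX_VALUE maximum, 60 s keep-alive. Every burst of work
    // can spawn a brand-new thread.
    static ExecutorService unbounded() {
        return Executors.newCachedThreadPool();
    }

    // A positive thread-pool-max-size behaves more like this: at most
    // maxThreads threads, with excess work queueing up instead of
    // spawning ever more threads.
    static ExecutorService bounded(int maxThreads) {
        ThreadPoolExecutor tpe = new ThreadPoolExecutor(maxThreads, maxThreads,
                60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        tpe.allowCoreThreadTimeOut(true); // let idle threads die after 60 s
        return tpe;
    }
}
```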
But the initial problem persists: a group of "pool-XX-thread-1" threads gets created alongside the other threads whenever a client connects, and those are never closed.
My question is: where do the "pool-XX-thread-1" threads come from? It appears as if new pools are being created, not just new threads: the XX is going through the roof while the "thread-1" part remains. So who, i.e. which HornetQComponent, creates these pools?
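As far as I can tell, that naming pattern comes from the JDK's Executors.defaultThreadFactory, not from anything HornetQ-specific: the "pool-XX" counter is global across the JVM and increments once per factory (i.e. per pool), while "thread-N" counts within a pool. Many distinct "pool-XX-thread-1" threads would therefore mean many single-thread pools being created and never shut down. A minimal demonstration:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolNaming {
    public static void main(String[] args) {
        // Prints pool-1-thread-1, pool-2-thread-1, pool-3-thread-1: the
        // global pool counter climbs, the per-pool thread counter stays at 1.
        for (int i = 0; i < 3; i++) {
            ExecutorService es = Executors.newSingleThreadExecutor();
            es.submit(() -> System.out.println(Thread.currentThread().getName()));
            es.shutdown(); // without this, each pool's thread lingers forever
        }
    }
}
```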
The default in 2.2 is 30; it's not unbounded.
You have pools at the client and at the server, as documented.
The only other executor created is through AIO, but it doesn't grow; it stays constant for the AIO poller and TimedBuffer checks.
I monitored the broker's behaviour overnight: all threads except the "pool-XX" threads are at expected levels. There are as many pools as there were when I left the broker yesterday.
We're running 2.1.2. Let me upgrade to 2.2 and run the checks again.