We are quite confident using HornetQ as the main distribution point for device messages in our access control system. Due to hardware limitations we use the paging feature to reduce the memory footprint of HornetQ on production systems. Just for completeness, these are our current address settings:
<address-settings>
   <!-- default for catch all -->
   <address-setting match="#">
      <dead-letter-address>jms.queue.DLQ</dead-letter-address>
      <expiry-address>jms.queue.ExpiryQueue</expiry-address>
      <max-delivery-attempts>-1</max-delivery-attempts>
      <redelivery-delay>0</redelivery-delay>
      <!-- 1 MB max memory per address before starting to page. This value has to be
           adjusted according to the estimated number of controllers. Each controller
           is going to have its own address and queue, so the memory impact will be
           max-size-bytes * controllers. -->
      <max-size-bytes>1048576</max-size-bytes>
      <!-- 0.5 MB per page file on disk. Must be smaller than or equal to max-size-bytes. -->
      <page-size-bytes>524288</page-size-bytes>
      <message-counter-history-day-limit>10</message-counter-history-day-limit>
      <address-full-policy>PAGE</address-full-policy>
   </address-setting>
</address-settings>
We have a new customer who currently has about 1000 devices in his systems. Each device owns its own queue with the above settings, which implies a memory impact of 1000 * 1 MByte = 1 GByte. This customer is still in the installation phase, so most of the devices are not connected yet. Therefore the queues of these devices grow with each message committed for them. This happens by design, since no messages are allowed to get lost. The problem is that the longer the HornetQ server runs, the larger the heap consumption gets. I have monitored this for quite some time now: when the server starts up, the initial memory consumption is about 2.5 GByte (with 95% used by the old generation), but after 6 or 7 days the total consumption grows to about 3.7 GByte (again with 95% in the old generation). Since the server is limited to 4 GByte, this forces a restart every week.
Since we do not see the number of sessions or connections increasing, the memory must be consumed elsewhere. Today I became aware of the page-max-cache-size setting, but I do not quite understand its impact.
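If it is a per-address setting, I would expect it to sit next to the other paging elements in the address-setting block, something like the fragment below. The element name and placement here are my assumption from what I have read, not something we have tested:

```xml
<address-setting match="#">
   <!-- existing paging settings as above -->
   <max-size-bytes>1048576</max-size-bytes>
   <page-size-bytes>524288</page-size-bytes>
   <!-- assumed: number of page files kept cached in memory per address; default 5 -->
   <page-max-cache-size>5</page-max-cache-size>
   <address-full-policy>PAGE</address-full-policy>
</address-setting>
```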
Is this a per-address setting? It defaults to 5 pages being cached, so with my settings the total consumption could grow to 1000 * (1 MByte + 5 * 0.5 MByte) = 3.5 GByte. That would certainly explain the problem, and we could decrease the number of cached pages to reduce memory usage. But what would be the performance penalty of doing so?
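To double-check that arithmetic, here is a small back-of-the-envelope sketch. It assumes the page cache really is applied per address and that cached pages plus the in-memory portion are both held on the heap at once; the actual HornetQ internals may differ:

```python
# Back-of-the-envelope worst-case heap model, assuming page-max-cache-size
# is applied per address (an assumption, not verified against HornetQ source).

ADDRESSES = 1000                 # one address/queue per device
MAX_SIZE_BYTES = 1_048_576       # 1 MiB of in-memory messages before paging starts
PAGE_SIZE_BYTES = 524_288        # 0.5 MiB per page file on disk
PAGE_MAX_CACHE_SIZE = 5          # assumed default number of cached pages

def worst_case_heap(addresses, cached_pages):
    """Bytes held on the heap if every address is full and its page cache is warm."""
    per_address = MAX_SIZE_BYTES + cached_pages * PAGE_SIZE_BYTES
    return addresses * per_address

total = worst_case_heap(ADDRESSES, PAGE_MAX_CACHE_SIZE)
print(round(total / 2**30, 2))   # -> 3.42 (GiB), close to the observed growth

# Dropping the cache to a single page would cap it much lower:
print(round(worst_case_heap(ADDRESSES, 1) / 2**30, 2))   # -> 1.46 (GiB)
```

Under this model the default cache alone accounts for 1000 * 5 * 0.5 MiB = 2.5 GiB, which matches the gap we see between the ~1 GiB we budgeted for and the ~3.5 GiB the server actually reaches.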
Best regards to all out there