You can try 1.4.6.GA.SP1.
However, I don't understand the 'stuck' issue you describe. Do you mean that if you stop sending messages to the queue, all messages in the queue are eventually delivered? If so, that's not stuck; it could mean your message sending is faster than the receiving end.
No, I mean that messages get stuck if the queue remains inactive for a few hours.
The messages that get stuck are only the first, sometimes only the first two, sometimes the first four, but they are only a few messages that remain in the 'delivering count' state.
All subsequent messages get delivered correctly.
These stuck messages get delivered only if I restart the application server.
By the way, I tried with 1.4.5.GA and the situation remained the same.
The server and the queue had been inactive all night. This morning the queue delivered correctly, but it has now failed to deliver a few messages after being inactive for an hour.
I see no errors in the log (jboss.messaging log level = debug).
Are you still having the same issues, or have you found a fix/workaround for this yet?
We believe we are experiencing the same or similar issues.
We are using JBoss 5.0.0.GA with JBoss Messaging 1.4.6.GA.
We too have not managed to find any other solution from these forums.
Any help would be appreciated.
Yes, the same issue is still present at the moment!
The one and only workaround we have set up is a new process that periodically sends a kind of 'fake message' to the queue.
Concretely, this is a new batch process that every 15 minutes sends a JMS message, which the queue delivers correctly and the application then discards.
This is only a workaround and, even if we are a little pessimistic, we hope to find a different solution, since this one adds network traffic and some CPU processing.
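For anyone wanting to try the same workaround, it can be sketched roughly like this. This is a minimal, hypothetical illustration, not the poster's actual code: the class name, the interval, and the stubbed send action are all assumptions. In a real deployment the `Runnable` would use a JMS `Session`/`MessageProducer` to send a message that consumers recognise and discard (shown only as a comment here, since it needs a live broker).

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Keep-alive sketch: periodically fire a "send fake message" action so the
// queue never sits idle long enough for connections to go stale.
public class QueueKeepAlive {
    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch fired = new CountDownLatch(2); // demo only: stop after two ticks

        Runnable sendFakeMessage = () -> {
            // In a real deployment (assumption, not from the thread), this would be:
            //   producer.send(session.createTextMessage("KEEPALIVE"));
            // where the consumer side recognises and discards "KEEPALIVE" messages.
            System.out.println("keep-alive sent");
            fired.countDown();
        };

        // The thread's workaround uses a 15-minute period; 100 ms here so the demo finishes.
        scheduler.scheduleAtFixedRate(sendFakeMessage, 0, 100, TimeUnit.MILLISECONDS);
        fired.await();
        scheduler.shutdownNow();
    }
}
```

Keeping the scheduling separate from the JMS send makes the workaround easy to remove once a proper fix (e.g. adjusted remoting/firewall settings) is in place.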
I am also seeing this issue in our production environment. We are running four JBoss 5.1.0 AS nodes in a clustered configuration. Nodes 3 and 4 have message counts that stay stuck at two messages (node 3) and one message (node 4). We're using MySQL to persist the messages; we're on the NDB engine, but right now the tables were created using the InnoDB engine. Does anyone know whether this particular set-up could cause the issue we're seeing, or is there another solution out there?
If you are using 1.4.3.GA, be aware of this
If you upgrade, make sure you also upgrade the JBoss Remoting jars and your remoting configuration files.
If you have a firewall sitting between your client and server and it is configured to kill connections that have been idle for a period of time, you need to adjust your remoting configuration, especially these attributes:
<!-- the following parameters are useful when there is a firewall between client and server. Uncomment them if so.-->
<attribute name="numberOfCallRetries" isParam="true">1</attribute>
<attribute name="pingFrequency" isParam="true">214748364</attribute>
<attribute name="pingWindowFactor" isParam="true">10</attribute>
<attribute name="generalizeSocketException" isParam="true">true</attribute>
It seems to have settled down over the past few days. It's pretty rare that a message gets stuck and it has been running solidly for several days now. We didn't really change anything but if I find anything to suggest why it suddenly got a lot better, I'll come back to share!
Thank you for your help! If we end up at the same spot with messages being stuck, I'll be sure to look into the JBM or HornetQ upgrades.
I know it's been a while, but I just got an e-mail alert about how much activity this thread has received. We have since removed JBoss Messaging clustering and reconfigured MySQL to use the InnoDB engine instead of NDBCLUSTER. We are now using only a single, non-clustered JBoss 5.1.0 node, and it holds up to our production traffic quite well. Perhaps ours was just an ill-conceived clustering solution...
We now rarely have any issues with our JMS queue solution. In fact, the remaining issues are mostly due to consumers that can't keep up rather than any fault of the JMS queue itself.