Thanks for the prompt reply.
This is a different stack trace from that posted in yesterday's question.
Is our case also fixed, then? If so, in which release will it appear? 2.2?
It's the same issue, just triggered through a different record.
That's good to know.
Do you know what files changed for this (is there a JIRA - not HORNETQ-440, I guess?) so that we can make a build incorporating the fix but one that is compatible with 2.1.2?
We are facing the same issue, but with HornetQ v2.2.2. Our setup is as follows:
JBoss: v4.2.3 GA
OS: Windows Server 2003 R2, 32 bit
The same error was noticed in our logs, i.e.
2011-05-31 23:23:19.435+1000 WARN [Old I/O server worker (parentId: 5437246, [id: 0x0052f73e, xxx-app-01.vb.tv/220.127.116.11:5445])] QueueImpl.warn:76 - Error on checkDLQ
java.lang.IllegalStateException: Cannot find add info 29877
2011-05-31 23:23:19.435+1000 WARN [Old I/O server worker (parentId: 5437246, [id: 0x0052f73e, xxx-app-01.vb.tv/18.104.22.168:5445])] QueueImpl.warn:71 - Message has reached maximum delivery attempts, sending it to Dead Letter Address jms.queue.DLQ from jms.queue.applicationQueue
From all of the posts relating to this issue, the common factors are:
- JBoss 4.2.3 GA
- System is being heavily loaded
I am unable to reliably reproduce this problem (which is annoying), but I know that it generally happens under significant load.
Any ideas on why this might be happening? Are there any fixes for it? As I said earlier, we are using HornetQ v2.2.2.
This is fixed in the next version, but it is an ignorable warning.
For some reason the rollback tries to redeliver, but the message was already gone by the time it tried to update it.
I had the same exception, but with a different trace:
2011-06-14 13:56:46,308 ERROR [org.hornetq.core.protocol.core.ServerSessionPacketHandler] (ajp-0.0.0.0-8009-9) Caught unexpected exception: java.lang.IllegalStateException: Cannot find add info 770146
at org.hornetq.core.journal.impl.JournalImpl.appendUpdateRecord(JournalImpl.java:909) [:]
at org.hornetq.core.persistence.impl.journal.JournalStorageManager.storeAcknowledge(JournalStorageManager.java:519) [:]
at org.hornetq.core.server.impl.QueueImpl.acknowledge(QueueImpl.java:742) [:]
at org.hornetq.core.server.impl.ServerConsumerImpl.acknowledge(ServerConsumerImpl.java:571) [:]
at org.hornetq.core.server.impl.ServerSessionImpl.acknowledge(ServerSessionImpl.java:553) [:]
at org.hornetq.core.protocol.core.ServerSessionPacketHandler.handlePacket(ServerSessionPacketHandler.java:268) [:]
at org.hornetq.core.protocol.core.impl.ChannelImpl.handlePacket(ChannelImpl.java:474) [:]
at org.hornetq.core.protocol.core.impl.RemotingConnectionImpl.doBufferReceived(RemotingConnectionImpl.java:496) [:]
at org.jboss.netty.channel.socket.http.HttpTunnelingServlet.service(HttpTunnelingServlet.java:179) [:]
The message concerned was sent by a client who disconnected suddenly, maybe before it received the reply to the send() command.
We got this message about 24 times in a row when a different client connected and started a MessageConsumer. The major problem is that the consumer also received the same message 24 times. The consumer uses auto-acknowledgement, but maybe the ack failed on the server and the message got redelivered somehow...
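Until the server-side fix lands, one common client-side mitigation for duplicates under auto-acknowledge is to make consumption idempotent by tracking already-seen message IDs. This is just a hedged sketch, not HornetQ API; the class and method names here are hypothetical, and in a real consumer you would feed it the JMSMessageID of each received message.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch only: de-duplicate redelivered messages by their ID.
public class MessageDeduplicator {
    private final Set<String> seenIds = new HashSet<>();

    // Returns true the first time an ID is seen, false on a redelivery.
    public boolean firstDelivery(String messageId) {
        return seenIds.add(messageId);
    }

    public static void main(String[] args) {
        MessageDeduplicator dedup = new MessageDeduplicator();
        System.out.println(dedup.firstDelivery("ID:770146")); // true
        System.out.println(dedup.firstDelivery("ID:770146")); // false (duplicate)
    }
}
```

In production you would bound the set (e.g. evict old IDs) so it does not grow without limit.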
Which version was this fixed in? We have HornetQ 2.2.2.Final with Netty 3.2.4.Final now.
I have fixed this exception on Branch_2_2_EAP. It was an ignorable exception when I fixed it.
Can you give it a try?
We are just fixing one last JIRA before cutting another release.
We figured out why we were getting the "java.lang.IllegalStateException: Cannot find add info" errors. It had to do with long-running transactions on the client side. One important fact I forgot to mention above (which I thought was insignificant at the time) was that we had two nodes listening on a single queue. When the queue was reasonably large, messages were being buffered on the client side and then discarded as the load increased. Because the client-side transactions are long-running, the clients couldn't keep up with the demand. HornetQ buffers messages on the client by default:
By default, the consumer-window-size is set to 1 MiB (1024 * 1024 bytes).
Therefore, on the client side, we used a non-buffered connection factory (i.e. setting consumer-window-size = 0 in the connection-factory settings in hornetq-jms.xml). The client nodes now appear to be behaving correctly.
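For anyone else hitting this, the change is roughly the following fragment in hornetq-jms.xml (connection-factory name and connector-ref are placeholders for your own setup):

```xml
<connection-factory name="NettyConnectionFactory">
   <connectors>
      <connector-ref connector-name="netty"/>
   </connectors>
   <entries>
      <entry name="/ConnectionFactory"/>
   </entries>
   <!-- 0 disables client-side message buffering; slow consumers
        then pull one message at a time instead of prefetching 1 MiB -->
   <consumer-window-size>0</consumer-window-size>
</connection-factory>
```

Note that disabling the buffer trades throughput for fairness: with a window of 0, messages are not prefetched, so fast consumers do slightly more round trips but slow consumers no longer starve the others.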
Hope that helps someone else... Thanks Clebert for looking into it.