3 Replies Latest reply on Dec 27, 2011 8:33 AM by Clebert Suconic

    HornetQ 2.2.5 Final is not able to identify duplicate messages on failover

    Karthik N Newbie


      We are using HornetQ 2.2.5 Final with JBoss 6.0.0 Final.

      We have set up an external HornetQ live server / backup server pair. From JBoss, we use the resource adapter to connect to HornetQ to post and consume messages. We have defined MDBs in our application for posting and consuming messages.


      I made the configuration changes based on the hornetq-2.2.5.Final\examples\javaee\mdb-remote-failover example.

      When both the live and backup servers are up, everything works fine. However, the following warning is logged for each request:

      19:29:15,018 WARN  [org.jboss.resource.connectionmanager.JBossManagedConnectionPool] Destroying connection that could not be successfully matched: org.jboss.resource.connectionmanager.TxConnectionManager$TxConnectionEventListener@1cb285b[state=NORMAL mc=org.hornetq.ra.HornetQRAManagedConnection@d37520 handles=0 lastUse=1324648755018 permit=false trackByTx=false mcp=org.jboss.resource.connectionmanager.JBossManagedConnectionPool$OnePool@ae98b6 context=org.jboss.resource.connectionmanager.InternalManagedConnectionPool@abe269 xaResource=org.hornetq.ra.HornetQRAXAResource@9d7a9d txSync=null]


      The duplicate message is not identified in the following scenario:

      1) Both the live and backup servers are up.

      2) Post a message.

      3) While the message is being processed on the consumer side, shut down the live server.

      4) Failover happens and the backup server becomes live. Meanwhile, the message also finishes processing.

      5) After the backup server takes over, the same message is consumed again.


      As mentioned in the HornetQ user guide, I added <use-duplicate-detection>true</use-duplicate-detection> to the <cluster-connection> configuration.
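      For reference, the relevant fragment of my hornetq-configuration.xml looks roughly like this (the cluster, address, and connector names here are illustrative, not my exact values):

```xml
<cluster-connections>
   <cluster-connection name="my-cluster">
      <address>jms</address>
      <connector-ref>netty-connector</connector-ref>
      <use-duplicate-detection>true</use-duplicate-detection>
      <static-connectors>
         <connector-ref>backup-connector</connector-ref>
      </static-connectors>
   </cluster-connection>
</cluster-connections>
```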

      Still, when the failover happens during message processing, the message gets consumed again.
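      My understanding from the user guide is that duplicate detection relies on the producer stamping each message with a duplicate-ID string property (_HQ_DUPL_ID), which the server keeps in a bounded cache and uses to drop any message whose ID it has already seen. A simplified sketch of that caching idea (this is my own illustration, not HornetQ's actual code; the class and size here are made up):

```java
import java.util.LinkedHashSet;

// Illustrative sketch of server-side duplicate detection: a bounded,
// insertion-ordered cache of duplicate-ID headers. A message whose ID is
// already cached is treated as a duplicate and dropped.
public class DuplicateIdCache {
    private final int maxSize; // analogous in spirit to <id-cache-size>
    private final LinkedHashSet<String> seen = new LinkedHashSet<>();

    public DuplicateIdCache(int maxSize) {
        this.maxSize = maxSize;
    }

    /** Returns true if the message should be accepted (ID not seen before). */
    public boolean accept(String duplicateId) {
        if (seen.contains(duplicateId)) {
            return false; // duplicate: drop the message
        }
        if (seen.size() >= maxSize) {
            // evict the oldest cached ID to stay within the bound
            seen.remove(seen.iterator().next());
        }
        seen.add(duplicateId);
        return true;
    }

    public static void main(String[] args) {
        DuplicateIdCache cache = new DuplicateIdCache(1000);
        System.out.println(cache.accept("order-42")); // first delivery: accepted
        System.out.println(cache.accept("order-42")); // redelivery: rejected
    }
}
```

      If this understanding is right, the cache can only help when the producer actually sets the duplicate-ID property on every message; I am not sure whether our MDBs currently do that, or whether the cluster-connection setting alone covers redelivery after failover.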


      Is there any way to prevent the same message from being consumed multiple times?