XA recovery is registered automatically in EAP 6.1 via the resource adapter config (pooled connection factory), so you shouldn't have to do anything.
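For reference, the auto-registration hinges on the pooled connection factory in the messaging subsystem. A typical EAP 6.1 fragment might look like the following (names such as `hornetq-ra`, `in-vm`, and `java:/JmsXA` are the usual defaults; adjust for your setup):

```xml
<subsystem xmlns="urn:jboss:domain:messaging:1.4">
    <hornetq-server>
        <jms-connection-factories>
            <!-- XA recovery is registered automatically for this
                 pooled connection factory because of the xa mode -->
            <pooled-connection-factory name="hornetq-ra">
                <transaction mode="xa"/>
                <connectors>
                    <connector-ref connector-name="in-vm"/>
                </connectors>
                <entries>
                    <entry name="java:/JmsXA"/>
                </entries>
            </pooled-connection-factory>
        </jms-connection-factories>
    </hornetq-server>
</subsystem>
```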
Thank you, Andy. I do observe that now. Related question:
- Fail-over of brokers, as well as client (MDB) behavior when that fail-over occurs, is working well.
- The RecoveryManager, however, does not appear to fail over to my backup node. Instead, it keeps trying to connect to the original primary HornetQ broker (which I killed in my scenario).
- I see the following log statement from my HornetQ primary: HQ221001: HornetQ Server version 2.3.1.Final (Wild Hornet, 123) [687ffe77-cc73-11e2-9742-d5f59b901ddb]
- I see the following log statement from my HornetQ backup: (HornetQ-server-HornetQServerImpl::serverUUID=687ffe77-cc73-11e2-9742-d5f59b901ddb-804833725)) HQ221031: backup announced
- After broker fail-over from primary to backup has occurred, the nodeUp(....) method of RecoveryDiscovery.InternalListener is passed a TopologyMember containing the following:
Pair[a=TransportConfiguration(name=netty, factory=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory) ?port=5645&host=192-168-122-1, b=null]]
- Port 5645 is the correct port for the backup broker, but notice the nodeId: it appears to be the nodeId of the primary, not the backup.
Observation/question:
- The logic at lines 176-186 of HornetQRecoveryRegistry never appears to be invoked when fail-over occurs.
- The reason is that the nodeId passed to the nodeUp(....) method when the backup becomes live is always the nodeId of the original primary broker.
- After broker fail-over from primary to backup has occurred, would you expect the nodeId published by the backup to be 687ffe77-cc73-11e2-9742-d5f59b901ddb-804833725?
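To illustrate why this matters: if recovery configurations are keyed by nodeId, a backup that announces the primary's nodeId will never take the "new node" registration path, no matter how the connector changes. A minimal self-contained sketch of that keying behavior (all names here are hypothetical, not the actual HornetQRecoveryRegistry code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class NodeIdKeyedRegistryDemo {

    // Recovery configurations keyed by nodeId, standing in for the real registry.
    private static final Map<String, String> recoveries = new ConcurrentHashMap<>();

    // Returns true only the first time a nodeId is seen, i.e. only when a
    // new recovery configuration would actually be registered.
    static boolean nodeUp(String nodeId, String connector) {
        return recoveries.putIfAbsent(nodeId, connector) == null;
    }

    public static void main(String[] args) {
        String nodeId = "687ffe77-cc73-11e2-9742-d5f59b901ddb";

        // Primary announces itself: unseen nodeId, so registration happens.
        System.out.println(nodeUp(nodeId, "192.168.122.1:5445")); // true

        // Backup becomes live but announces the SAME nodeId, so the
        // registration branch is skipped even though the connector changed.
        System.out.println(nodeUp(nodeId, "192.168.122.1:5645")); // false
    }
}
```

If the backup really is expected to announce its own suffixed nodeId, the second call above would use a different key and the registration branch would fire, which is why the answer to the question determines whether this is a bug or expected behavior.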