2 Replies Latest reply on Jun 3, 2013 2:49 PM by Jeff Bride

    configure XARecoveryConfig for UDP discovery ?

    Jeff Bride Novice

      Hi,

    I'm conducting HA fail-over testing (active/passive using shared storage) with the HornetQ included in EAP 6.1.Final.

       

    In regards to configuring XAResourceRecovery, I'm wondering if there are any examples where it is configured using a UDP discovery group?

    Currently, I'm configuring it using statically defined connectors to the remote active/passive brokers, similar to the xarecovery example (lines 250 - 253).
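    For context, my static setup looks roughly like the following. This is a sketch only: the host, port, and credentials are placeholders from my test environment, and the exact parameter-separator syntax in the value should be taken from the xarecovery example itself.

    ```properties
    # Sketch of a statically configured recovery entry (jbossts-properties style).
    # "HORNETQ1" is just the suffix of the property name; values after the ";" are
    # the recovery class parameters (connector factory, user, password, transport params).
    com.arjuna.ats.jta.recovery.XAResourceRecovery.HORNETQ1=org.hornetq.jms.server.recovery.HornetQXAResourceRecovery;org.hornetq.core.remoting.impl.netty.NettyConnectorFactory,guest,guest,host=192.168.122.1;port=5645
    ```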

       

      Semi-related question: what is the significance of "HORNETQ1" in the property name com.arjuna.ats.jta.recovery.XAResourceRecovery.HORNETQ1?

        I'm not aware of any other HornetQ ID in my environment that is also set to HORNETQ1 and that this XAResourceRecovery property could potentially reference.

       

       

      Thanks! Jeff

        • 1. Re: configure XARecoveryConfig for UDP discovery ?
          Andy Taylor Master

          XA recovery is registered automatically in EAP 6.1 via the resource adapter config (pooled connection factory), so you shouldn't have to do anything.
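          Roughly, it's driven by the XA-enabled pooled-connection-factory in the messaging subsystem; a sketch of the default EAP 6.1 config follows (the connector and JNDI entry names shown are the defaults, and the namespace version may differ in your install):

          ```xml
          <subsystem xmlns="urn:jboss:domain:messaging:1.4">
            <hornetq-server>
              <!-- ... -->
              <jms-connection-factories>
                <pooled-connection-factory name="hornetq-ra">
                  <!-- XA mode is what triggers automatic recovery registration -->
                  <transaction mode="xa"/>
                  <connectors>
                    <connector-ref connector-name="netty"/>
                  </connectors>
                  <entries>
                    <entry name="java:/JmsXA"/>
                  </entries>
                </pooled-connection-factory>
              </jms-connection-factories>
            </hornetq-server>
          </subsystem>
          ```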

          • 2. Re: configure XARecoveryConfig for UDP discovery ?
            Jeff Bride Novice

            Thank you, Andy. I do observe that now. Related question:

             

            background:  

               - Fail-over of brokers, as well as client (MDB) behavior when that fail-over occurs, is working well.

                - The RecoveryManager, however, does not appear to fail over to my backup node. Instead, it appears to continue to connect to the original primary HornetQ broker (which I killed in my scenario).

             

            Details:

                - I see the following log statement for my HornetQ primary:  HQ221001: HornetQ Server version 2.3.1.Final (Wild Hornet, 123) [687ffe77-cc73-11e2-9742-d5f59b901ddb]

               - I see the following log statement for my HornetQ backup:  (HornetQ-server-HornetQServerImpl::serverUUID=687ffe77-cc73-11e2-9742-d5f59b901ddb-804833725)) HQ221031: backup announced

             

                - After broker fail-over from primary to backup has occurred, the nodeUp(....) function of RecoveryDiscovery.InternalListener appears to be passed a TopologyMember with the following:

                   nodeId:  687ffe77-cc73-11e2-9742-d5f59b901ddb

                   Pair[a=TransportConfiguration(name=netty, factory=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory) ?port=5645&host=192-168-122-1, b=null]]

             

               - Port 5645 is the correct port for the backup broker, but notice the nodeId: it appears to be the nodeId of the primary, not the backup.

             

            Observation / question:

                - The logic at lines 176 - 186 of HornetQRecoveryRegistry never appears to be invoked when fail-over occurs.

                - The reason is that the nodeId passed to the nodeUp(....) function when the backup becomes live is always the nodeId of the original primary broker.

                - When broker fail-over from primary to backup has occurred, would you expect the nodeId published by the backup to be 687ffe77-cc73-11e2-9742-d5f59b901ddb-804833725?
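            To illustrate the symptom I'm describing, here is a minimal self-contained sketch (the class and method names are mine, not HornetQ's) of why a registry keyed by nodeId would never take its "new node" branch if the backup announces itself with the live server's nodeId:

            ```java
            import java.util.HashMap;
            import java.util.Map;

            // Hypothetical stand-in for a recovery registry keyed by nodeId.
            // If the backup reuses the live server's nodeId on fail-over,
            // the registration/update branch for "new" nodes is skipped.
            public class NodeIdRegistrySketch {
                private final Map<String, String> recoveries = new HashMap<>();

                /** Mimics nodeUp(...): only registers a connector for unseen nodeIds. */
                public boolean nodeUp(String nodeId, String connector) {
                    if (recoveries.containsKey(nodeId)) {
                        return false;   // nodeId already known -> update logic skipped
                    }
                    recoveries.put(nodeId, connector);
                    return true;        // new nodeId -> recovery target (re)configured
                }

                public static void main(String[] args) {
                    NodeIdRegistrySketch registry = new NodeIdRegistrySketch();
                    String liveNodeId = "687ffe77-cc73-11e2-9742-d5f59b901ddb";

                    // Primary comes up: registered as a new node.
                    System.out.println(registry.nodeUp(liveNodeId, "host=primary;port=5445"));
                    // Backup fails over but announces the same nodeId: skipped.
                    System.out.println(registry.nodeUp(liveNodeId, "host=backup;port=5645"));
                }
            }
            ```

            If the backup instead published a distinct nodeId (e.g. the serverUUID with the -804833725 suffix seen in its log), the second call would take the registration branch.
            
            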

             

            Thank you!

             

            Jeff