    HornetQ live/backup issue

    trondgzi, Oct 21, 2011

      Hi,


      I have a 2-node JBoss AS 6.1 cluster with HornetQ using a shared store (on NFS during testing), where S1 is configured as the live server and S2 as the backup. Failover from S1 to S2 works as expected, and with allow-failback = true, S1 becomes live again and S2 returns to being the backup when S1 is restarted, exactly as expected.
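
      For reference, the relevant parts of my hornetq-configuration.xml look roughly like this (directory paths are just placeholders for the shared NFS mount):

        <!-- S1 (live), relevant settings only -->
        <shared-store>true</shared-store>
        <backup>false</backup>
        <allow-failback>true</allow-failback>
        <!-- journal, bindings, paging and large-messages directories all point at the NFS share -->
        <journal-directory>/path/to/nfs/hornetq/journal</journal-directory>

        <!-- S2 (backup), same shared directories, only the backup flag differs -->
        <shared-store>true</shared-store>
        <backup>true</backup>
        <allow-failback>true</allow-failback>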


      In my setup, however, I want S2 to continue as the live server after S1 has failed, and S1 to become the backup for S2 when it is restarted. So I configure allow-failback = false, kill S1, and failover to S2 happens. But when I restart S1, it hangs at "Waiting to obtain live lock", since it is still configured with backup = false.
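
      To be concrete, this is roughly the combination that hangs (relevant settings only):

        <!-- S1, restarted with its original configuration while S2 is already live -->
        <shared-store>true</shared-store>
        <backup>false</backup>
        <allow-failback>false</allow-failback>
        <!-- startup blocks at "Waiting to obtain live lock", presumably because S2 (now live) holds the lock on the shared store -->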


      Do I really need to change S1's configuration to backup = true before restarting it to make this happen?

      Has this got anything to do with NFS?


      I would have guessed that S1 would eventually give up on obtaining the live lock and just carry on as the backup node, but that doesn't seem to be the case. I know I can use backup = ${my.system.property} so that I don't have to edit the file and can instead pass a system property when restarting JBoss. However, I want to start JBoss from an init script (/etc/init.d/*), and that script would then somehow have to figure out whether the node should start as live or as backup.
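
      What I mean is something along these lines (the property name is just an example), where the init script would then have to decide whether to pass -Dmy.system.property=true or -Dmy.system.property=false at startup:

        <!-- same configuration on both nodes; the backup flag is resolved from a system property -->
        <shared-store>true</shared-store>
        <allow-failback>false</allow-failback>
        <backup>${my.system.property}</backup>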


      The reason I don't want allow-failback = true is that I have an app in deploy-hasingleton that can't automatically fail back to S1, and HornetQ and the app need to run on the same node. Besides, it really doesn't matter to me which server is live and which is backup; I only want to make sure that one of them is live, and once failover has occurred I can start a backup node for the one that is now live.


      Any feedback will be much appreciated.


      Regards,


      Trond