0 Replies Latest reply on Jan 9, 2018 4:45 AM by bhaskarsk

    HornetQ Live/backup issue


The HornetQ live/backup cluster was created as per the JBoss EAP 6 clustering guide, on JBoss EAP 6.4 patch 16.

We have three servers: Live, Backup1, and Backup2 (the third server is there to avoid a split-brain situation).


1) Live, Backup1, and Backup2 all run on three different nodes.

2) Initial sequence: Live is active, Backup1 is the active backup, and Backup2 is waiting for a live server to fail.


The HornetQ standalone configuration is the same as in the clustering guide; I can attach it if required.

Live => backup=false, check-for-live-server=true, allow-failback=true

Backup1 => backup=true, check-for-live-server=true, allow-failback=true

Backup2 => backup=true, check-for-live-server=true, allow-failback=true
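
For clarity, these attributes map onto the hornetq-server configuration in the messaging subsystem of standalone-full-ha.xml roughly as follows. This is a sketch only, using the EAP 6 messaging subsystem element names; the connector, journal, and cluster-connection settings are omitted, and shared-store=false is an assumption based on the replicated-journal behaviour described below:

```xml
<!-- Live server: sketch of the HA-related elements only -->
<hornetq-server>
    <backup>false</backup>
    <check-for-live-server>true</check-for-live-server>
    <allow-failback>true</allow-failback>
    <shared-store>false</shared-store> <!-- assumption: replication, not shared store -->
</hornetq-server>

<!-- Backup1 and Backup2: identical except for backup=true -->
<hornetq-server>
    <backup>true</backup>
    <check-for-live-server>true</check-for-live-server>
    <allow-failback>true</allow-failback>
    <shared-store>false</shared-store>
</hornetq-server>
```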


The servers are started with the -Dhornetq.enforce.maxreplica system property, and max-saved-replicated-journal-size is set to 2.
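
For reference, the journal-retention setting goes in the same hornetq-server section; a minimal sketch (element name as written above, value 2 from our setup):

```xml
<hornetq-server>
    <!-- limit how many old copies of the replicated journal are kept around -->
    <max-saved-replicated-journal-size>2</max-saved-replicated-journal-size>
    <!-- other settings omitted -->
</hornetq-server>
```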


Two scenarios keep occurring in our clustered environment that we need to handle, and I am not sure of HornetQ's intended behaviour in these cases.


Scenario 1: The node hosting Backup1 (node2) goes down.

Expected: Backup2 becomes the active backup and synchronizes with Live.

Actual: Backup2 does not become the backup for Live unless it is restarted.


Scenario 2: The Live node goes down. Backup1 becomes live and Backup2 becomes the active backup. Live is restarted (by the HA configuration) and waits for the current live server (Backup1) to fail. For some reason, the Live node is then restarted a second time.

Expected: The restarted Live server identifies the current live server (Backup1), continues as a passive backup, and waits for the current live server to go down.

Actual: The restarted Live server starts as an active live server and deploys its queues. Both the old Live and Backup1 are then active, which affects our consumer applications.

I assume this happens because the data directory is moved aside on every restart. Is there a way to control this?


      Thanks in advance