This indicates to me that either the JNDI lookup happened before the backup was fully active and had deployed all the proper destinations, or that the backup doesn't have all the proper destinations defined.
Justin is right: there are most likely missing CFs/Queues/Topics in the configuration of the messaging subsystem on the "backup" servers. Note also that a backup deploys those objects to JNDI only when it activates, so even if you add them to the configuration of the backup servers, the CFs/Queues/Topics will appear in the JNDI of the WF server hosting the backup only after you kill the live server.
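For example, destinations are defined per hornetq-server in the messaging subsystem, so the backup server's config needs the same entries as the live it replicates (a sketch; queue name and JNDI entries are illustrative, and the java:jboss/exported entry is the one remote JNDI clients can see):

    <jms-destinations>
        <jms-queue name="testQueue">
            <entry name="java:/jms/queue/test"/>
            <entry name="java:jboss/exported/jms/queue/test"/>
        </jms-queue>
    </jms-destinations>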
I have one important note on this: WF9 supports only one backup per live. Having 2 or more backups in HA with a replicated journal will not work (it can work with a shared store). Also, deploying an MDB on WF9 together with a backup server is really not a good idea: if the live starts again and the backup shuts down, the MDB loses its connection to that backup and gets a bunch of errors.
I would suggest configuring HornetQ in a collocated topology. This means each WF9 server contains a live server plus a backup for another live in the cluster. It would look like WF1(live1,backup2) <-> WF2(live2,backup1), WF3(live3,backup4) <-> WF4(live4,backup3). Now all WF9 servers are active (the MDB can process messages from the local live server), and when any WF server crashes there is a backup on another WF server which will activate.
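Roughly what WF1 would look like (a sketch only; element names are from the WildFly 9 messaging subsystem schema, so verify them against your version, and connectors/cluster-connection settings are omitted):

    <subsystem xmlns="urn:jboss:domain:messaging:3.0">
        <!-- live1: the active server on this node (WF1) -->
        <hornetq-server name="live1">
            <shared-store>false</shared-store>
            <backup-group-name>group1</backup-group-name>
            <check-for-live-server>true</check-for-live-server>
            <!-- connectors, cluster-connection, CFs and destinations go here -->
        </hornetq-server>
        <!-- backup2: replica of live2, which runs on WF2 -->
        <hornetq-server name="backup2">
            <backup>true</backup>
            <shared-store>false</shared-store>
            <backup-group-name>group2</backup-group-name>
            <allow-failback>true</allow-failback>
            <!-- same CFs and destinations as live2, so they deploy when it activates -->
        </hornetq-server>
    </subsystem>

The backup-group-name pairs each backup with its live, and allow-failback lets backup2 hand control back when live2 restarts.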
Is it ok for your use case?
Thank you both for your replies!!
We fixed the issue by wrapping the above code in a loop that iterates over every server listed in the server.url property in the properties file. We try each server, one by one; if any exception is thrown we move on to the next server in the list. If all servers are tried without success then we fail; otherwise we end up connected to something, and that something has the queues.
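For anyone hitting the same thing, here is roughly what our loop looks like (a simplified sketch; server.url and the JNDI name are from our setup, and it assumes jboss-client.jar from the WildFly distribution on the client classpath):

    import java.util.Properties;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    public class JndiFailover {

        public static InitialContext connectToFirstAvailable(String serverUrlProperty)
                throws NamingException {
            // server.url holds a comma-separated list, e.g.
            // "http-remoting://host1:8080,http-remoting://host2:8080"
            NamingException lastFailure = null;
            for (String url : serverUrlProperty.split(",")) {
                Properties env = new Properties();
                env.put(Context.INITIAL_CONTEXT_FACTORY,
                        "org.jboss.naming.remote.client.InitialContextFactory");
                env.put(Context.PROVIDER_URL, url.trim());
                try {
                    InitialContext ctx = new InitialContext(env);
                    // Confirm this server actually has the JMS objects deployed;
                    // a backup that has not activated will fail this lookup.
                    ctx.lookup("jms/RemoteConnectionFactory");
                    return ctx;
                } catch (NamingException e) {
                    lastFailure = e; // try the next server in the list
                }
            }
            throw lastFailure != null ? lastFailure
                    : new NamingException("server.url list was empty");
        }
    }

The lookup inside the try is what catches the "backup not yet activated" case, since the remote naming connection is only established on the first lookup.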