At the moment this appears to be an artifact of the not-successfully-clustered JBoss still being connected through the DB connections, with junk left in the DB from the other server. Maybe. I am waiting on further results.
The owner of the network says there is no problem with UDP, but I get no HornetQ standalone discovery and no JBoss discovery, yet as soon as TCP is configured I have a cluster. I have to hope I have the internal configs right; I've never done that before.
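For anyone else fighting the same thing, this is a sketch of what the statically defined (TCP) form looks like in standalone HornetQ's hornetq-configuration.xml; all connector names, hosts, and ports here are placeholders, and inside JBoss the messaging subsystem uses an equivalent structure:

```xml
<!-- Connectors pointing at each node's TCP endpoint (hosts/ports are placeholders) -->
<connectors>
   <connector name="netty-connector">
      <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
      <param key="host" value="server1.example.com"/>
      <param key="port" value="5445"/>
   </connector>
   <connector name="server2-connector">
      <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
      <param key="host" value="server2.example.com"/>
      <param key="port" value="5445"/>
   </connector>
</connectors>

<!-- A cluster connection listing the other nodes statically
     instead of using a UDP discovery-group-ref -->
<cluster-connections>
   <cluster-connection name="my-cluster">
      <address>jms</address>
      <connector-ref>netty-connector</connector-ref>
      <static-connectors>
         <connector-ref>server2-connector</connector-ref>
      </static-connectors>
   </cluster-connection>
</cluster-connections>
```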
At least being inside JBoss I don't have to worry about JNDI.... :-)
Still not sure what will happen when I alter the configs to try to get it to go live-backup in those two environments.
Have to say thanks to all who have helped here. This is a good community.
Nope - I was wrong, it was not an artifact.
Basically, once you have created the cluster, defined it, and started it, you cannot then change its composition while keeping its name.
That is, you can't go from a discovery group to a statically defined set of the same servers... or vice-versa.
You have to create a NEW cluster with the composition you want. New name. No problem.
You can't re-use the old name. The queues that were created are hashed together with the cluster definition in some way, and afterwards they can no longer be used, retrieved, or connected to... broken big-time.
This is not quite true.
When you create a cluster, the system forms various "store and forward" queues between the nodes of the cluster. These are where messages are put prior to forwarding them from one node to another.
They are necessary, so the messages can be put somewhere if, for example, the target node is temporarily down. When the target node becomes available again the message can be forwarded with no loss of messages.
If you remove your cluster definition from your config, those store and forward queues will remain. The system can't just delete them: they might contain messages, and it doesn't know you don't want that cluster any more; e.g. you might have just temporarily commented it out in your config.
If you really don't want that cluster any more, you can just delete those queues like any other queue (they're just normal queues) via the management API, i.e. programmatically, or using a JMX client, e.g. jconsole. Or using the management console (in the next release).
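For the programmatic route, something along these lines should work against the core server's JMX MBean. The object name follows HornetQ's default JMX naming and `destroyQueue` is the operation exposed by the server control, but treat both as assumptions to verify in jconsole; the queue name passed in would be whatever your listing shows.

```java
import javax.management.MBeanServerConnection;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

public class DeleteStoreAndForwardQueue {

    // ObjectName of the HornetQ core server control MBean, per HornetQ's
    // default JMX naming (an assumption -- confirm the exact name in jconsole).
    static ObjectName coreServerControl() throws MalformedObjectNameException {
        return new ObjectName("org.hornetq:module=Core,type=Server");
    }

    // Deleting a store-and-forward queue is then a plain MBean invocation,
    // the same call you would use to destroy any other core queue.
    static void destroyQueue(MBeanServerConnection mbsc, String queueName)
            throws Exception {
        mbsc.invoke(coreServerControl(), "destroyQueue",
                new Object[] { queueName },
                new String[] { "java.lang.String" });
    }
}
```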
So I can re-use the name if I am happy to delete the queues. That's good, because I didn't know what they were or that I was allowed to delete them.
Changing the composition of the cluster still leaves things dangling, though, and that is as true under your description as under mine.
With what you say, it seems to me that it is best to delete all such queues before creating any new cluster, because they will be more easily identifiable in the JMX console if you haven't created a bunch of new ones (which will have different hash "tails").
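On identifying them: as I understand it, the store and forward queues are named `sf.<cluster-name>.<node-id>` (the hash "tail" being the remote node's id), so they can be picked out of a full queue listing by prefix. A sketch, with the cluster name and node ids invented:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class SnfQueues {

    // Select the store-and-forward queues belonging to one cluster connection
    // out of a full list of core queue names, by the "sf.<cluster-name>."
    // prefix (the naming convention is an assumption -- check in jconsole).
    static List<String> snfQueuesFor(String clusterName, List<String> allQueues) {
        String prefix = "sf." + clusterName + ".";
        return allQueues.stream()
                .filter(q -> q.startsWith(prefix))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> queues = Arrays.asList(
                "jms.queue.orders",        // an ordinary queue
                "sf.my-cluster.a1b2c3d4",  // SNF queue for one remote node
                "sf.old-cluster.e5f6a7b8"); // SNF queue left over from another cluster
        System.out.println(snfQueuesFor("my-cluster", queues));
        // prints [sf.my-cluster.a1b2c3d4]
    }
}
```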
Is there any manual way to simply wipe all of it out and start fresh? I tried wiping the data and bindings directories, but it did not seem to matter.