My guess is that you are not properly configured for high availability of messages, which (at this point) requires a shared journal between the master and slave nodes. Read the HornetQ documentation on clustering to understand this better.
Thanks for your answer. I have now switched from UDP discovery to static connectors and configured a shared store.
I tried this in both modes, domain and standalone, and in both I have the same problem: the master server runs correctly, but the slave server starts up with problems in the messaging server.
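For reference, this is roughly the shape of the configuration I mean (a minimal sketch following the AS 7.1 messaging subsystem schema; connector and socket-binding names are placeholders for my actual setup):

```xml
<hornetq-server>
    <clustered>true</clustered>
    <shared-store>true</shared-store>
    <!-- backup=true on the slave, false (or omitted) on the master -->
    <backup>false</backup>
    <!-- both nodes must point at the same journal directory -->
    <journal-directory path="/shared/journal"/>

    <connectors>
        <netty-connector name="netty" socket-binding="messaging"/>
        <!-- static connector to the other node instead of UDP discovery -->
        <netty-connector name="other-node" socket-binding="messaging-remote"/>
    </connectors>

    <cluster-connections>
        <cluster-connection name="my-cluster">
            <address>jms</address>
            <connector-ref>netty</connector-ref>
            <static-connectors>
                <connector-ref>other-node</connector-ref>
            </static-connectors>
        </cluster-connection>
    </cluster-connections>
</hornetq-server>
```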
- In domain mode, the slave host can't start its third server (full-ha profile) because of "Waiting to obtain live lock". With this message the server never finishes starting, but I get no error. The domain console shows server-one and server-two as running, but server-three is not running.
- In standalone mode, the backup server (backup=true) starts up with the following exception:
11:56:16,473 ERROR [org.hornetq.ra.inflow.HornetQActivation] (default-threads - 2) Unable to reconnect org.hornetq.ra.inflow.HornetQActivationSpec(ra=org.hornetq.ra.HornetQResourceAdapter@1c39cc63 destination=queue/printQueue destinationType=javax.jms.Queue ack=Auto-acknowledge durable=false clientID=null user=null maxSession=15): HornetQException[errorCode=2 message=Cannot connect to server(s). Tried with all available servers.]
at org.hornetq.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:774) [hornetq-core-2.2.19.Final-build2.jar:2.2.19.FINAL_build2 (HQ_2_2_19_FINAL_build2, 122)]
at org.hornetq.ra.inflow.HornetQActivation.setup(HornetQActivation.java:314) [hornetq-ra-2.2.19.Final-build2.jar:2.2.19.FINAL_build2 (HQ_2_2_19_FINAL_build2, 122)]
at org.hornetq.ra.inflow.HornetQActivation.handleFailure(HornetQActivation.java:592) [hornetq-ra-2.2.19.Final-build2.jar:2.2.19.FINAL_build2 (HQ_2_2_19_FINAL_build2, 122)]
at org.hornetq.ra.inflow.HornetQActivation$SetupActivation.run(HornetQActivation.java:635) [hornetq-ra-2.2.19.Final-build2.jar:2.2.19.FINAL_build2 (HQ_2_2_19_FINAL_build2, 122)]
at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_32]
Based on this error I added the following to every MDB:
@ActivationConfigProperty(propertyName = "reconnectAttempts", propertyValue ="-1"),
@ActivationConfigProperty(propertyName = "setupAttempts", propertyValue ="-1")
But these properties changed nothing.
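For context, this is roughly how those two properties sit in a complete MDB (a sketch; the class name is a placeholder, and the destination is taken from the stack trace above):

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "queue/printQueue"),
    // -1 means retry indefinitely instead of giving up after the default number of attempts
    @ActivationConfigProperty(propertyName = "reconnectAttempts", propertyValue = "-1"),
    @ActivationConfigProperty(propertyName = "setupAttempts", propertyValue = "-1")
})
public class PrintQueueListener implements MessageListener {
    @Override
    public void onMessage(Message message) {
        // process the message
    }
}
```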
So in summary:
- Domain mode starts the cluster, but the slave server has problems with its messaging server.
- Standalone mode starts the cluster, but the backup server throws an exception while deploying the application (.ear).
I attached my messaging subsystem configurations.
Can you help me with this problem?
Any MDB deployed on an instance of AS7 where the HornetQ server is in backup mode will not be able to connect to any local destination, precisely because no destinations are deployed while the HornetQ server is in backup mode.
Does this mean it is not possible to listen on one queue (shared store) with more than one server (in domain or standalone mode)? If so, is there no performance advantage in cluster mode?
If not, is there a possibility of queueing over a database system instead of a filesystem? I mean, is it possible to configure another framework for queueing that works with a database, like it was possible with JBoss 5?
I think you're conflating HA functionality (i.e. the messages on a node are still available even if that node crashes) with general clustering, which can provide a performance boost via load balancing (both client-side and server-side). I recommend you read the HornetQ chapters on these subjects (i.e. 38 and 39) to get a clearer picture of how they work.
Regarding JDBC persistence, there is no plan to implement any such mechanism. The upcoming "replication" HA mode (currently available in 2.3.0.Alpha) should fit use cases where a shared store is not desirable.
Are there any example configurations for server-side load balancing without UDP communication?