-
15. Re: Client to access cluster (possibly hitting backup server)
ataylor Sep 29, 2011 8:38 AM (in response to jhannah)you should reuse the same connection; then failover will work properly
-
16. Re: Client to access cluster (possibly hitting backup server)
jhannah Sep 29, 2011 9:06 AM (in response to ataylor)OK, I guess I should do some refactoring to reuse my connections. Sounds like if I do that, failover will start working.
But what if a failure occurs and the connection has to be closed? Trying to reconnect would mean reconnecting to the Live server using the connector specified above. How would our client know to connect to the backup? Any ideas?
Thanks,
J
-
17. Re: Client to access cluster (possibly hitting backup server)
ataylor Sep 29, 2011 9:09 AM (in response to jhannah)if you are using JNDI, which I assume you are, you would need to handle looking up the connection factory from the current live node.
-
18. Re: Client to access cluster (possibly hitting backup server)
jhannah Sep 29, 2011 9:24 AM (in response to ataylor)What do you mean by handle looking up the connection factory from the current live node? Do you mean a connector must be specified for both the Live and Backup server, and that I should check the status of createConnection and perform a second call if it fails? I thought this type of failover logic was hidden beneath the covers.
J
-
19. Re: Client to access cluster (possibly hitting backup server)
ataylor Sep 29, 2011 9:33 AM (in response to jhannah)no, I mean when you look up the connection factory using JNDI. As long as the factory is configured with both connectors it will try both.
-
20. Re: Client to access cluster (possibly hitting backup server)
jhannah Sep 29, 2011 9:42 AM (in response to ataylor)OK, I think that's the answer I've been looking for. When you have a cluster containing a live-backup configuration, the client must have a connector for both the live and backup server. The connection factory being used must then have connector-ref elements pointing at both connectors... is this correct? This would make sense to me... although, this isn't how the multiple-failover-failback example is configured.
Thanks,
J
-
21. Re: Client to access cluster (possibly hitting backup server)
ataylor Sep 29, 2011 10:20 AM (in response to jhannah)yes, that's correct. The multiple failover example doesn't need it, as we know that the initial connection will always be on the first server.
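[Editor's note: as a sketch of what the above could look like in the client's hornetq-jms.xml. The connector names netty-live and netty-backup are placeholders for connectors defined in hornetq-configuration.xml, and the ha/reconnect values are illustrative, not taken from this thread:]

```xml
<connection-factory name="NettyConnectionFactory">
   <!-- list both the live and the backup connector so the initial
        connection can be made against whichever node is currently live -->
   <connectors>
      <connector-ref connector-name="netty-live"/>
      <connector-ref connector-name="netty-backup"/>
   </connectors>
   <entries>
      <entry name="/ConnectionFactory"/>
   </entries>
   <!-- enable client-side failover to the backup -->
   <ha>true</ha>
   <!-- -1 = keep retrying indefinitely after a connection failure -->
   <reconnect-attempts>-1</reconnect-attempts>
</connection-factory>
```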
-
22. Re: Client to access cluster (possibly hitting backup server)
sv_srinivaas Nov 24, 2011 4:33 AM (in response to ataylor)Jhannah,
I too have a similar issue with HornetQ 2.2.5.Final and JBoss 5.1.0. I'm trying to configure a live and backup JMS server and an MDB to consume messages. Everything works fine before failover, but once I kill the live node, I can see the backup getting started; however, my MDB doesn't fail over to the backup, instead displaying a message that the connection is closed.
Is it possible to share your configuration files for the live/backup and client nodes?
Thanks
Srinivaas
-
23. Re: Client to access cluster (possibly hitting backup server)
jhannah Nov 24, 2011 8:28 AM (in response to sv_srinivaas)Hi Srinivaas,
I never did get the failover working properly. At one point I was told that there appears to be a bug and to submit a JIRA ticket. If you are successful at getting the failover to work with MDBs, would you please send me your configuration? Unfortunately, I'm not too optimistic, however. Good luck.
jhannah
-
24. Re: Client to access cluster (possibly hitting backup server)
ataylor Nov 24, 2011 10:26 AM (in response to jhannah)"Unfortunately, I'm not too optimistic however"
Ye of little faith. I had failover working with MDBs a few days ago when doing some testing; all you need to do is make sure the Resource Adapter is configured correctly. Also, try to use the latest version.
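[Editor's note: the Resource Adapter configuration Andy refers to lives in the ra.xml inside hornetq-ra.rar on the server hosting the MDB. A hedged sketch of the relevant properties follows; the property names reflect my understanding of the HornetQ 2.2 RA and the host/port values are placeholders, so check them against the ra.xml actually shipped with your version:]

```xml
<config-property>
   <config-property-name>ConnectorClassName</config-property-name>
   <config-property-type>java.lang.String</config-property-type>
   <config-property-value>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</config-property-value>
</config-property>
<config-property>
   <config-property-name>ConnectionParameters</config-property-name>
   <config-property-type>java.lang.String</config-property-type>
   <config-property-value>host=192.168.22.83;port=5445</config-property-value>
</config-property>
<config-property>
   <!-- without HA enabled the RA will not fail the MDB's connections over -->
   <config-property-name>HA</config-property-name>
   <config-property-type>java.lang.Boolean</config-property-type>
   <config-property-value>true</config-property-value>
</config-property>
<config-property>
   <!-- -1 = keep retrying the failed-over server indefinitely -->
   <config-property-name>ReconnectAttempts</config-property-name>
   <config-property-type>java.lang.Integer</config-property-type>
   <config-property-value>-1</config-property-value>
</config-property>
```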
-
25. Re: Client to access cluster (possibly hitting backup server)
jhannah Nov 24, 2011 10:43 AM (in response to ataylor)Hi Andy,
I was using the latest version (at least at the time back in September). Would you be willing to share your configuration files for the MDB failover you had working? I may take another crack at it, given that I'm now aware that it's possible :-)
Thanks,
jhannah
-
26. Re: Client to access cluster (possibly hitting backup server)
ataylor Nov 24, 2011 10:53 AM (in response to jhannah)I don't think I still have them; if you attach your configs I will take a look and see if I can see anything wrong.
-
27. Re: Client to access cluster (possibly hitting backup server)
jhannah Nov 24, 2011 10:59 AM (in response to ataylor)Unfortunately the configs I had a few months back are long gone :-(
Thanks anyway.
jhannah
-
28. Re: Client to access cluster (possibly hitting backup server)
sv_srinivaas Nov 24, 2011 11:36 PM (in response to jhannah)Jhannah/Andy, thanks for your time. I've attached the configuration files for the live/backup and the MDB servers.
As I said earlier, I can see the backup server getting started when the live is shut down (using Ctrl-C), but the MDBs pointing to the live server don't fail over to the backup as expected. Your help is highly appreciated.
Note: For now I'm only consuming messages in the MDB (from requestQ) and NOT sending any response message back from the MDB, and hence I've not changed anything in jms-ds.xml.
Thanks
Srinivaas
-
configs.zip 10.5 KB
-
29. Re: Client to access cluster (possibly hitting backup server)
underscore_dot Nov 25, 2011 6:49 AM (in response to sv_srinivaas)I'm also trying to connect to my HA instance (see [1] and [2] for configs).
On the client side I'm instantiating a connection factory in the following way, both for producers and consumers (not using JNDI):
<spring:bean id="jmsConnectionFactory" class="org.hornetq.jms.client.HornetQXAConnectionFactory">
   <spring:constructor-arg value="true"/>
   <spring:constructor-arg>
      <spring:list>
         <spring:bean class="org.hornetq.api.core.TransportConfiguration">
            <spring:constructor-arg value="org.hornetq.core.remoting.impl.netty.NettyConnectorFactory"/>
            <spring:constructor-arg>
               <spring:map key-type="java.lang.String" value-type="java.lang.Object">
                  <spring:entry key="host" value="192.168.22.83"></spring:entry>
                  <spring:entry key="port" value="5445"></spring:entry>
               </spring:map>
            </spring:constructor-arg>
         </spring:bean>
         <spring:bean class="org.hornetq.api.core.TransportConfiguration">
            <spring:constructor-arg value="org.hornetq.core.remoting.impl.netty.NettyConnectorFactory"/>
            <spring:constructor-arg>
               <spring:map key-type="java.lang.String" value-type="java.lang.Object">
                  <spring:entry key="host" value="192.168.22.83"></spring:entry>
                  <spring:entry key="port" value="7445"></spring:entry>
               </spring:map>
            </spring:constructor-arg>
         </spring:bean>
      </spring:list>
   </spring:constructor-arg>
   <!-- period in milliseconds between subsequent reconnection attempts. The default value is 2000 milliseconds -->
   <spring:property name="retryInterval" value="1000"/>
   <!-- allows you to implement an exponential backoff between retry attempts -->
   <spring:property name="retryIntervalMultiplier" value="2.0"/>
   <!-- A value of -1 signifies an unlimited number of attempts. The default value is 0. -->
   <spring:property name="reconnectAttempts" value="-1"/>
   <!-- interesting for blocked receivers: if you're using JMS it's defined by the ClientFailureCheckPeriod attribute on a HornetQConnectionFactory instance -->
   <spring:property name="clientFailureCheckPeriod" value="1000"/>
   <!-- allow the client to load-balance when creating multiple sessions from one sessionFactory -->
   <spring:property name="connectionLoadBalancingPolicyClassName" value="org.hornetq.api.core.client.loadbalance.RandomConnectionLoadBalancingPolicy"/>
</spring:bean>
If the live node is up:
- then I can instantiate the connection factory and send and receive messages normally.
If I shut live node down:
- then the backup node seems to take over (it loads all topics and queues and then logs "backup announced")
- the client logs "Connection failure has been detected: The connection was disconnected because of server shutdown [code=4]"
- if I try to send a message, no exception is thrown, but the message doesn't seem to be sent.
If I start the primary node up again:
- the primary node tries to become live (loads all queues/topics), but throws the following:
[Old I/O server worker (parentId: 11171851, [id: 0x00aa780b, localhost/192.168.22.83:5445])] 06:36:59,250 WARNING [org.hornetq.core.protocol.core.ServerSessionPacketHandler] Sending unexpected exception to the client
java.lang.IllegalStateException: Cannot find binding 89d9d163-f3db-4cbd-a8f9-0a17f346aa87
- client throws the following while trying to create the session factory again:
Cannot connect to server(s). Tried with all available servers.. Type: class org.hornetq.api.core.HornetQException
Any ideas on what I'm doing wrong? Would using JNDI make a difference?
Many thanks in advance.