There is a remote JCA example in the distro; take a look at that. However, you need to have something like:
<config-property name="SessionDefaultType" type="java.lang.String">javax.jms.Topic</config-property>
<!-- if we override the connector class, we must override the params too -->
<config-property name="ConnectorClassName" type="java.lang.String">org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</config-property>
<config-property name="ConnectionParameters" type="java.lang.String">host=127.0.0.1;port=5446</config-property>
As Andy has pointed out, it is certainly possible to do what you want.
The reason it doesn't work for the configuration you pasted is that the JCA resource adapter used by JBoss Messaging is completely different from the one used by (and shipped with) HornetQ. The JMSProviderLoader is a construct used for both inflow and outflow by the generic JMS JCA RA (i.e. the one used by JBoss Messaging). It does not apply to the HornetQ JCA RA.
Thanks for the response! I have looked at that example, but I cannot figure out how I should configure the host address, as we have two nodes in the target cluster. The above configuration probably accepts only one host address, am I right? If one of the nodes is down, connections should automatically be made to the other node. In that example, the NettyConnectorFactory is configured in both jms-ds.xml and ra.xml; do I need both configurations?
It takes a comma-separated list of possible connectors, or you can just use discovery. This is all in the manual, by the way!
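A sketch of what the comma-separated form might look like in the config-properties, assuming two target nodes at the hypothetical addresses 10.0.0.1 and 10.0.0.2:

```xml
<!-- Sketch only: hosts/ports are placeholders. One connector factory class per node,
     comma-separated, with the matching connection parameters listed in the same order
     (semicolons separate parameters within one connector, commas separate connectors). -->
<config-property name="ConnectorClassName" type="java.lang.String">org.hornetq.core.remoting.impl.netty.NettyConnectorFactory,org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</config-property>
<config-property name="ConnectionParameters" type="java.lang.String">host=10.0.0.1;port=5445,host=10.0.0.2;port=5445</config-property>
```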
Also, the ra.xml applies only to inflow (e.g. MDBs consuming messages). The jms-ds.xml applies to outflow (i.e. some in-VM application sending messages). Since you only need to send messages, you don't need to change your ra.xml.
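For outflow, a minimal jms-ds.xml sketch might look like the following (the JNDI name is taken from your setup; the rar-name and pool size are illustrative and must match your deployment):

```xml
<!-- Sketch of an outflow connection factory in jms-ds.xml. The rar-name must match
     the HornetQ RA archive actually deployed in your server; max-pool-size is an example. -->
<tx-connection-factory>
   <jndi-name>FrontJmsXA</jndi-name>
   <xa-transaction/>
   <rar-name>jms-ra.rar</rar-name>
   <connection-definition>org.hornetq.ra.HornetQRAConnectionFactory</connection-definition>
   <config-property name="SessionDefaultType" type="java.lang.String">javax.jms.Topic</config-property>
   <config-property name="ConnectorClassName" type="java.lang.String">org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</config-property>
   <config-property name="ConnectionParameters" type="java.lang.String">host=127.0.0.1;port=5445</config-property>
   <max-pool-size>20</max-pool-size>
</tx-connection-factory>
```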
Thank you both! I will try this configuration next week; hopefully I'll get it working.
Unfortunately, I still have no luck with this. I made the configuration Andy suggested on the cluster B node (of course I replaced the IP address with the corresponding address of the cluster A node). Now when cluster B tries to get the connection factory, it gets the following exception:
[org.jboss.resource.connectionmanager.JBossManagedConnectionPool] (WorkerThread#0[172.19.2.11:39054]) Throwable while attempting to get a new connection: null
javax.resource.ResourceException: Error during setup
Caused by: javax.resource.ResourceException: Failed to create session factory
... 158 more
Caused by: javax.jms.JMSException: Failed to create session factory
... 159 more
Caused by: HornetQException[errorCode=2 message=Cannot connect to server(s). Tried with all available servers.]
... 162 more
2012-02-06 12:13:14,530 ERROR [org.hornetq.ra.HornetQRASessionFactoryImpl] (WorkerThread#0[172.19.2.11:39054]) Could not create session
javax.resource.ResourceException: Unable to get managed connection for FrontJmsXA
Do I also need some configuration on the HornetQ side in cluster A, or what is the problem?
The above error was my own mistake: I used the port number from the example, but the correct port was 5445. However, I still cannot get messages sent to both cluster A nodes. I have configured the connection parameters of both nodes, but still only one of the nodes receives messages. No errors are displayed in the logs.
Connections will only be made to one node, so messages will only ever get sent to one node. What you need is for your two servers to be clustered; then (if both queues have consumers) messages will be redistributed between both nodes.
Yes, I have clustering enabled. However, I have disabled persistence; is it needed for redistribution to work?
No. If your two nodes are clustered, both have the same queue, and both queues have consumers, then messages will be distributed.
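For reference, a sketch of what the relevant pieces of hornetq-configuration.xml might look like on each cluster A node (the names "my-cluster", "netty" and "dg-group1" are illustrative, and exact element names vary slightly between HornetQ versions):

```xml
<!-- Sketch: cluster connection between the nodes. -->
<cluster-connections>
   <cluster-connection name="my-cluster">
      <address>jms</address>
      <connector-ref>netty</connector-ref>
      <use-duplicate-detection>true</use-duplicate-detection>
      <forward-when-no-consumers>false</forward-when-no-consumers>
      <max-hops>1</max-hops>
      <discovery-group-ref discovery-group-name="dg-group1"/>
   </cluster-connection>
</cluster-connections>

<!-- Redistribution is disabled by default (redistribution-delay of -1);
     set 0 (or a small delay) for the addresses that should redistribute. -->
<address-settings>
   <address-setting match="jms.#">
      <redistribution-delay>0</redistribution-delay>
   </address-setting>
</address-settings>
```

Note the redistribution-delay: with the default value of -1, messages stay on the node they arrived at even when the other node has consumers.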
Then it is quite weird. It also seems that the remote HornetQ connection doesn't recover if I shut down one remote node (I have configured the IPs of both remote nodes in the connection params in jms-ds.xml). I also tried to configure the connection using discovery, but then I didn't get a connection at all.
You won't get recovery unless you have a backup node. Discovery probably doesn't work because your network doesn't support multicast, or you don't have a loopback address.
If messages aren't being distributed, it's because of one of the following:
1) your nodes aren't clustered — I'm guessing it's this, because discovery isn't working either;
2) your queues don't have consumers.
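If you want to rule discovery in or out, the server side needs matching broadcast and discovery groups in hornetq-configuration.xml. A sketch, with example multicast address/port that only work if your network permits multicast:

```xml
<!-- Sketch: group names, multicast address and port are placeholders. -->
<broadcast-groups>
   <broadcast-group name="bg-group1">
      <group-address>231.7.7.7</group-address>
      <group-port>9876</group-port>
      <broadcast-period>5000</broadcast-period>
      <connector-ref>netty</connector-ref>
   </broadcast-group>
</broadcast-groups>

<discovery-groups>
   <discovery-group name="dg-group1">
      <group-address>231.7.7.7</group-address>
      <group-port>9876</group-port>
      <refresh-timeout>10000</refresh-timeout>
   </discovery-group>
</discovery-groups>
```

On the client (RA) side, discovery is then configured with the DiscoveryAddress and DiscoveryPort config-properties instead of ConnectorClassName/ConnectionParameters.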
Okay, so if cluster B has made a connection to one of the nodes of cluster A and that node crashes, the connection will never be retried against the other node of cluster A without a backup node configuration? If I understood correctly, a backup node configuration requires a shared journal. In our setup this is not an option.