MDB outbound config in cluster
sv_srinivaas Dec 13, 2011 11:39 PM

Hi,
I'm using HornetQ 2.2.5.Final on Windows XP, and the app server is JBoss 5.1.0 GA.
I have a question regarding the RemoteJmsXA configuration. Can I give a comma-separated list of host and port pairs in the outbound connection properties in the -ds.xml, as shown below, so that my MDBs can send the reply message to any available JMS node in the cluster? I don't see any example that specifies a list of host/port pairs in the outbound config of the JCA adapter, so I wanted to ask whether that is something we should not do.
<tx-connection-factory>
  <jndi-name>RemoteJmsXA</jndi-name>
  ........
  <config-property name="ConnectorClassName" type="java.lang.String">org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</config-property>
  <config-property name="ConnectionParameters" type="java.lang.String">host=jms1;port=5446, host=jms2;port=5446</config-property>
  ....
</tx-connection-factory>
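In case an explicit list is simply not supported here, I'm also wondering whether UDP discovery is the intended way to make the outbound side cluster-aware. Something like the following is my guess at the syntax (the property names and the group address/port are assumptions on my part, not taken from a working setup):

```xml
<tx-connection-factory>
  <jndi-name>RemoteJmsXA</jndi-name>
  <!-- Assumed alternative: let the RA discover cluster nodes via a
       discovery group instead of a fixed host/port. Property names
       and values below are my guess, untested. -->
  <config-property name="DiscoveryAddress" type="java.lang.String">231.7.7.7</config-property>
  <config-property name="DiscoveryPort" type="java.lang.String">9876</config-property>
</tx-connection-factory>
```

If that is the supported route, I'd still prefer an explicit host list if one is possible, since our network restricts multicast.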
Cluster config info: I've configured two live JMS nodes in a cluster, plus a 3rd standalone node (not part of the cluster) where the MDB is deployed to consume messages from the JMS cluster.
On starting the JMS cluster followed by the MDB, I see a consumer count of 7 on JMS node1 and 8 on JMS node2 (the default MDB pool size is 15). Is this the expected behavior?
Then I sent 2 messages from a client (Java application); they were load balanced, one message landing on each JMS node. Since the MDBs were configured to connect to both nodes in the cluster, both messages were processed simultaneously. So far everything looks fine.
Now, when the MDB tries to send the reply message (using RemoteJmsXA), there is no issue as long as both JMS nodes are alive. But if the JMS node specified in the outbound config goes down (after the message is read but before the reply is sent), the MDB fails to send the reply. That's why I would like to know whether I can specify a list of JMS nodes in the outbound config as well.
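To make the intent concrete, what I'm effectively after is the equivalent of the failover loop below, but handled by the resource adapter itself rather than in my MDB code. This is a plain-Java sketch with stubbed-out senders, not real HornetQ/JMS API; the class and method names are invented for illustration:

```java
import java.util.List;
import java.util.function.Predicate;

// Sketch of the behavior I'd like the RA to provide for outbound sends:
// try each configured node in order until one accepts the reply.
// trySend is a stand-in for "open a connection and send", not real JMS.
public class ReplyFailover {

    // Returns the node that accepted the reply, or null if every node failed.
    public static String sendWithFailover(List<String> nodes, Predicate<String> trySend) {
        for (String node : nodes) {
            if (trySend.test(node)) {
                return node;
            }
        }
        return null;
    }
}
```

For example, with nodes ["jms1:5446", "jms2:5446"] and jms1 down, the reply would go to jms2 instead of failing outright. Doing this by hand inside the MDB would mean managing connections outside the RA's XA handling, which is what I want to avoid.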
Thanks!