Perhaps it would be best to start with a base-line.
- What exact steps did you follow? Was it just that article you linked or something else/more?
- What specifically doesn't work? Do you get any error/warn messages?
- What is your particular use-case?
I can help you with server configuration, but for JMS code there are lots of references on the Internet, so I'll refrain from duplicating that information here.
I have attached several files.
First, the server.log that shows the Error.
Second, part of standalone-full.xml from the server from which we are calling the remote queue on another server.
Third, EjbTestFileConsumeBean.java, a bean that gets a message from switchyard and calls another ejb QueueHelperTest.java.
Fourth, QueueHelperTest.java, a bean that invokes another ejb, Queuehelper.java to get a connection and send a message.
Fifth, QueueHelper.java a bean that uses a singleton JmsFactoryLookup.java to get a factory that matches the server to which we want to send the message, then creates a connection and tries to send the message.
Sixth, JmsFactoryLookup, a singleton that gets a list of all the defined pooled-connections and what server IP they are for.
The basic idea is that in the standalone-full.xml we define remote connections for X number of remote queues. At run time a bean will specify the queue and the server it wants to send it to. We will build a list of all the possible connection factories and try to match a factory to the IP to which we want to send the message. If successful, we create a connection and try to send the message.
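To make the idea concrete, the per-server wiring in standalone-full.xml would look roughly like this (a sketch; the names, IP, and port here are placeholders, one such triple per remote server):

```xml
<!-- socket-binding-group: where the remote server lives -->
<outbound-socket-binding name="remote-hq-serverA-binding">
    <remote-destination host="10.10.10.10" port="5445"/>
</outbound-socket-binding>

<!-- messaging subsystem: connector pointing at that binding -->
<netty-connector name="remote-hq-serverA" socket-binding="remote-hq-serverA-binding"/>

<!-- pooled connection factory the beans look up by JNDI name -->
<pooled-connection-factory name="RemoteFactoryServerA">
    <connectors>
        <connector-ref connector-name="remote-hq-serverA"/>
    </connectors>
    <entries>
        <entry name="java:/RemoteFactoryServerA"/>
    </entries>
</pooled-connection-factory>
```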
I hope that this is enough info.
First off, you have two connectors (i.e. "remote-hornetq-nonmanaged" and "remote-hornetq-managed") which are functionally equivalent, which means one of them isn't necessary. Here's the salient XML:
<connector name="remote-hornetq-nonmanaged">
    <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
    <param key="host" value="10.10.10.10"/>
    <param key="port" value="5445"/>
</connector>

<netty-connector name="remote-hornetq-managed" socket-binding="remote-hornetq-binding"/>

<outbound-socket-binding name="remote-hornetq-binding">
    <remote-destination host="10.10.10.10" port="5445"/>
</outbound-socket-binding>
I recommend you keep the "remote-hornetq-managed" connector (since it uses the newer/better "netty-connector" configuration) and rename it to something like "remote-hornetq" so it's clear it can be used for any kind of connection factory.
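After dropping the duplicate and applying the rename, the surviving connector configuration would look something like this (a sketch; the IP and port are carried over from your snippet):

```xml
<!-- in the messaging subsystem -->
<netty-connector name="remote-hornetq" socket-binding="remote-hornetq-binding"/>

<!-- in the socket-binding-group -->
<outbound-socket-binding name="remote-hornetq-binding">
    <remote-destination host="10.10.10.10" port="5445"/>
</outbound-socket-binding>
```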
Second, you have two equivalent managed connection factories which again means one of them is not necessary. Here's the salient XML:
<pooled-connection-factory name="ConnectionFactoryNonManaged">
    <connectors>
        <connector-ref connector-name="remote-hornetq-nonmanaged"/>
    </connectors>
    <entries>
        <entry name="java:/ConnectionFactoryNonManaged"/>
    </entries>
    <client-failure-check-period>300000</client-failure-check-period>
    <connection-ttl>-1</connection-ttl>
    <reconnect-attempts>-1</reconnect-attempts>
</pooled-connection-factory>

<pooled-connection-factory name="ConnectionFactoryManaged">
    <connectors>
        <connector-ref connector-name="remote-hornetq-managed"/>
    </connectors>
    <entries>
        <entry name="java:/ConnectionFactoryManaged"/>
    </entries>
    <client-failure-check-period>300000</client-failure-check-period>
    <connection-ttl>-1</connection-ttl>
    <reconnect-attempts>-1</reconnect-attempts>
</pooled-connection-factory>
Despite its name, "ConnectionFactoryNonManaged" is in fact a "managed" connection factory. To be clear, any pooled-connection-factory is managed. I recommend you remove "ConnectionFactoryNonManaged" since its name is confusing.
Third, I would recommend against setting <connection-ttl>-1</connection-ttl> on any connection factory that is used to connect to a remote server. If the connection-ttl is -1 that means the remote server to which the connection factory connects will never time out the remote connection. The timeout exists to protect the server from stale/dead connections consuming resources since it is possible for a client to crash or simply "forget" to close its connection when it is done with it. In the pathological case, the server would run out of resources and be unable to service any new connections.
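For example, a finite TTL (the 60-second value below is just an illustration, not a recommendation for your workload) lets the remote server reap connections from a crashed or misbehaving client:

```xml
<pooled-connection-factory name="ConnectionFactoryManaged">
    <connectors>
        <connector-ref connector-name="remote-hornetq-managed"/>
    </connectors>
    <entries>
        <entry name="java:/ConnectionFactoryManaged"/>
    </entries>
    <!-- check period should be comfortably smaller than the TTL -->
    <client-failure-check-period>30000</client-failure-check-period>
    <connection-ttl>60000</connection-ttl>
    <reconnect-attempts>-1</reconnect-attempts>
</pooled-connection-factory>
```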
Lastly, the error you're seeing in the log (i.e. "Cannot connect to server(s). Tried with all available servers.") simply indicates the connection cannot be made. Aside from the issues I noted, the client server's configuration looks fine. To investigate further I'd need to see the configuration of the host receiving the connections (i.e. the server listening on 10.10.10.10). Can you attach that?
Thanks for your help, Justin. I have modified the sending server's standalone-full.xml like you suggested. This is what is in the standalone-full.xml on the listening server: the acceptor, the binding, and the remote queue.
Sorry, somehow this got chopped off. I'm only including the stuff I added for this test. Oh, and also: I tried the connector and acceptor with port 5446 and it made no difference.
To be clear, it's not necessary to add a new acceptor to the server to which you are connecting. You can simply connect to the default acceptor on port 5445.
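For reference, the default acceptor that ships in standalone-full.xml looks roughly like this (exact names can vary between versions):

```xml
<!-- in the messaging subsystem -->
<netty-acceptor name="netty" socket-binding="messaging"/>

<!-- in the socket-binding-group -->
<socket-binding name="messaging" port="5445"/>
```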
In any case, you indicated that you tried 5446 on the sending server, which means the listening server should have accepted that connection on 5446 since it has an acceptor defined for that port. The only things I can think of that might be a problem are that the IP address on the sending server is incorrect, that the listening server isn't bound to the proper address, or that you're using a port offset.
Could you attach a server log from the listening server which includes the server's start-up process?
In short, this looks environmental to me at this point.
Thanks for your help. Here is what I am seeing now. The client server sends the message. On the remote server, using the CLI, I can see that the queue's message count is incremented by 1, but if I try to list the messages in the queue the CLI returns nothing. Also, the MDB does not try to consume the message. There is no error until the client server does a Periodic Recovery, which causes a SECURITY_EXCEPTION on both servers. I have attached a file that shows the remote queue definition, the local queue connection factory, and the Java code.
setup.txt.zip 1.4 KB
According to the code in the attached setup.txt.zip you are setting the "_HQ_SCHED_DELIVERY" property on the message with a variable. What value is being used here in your test? The behavior you're observing (i.e. message-count increases but MDB doesn't consume and listing messages returns nothing) is consistent with a scheduled message on the queue whose scheduled delivery time simply hasn't arrived.
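To illustrate why this matters: "_HQ_SCHED_DELIVERY" holds an absolute timestamp in epoch milliseconds, and the broker compares it against its own clock, not the sender's. A small self-contained sketch (the numbers are hypothetical) of how clock skew between the two machines stretches the effective delay:

```java
public class SchedDeliveryDemo {
    // The "_HQ_SCHED_DELIVERY" property is an absolute time in epoch millis;
    // senders typically compute it as "now + delay" using their own clock.
    static long scheduledDeliveryTime(long senderNowMillis, long delayMillis) {
        return senderNowMillis + delayMillis;
    }

    public static void main(String[] args) {
        long senderNow = 1_700_000_000_000L;     // sender's clock (hypothetical)
        long brokerNow = senderNow - 3_600_000L; // broker's clock is 1 hour behind
        long sched = scheduledDeliveryTime(senderNow, 5_000L); // intended 5s delay

        // The broker holds the message until *its* clock reaches sched:
        long effectiveWaitMillis = sched - brokerNow;
        System.out.println(effectiveWaitMillis); // 3605000 ms = 1 hour + 5 seconds
    }
}
```

So even a modest delay value can look like a "stuck" message if the two machines' clocks disagree.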
As far as the security exception goes I'd need to see some logs to investigate more there.
Thank you so much for your help, it is working now. Your answer put me on the right track. The time was different on the remote server so although I was passing in a delay of 5 seconds it was waiting an hour and 5 seconds! I just didn't wait long enough to see the message delivered!