6 Replies Latest reply on Jul 7, 2016 1:52 PM by hobbsjd

    WildFly 10 in OpenShift - remote JMS client gets confused after initial JNDI lookup

    hobbsjd

      I have two pods within the same OpenShift Enterprise project.  One pod is an application hosted in WildFly 10; the other is a Camel application.  In the Camel application, I establish an initial context and look up the connection factory like so:

       

       

        LOG.log(Level.INFO, "Initiating Artemis connection to " + host + " port " + port + " as " + user);

       

        String url = "http-remoting://" + host + ":" + port;

       

        Properties props = new Properties();

        props.put(Context.INITIAL_CONTEXT_FACTORY, "org.jboss.naming.remote.client.InitialContextFactory");

        props.put(Context.PROVIDER_URL, url);

        props.put(Context.SECURITY_PRINCIPAL, user);

        props.put(Context.SECURITY_CREDENTIALS, pass);

       

        context = new InitialContext(props);

        ActiveMQConnectionFactory cf = (ActiveMQConnectionFactory) context.lookup("jms/RemoteConnectionFactory");

       

       

      The initial context lookup succeeds and I have a connection factory.  When I attempt to use it, I get a connection failure.  Stepping through the above code, I notice that the CF returned by the lookup has a different host name than the one I used in the initial lookup.  For example:

       

      I asked for "portfolio.sdateam.svc.cluster.local" when setting up the initial context.  The CF has a property set to "portfolio-60-ra5al:8080", which is the name of the pod running WildFly.  DEBUG output from the Camel pod:

      2016-07-06 16:48:55.173 INFO    [Routes] Initiating Artemis connection to portfolio.sdateam.svc.cluster.local port 8080 as camel

      ...

      2016-07-06 16:49:00.269 DEBUG   [org.apache.activemq.artemis.core.client] Started Netty Connector version 4.0.30.Final

      2016-07-06 16:49:00.343 DEBUG   [org.apache.activemq.artemis.core.client] Remote destination: portfolio-60-ra5al:8080

      2016-07-06 16:49:00.359 FINE    [io.netty.util.internal.ThreadLocalRandom] -Dio.netty.initialSeedUniquifier: 0x9772249a235cbd3d (took 0 ms)

      2016-07-06 16:49:00.464 FINE    [io.netty.buffer.ByteBufUtil] -Dio.netty.allocator.type: unpooled

      2016-07-06 16:49:00.464 FINE    [io.netty.buffer.ByteBufUtil] -Dio.netty.threadLocalDirectBufferSize: 65536

      2016-07-06 16:49:00.574 ERROR   [org.apache.activemq.artemis.core.client] AMQ214016: Failed to create netty connection

      java.nio.channels.UnresolvedAddressException

       

      It appears that during the lookup, something is using the hostname of the WildFly server instead of the hostname I asked for in the initial context.

       

      Is there a way to control what value is returned for the host name when this lookup is performed?

       

      Relevant portion of standalone-full.xml

      <subsystem xmlns="urn:jboss:domain:messaging-activemq:1.0">

                  <server name="default">

                      <security-setting name="#">

                          <role name="guest" delete-non-durable-queue="true" create-non-durable-queue="true" consume="true" send="true"/>

                      </security-setting>

                      <address-setting name="#" message-counter-history-day-limit="10" page-size-bytes="2097152" max-size-bytes="10485760" expiry-address="jms.queue.ExpiryQueue" dead-letter-address="jms.queue.DLQ"/>

                      <http-connector name="http-connector" endpoint="http-acceptor" socket-binding="http"/>

                      <http-connector name="http-connector-throughput" endpoint="http-acceptor-throughput" socket-binding="http">

                          <param name="batch-delay" value="50"/>

                      </http-connector>

                      <in-vm-connector name="in-vm" server-id="0"/>

                      <http-acceptor name="http-acceptor" http-listener="default"/>

                      <http-acceptor name="http-acceptor-throughput" http-listener="default">

                          <param name="batch-delay" value="50"/>

                          <param name="direct-deliver" value="false"/>

                      </http-acceptor>

                      <in-vm-acceptor name="in-vm" server-id="0"/>

                      <jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>

                      <jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>

                      <connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/>

                      <connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector"/>

                      <pooled-connection-factory name="activemq-ra" transaction="xa" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="in-vm"/>

                  </server>

              </subsystem>

        • 1. Re: WildFly 10 in OpenShift - remote JMS client gets confused after initial JNDI lookup
          jbertram

          The host and port used for a JNDI connection factory lookup really have no bearing on what host and port the lookup returns.  The host and port returned for the connection factory are controlled by the connector used by the connection factory.  In your case, the connector used by "jms/RemoteConnectionFactory" is "http-connector", e.g.:

           

          <connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector"/>
          
          

           

          The "http-connector" is configured like so:

           

          <http-connector name="http-connector" endpoint="http-acceptor" socket-binding="http"/>
          
          

           

          The "http-connector" uses the "http" socket-binding which you haven't included so it's unclear how that is configured.  In any event, the client who does the lookup will ultimately get the host and port of the "http" socket-binding.  One important point here is that if the "http" socket-binding is using "0.0.0.0" then the connector has to choose a concrete network interface (essentially at random) to return rather than "0.0.0.0" since that address would be meaningless to a remote client.  If the connector is, in fact, configured with 0.0.0.0 then it should log an INFO message like this:

           

          Invalid "host" value "0.0.0.0" detected for "http-connector" connector. Switching to <newHost>. If this new address is incorrect please manually configure the connector to use the proper one.
          
          
          • 2. Re: WildFly 10 in OpenShift - remote JMS client gets confused after initial JNDI lookup
            hobbsjd

            You are correct.  I do see the 'Switching to ...' message at WildFly startup.

             

            Where can I set that, in the interfaces or socket-binding section?  Currently these sections look like the following.  WildFly is launched with -b 0.0.0.0, so jboss.bind.address gets set to that.

             

            <interfaces>

                <interface name="management">

                    <inet-address value="${jboss.bind.address.management:127.0.0.1}"/>

                </interface>

                <interface name="public">

                    <inet-address value="${jboss.bind.address:127.0.0.1}"/>

                </interface>

                <interface name="unsecure">

                    <inet-address value="${jboss.bind.address.unsecure:127.0.0.1}"/>

                </interface>

            </interfaces>

             

             

            <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">

                <socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>

                <socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9993}"/>

                <socket-binding name="ajp" port="${jboss.ajp.port:8009}"/>

                <socket-binding name="http" port="${jboss.http.port:8080}"/>

                <socket-binding name="proxy-https" port="443"/>

                <socket-binding name="https" port="${jboss.https.port:8443}"/>

                <socket-binding name="iiop" interface="unsecure" port="3528"/>

                <socket-binding name="iiop-ssl" interface="unsecure" port="3529"/>

                <socket-binding name="txn-recovery-environment" port="4712"/>

                <socket-binding name="txn-status-manager" port="4713"/>

                <outbound-socket-binding name="mail-smtp">

                    <remote-destination host="localhost" port="25"/>

                </outbound-socket-binding>

            </socket-binding-group>

             

             

            I'll try setting things in the interfaces section first...

            • 3. Re: WildFly 10 in OpenShift - remote JMS client gets confused after initial JNDI lookup
              jbertram

              The simplest solution would be to start WildFly with a concrete interface rather than 0.0.0.0.  Aside from that, you can give the "http" socket-binding a specific interface, e.g.:

               

              <socket-binding name="http" interface="http-interface" port="${jboss.http.port:8080}" />
              

               

              Of course, you'll need to define said interface in the <interfaces> element.
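
              For example, a sketch of what that interface definition could look like (the name "http-interface" just matches the placeholder above, and the subnet value is only an illustration; substitute a concrete address or whatever subnet actually belongs to your pod's network):

              <interface name="http-interface">

                  <subnet-match value="10.0.0.0/8"/>

              </interface>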

              • 4. Re: WildFly 10 in OpenShift - remote JMS client gets confused after initial JNDI lookup
                hobbsjd

                Testing precisely that now... Thanks for the help and I'll let everyone know how it goes.

                • 5. Re: WildFly 10 in OpenShift - remote JMS client gets confused after initial JNDI lookup
                  jbertram

                  FWIW, this issue has been covered quite a few times in various forum threads.  This thread is probably the most concise (although it's a little old now, it still applies).

                  • 6. Re: WildFly 10 in OpenShift - remote JMS client gets confused after initial JNDI lookup
                    hobbsjd

                    It almost worked.  The lookup is successful, and returns a proper IP address as the endpoint to connect to.  I had to break out of the http-upgrade stuff for it, though.  However, I think I'm going to drop back to having Artemis run on its own instead of under WildFly.  I want the broker address to be more stable: if the WildFly pod is redeployed, the addresses change out from under the clients, and I want the app to be able to turn over without restarting the integrations.  Here are the config bits that almost worked...

                     

                    <subsystem xmlns="urn:jboss:domain:messaging-activemq:1.0">

                        <server name="default">

                            <security-setting name="#">

                                <role name="guest" delete-non-durable-queue="true" create-non-durable-queue="true" consume="true" send="true"/>

                            </security-setting>

                            <address-setting name="#" message-counter-history-day-limit="10" page-size-bytes="2097152" max-size-bytes="10485760" expiry-address="jms.queue.ExpiryQueue" dead-letter-address="jms.queue.DLQ"/>

                            <remote-connector socket-binding="msg" name="netty"/>

                            <http-connector name="http-connector" endpoint="http-acceptor" socket-binding="http"/>

                            <http-connector name="http-connector-throughput" endpoint="http-acceptor-throughput" socket-binding="http">

                                <param name="batch-delay" value="50"/>

                            </http-connector>

                            <in-vm-connector name="in-vm" server-id="0"/>

                            <remote-acceptor name="netty" socket-binding="msg"/>

                            <http-acceptor name="http-acceptor" http-listener="default"/>

                            <http-acceptor name="http-acceptor-throughput" http-listener="default">

                                <param name="batch-delay" value="50"/>

                                <param name="direct-deliver" value="false"/>

                            </http-acceptor>

                            <in-vm-acceptor name="in-vm" server-id="0"/>

                            <jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>

                            <jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>

                            <connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/>

                            <connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="netty"/>

                            <pooled-connection-factory name="activemq-ra" transaction="xa" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="in-vm"/>

                        </server>

                    </subsystem>

                     

                    <interfaces>

                        <interface name="management">

                            <inet-address value="${jboss.bind.address.management:127.0.0.1}"/>

                        </interface>

                        <interface name="public">

                            <inet-address value="${jboss.bind.address:127.0.0.1}"/>

                        </interface>

                        <interface name="unsecure">

                            <inet-address value="${jboss.bind.address.unsecure:127.0.0.1}"/>

                        </interface>

                        <interface name="messaging">

                            <subnet-match value="10.0.0.0/8"/>

                        </interface>

                    </interfaces>

                     

                     

                    <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">

                        <socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>

                        <socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9993}"/>

                        <socket-binding name="ajp" port="${jboss.ajp.port:8009}"/>

                        <socket-binding name="http" port="${jboss.http.port:8080}"/>

                        <socket-binding name="proxy-https" port="443"/>

                        <socket-binding name="https" port="${jboss.https.port:8443}"/>

                        <socket-binding name="msg" interface="messaging" port="61616">

                            <client-mapping destination-address="portfolio-jms.sdateam.svc.cluster.local"/>

                        </socket-binding>

                        <socket-binding name="iiop" interface="unsecure" port="3528"/>

                        <socket-binding name="iiop-ssl" interface="unsecure" port="3529"/>

                        <socket-binding name="txn-recovery-environment" port="4712"/>

                        <socket-binding name="txn-status-manager" port="4713"/>

                        <outbound-socket-binding name="mail-smtp">

                            <remote-destination host="localhost" port="25"/>

                        </outbound-socket-binding>

                    </socket-binding-group>

                     

                    Thanks for the pointers in the right direction.  Learned a lot today!