5 Replies Latest reply on Jun 4, 2014 3:25 PM by brian.stansberry

    public interface - how to allow ip address + localhost

    dhileman

      Hello,

       

      I have a 2 node Wildfly 8 cluster setup. One master and one slave.  I have configured the interfaces on each node like this:

       

      master:

      <interfaces>
           <interface name="management">
                <inet-address value="${jboss.bind.address.management:192.168.1.212}"/>
           </interface>
           <interface name="public">
                <inet-address value="${jboss.bind.address:192.168.1.212}"/>
           </interface>
           <interface name="unsecured">
                <inet-address value="${jboss.bind.address.unsecured:192.168.1.212}"/>
           </interface>
      </interfaces>
      

       

      slave:

      <interfaces>
           <interface name="management">
                <inet-address value="${jboss.bind.address.management:192.168.1.213}"/>
           </interface>
           <interface name="public">
                <inet-address value="${jboss.bind.address:192.168.1.213}"/>
           </interface>
           <interface name="unsecured">
                <inet-address value="${jboss.bind.address.unsecured:192.168.1.213}"/>
           </interface>
      </interfaces>
      

      This is almost working the way I want. From other PCs on the network, I can access webapps on either node by their IP address (.212 or .213). The problem is that I also want to be able to access those webapps from localhost while logged in to the machines.

       

      For example:

      This works from computers on the network:

      192.168.1.212:8080/myapp

      This does not work from the hosting computer:

      localhost:8080/myapp

       

      How can I configure the interfaces to allow both the machine's IP address and localhost?

       

      I tried using <any-address/>, but this causes exceptions to be thrown on the slave (UnresolvedAddressException).

       

      Any ideas?

       

      The reason for wanting to do this is to allow one web app to talk to another (web service) using localhost.  I don't want to hardcode an IP because the same apps get deployed to both cluster nodes.

        • 1. Re: public interface - how to allow ip address + localhost
          kuldeep11

          Change the IP to 0.0.0.0. But binding to 0.0.0.0 is not a safe way to go.
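
          A minimal sketch of what that looks like in the interface configuration (effectively the same as the <any-address/> the original post already tried), shown only to illustrate what binding to 0.0.0.0 means here, not as a recommendation:

          <interfaces>
               <interface name="public">
                    <!-- wildcard address: listens on every network interface, including loopback -->
                    <inet-address value="${jboss.bind.address:0.0.0.0}"/>
                    <!-- or, equivalently: <any-address/> -->
               </interface>
          </interfaces>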

          • 2. Re: public interface - how to allow ip address + localhost
            brian.stansberry

            Please provide more details (e.g. log snippets with stack traces) on the "exceptions to be thrown on the slave (UnresolvedAddressException)."

            • 3. Re: public interface - how to allow ip address + localhost
              dhileman

              If I use <any-address/> or 0.0.0.0 in the master node's configuration instead of the machine's IP address, the slave node starts throwing this error non-stop:

               

              2014-06-04 07:42:26,570 ERROR [org.hornetq.core.client] (Thread-17 (HornetQ-server-HornetQServerImpl::serverUUID=6c92c4fe-a081-11e3-bd25-99c51f44e159-683421743)) HQ214016: Failed to create netty connection: java.nio.channels.UnresolvedAddressException

                  at sun.nio.ch.Net.checkAddress(Net.java:127) [rt.jar:1.7.0_45]

                  at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:640) [rt.jar:1.7.0_45]

                  at io.netty.channel.socket.nio.NioSocketChannel.doConnect(NioSocketChannel.java:176) [netty-all-4.0.15.Final.jar:4.0.15.Final]

                  at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.connect(AbstractNioChannel.java:169) [netty-all-4.0.15.Final.jar:4.0.15.Final]

                  at io.netty.channel.DefaultChannelPipeline$HeadHandler.connect(DefaultChannelPipeline.java:1008) [netty-all-4.0.15.Final.jar:4.0.15.Final]

                  at io.netty.channel.DefaultChannelHandlerContext.invokeConnect(DefaultChannelHandlerContext.java:495) [netty-all-4.0.15.Final.jar:4.0.15.Final]

                  at io.netty.channel.DefaultChannelHandlerContext.connect(DefaultChannelHandlerContext.java:480) [netty-all-4.0.15.Final.jar:4.0.15.Final]

                  at io.netty.channel.ChannelOutboundHandlerAdapter.connect(ChannelOutboundHandlerAdapter.java:47) [netty-all-4.0.15.Final.jar:4.0.15.Final]

                  at io.netty.channel.CombinedChannelDuplexHandler.connect(CombinedChannelDuplexHandler.java:168) [netty-all-4.0.15.Final.jar:4.0.15.Final]

                  at io.netty.channel.DefaultChannelHandlerContext.invokeConnect(DefaultChannelHandlerContext.java:495) [netty-all-4.0.15.Final.jar:4.0.15.Final]

                  at io.netty.channel.DefaultChannelHandlerContext.connect(DefaultChannelHandlerContext.java:480) [netty-all-4.0.15.Final.jar:4.0.15.Final]

                  at io.netty.channel.ChannelDuplexHandler.connect(ChannelDuplexHandler.java:50) [netty-all-4.0.15.Final.jar:4.0.15.Final]

                  at io.netty.channel.DefaultChannelHandlerContext.invokeConnect(DefaultChannelHandlerContext.java:495) [netty-all-4.0.15.Final.jar:4.0.15.Final]

                  at io.netty.channel.DefaultChannelHandlerContext.connect(DefaultChannelHandlerContext.java:480) [netty-all-4.0.15.Final.jar:4.0.15.Final]

                  at io.netty.channel.DefaultChannelHandlerContext.connect(DefaultChannelHandlerContext.java:465) [netty-all-4.0.15.Final.jar:4.0.15.Final]

                  at io.netty.channel.DefaultChannelPipeline.connect(DefaultChannelPipeline.java:847) [netty-all-4.0.15.Final.jar:4.0.15.Final]

                  at io.netty.channel.AbstractChannel.connect(AbstractChannel.java:199) [netty-all-4.0.15.Final.jar:4.0.15.Final]

                  at io.netty.bootstrap.Bootstrap$2.run(Bootstrap.java:165) [netty-all-4.0.15.Final.jar:4.0.15.Final]

                  at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:354) [netty-all-4.0.15.Final.jar:4.0.15.Final]

                  at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:353) [netty-all-4.0.15.Final.jar:4.0.15.Final]

                  at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:101) [netty-all-4.0.15.Final.jar:4.0.15.Final]

                  at java.lang.Thread.run(Thread.java:744) [rt.jar:1.7.0_45]

              • 4. Re: public interface - how to allow ip address + localhost
                brian.stansberry

                What sort of messaging clustering are you doing? The problem is very likely coming from the two http-connector resources in the messaging subsystem. If you don't have remote messaging clients, you should be able to remove those, along with the "<connection-factory name="RemoteConnectionFactory">" that references them, leaving just the in-vm connector for use by internal clients.
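
                As a rough sketch, based on the default WildFly 8 messaging subsystem configuration (your profile may name things differently, and anything else that references the http connectors, such as a cluster-connection or broadcast-group, would need adjusting too), what would be left is along these lines:

                <subsystem xmlns="urn:jboss:domain:messaging:2.0">
                     <hornetq-server>
                          <!-- acceptors, security settings, address settings etc. unchanged -->
                          <connectors>
                               <!-- the two http-connector entries removed; only the in-vm connector remains -->
                               <in-vm-connector name="in-vm" server-id="0"/>
                          </connectors>
                          <jms-connection-factories>
                               <!-- RemoteConnectionFactory (which referenced http-connector) removed -->
                               <connection-factory name="InVmConnectionFactory">
                                    <connectors>
                                         <connector-ref connector-name="in-vm"/>
                                    </connectors>
                                    <entries>
                                         <entry name="java:/ConnectionFactory"/>
                                    </entries>
                               </connection-factory>
                               <pooled-connection-factory name="hornetq-ra">
                                    <!-- already uses the in-vm connector by default -->
                                    <transaction mode="xa"/>
                                    <connectors>
                                         <connector-ref connector-name="in-vm"/>
                                    </connectors>
                                    <entries>
                                         <entry name="java:/JmsXA"/>
                                    </entries>
                               </pooled-connection-factory>
                          </jms-connection-factories>
                     </hornetq-server>
                </subsystem>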

                 

                If you need remote messaging, then you'll need to add a new interface assigned to 192.168.1.xxx, and then add an outbound-socket-binding. Then reference that outbound-socket-binding's name in the http-connector element's socket-binding attribute (a rough sketch is below).
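
                Purely as a sketch, with a made-up binding name (messaging-http-out), the address chosen per node (.212 on the master, .213 on the slave), and taking the shortcut of putting the address directly in a remote-destination rather than going through a dedicated interface:

                <!-- socket-binding-group: an outbound binding that resolves to this node's LAN address and HTTP port -->
                <outbound-socket-binding name="messaging-http-out">
                     <remote-destination host="192.168.1.212" port="8080"/>
                </outbound-socket-binding>

                <!-- messaging subsystem: the http-connector now advertises that address to remote clients -->
                <http-connector name="http-connector" socket-binding="messaging-http-out">
                     <param key="http-upgrade-endpoint" value="http-acceptor"/>
                </http-connector>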

                • 5. Re: public interface - how to allow ip address + localhost
                  brian.stansberry

                  To elaborate a bit further on my last response: what the connector elements in the messaging subsystem are for is providing information to messaging clients, telling them how to connect to the server remotely. The socket-binding attribute is used to set the address/port to which the client will be told to send traffic. A 0.0.0.0 address won't work for that; an actual IP address or a resolvable hostname is needed.

                  What address to use depends on what the messaging client needs. If the client is running in the same VM as the server, it doesn't need a remote connection at all; it can use the in-vm connector by looking up the connection factory at java:/ConnectionFactory. If the client is expected to be running on the same machine, then the address can be 127.0.0.1. Otherwise it needs to be the 192.168 address.
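
                  In terms of the hypothetical messaging-http-out binding sketched above, the same-machine case would simply advertise the loopback address instead (assuming the server is actually listening on it):

                  <!-- clients on the same machine would be told to connect to 127.0.0.1:8080 -->
                  <outbound-socket-binding name="messaging-http-out">
                       <remote-destination host="127.0.0.1" port="8080"/>
                  </outbound-socket-binding>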