9 Replies Latest reply on Jun 19, 2009 5:10 PM by brian.stansberry

    JNP lookup from multi-homed client

    frankthetank

      Hello all,

      In a 'typical' setup, the JNP lookup for the partition name works fine.

      But on one of my (WinXP) PCs, that has two network cards, it will not do the lookup.

      This works (output from a test jar):
      eth0 is on the same network as the server.
      eth1 is on another network.

      This does not:
      eth0 is on another network.
      eth1 is on the same network as the server.

      The interface order in the OS is correct (the network the server is on comes first).

      Is it possible to tell the lookup which interface(s) to use?

      thanks

        • 1. Re: JNP lookup from multi-homed client
          frankthetank

          Correction:
          It does not work in either setup.
          It will *only* work if there is a single network card.

          • 2. Re: JNP lookup from multi-homed client
            frankthetank

            *bump*

            So what are the restrictions on clustering?
            Will this even work or not?

            thanks

            • 3. Re: JNP lookup from multi-homed client
              brian.stansberry

              The jnp.localAddress property in your jndi.properties will control what interface the client side naming context binds its socket on. See discussion of that property on

              http://www.jboss.org/community/wiki/NamingContextFactory

              If that doesn't resolve your issue, please include all your naming environment properties in your reply so I can see what you are doing.
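              For example, a client-side jndi.properties along these lines (the addresses and port are placeholders; jnp.localAddress is what pins the client socket to one NIC):

```properties
java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces
# Server to contact (placeholder address/port)
java.naming.provider.url=jnp://192.168.0.1:1099
# Local interface the client-side socket binds to (placeholder)
jnp.localAddress=192.168.0.2
```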

              • 4. Re: JNP lookup from multi-homed client
                frankthetank

                Thanks for the reply.
                Yeah, I had tried that with my main NIC, but it had not worked.
                Now I seem to know why.

                Aside from the fact that it seems we have to configure cluster-service.xml for multi-homed servers (which will really be a killer for me), I noticed the issue below:

                Ok this is my setup:
                two servers, both with two NICs @ 150.*.*.* (eth0) and 192.*.*.* (eth1)
                Java 4.2.3 binding with 0.0.0.0

                The cluster has a (nearly) unmodified setup, e.g.:

                 <mbean code="org.jboss.ha.framework.server.ClusterPartition"
                 name="jboss:service=${jboss.partition.name:DefaultPartition}">
                
                 <!-- Name of the partition being built -->
                <!-- <attribute name="PartitionName">${jboss.bind.address}_${jboss.partition.name:DefaultPartition}</attribute>-->
                 <attribute name="PartitionName">${jboss.partition.name:DefaultPartition}</attribute>
                
                 <!-- The address used to determine the node name -->
                 <!--attribute name="NodeAddress">${jboss.bind.address}</attribute-->
                 <attribute name="NodeAddress">${jboss.bind.address}</attribute>
                
                 <!-- Determine if deadlock detection is enabled -->
                 <attribute name="DeadlockDetection">False</attribute>
                
                 <!-- Max time (in ms) to wait for state transfer to complete. Increase for large states -->
                 <attribute name="StateTransferTimeout">30000</attribute>
                
                 <!-- The JGroups protocol configuration -->
                 <attribute name="PartitionConfig">
                 <!--
                 The default UDP stack:
                 - If you have a multihomed machine, set the UDP protocol's bind_addr attribute to the
                 appropriate NIC IP address, e.g bind_addr="192.168.0.2".
                 - On Windows machines, because of the media sense feature being broken with multicast
                 (even after disabling media sense) set the UDP protocol's loopback attribute to true
                 -->
                 <Config>
                 <UDP mcast_addr="${jboss.partition.udpGroup:228.1.2.3}"
                 mcast_port="${jboss.hapartition.mcast_port:45566}"
                


                Now the web server binds to localhost & the 150.*.*.* address.
                Though using the IP directly is quicker.

                I have rewritten my lookup mechanism; it now scans all public NICs and tries to connect over each of them, using the partition name (NamingContext.JNP_PARTITION_NAME) and the NIC's address (NamingContext.JNP_LOCAL_ADDRESS).
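                A minimal sketch of such a scan, using only the JDK. The jnp.* key strings are written out literally here (in JBoss client code they would come from the NamingContext constants), and the key values are an assumption to keep the example self-contained:

```java
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Properties;

public class MultiNicLookup {

    // Assumed literal values of NamingContext.JNP_PARTITION_NAME / JNP_LOCAL_ADDRESS.
    static final String JNP_PARTITION_NAME = "jnp.partitionName";
    static final String JNP_LOCAL_ADDRESS  = "jnp.localAddress";

    /** Build one candidate JNDI environment per usable local address. */
    static List<Properties> candidateEnvs(String partition) throws SocketException {
        List<Properties> envs = new ArrayList<Properties>();
        for (NetworkInterface nic : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            if (!nic.isUp() || nic.isLoopback()) continue;        // skip down/loopback NICs
            for (InetAddress addr : Collections.list(nic.getInetAddresses())) {
                Properties env = new Properties();
                env.put("java.naming.factory.initial",
                        "org.jnp.interfaces.NamingContextFactory");
                env.put(JNP_PARTITION_NAME, partition);           // discover via the partition
                env.put(JNP_LOCAL_ADDRESS, addr.getHostAddress());// bind client socket here
                envs.add(env);
            }
        }
        return envs;
    }

    public static void main(String[] args) throws SocketException {
        for (Properties env : candidateEnvs("DefaultPartition")) {
            System.out.println("trying local address " + env.get(JNP_LOCAL_ADDRESS));
            // new InitialContext(env).lookup("..."); // try each until one succeeds
        }
    }
}
```

                The idea is simply to try each environment with new InitialContext(env) in turn until a lookup succeeds.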

                Currently the only working addresses are the localhost and 192.*.*.*

                So services are binding to different interfaces.
                Now if I disable the 192.*.*.* NIC on the server, I can easily connect via the 150.*.*.* NIC.

                Just a heads-up since I will have to tackle the multi-homed limitation anyway.

                • 5. Re: JNP lookup from multi-homed client
                  brian.stansberry

                  What AS version are you using?

                  • 6. Re: JNP lookup from multi-homed client
                    frankthetank

                    Sorry, had a typo in there.. it is JBoss 4.2.3 setup with /all config

                    • 7. Re: JNP lookup from multi-homed client
                      frankthetank

                      Update:
                      Correction:
                      The lookup against the separate server was done over the 192.*.*.* address.

                      I have also tried running JBoss on my local machine (a laptop with one built-in NIC and one attached via a USB adapter).

                      In this case the connection is made via the 150.*.*.* interface.
                      Nothing was changed between these two systems.
                      I was unable to find any reason why this is done.
                      The network configs (I print out the info I get via NetworkInterface) are nearly the same (except for MACs and addresses, naturally).

                      Is there any way to see what addresses are being used?
                      I have looked through the jmx-console and visually scanned all the info from the pages with links containing the partition name.
                      All contain

                      • 8. Re: JNP lookup from multi-homed client
                        brian.stansberry

                        By the "multi-home" limitation do you mean configuring the bind address in the JGroups UDP protocol? That's pretty straightforward.

                        You can control the interface JGroups uses by setting the system property -Djgroups.bind_addr=192.*.*.* as a command line argument.
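                        For example, when starting the server (the address is a placeholder; run.bat on Windows, run.sh elsewhere):

```
run.bat -c all -b 0.0.0.0 -Djgroups.bind_addr=192.168.0.2
```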

                        If you don't specifically set jgroups.bind_addr and you use -b, JBoss will set jgroups.bind_addr to the -b value. Except... if you use -b 0.0.0.0, a value JGroups can't use. In that case, JBoss will set jgroups.bind_addr to the value of InetAddress.getLocalHost().getHostName().

                        None of the above is directly relevant to HA-JNDI. I'm checking whether your HA-JNDI problem results from your use of -b 0.0.0.0, though.

                        • 9. Re: JNP lookup from multi-homed client
                          brian.stansberry

                          JBoss does the same basic thing with the java.rmi.server.hostname system property:

                          1) Do nothing if already set (e.g. via -Djava.rmi.server.hostname)
                          2) else if the -b value isn't 0.0.0.0, set java.rmi.server.hostname to the -b value
                          3) else set java.rmi.server.hostname to InetAddress.getLocalHost().getHostName()
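                          Those three steps amount to this bit of logic (a paraphrase for illustration, not the actual JBoss source):

```java
public class RmiHostnameFallback {

    /** Paraphrase of the fallback: an explicit setting wins, then a usable -b
     *  value, then the local hostname (InetAddress.getLocalHost().getHostName()). */
    static String rmiServerHostname(String explicit, String bindAddress, String localHostName) {
        if (explicit != null) {
            return explicit;                 // 1) -Djava.rmi.server.hostname was given
        }
        if (bindAddress != null && !"0.0.0.0".equals(bindAddress)) {
            return bindAddress;              // 2) -b value is usable as-is
        }
        return localHostName;                // 3) fall back to the local hostname
    }

    public static void main(String[] args) {
        System.out.println(rmiServerHostname(null, "0.0.0.0", "myhost")); // prints "myhost"
    }
}
```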

                          RMI won't work properly if the RMI server isn't listening on the interface specified via java.rmi.server.hostname. For more on this, see https://jira.jboss.org/jira/browse/JBAS-4732 and the forum thread linked from there.

                          In JBoss 4.x, HA-JNDI doesn't expose a config property to let you set its RMIBindAddress to something other than the -b value. But does setting -Djava.rmi.server.hostname=XXXX let you control the interface that works?