
    Problem configuring Wildfly9 reverse proxy with backend nodes on separate hosts (unicast vs multicast)

    tc7

      I've followed Stuart's excellent interview and posts on this topic (and written a summary of my own here).

      I have a nice reverse proxy implementation working, with failover and switching to static HTML pages when no nodes are available. I can run as many nodes as I like, with sticky sessions and SSL.

      The one remaining issue is getting backend nodes that run on a separate host to register with the load balancer.

       

      I suspect multicast UDP traffic is not being propagated between hosts. I have been down this path but so far have been unable to get even a simple multicast send/listen test to work (RHEL 7).

      Failing that, I was wondering whether it is possible to configure the WildFly mod_cluster subsystem so that the backend nodes use the proxies attribute (a static proxy list) instead of relying on multicast advertise. This worked fine for me in the past with Apache httpd mod_cluster and remote nodes.
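      For reference, this is roughly what I have in mind on the backend side. It is only a sketch, since I am not sure the static proxy list works against the WildFly balancer; the binding name lb-mcmp and the balancer host/port are placeholders I have made up:

      <!-- Sketch: backend node registering via a static proxy list instead of multicast advertise -->
      <subsystem xmlns="urn:jboss:domain:modcluster:2.0">
          <mod-cluster-config advertise="false" proxies="lb-mcmp" connector="ajp">
              <dynamic-load-provider>
                  <load-metric type="cpu"/>
              </dynamic-load-provider>
          </mod-cluster-config>
      </subsystem>
      ...
      <!-- In the backend's socket-binding-group: points at whatever host/port the balancer
           accepts registrations on (both values below are invented placeholders) -->
      <outbound-socket-binding name="lb-mcmp">
          <remote-destination host="balancer.example.com" port="8080"/>
      </outbound-socket-binding>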

      It would also be useful to be able to add backend nodes that are not on the immediate subnet without having to propagate multicast traffic through routers etc.

      However, I cannot see how to configure the WildFly load balancer to listen for node registrations on a plain TCP port (e.g. on port 10000 as Apache httpd used to, or any other port for that matter).

       

      <subsystem xmlns="urn:jboss:domain:undertow:2.0">
          <buffer-cache name="default"/>
          <server name="default-server">
              <http-listener name="default" socket-binding="http" redirect-socket="https"/>
              <https-listener name="https" socket-binding="https" security-realm="ssl-realm"/>
              <host name="default-host" alias="localhost">
                  <location name="/" handler="welcome-content"/>
                  <filter-ref name="server-header"/>
                  <filter-ref name="x-powered-by-header"/>
                  <filter-ref name="modcluster"/>
                  <filter-ref name="unavailable-handler" predicate="regex['/MyApp/*']"/>
                  <filter-ref name="404-handler" predicate="not regex['/MyApp/*']"/>
              </host>
          </server>
          <servlet-container name="default" use-listener-encoding="true" default-encoding="UTF-8">
              <jsp-config/>
              <websockets/>
          </servlet-container>
          <handlers>
              <file name="welcome-content" path="${jboss.home.dir}/welcome-content"/>
          </handlers>
          <filters>
              <response-header name="server-header" header-name="Server" header-value="WildFly/9"/>
              <response-header name="x-powered-by-header" header-name="X-Powered-By" header-value="Undertow/1"/>
              <error-page name="unavailable-handler" code="404" path="${jboss.home.dir}/welcome-content/unavailable/SystemUnavailable.html"/>
              <error-page name="404-handler" code="404" path="${jboss.home.dir}/welcome-content/PageNotFound.html"/>
              <mod-cluster name="modcluster" management-socket-binding="http" advertise-socket-binding="modcluster"/>
          </filters>
      ...

      <socket-binding-group name="standard-sockets" default-interface="public">
          ...
          <socket-binding name="modcluster" port="23364" multicast-address="224.0.1.105"/>
          ...


      There seems to be no way to configure the mod-cluster filter to bind to a plain TCP port (the advertise-socket-binding attribute is mandatory and expects a multicast address).
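      If the static proxy list is viable, my (possibly wrong) understanding is that the nodes would simply register against whatever the filter's management-socket-binding points at — in my config above that is the normal http binding on 8080. Below is a sketch of what a dedicated registration port (similar to Apache's 10000) might look like on the balancer; the listener name, binding name and port 10001 are placeholders I have invented, and the advertise-socket-binding is left in place only because the schema requires it:

      <!-- Sketch: a listener on the balancer used only for MCMP registrations from the nodes -->
      <http-listener name="mcmp" socket-binding="mcmp-management"/>
      ...
      <mod-cluster name="modcluster" management-socket-binding="mcmp-management" advertise-socket-binding="modcluster"/>
      ...
      <!-- In the balancer's socket-binding-group: the plain TCP port that the backend
           nodes' proxies/outbound-socket-binding would point at -->
      <socket-binding name="mcmp-management" port="10001"/>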

       

      And backend node config:

      <subsystem xmlns="urn:jboss:domain:modcluster:2.0">
          <mod-cluster-config advertise="true" connector="ajp">
              <dynamic-load-provider>
                  <load-metric type="cpu"/>
              </dynamic-load-provider>
          </mod-cluster-config>
      </subsystem>
      ...

      <socket-binding-group name="full-ha-sockets" default-interface="public">
          ...
          <socket-binding name="ajp" port="${jboss.ajp.port:8009}"/>
          <socket-binding name="http" port="${jboss.http.port:8080}"/>
          <socket-binding name="https" port="${jboss.https.port:8443}"/>
          <socket-binding name="iiop" interface="unsecure" port="3528"/>
          <socket-binding name="iiop-ssl" interface="unsecure" port="3529"/>
          <socket-binding name="jgroups-mping" port="0" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45700"/>
          <socket-binding name="jgroups-tcp" port="7600"/>
          <socket-binding name="jgroups-tcp-fd" port="57600"/>
          <socket-binding name="jgroups-udp" port="55200" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45688"/>
          <socket-binding name="jgroups-udp-fd" port="54200"/>
          <socket-binding name="jgroups-udp-hq" port="55300" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45788"/>
          <socket-binding name="jgroups-udp-hq-fd" port="54300"/>
          <socket-binding name="modcluster" port="23364" multicast-address="224.0.1.105"/>
          ...

       

      The domain master starts up successfully with the load balancer, and when backend nodes are started I see:

      load-balancer] INFO  [Registering node wf-serverXYZ, connection: ajp://1.2.3.4:8009/?#

      However, there is no accompanying message from the load balancer indicating that the context has been registered, e.g.:

      load-balancer] INFO  [io.undertow] (default task-6) UT005045: Registering context /MyApp, for node

      (this message is not issued when a backend node on another host connects)
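      To get more detail on the registration exchange I am planning to raise the logging level on both sides. A sketch of the logger entries I intend to add inside the logging subsystem — as far as I can tell the balancer's mod_cluster filter logs under io.undertow and the node's subsystem under org.jboss.modcluster:

      <!-- On the load balancer -->
      <logger category="io.undertow">
          <level name="DEBUG"/>
      </logger>
      <!-- On the backend nodes -->
      <logger category="org.jboss.modcluster">
          <level name="DEBUG"/>
      </logger>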

       

      If anyone has any comments or suggestions on how a unicast WildFly load balancer configuration can be achieved using mod_cluster, I would be grateful.