
    jboss clustering in wildfly8

    mohammad_89

      Hi guys,

       

                  I have created a two-instance cluster of my JBoss server. One instance is running at localhost:8330/mywarfile and the other at localhost:8430/mywarfile. I am accessing them through the Apache web server with the configuration below, using a balancer to forward Apache requests to the AJP port. However, requests are not being forwarded and I am getting a SERVER UNAVAILABLE 503 error.

       

      Also, please let me know how to set the jvmRoute in WildFly 8 for this configuration.

       

      Please help me as soon as possible.

       

      Apache-side configuration (httpd.conf):

      # Required Modules
      LoadModule proxy_module modules/mod_proxy.so
      LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
      LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
      LoadModule status_module modules/mod_status.so

       

      # Reverse Proxy
      <Proxy balancer://mybalancer>
        BalancerMember ajp://localhost:8259/mywarfile

        BalancerMember ajp://localhost:8359/mywarfile
      </Proxy>
      ProxyPass / balancer://mybalancer/ stickysession=JSESSIONID|jsessionid

      # Forward Proxy
      ProxyRequests Off

      <Proxy *>
        Order deny,allow
        Deny from none
        Allow from localhost
      </Proxy>

      # Balancer-manager, for monitoring
      <Location /balancer-manager>
        SetHandler balancer-manager

        Order deny,allow
        Deny from none
        Allow from localhost
      </Location>

       


        • 1. Re: jboss clustering in wildfly8
          pferraro

          The worker identifier is specified via the "instance-id" attribute in the undertow subsystem.
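
          For example, on a standalone server you could set it from the CLI, roughly like this (a sketch; node1 here just mirrors the route value used on the Apache side, and a reload may be needed for the change to take effect):

              [standalone@localhost:9990 /] /subsystem=undertow:write-attribute(name=instance-id, value=node1)
              [standalone@localhost:9990 /] reload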

           

          That said, you don't need to hard-code your workers if you use mod_cluster instead of mod_proxy_balancer.  See mod_cluster - JBoss Community
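
          A minimal httpd-side sketch in case it helps orient you (module names and paths differ between mod_cluster releases, so treat this as an outline rather than a drop-in config):

              LoadModule proxy_module modules/mod_proxy.so
              LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
              LoadModule slotmem_module modules/mod_slotmem.so
              LoadModule manager_module modules/mod_manager.so
              LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
              LoadModule advertise_module modules/mod_advertise.so

              Listen 6666
              <VirtualHost *:6666>
                  # accept MCMP (management protocol) registrations from the WildFly nodes
                  EnableMCPMReceive
                  <Directory />
                      Order deny,allow
                      Deny from all
                      Allow from 127.0.0.1
                  </Directory>
              </VirtualHost>

          On the WildFly side the standalone-ha.xml profile already contains a modcluster subsystem, so the workers register themselves and no BalancerMember list is needed.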

          • 2. Re: jboss clustering in wildfly8
            mohammad_89

            Hello Paul Ferraro,

             

            Thanks for your Reply.

             

            We have clustered WildFly with two instances and configured httpd.conf as shown below.

             

             

            <VirtualHost *:80>

             

            # Reverse Proxy

            <Proxy balancer://mybalancer>

                BalancerMember ajp://localhost:8259/ route=node1

                BalancerMember  ajp://localhost:8359/ route=node2

            </Proxy>

            ProxyPass / balancer://mybalancer/ stickysession=JSESSIONID|jsessionid


            # Forward Proxy

            ProxyRequests Off


            <Proxy *>

                Order deny,allow

                Deny from none

                Allow from all

            </Proxy>


             

            <Location /balancer-manager>

             

                SetHandler balancer-manager

                Order deny,allow

                Deny from none

                Allow from localhost

            </Location>

            </VirtualHost>

             

            I have deployed the WAR file from this project: https://github.com/liweinan/cluster-demo

             


            In web.xml we've enabled session replication by adding the following entry: <distributable/>
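
            For reference, a minimal sketch of how that looks; the element sits directly under the web-app root:

                <?xml version="1.0" encoding="UTF-8"?>
                <web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="3.1">
                    <!-- marks the application as distributable so the container replicates HTTP sessions -->
                    <distributable/>
                </web-app>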


            It works fine when both servers are running, and it also works when we stop one instance. But how do we know which JBoss server is responding to an Apache request when both servers are running? We have only deployed the WAR file, and both servers are part of "Other-server-groups".


            When we stop both servers and then start only one server, requests on an existing session always get a Server Not Found error, whereas a request for a new web page succeeds.

             

            Can you please help us?

             

            Thanks,

            Mohammad

            • 3. Re: jboss clustering in wildfly8
              pferraro

              It works fine when both servers are running, and it also works when we stop one instance. But how do we know which JBoss server is responding to an Apache request when both servers are running? We have only deployed the WAR file, and both servers are part of "Other-server-groups".

              You can look at the JSESSIONID cookie returned with the response.  The node's instance-id will be appended to the session id.

              When we stop both servers and then start only one server, requests on an existing session always get a Server Not Found error, whereas a request for a new web page succeeds.

              You might want to set retry=0 in your ProxyPass directive.  Otherwise, the newly started server remains in an error state for 1 minute after mod_proxy detects that it is down.

              Also, check which node the existing session originated on.  Unless you've set nofailover=On, it should fail over to the active node.
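
              For example, a sketch based on your earlier configuration (note that retry is a per-worker parameter, so it goes on the BalancerMember lines, and nofailover is simply left at its default of Off):

                  <Proxy balancer://mybalancer>
                      BalancerMember ajp://localhost:8259/ route=node1 retry=0
                      BalancerMember ajp://localhost:8359/ route=node2 retry=0
                  </Proxy>
                  ProxyPass / balancer://mybalancer/ stickysession=JSESSIONID|jsessionid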

              • 4. Re: jboss clustering in wildfly8
                mohammad_89

                Hello Paul Ferraro,

                 

                           

                Following your suggestion about JSESSIONID, we can now see which server is responding to an Apache request. We got the output below: the same session id, but responses from two different servers.

                 

                COOKIE   --- JSESSIONID=e9hnPaOB2S9ouevVDeHLKSlU.master:server-four

                 

                COOKIE   --- JSESSIONID=e9hnPaOB2S9ouevVDeHLKSlU.master:server-three

                 

                Even if we just refresh the browser, we get a response from the other instance with the same JSESSIONID.

                 

                Also, here is our current balancer configuration:

                 

                 


                 

                 

                <Proxy balancer://mybalancer>

                 

                    BalancerMember ajp://localhost:8259/ route=node1

                    BalancerMember  ajp://localhost:8359/ route=node2

                    #ProxyPass /icare2.0 ajp://127.0.0.1:8259/icare2.0/

                    #ProxyPassReverse /icare2.0 ajp://127.0.0.1:8259/icare2.0/

                    ProxySet lbmethod=byrequests

                </Proxy>

                ProxyPass / balancer://mybalancer/ stickysession=JSESSIONID|jsessionid nofailover=On

                 

                When we stop both servers and then start only one server, requests on an existing session always get a Server Not Found error, whereas a request for a new web page succeeds.

                 

                You asked me to set retry=0; we set that, and we also set nofailover=On, but we are still facing the same problem.

                 

                 

                Can you please help us?

                 

                 

                Thanks and Regards,

                Mohammad

                • 5. Re: jboss clustering in wildfly8
                  rhusar

                  Paul was saying that you should leave it at Off, which is the default; if you set it to On, sessions will break instead of failing over.

                   

                  Excerpt from the docs:

                  If set to On the session will break if the worker is in error state or disabled. Set this value to On if backend servers do not support session replication.

                  • 6. Re: Re: jboss clustering in wildfly8
                    rhusar

                    The route values in your worker configuration and the instance-id that appears in the JSESSIONID still do not correspond. The mod_proxy module uses this information for session stickiness: it extracts the jvmRoute from the JSESSIONID (the part after the dot) and then tries to match it to a worker's route.
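
                    Concretely, as an illustration using the values from your last post: with instance-ids master:server-three and master:server-four, the workers would need matching routes (each pointing at whatever AJP port that particular server actually listens on), e.g.

                        BalancerMember ajp://localhost:8259/ route=master:server-three
                        BalancerMember ajp://localhost:8359/ route=master:server-four

                    Or, more simply, set the instance-id on each node to the node1/node2 values you already use as routes.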

                     

                    Let me paste my test config in case that would be helpful:

                     

                    LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
                    Listen 9090
                    <VirtualHost *:9090>
                        <Proxy balancer://wildflybalancer>
                            BalancerMember ajp://localhost:8009/ route=rhusar1 retry=5
                            BalancerMember ajp://localhost:8109/ route=rhusar2 retry=5
                            ProxySet lbmethod=byrequests
                            ProxySet failonstatus=404
                        </Proxy>
                        ProxyPass /clusterbench balancer://wildflybalancer/ stickysession=JSESSIONID|jsessionid
                        # Forward Proxy
                        ProxyRequests Off
                        <Location /balancer-manager>
                            SetHandler balancer-manager
                            Order deny,allow
                            Deny from none
                            # CHANGE ME
                            Allow from none
                        </Location>
                    </VirtualHost>
                    

                     

                    Then start 2 WildFly instances like:

                     

                    ./bin/standalone.sh -c standalone-ha.xml -Djboss.node.name=rhusar1
                    

                    and the second one with a port offset:

                     

                    ./bin/standalone.sh -c standalone-ha.xml -Djboss.socket.binding.port-offset=100 -Djboss.node.name=rhusar2
                    

                     

                    To see how this maps to the instance-id in the Undertow subsystem, I can use the CLI:

                     

                    [standalone@localhost:9990 /] /subsystem=undertow/:read-attribute(name=instance-id)
                    {
                        "outcome" => "success",
                        "result" => expression "${jboss.node.name}"
                    }
                    

                     

                    (You need to adapt the steps for domain as these are for standalone.)
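
                    In domain mode the same attribute lives under the profile; roughly, assuming the ha profile:

                        [domain@localhost:9990 /] /profile=ha/subsystem=undertow:read-attribute(name=instance-id)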

                     

                    Note that the configuration is still crappy:

                     

                    Lowering the timeout will allow the node to be used quickly after startup, but since the connector is started prior to web application deployment, the first request(s) will result in a 404. The other modules handle this better:

                    • mod_jk can retry on status, so a 404 could be retried
                    • mod_cluster prevents this problem completely, since the worker is only added to the pool once the application is deployed
                    • 7. Re: jboss clustering in wildfly8
                      mohammad_89

                      Hello Radoslav Husar,

                       

                      I have started two instances using standalone-ha.xml with the commands below:

                      ./bin/standalone.bat -c standalone-ha.xml -Djboss.node.name=node1

                      and another instance started the same way but with a port offset:

                      ./bin/standalone.bat -c standalone-ha.xml -Djboss.socket.binding.port-offset=100 -Djboss.node.name=node2

                      I have configured my front-end server (Apache 2.4) with the configuration below:

                      LoadModule proxy_balancer_module modules/mod_proxy_balancer.so

                      # Reverse Proxy
                      <Proxy balancer://mybalancer>
                          BalancerMember ajp://localhost:8009/ route=node1
                          BalancerMember ajp://localhost:8109/ route=node2
                          ProxySet lbmethod=byrequests
                          ProxySet failonstatus=404
                      </Proxy>

                      ProxyPass / balancer://mybalancer/ stickysession=JSESSIONID|jsessionid

                      ProxyRequests Off
                      <Proxy *>
                          Order deny,allow
                          Deny from none
                          Allow from localhost
                      </Proxy>

                      # Balancer-manager, for monitoring
                      <Location /balancer-manager>
                          SetHandler balancer-manager
                          Order deny,allow
                          Deny from none
                          Allow from none
                      </Location>

                      It works fine when we connect through Apache, but if I stop node1 and restart it,
                      I get the warning below:

                      WARN [org.jgroups.protocols.TP$ProtocolAdapter] (INT-1,shared=udp) JGRP000031: node2/web: dropping unicast message to wrong destination 109fb003-b554-43f3-463f-183b82e95308

                      After that, if I access the WAR file through Apache, I get an error like:

                      ERROR [org.apache.jasper] (default task-1) JBWEB005015: The JSP container needs a valid work directory [E:\jboss_3\standalone\tmp\cluster-demo.war]
                      2014-06-24 12:31:10,318 ERROR [io.undertow.request] (default task-1) UT005023: Exception handling request to //cluster-demo/index.jsp: org.apache.jasper.JasperException: java.lang.ClassNotFoundException: org.apache.jsp.index_jsp

                      Please help me to overcome this issue!

                      Thanks and Regards,

                      Mohammad

                      • 8. Re: jboss clustering in wildfly8
                        rhusar

                        WARN [org.jgroups.protocols.TP$ProtocolAdapter] (INT-1,shared=udp) JGRP000031: node2/web: dropping unicast message to wrong destination 109fb003-b554-43f3-463f-183b82e95308

                        This is a known issue; see [WFLY-2632] JGroups drops unicast messages after shutdown/restart - JBoss Issue Tracker.

                        • 9. Re: jboss clustering in wildfly8
                          mohammad_89

                          Hello Radoslav Husar,

                          Thanks for your time, Radoslav. Please let me know if you find a solution for this issue.

                           

                          Thanks

                          Mohammad