-
1. Re: Haproxy / Mod_JK
rhusar Mar 3, 2012 3:43 PM (in response to wolfedale)
Hi Pawel,
I'm trying to figure this out. So the problem is that the node is pingable by mod_jk, so it directs requests to that node even though the application is not yet deployed?
If the clients are getting 404 -- the connector is up but the context is not yet -- then you can work around this using the mod_jk property called
fail_on_status and add 404 as a status code on which to retry on the other node.
From docs:
Set this value to the HTTP status code that will cause a worker to fail if returned from the Servlet container. Use this directive to deal with cases when the servlet container can temporarily return non-200 responses for a short amount of time, e.g. during redeployment.
The error page, headers and status codes of the original response will not be sent back to the client. Instead the request will result in a 503 response. If the worker is a member of a load balancer, the member will be put into an error state. Request failover and worker recovery will be handled with the usual load balancer procedures.
This feature has been added in jk 1.2.20.
Starting with jk 1.2.22 it is possible to define multiple status codes separated by space or comma characters. For example: worker.xxx.fail_on_status=500,503
Starting with jk 1.2.25 you can also tell the load balancer not to put a member into an error state if a response was returned with one of the status codes in fail_on_status. This feature is enabled by putting a minus sign in front of those status codes. For example: worker.xxx.fail_on_status=-404,-500,503
http://tomcat.apache.org/connectors-doc/reference/workers.html
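For your case, a minimal workers.properties sketch could look like the following (the worker names node1/node2, hosts and ports are placeholders, not taken from your setup; -404 requires jk >= 1.2.25):

```
# workers.properties -- hypothetical two-node load balancer
worker.list=loadbalancer

worker.node1.type=ajp13
worker.node1.host=node1.example.com
worker.node1.port=8009
# On a 404 (context not yet deployed), fail the request over to the
# other node, but the minus sign keeps node1 out of the error state
worker.node1.fail_on_status=-404

worker.node2.type=ajp13
worker.node2.host=node2.example.com
worker.node2.port=8009
worker.node2.fail_on_status=-404

worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
```

With -404, the request is retried on the other member while the redeploying node stays in rotation; plain 404 would instead put the member into an error state until recovery.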
HTH,
Rado
-
2. Re: Haproxy / Mod_JK
rhusar Mar 3, 2012 3:44 PM (in response to rhusar)PS: You might also want to migrate to mod_cluster, which actually deals with these situations cleanly -- the /context is not registered until it is completely deployed, so this problem does not happen.
Check out http://www.jboss.org/mod_cluster/