
    mod_jk error

      I know this is more of a Tomcat/mod_jk problem than a JBoss one, but I thought I'd give it a shot in this forum:

      When running an automated load test against our app, we get the following errors in our jk.log file:

      [jk_connect.c (143)]: jk_open_socket, connect() failed errno = 146
      [jk_ajp12_worker.c (152)]: In jk_endpoint_t::service, Error sd = -1
      [jk_connect.c (143)]: jk_open_socket, connect() failed errno = 146
      [jk_ajp12_worker.c (152)]: In jk_endpoint_t::service, Error sd = -1
      [jk_connect.c (143)]: jk_open_socket, connect() failed errno = 146
      [jk_ajp12_worker.c (152)]: In jk_endpoint_t::service, Error sd = -1
      [jk_connect.c (143)]: jk_open_socket, connect() failed errno = 146
      [jk_ajp12_worker.c (152)]: In jk_endpoint_t::service, Error sd = -1
      [jk_connect.c (143)]: jk_open_socket, connect() failed errno = 146
      [jk_ajp12_worker.c (152)]: In jk_endpoint_t::service, Error sd = -1
      [jk_connect.c (143)]: jk_open_socket, connect() failed errno = 146

      Does anyone know what these errors mean? I can't find anything conclusive about them when searching various lists and forums.
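      One thing I did notice is that errno values are platform-specific, and on Solaris I believe 146 is ECONNREFUSED, i.e. the connect() from mod_jk to the ajp12 port is being refused. A quick sanity check I ran on the box to confirm the mapping (just a sketch, compile with "cc errno146.c -o errno146"):

      /* errno146.c - print what errno 146 means on this platform. */
      #include <stdio.h>
      #include <string.h>

      int main(void)
      {
          /* strerror() maps a raw errno value to its text description;
           * on our Solaris 7 boxes I'd expect "Connection refused". */
          printf("errno 146 = %s\n", strerror(146));
          return 0;
      }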

      We are running Apache 1.3.12 on a Solaris 7 Ultra 10, using mod_jk/ajp12 to talk to JBoss-2.4.7_Tomcat-3.2.3 on a Solaris 7 E250 with JDK 1.3.1_02.

      The symptoms we see are that the number of httpd processes maxes out on the web server box, even though its CPU is almost completely idle. The odd thing is that the app server is mostly idle too, and a thread dump on the java process (JBoss and Tomcat in the same VM) shows lots of threads waiting for work to do. Once we stop the load test, things stay messed up until I restart Apache, after which we can access the app again; note that I don't have to touch the app server at all. URLs that aren't configured to go through mod_jk have no problem, until the maximum number of httpd child processes is reached, of course...
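      In case it helps, here is roughly what I've been running from the web server box to check whether the ajp12 port on the app server still accepts connections while the load test is running. This is only a sketch: the IP address is a placeholder, and 8007 is what I think is the default ajp12 port, so adjust both for your setup. It just does the same plain TCP connect() that jk_open_socket() does. Link with "-lsocket -lnsl" on Solaris.

      /* ajpprobe.c - try a TCP connect() to the ajp12 port, the same call
       * that jk_open_socket() is making when it fails with errno 146. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      #include <errno.h>
      #include <unistd.h>
      #include <sys/socket.h>
      #include <netinet/in.h>
      #include <arpa/inet.h>

      int main(int argc, char *argv[])
      {
          const char *host = (argc > 1) ? argv[1] : "192.168.1.10"; /* placeholder app server IP */
          int port = (argc > 2) ? atoi(argv[2]) : 8007;             /* default ajp12 port */
          struct sockaddr_in addr;
          int sd;

          sd = socket(AF_INET, SOCK_STREAM, 0);
          if (sd < 0) {
              perror("socket");
              return 1;
          }

          memset(&addr, 0, sizeof(addr));
          addr.sin_family = AF_INET;
          addr.sin_port = htons(port);
          addr.sin_addr.s_addr = inet_addr(host);

          if (connect(sd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
              /* This is the same failure mode the jk.log lines show. */
              printf("connect() to %s:%d failed, errno = %d (%s)\n",
                     host, port, errno, strerror(errno));
              close(sd);
              return 1;
          }

          printf("connect() to %s:%d succeeded\n", host, port);
          close(sd);
          return 0;
      }

      When things are wedged, running something like this during the test would at least tell us whether the refusal is coming from the app server side or from something in between.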

      Thanks all,
      David