You can get JK1 to load balance properly if you set 'local_worker=0' for each server in your config, as shown above. For some reason (which I have never understood), when any workers are marked as local, it always sends all new sessions to the first local worker listed in the properties file.
I actually use this idiosyncrasy to direct all new sessions to a single node when I want to take servers out of the pool: I mark one node as local and restart Apache.
Just for the record, this has nothing to do with JBoss or Jetty.
Please try changing this:
worker.list=srv1, srv2, loadbalancer
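For reference, here is a minimal workers.properties along those lines. The host names and ports are placeholders for your own setup; note local_worker=0 on both workers so neither is preferred:

    worker.list=srv1, srv2, loadbalancer

    worker.srv1.type=ajp13
    worker.srv1.host=app1.example.com
    worker.srv1.port=8009
    worker.srv1.lbfactor=1
    worker.srv1.local_worker=0

    worker.srv2.type=ajp13
    worker.srv2.host=app2.example.com
    worker.srv2.port=8009
    worker.srv2.lbfactor=1
    worker.srv2.local_worker=0

    worker.loadbalancer.type=lb
    worker.loadbalancer.balanced_workers=srv1, srv2

Then point your JkMount directives at "loadbalancer" rather than at the individual workers.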
I've made both of the suggested changes and my sessions are now created properly on each server. I agree that the local worker handling is buggy, since it assumes that only a single worker will ever be local.
However, I'm now having an issue with sticky sessions: a single user's session gets bounced back and forth between my two appservers. The session itself is being properly replicated (using Jetty's JGStore), so my application still functions, but losing stickiness hurts performance because of the cache misses. Again, I'm using the JK module with Apache 1.3.27 and Jetty. Has anyone gotten sticky sessions to work with the JK module and Jetty? Is there any additional configuration I'm missing?
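In case it helps: with mod_jk, stickiness depends on two things. First, the lb worker must have stickiness enabled:

    worker.loadbalancer.sticky_session=1

Second, mod_jk routes by the suffix on the session ID, so each Jetty instance has to append its worker name to the IDs it generates (e.g. JSESSIONID=abc123.srv1). I don't have your exact Jetty version in front of me, but if I remember correctly the session manager has a workerName property you can set in each server's jetty.xml, something like:

    <Set name="workerName">srv1</Set>

with "srv1" replaced by the matching name from workers.properties on each node. If the session IDs have no worker suffix, the lb worker has nothing to route on and you get exactly the bouncing you describe.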