Looks like you're using the non-clustered SingleSignOn valve. You need to use the ClusteredSingleSignOn valve (which can be found and uncommented in the standard server.xml file just below the non-clustered version).
Thanks for your reply.
I have now had a look at server.xml, and the ClusteredSingleSignOn valve is activated:
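For reference, the uncommented entry looks roughly like this in a stock JBoss 4.x server.xml (the exact class name can differ between versions, so verify against your own file):

```xml
<!-- Clustered SSO valve, uncommented inside the <Host> element -->
<Valve className="org.jboss.web.tomcat.tc5.sso.ClusteredSingleSignOn" />
```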
It now seems that the principal is replicated a few seconds after the login. Here is the scenario:
The login form is delivered from server A.
The login form is submitted to j_security_check on server B (server B performs the login).
The user is redirected to the login success page on server A, but at that moment the user is not logged in on server A. If you wait a few seconds and reload the login success page on server A, the user is logged in on server A as well, and the console says:
07:13:21,102 INFO [Engine] SingleSignOn[localhost]: Found cached principal 'wolfuw' with auth type 'FORM'
So it seems that the replication has a delay of several seconds, which is too slow for me.
Looks like I need to change the log messages so I can distinguish the clustered version from the standalone :)
Is all this switching between servers because you're not using sticky sessions in your load balancer?
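For context, sticky sessions with mod_jk are typically configured along these lines (a sketch; worker names, hosts, and ports are placeholders). Note that stickiness only takes effect if each JBoss instance's jvmRoute in server.xml matches its worker name, which is a common reason requests still bounce between nodes:

```ini
# workers.properties (sketch)
worker.list=loadbalancer
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
worker.loadbalancer.sticky_session=1

# one AJP worker per JBoss instance; the worker name must match
# the jvmRoute attribute on that instance's <Engine> in server.xml
worker.node1.type=ajp13
worker.node1.host=hostA
worker.node1.port=8009

worker.node2.type=ajp13
worker.node2.host=hostB
worker.node2.port=8009
```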
When you say it takes several seconds, is that based on a controlled test, or on manipulating the browser by hand? The reason I ask is that a delay of several seconds is extreme; a few hundred ms would not surprise me.
By default, the TreeCache used for SSO replication is configured for asynchronous replication. If you use a browser redirect after login, it's quite possible the redirect will beat the SSO replication. For this kind of scenario you need to configure the TreeCache to use REPL_SYNC. This can be configured in the tc5-cluster-service.xml file.
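Roughly, the change in tc5-cluster-service.xml would look like this (MBean name shown as in a default JBoss 4.x install; check it against your own file):

```xml
<mbean code="org.jboss.cache.TreeCache"
       name="jboss.cache:service=TomcatClusteringCache">
  <!-- change from the default REPL_ASYNC so the SSO entry is
       replicated before the login response returns -->
  <attribute name="CacheMode">REPL_SYNC</attribute>
  <!-- remaining attributes unchanged -->
</mbean>
```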
BUT, this same tree cache is also used for http session replication, so switching to REPL_SYNC can have a significant performance impact if you're using HttpSession replication.
It's possible to configure a separate TreeCache for the SSO, although that adds overhead to the overall system.
I'm using the "all" configuration and deploy my application in the farm directory. I'm using sticky sessions, but despite that, Apache sends requests to both JBoss instances.
The test I did was manual, by pressing the browser's reload button, but the redirect after j_security_check happens immediately.
Is there a possibility to set the cached user principal manually, or to set a lock on the second server so that it has to wait until the principal is replicated?