
    Distributed sessions and batching

    ron.leisti

      Hi,


      When using distributed sessions, WildFly starts a 'batch' on the session at the beginning of a request and closes it at the end.  As a result, the session ID is locked for the duration of the request.  I don't think there is a way to disable this through configuration, though I can disable it by overriding some of the internals of the DistributedSessionManager.  Would the WildFly developers consider adding a way to turn this off?


      For one thing, I have found that batching forces WildFly to serialize requests by session ID, so if a single user makes several concurrent requests, they are fulfilled only one at a time.  With many concurrent users on different sessions this wouldn't be noticeable, but it is when there are only a small number of concurrent users.  In practice the session is rarely modified, so pessimistically locking it is not ideal.
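
      As an illustration, here is roughly how I observe the serialization.  The endpoint and session cookie below are made up for the example; the point is that both requests carry the same JSESSIONID, so the second one waits for the first one's batch to close:

          import java.net.HttpURLConnection;
          import java.net.URL;
          import java.util.concurrent.Callable;
          import java.util.concurrent.ExecutorService;
          import java.util.concurrent.Executors;
          import java.util.concurrent.Future;

          public class ConcurrentSessionRequests {

              // Hypothetical endpoint and session cookie, for illustration only
              private static final String ENDPOINT = "http://localhost:8080/app/slow";
              private static final String SESSION_COOKIE = "JSESSIONID=abc123";

              public static void main(String[] args) throws Exception {
                  ExecutorService pool = Executors.newFixedThreadPool(2);

                  // Both requests send the same session cookie, so they contend
                  // for the same session lock
                  Callable<Integer> request = () -> {
                      HttpURLConnection conn =
                              (HttpURLConnection) new URL(ENDPOINT).openConnection();
                      conn.setRequestProperty("Cookie", SESSION_COOKIE);
                      return conn.getResponseCode();
                  };

                  long start = System.nanoTime();
                  Future<Integer> first = pool.submit(request);
                  Future<Integer> second = pool.submit(request);
                  first.get();
                  second.get();
                  long elapsedMs = (System.nanoTime() - start) / 1_000_000;

                  // With the default pessimistic batching, the total is roughly
                  // twice the single-request time, because the second request
                  // waits for the first one's batch to close
                  System.out.println("Both requests finished in " + elapsedMs + " ms");
                  pool.shutdown();
              }
          }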


      I have also run into a crash caused by batching.  If I obtain a request dispatcher from the HttpServletRequest and use it to call from one deployment (WAR) into another, I get a session transaction timeout.  What happens is that a request comes in for WAR "A", a batch is started for the session, and the session ID is locked.  WAR "A" then calls into WAR "B" through the request dispatcher, and when WAR "B" tries to access the session, it fails because the session ID is still locked by WAR "A" on a separate transaction.  That causes an immediate failure, followed by further failures due to the state of the transaction.
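
      To make the scenario concrete, this is roughly the shape of the code in WAR "A".  The context path "/war-b" and the servlet mappings are made up; I'm showing the cross-context dispatch through ServletContext#getContext, which is how the call reaches the other deployment:

          import java.io.IOException;

          import javax.servlet.RequestDispatcher;
          import javax.servlet.ServletContext;
          import javax.servlet.ServletException;
          import javax.servlet.annotation.WebServlet;
          import javax.servlet.http.HttpServlet;
          import javax.servlet.http.HttpServletRequest;
          import javax.servlet.http.HttpServletResponse;

          // Servlet deployed in WAR "A"; all paths here are hypothetical
          @WebServlet("/dispatching")
          public class DispatchingServlet extends HttpServlet {

              @Override
              protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                      throws ServletException, IOException {
                  // This runs inside the batch that WildFly opened for the request,
                  // so the session ID is already locked by WAR "A"
                  req.getSession().setAttribute("touchedBy", "war-a");

                  // Dispatch into WAR "B"; when its servlet touches the session, it
                  // times out waiting for the lock still held by WAR "A"
                  // (getContext returns null if cross-context access is disabled)
                  ServletContext warB = req.getServletContext().getContext("/war-b");
                  RequestDispatcher dispatcher = warB.getRequestDispatcher("/target");
                  dispatcher.include(req, resp);
              }
          }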


      I would like to encourage some discussion about whether there are other ramifications of turning off batching that I have not considered, and whether making it a configurable option is a good idea.

        • 1. Re: Distributed sessions and batching
          pferraro

          While pessimistic locking is the default behavior, you can configure the Infinispan cache to use optimistic locking instead - in which case locks are acquired only when the batch is closed (i.e. at the end of the request).  You will also need to change the transaction isolation to READ_COMMITTED (instead of the default, REPEATABLE_READ) - otherwise concurrent access will result in write skews, since each request updates the last-modified timestamp.
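
          For example, something along these lines for the web cache container in standalone-ha.xml - the names and attribute values other than the locking and isolation settings are just the stock defaults, and batching is expressed through the transaction element so that a locking mode can be set on it; adjust to your own configuration:

              <cache-container name="web" default-cache="dist" module="org.wildfly.clustering.web.infinispan">
                  <transport lock-timeout="60000"/>
                  <distributed-cache name="dist" mode="ASYNC" owners="2" l1-lifespan="0">
                      <!-- READ_COMMITTED avoids write skews caused by each request
                           updating the session's last-modified timestamp -->
                      <locking isolation="READ_COMMITTED"/>
                      <!-- OPTIMISTIC defers lock acquisition until the batch is
                           closed at the end of the request -->
                      <transaction mode="BATCH" locking="OPTIMISTIC"/>
                      <file-store/>
                  </distributed-cache>
              </cache-container>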

          • 2. Re: Distributed sessions and batching
            pferraro

            I should also add that the purpose of the default behavior is to prevent concurrent access to a session by multiple nodes, which is explicitly forbidden by the servlet spec.