
    Distributed sessions and batching

    Ron Leisti

      Hi,

       

      When using distributed sessions, Wildfly starts a 'batch' on the session at the beginning of a request and closes it at the end.  As a result, the session ID is locked for the duration of the request.  I don't think there is a way to disable this through configuration, though I can disable it by overriding some of the internals of the DistributedSessionManager.  Would the Wildfly developers consider adding a way to turn this off?
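
      For reference, the behaviour I'm describing looks roughly like the sketch below.  This is only a simplified illustration, not the actual WildFly code; the Batcher/Batch names are made-up stand-ins for the real batch abstraction:

      public class BatchingSessionHandler {

          // Hypothetical stand-ins for the batch abstraction; not the real WildFly interfaces.
          interface Batch extends AutoCloseable {
              @Override
              void close();
          }

          interface Batcher {
              Batch startBatch(String sessionId); // acquires the lock on the session entry
          }

          private final Batcher batcher;

          public BatchingSessionHandler(Batcher batcher) {
              this.batcher = batcher;
          }

          public void handleRequest(String sessionId, Runnable request) {
              // The batch is opened before the request is serviced and only closed afterwards,
              // so the session ID stays locked for the entire duration of the request.
              try (Batch batch = batcher.startBatch(sessionId)) {
                  request.run();
              }
          }
      }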

       

      For one thing, I have found that batching forces Wildfly to serialize requests by session ID.  So if a single user makes several concurrent requests, they are fulfilled only one at a time.  With many concurrent users on different sessions this wouldn't be noticeable, but it is when there are only a small number of concurrent users.  In practice the session is not modified on most requests, so pessimistically locking it is not ideal.
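
      To illustrate what I mean by serialized, this is roughly how I observe it: fire a handful of concurrent requests that all carry the same JSESSIONID at a servlet that takes a while to respond, and they finish one after another rather than in parallel.  The URL and cookie value below are placeholders:

      import java.net.HttpURLConnection;
      import java.net.URL;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;

      public class ConcurrentSessionRequests {

          public static void main(String[] args) {
              // Placeholders: point this at any slow servlet in the deployment and use a real session ID.
              final String target = "http://localhost:8080/warA/slow";
              final String sessionCookie = "JSESSIONID=REPLACE_WITH_REAL_ID";

              final long start = System.currentTimeMillis();
              ExecutorService pool = Executors.newFixedThreadPool(5);
              for (int i = 0; i < 5; i++) {
                  pool.submit(new Runnable() {
                      @Override
                      public void run() {
                          try {
                              HttpURLConnection conn = (HttpURLConnection) new URL(target).openConnection();
                              // Every request carries the same session ID, so with batching enabled
                              // they are processed one after another instead of in parallel.
                              conn.setRequestProperty("Cookie", sessionCookie);
                              conn.getResponseCode();
                              System.out.println("finished after " + (System.currentTimeMillis() - start) + " ms");
                          } catch (Exception e) {
                              e.printStackTrace();
                          }
                      }
                  });
              }
              pool.shutdown();
          }
      }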

       

      I have also run into a crash caused by batching.  If I use HttpServletRequest's getRequestDispatcher to call from one deployment (WAR) into another, I hit a session transaction timeout.  What happens is that a request comes in for WAR "A" and a batch is started for the session, which locks the session ID.  WAR "A" then calls into WAR "B" using the request dispatcher, and when WAR "B" tries to access the session, it fails because the session ID is still locked by WAR "A" on a separate transaction.  That causes an immediate failure, followed by further failures due to the state of the transaction.
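
      The call pattern on my side looks roughly like this (a sketch only: the context paths and servlet names are made up, I'm going through the other deployment's ServletContext here, and cross-context calls have to be enabled for the deployments):

      import java.io.IOException;
      import javax.servlet.RequestDispatcher;
      import javax.servlet.ServletContext;
      import javax.servlet.ServletException;
      import javax.servlet.http.HttpServlet;
      import javax.servlet.http.HttpServletRequest;
      import javax.servlet.http.HttpServletResponse;

      // Servlet in WAR "A" (both servlets shown together here for brevity).
      public class DispatchingServlet extends HttpServlet {

          @Override
          protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                  throws ServletException, IOException {
              req.getSession(true); // the batch for this request has locked the session ID

              // Cross-context dispatch into the other deployment.
              ServletContext warB = getServletContext().getContext("/warB");
              RequestDispatcher dispatcher = warB.getRequestDispatcher("/target");
              dispatcher.include(req, resp);
          }
      }

      // Servlet in WAR "B": accessing the session here is where the transaction timeout shows up,
      // because the session ID is still locked by WAR "A" on its own transaction.
      class TargetServlet extends HttpServlet {

          @Override
          protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                  throws ServletException, IOException {
              req.getSession(true).setAttribute("touched", Boolean.TRUE);
          }
      }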

       

      I would like to encourage some discussion about whether there are other ramifications of turning off batch updates that I have not considered, and whether making this a configurable option is a good idea.