During a recent performance sprint in my team I found out (much to my surprise) that Seam itself (or our usage of it) was the major performance bottleneck. I'd like to share what I found and how I refactored parts of the application to work around it, possibly helping someone else avoid the same mistakes we made, and possibly reclaiming a bunch of ms for you.
It all started when I realized that no matter how fast I made the logic preparing data for one of our
slower views, it still took about 1200 ms to render. To investigate further, I wrote an interceptor that measured call times for all Seam-managed beans in our application. Summing all call times gave approximately 200 ms ... which suggested that 1000 ms was being lost in Seam itself. Yes. One second.
To verify that suspicion I changed my interceptor so that it was mounted outside all of Seam's internal interceptors (where it should have been all along). Summing all call times once again gave approximately 1200 ms, confirming my suspicion that roughly one second of each request was being spent inside Seam.
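The measuring technique above can be sketched with plain JDK dynamic proxies, without any Seam dependencies. This is a minimal stand-in for a call-timing interceptor; the `ViewBacking` interface and the per-method counters are hypothetical names I introduce for illustration, not anything from our actual application:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TimingDemo {
    // Hypothetical stand-in for a Seam-managed bean's interface.
    interface ViewBacking {
        String title();
    }

    // Accumulated wall-clock time (ns) and invocation count per method name.
    static final Map<String, Long> totalNanos = new ConcurrentHashMap<>();
    static final Map<String, Long> callCounts = new ConcurrentHashMap<>();

    // Wraps any interface-typed target so every call is timed and counted,
    // much like an interceptor mounted around a managed bean.
    static <T> T timed(T target, Class<T> iface) {
        InvocationHandler handler = (proxy, method, args) -> {
            long start = System.nanoTime();
            try {
                return method.invoke(target, args);
            } finally {
                long elapsed = System.nanoTime() - start;
                totalNanos.merge(method.getName(), elapsed, Long::sum);
                callCounts.merge(method.getName(), 1L, Long::sum);
            }
        };
        return iface.cast(Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[]{iface}, handler));
    }

    public static void main(String[] args) {
        ViewBacking bean = timed(() -> "hello", ViewBacking.class);
        for (int i = 0; i < 60; i++) {
            bean.title();  // simulate a view invoking the bean repeatedly
        }
        System.out.println("title called " + callCounts.get("title") + " times");
    }
}
```

The same idea, mounted outside Seam's own interceptor stack, is what surfaced the per-call overhead described above.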
The timing interceptor also showed that two methods of one of the view backing beans had each been called 60 times, each invocation taking about 8 ms. Voilà, there it was: (almost) the explanation for the lost second.
I suspect that the bean is being rigged anew for each invocation, i.e. a lot of injection and preparation happens on every single call, even though the view backing bean is conversation-scoped. Please correct me if I am wrong.
Digging further I found that the view looped over a collection, and in each iteration calls were made (via an innocent-looking EL expression) to the two methods on the view backing bean.
My first thought was to hoist those calls out of the loop within the view itself: the results of the methods were static per rendered view, so there was no need to call them in each iteration. JSTL and c:set seemed to be the way forward ... but to my surprise c:set only served as an alias for the EL expression, so the calls were still being fired from within the loop. Equally slow. Bad.
The next thing I did was to let the view backing beans instead prepare data holders containing all the data for each view, which were outjected into request scope. Those data holders I annotated with
This solved it all for me. Once I was done, several views in our system became considerably faster (between 2 and 8 times). My interceptor now shows that the main part of each request is spent in our own code, not in Seam preparation ... i.e. as expected and as it should be.
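Stripped of the Seam specifics (outjection, scopes), the data-holder refactoring boils down to computing everything once before rendering and letting the view read only plain fields. A minimal sketch, with hypothetical names and a counter simulating the expensive managed-bean call:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class DataHolderDemo {
    // Counts invocations of the (simulated) expensive backing-bean method.
    static final AtomicInteger calls = new AtomicInteger();

    static String expensiveLookup() {
        calls.incrementAndGet();
        return "header";
    }

    // Plain holder populated once per request; the view reads only fields.
    record ViewData(String header, List<String> rows) {}

    static ViewData prepare(List<String> rows) {
        // One invocation here, instead of one per loop iteration in the view.
        return new ViewData(expensiveLookup(), rows);
    }

    public static void main(String[] args) {
        ViewData data = prepare(List.of("a", "b", "c"));
        // Simulated render loop: touches only the pre-computed holder.
        for (String row : data.rows()) {
            String line = data.header() + ": " + row;
        }
        System.out.println("expensive method called " + calls.get() + " time(s)");
    }
}
```

In the real application the holder was outjected to request scope so the EL expressions in the loop resolved against it directly, never hitting the interceptor-wrapped bean.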
I am writing this to shed light on a mistake that is (frankly) very easy to make: just an EL expression in the wrong place, written without understanding the cost of a method call against a managed bean with several injections.
There are probably other ways out of this performance problem than the one I suggest. If you have other suggestions, please feel free to enlighten me (and the forum).