19 Replies · Latest reply on Dec 20, 2002 2:37 AM by juhalindfors

    Invokers, AOP and performance

    timfox

      Firstly, apologies for cross-posting, and for butting in on a developers' forum when I am not a developer.

      I realise I am breaking the etiquette, and probably deserve a flaming for that!

      I'm reposting here my original question from a thread I posted in the Lizzard's corner.

      I didn't really want to post here and be rude and interrupt you guys, but I'm really curious what you guys who are actually building it think, and I have had some indication that others are thinking along the same lines as myself.

      A quick summary of the post would be:
      The concept of interceptors and AOP completely rocks, but I'm worried about the implementation, since it could stink in terms of performance.
      My suggestion would be to use bytecode engineering (I know Bill seems to be using Javassist for some stuff, but not all) to encapsulate the entire invoker stack and weave through the pointcuts, thus giving kick-ass performance.

      I realise Bill's code is just a prototype for now, so please forgive me if I am being a little premature here...

      Here's my original post:



      I've been sneaking around, listening in to the threads in the developer's Aspects forum.

      (The reason I'm posting here is that I didn't want to butt in on a developers-only forum - as a mere mortal compared to the likes of Marc and Rickard, I felt a little intimidated, I must say.)

      Anyway... this stuff has been fascinating me for a while. AOP IMO *is* the future, and I find it fantastic that JBoss is starting to address it - not that I'm very surprised, since JBoss tends to lead the pack in these matters anyway...

      It seems to me that whether you want an EJB container or JDO, or logging or other "cross-cutting" concerns, they simply become aspects of your objects. This is mind-blowing stuff and will place JBoss light years ahead, IMO, *IF DONE CORRECTLY*.

      Now, someone put me straight here. I love JBoss's idea of dynamic interceptors, and I can understand that the ideas for full-on AOP have come in part from a development of that.

      The only thing I didn't like about the way interceptors were implemented is that they involve this enormously long, slow call through the stack of invokeNext()... invokeNext()... and so on, which, let's face it guys, really slows stuff down.
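      For anyone who hasn't seen it, the pattern I'm complaining about boils down to something like this (a minimal sketch with made-up names, not the actual JBoss classes):

```java
import java.util.List;
import java.util.concurrent.Callable;

// Minimal sketch of an interceptor chain (illustrative names only).
// Each interceptor does its work and then calls invokeNext(), so one
// business call walks the entire stack of nested invokeNext() frames.
interface Interceptor {
    Object invoke(Invocation inv) throws Exception;
}

class Invocation {
    private final List<Interceptor> chain;
    private final Callable<Object> target; // the real business method
    private int pos = 0;

    Invocation(List<Interceptor> chain, Callable<Object> target) {
        this.chain = chain;
        this.target = target;
    }

    public Object invokeNext() throws Exception {
        if (pos < chain.size()) {
            return chain.get(pos++).invoke(this); // one more stack frame
        }
        return target.call(); // end of chain: finally hit the target
    }
}

class LoggingInterceptor implements Interceptor {
    public Object invoke(Invocation inv) throws Exception {
        System.out.println("entering");
        try {
            return inv.invokeNext();
        } finally {
            System.out.println("leaving");
        }
    }
}
```

      Every interceptor in the stack adds one more nested invokeNext() frame between the caller and the real method.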

      Now, I know some people, like Dan O'Connor with MVCSoft, overcome this by bypassing your interceptors (his "lightweight" entity concept), but this is all a hack (a nice hack, but still a hack), and it wouldn't be necessary if the interceptors were quicker.

      To cut a long story short, I thought to myself: you don't want to go the WebLogic route and precompile stubs, but if you're willing to use a bytecode tool like BCEL (which I believe you use anyway in your dynamic proxy implementation), then you can basically use BCEL plus your container config info to create a new class at runtime (also hot-swappable) that implements your entire interceptor functionality, but without that whole massive slow call through the interceptor stack, since you've rewritten it all into a single method at runtime.

      To all intents and purposes, it looks like you're just dropping the class in and deploying it - if the BCEL bytecode rewriting could be done quickly enough, no-one would ever know the difference; it would be transparent.
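      Concretely, what the generated class would amount to is something like the following, written out by hand for illustration (in the real proposal a tool like BCEL or Javassist would emit the equivalent bytecode from the container config at deploy time; the interceptor duties shown here are made up):

```java
// Hand-written stand-in for the class the proposal would generate:
// the work of each configured interceptor is inlined into a single
// invoke() method, with no invokeNext() walk at all. Illustrative
// only -- real generated code would be driven by the container config.
class FlattenedInvoker {
    private long calls = 0; // what a metrics interceptor would count

    public Object invoke(Object[] args) {
        // -- security interceptor, inlined (checks elided) --
        // -- metrics interceptor, inlined --
        calls++;
        // -- logging interceptor, inlined (log call elided) --
        // -- and finally the target method itself --
        return doBusinessWork(args);
    }

    public long getCallCount() { return calls; }

    private Object doBusinessWork(Object[] args) {
        return args.length == 0 ? "ok" : args[0];
    }
}
```

      The point is that all the per-call overhead collapses into straight-line code in one method, while the generated class can still be thrown away and regenerated whenever the configuration changes.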

      Now take your interceptors, enrich them with more AOP-like functionality, like regular expression matching on method signatures and all sorts of other things, and you have a serious kick-ass system that is not only infinitely flexible but, in terms of speed, flies like shit off a shovel.

      Anyway, that's just my 2c. I'm sure you guys (Marc, Rickard, Adrian et al.) have been around this loop before, and in a way HOW it is done is just an implementation detail, but IMO it's really important, since even though I LOVE your interceptor stack (I love the idea), I HATE it too because it is so slow.

      So my question is: if you're going the AOP route, are you going to keep using a chain, or are you going to rewrite classes (i.e. with BCEL) on the fly? In my mere mortal opinion, the chain would be too slow to be useful.

      The only disadvantage I can think of right now of implementing AOP at the bytecode level would be in debugging - when your program fails, your source doesn't correspond with the .class files - which could make life difficult.

      Anyway guys, I'm so happy you are going in this direction; IMO this is definitely the way to go. I just worry about the way you're going to implement it...

      Good luck.



          • 2. Re: Invokers, AOP and performance
            hchirino

            Long AOP invocation chains can be reduced by caching per-method invocation chains. That way, only the interceptors that will actually do some work for a given method are included in the chain for calls to that method.
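            The per-method reduction described above can be sketched like this (purely illustrative types, not JBoss code): the reduced chain is computed once per Method and then reused, so each call pays only a map lookup.

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of per-method interceptor-chain caching (made-up types).
// An interceptor advertises whether it applies to a method, and the
// container builds and caches the reduced chain once per method
// instead of running every interceptor on every call.
interface MethodInterceptor {
    boolean appliesTo(Method m); // e.g. a pointcut or regex match
    Object invoke(Method m, Object[] args) throws Exception;
}

class PerMethodChainCache {
    private final List<MethodInterceptor> allInterceptors;
    private final Map<Method, List<MethodInterceptor>> cache =
            new ConcurrentHashMap<>();

    PerMethodChainCache(List<MethodInterceptor> all) {
        this.allInterceptors = all;
    }

    // Built once per method; subsequent calls pay only a map lookup.
    List<MethodInterceptor> chainFor(Method m) {
        return cache.computeIfAbsent(m, key -> {
            List<MethodInterceptor> reduced = new ArrayList<>();
            for (MethodInterceptor i : allInterceptors) {
                if (i.appliesTo(key)) {
                    reduced.add(i);
                }
            }
            return reduced;
        });
    }
}
```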

            I also think that the interceptor invokeNext() overhead is small - the proof being the JBoss 3.x EJB implementation, which is interceptor based too.

            I think that one of the big features of aspects is that they give you the ability to change the behavior of a running system by changing the interceptor chains at RUNTIME. If we BCEL-generated the invocation chains, this would be much harder to do.


            Regards,
            Hiram

            • 3. Re: Invokers, AOP and performance
              bill.burke

              Tim,

              Never shy away from commenting on Dev forums or dev list!

              Hiram is right. Overhead is quite small.

              Keep the ideas flowing,

              Bill

              • 4. Re: Invokers, AOP and performance
                timfox

                Firstly, Hiram, Bill, thanks for your replies.

                I'd like to say we're actually using JBoss 3.x in production and it's doing fine. We have had some performance problems with the interceptor chains, which I do think is an issue, especially considering that with the whole local interface thing people are using entity beans more and more for fine-grained domain objects, and really don't want that whole hit for a simple call on a getX() method.
                Having said that, if what Hiram mentioned about caching per-method chains is going to happen in 4.x, then I'm happy. That sounds like a cool idea.
                I can accept that if you're in a container you're going to have intercepted methods in one way or another, whether your AOP configuration is for entity beans or JDO, so you're always going to have some hit.
                Quick idea: how about a configuration for those people (most of us??) who are only using entity beans behind a session facade and don't need the transactions (i.e. all the beans' methods are marked "Required"), so we can lose the transaction interceptor (or at least have a lightweight one?), and also lose the metrics interceptor and logging interceptor. Let's call it a "lightweight entity bean" container configuration.
                I know a lot of people seem to get put off by JBoss performance because the default config comes with so much in the chain (that probably isn't necessary) - and don't realise they can change it. Then JBoss loses out in the performance tests because they haven't turned off the interceptors they don't use.
                On the question of unrolling the interceptors into a single method using bytecode engineering (whether BCEL or Javassist) - I guess the performance gains of avoiding the stack calls would be small compared with DB access etc., but for fine-grained entity bean access they could still be considerable. I accept this is just a minor tweak, but it's still something that could give JBoss a little extra edge.
                Hiram - you say it would be hard to re-engineer the classes at runtime/deployment with BCEL - I agree it's not easy, but it's not impossible either - and hey, you guys are supposed to be the best ;). Anyway, with Javassist I believe this kind of thing is actually not that difficult - the rebuild performance hit would be quite small and would only occur at deploy time or when the interceptor/aspect configuration changes, so it wouldn't really matter.
                But as you say, loop unrolling (which is effectively what I'm suggesting) is quite a detailed optimisation, probably quite far down the list of priorities.
                I've had a look at Bill's code, and as far as I can understand, the pointcuts are being implemented at the bytecode level, and the bytecode is only being manipulated at deploy/change time, so that's good. (I think this is true for "global", but I'm not sure for "instance".)
                Looking at the interceptor chain in a stack trace I have before me, the only other idea I had was for avoiding the couple of reflection calls (method.invoke()) that seem to occur. I know a lot of people give JBoss grief for using reflection, with its heavy performance hit, when WebLogic etc. don't (since they precompile).
                I was thinking that if you use bytecode engineering you can avoid the reflection altogether.
                It's just an idea and I haven't thought it through properly. Basically it goes as follows:
                If you have a class with 3 methods - doX(), doY() and doZ() - you could bytecode-engineer a proxy so that instead of having to look up a Method object for, say, doX() and calling invoke() on it, you call a method myInvoke(int methodNumber, Object[] args) on the engineered proxy object. The bytecode-engineered method myInvoke() would do the same as something like the following written in Java:
                public Object myInvoke(int methodNumber, Object[] args)
                {
                    switch (methodNumber)
                    {
                        case 0:
                            return doX();
                        case 1:
                            return doY((String)args[0]);
                        case 2:
                            return doZ((Date)args[0], (String)args[1]);
                        // etc.
                        default:
                            throw new IllegalArgumentException("unknown method: " + methodNumber);
                    }
                }
                Clearly this is off the top of my head and is not complete but I think it would let you avoid reflection altogether.
                Having said all this, as you know, reflection in Sun JVM 1.4+ is supposed to be faster, so maybe it's not a big issue, but not everyone is using a Sun JVM.
                Essentially what I'm saying is you could get all the benefits of, say, WebLogic and its precompilation of stubs (ejbc) - avoiding reflection and unrolling - but without any of the drawbacks (i.e. an extra step in the build process, nothing runtime-changeable), by essentially precompiling the stubs (aka bytecode-engineering the proxies) at runtime/change time/deploy time.
                It would all be transparent, and it would let you hold a finger up to WebLogic etc. on one of the few things they CAN criticise you on, and you wouldn't have to lose any of your runtime configurability in the process.
                Anyway, just some thoughts.
                Good luck.
                BTW I took the PURPLE pill a long time ago, it's to be recommended. ;)

                • 5. Re: Invokers, AOP and performance
                  bill.burke

                  > Firstly, Hiram, Bill, thanks for your replies.
                  >
                  > I'd like to say we're actually using JBoss 3.x in
                  > production and it's doing fine. We have had some
                  > performance problems with the interceptor chains,
                  > which I do think is an issue especially considering
                  > with the whole local interface thing, people are
                  > using entity beans more and more for fine grained
                  > domain objects, and really don't want that whole hit,
                  > for a simple call on a getX() method.
                  > Having said that, if what Hiram mentioned about
                  > cacheing per method chains is going to happen in 4.x
                  > then I'm happy. That sounds like a cool idea.
                  > I can accept if you're in a container you're going to
                  > have intercept methods in one way or another, whether
                  > your AOP configuration is for entity beans or JDO, so
                  > you're always going to have a hit, I can accept
                  > that.
                  > Quick idea: How about a configuration for those
                  > people (most of us??) who are only using entity beans
                  > behind a session facade, and don't need the
                  > transactions (ie all the beans methods are marked
                  > "Required"), so we can lose the transaction
                  > interceptor (or at least have a light weight one?),
                  > also lose the metrics interceptor and logging
                  > interceptor. Let's call it a "lightweight entity
                  > bean" container configuration.

                  Prove that this is actually a performance hit. I've done ECPERF benchmarks on JBoss and JBoss performs just fine with interceptor chains and all.


                  > I know a lot of people seem to get put off by the
                  > JBoss performance because the default config. comes
                  > with so much in the chain (that probably isn't
                  > necessary) - and don't realise they can change it.

                  If they're put off then they are just ignorant and need to do some real benchmarks.

                  > Then JBoss loses out in the performance tests because
                  > they haven't turned off the interceptors they don't
                  > use.

                  Bullshit. Again, I've actually done benchmarks and JBoss performs quite nicely.

                  > On the question of unrolling the interceptors into a
                  > single method using byte code engineering (whether

                  More bullshit. Unmaintainable and unreadable code. The performance boost is negligible.

                  > BCEL or Javassist) - I guess the performance gains of
                  > avoiding a stack call would be small compared against
                  > DB access etc., but again in fine grained entity bean
                  > access could still be considerable. But again I
                  > accept this is just a minor tweak, but still
                  > something that could give JBoss just a little extra
                  > edge.

                  Disagree.

                  > Hiram - you say this would be hard to re-engineer the
                  > classes at runtime/deployment with BCEL - I agree it's
                  > not easy - but again not impossible - and hey you
                  > guys are supposed to be the best ;). Anyway with
                  > javassist I believe this kind of thing is actually
                  > not that difficult - the rebuild performance hit
                  > would be quite small and only occur at deploy-time or
                  > if the interceptor/aspect configuration changes so
                  > wouldn't really matter.
                  > But as you say, loop unrolling (which is effectively
                  > what I'm suggesting) is quite a detailed optimisation
                  > probably quite far down the list of priorities.
                  > I've had a look at Bill's code, and as far as I can
                  > understand the pointcuts are being implemented at the
                  > bytecode level, and the bytecode is only being
                  > manipulated at deploy/change time so that's good. (I
                  > think this is true for "global" but not sure for
                  > "instance").
                  > Looking at the interceptor chain in a stack trace I
                  > have before me, the only other idea I had was for
                  > avoiding the couple of reflection calls
                  > (method.invoke()) that seem to occur. I know a lot
                  > of people give JBoss grief for using reflection when
                  > weblogic etc. (since they precompile) don't because
                  > of the heavy performance hit.

                  With JDK 1.4 there is practically no performance hit. Again, I've done benchmarking of JDK 1.3 vs. JDK 1.4.

                  JDK 1.3 reflection is 20 times slower than simulated compiled code.

                  JDK 1.4 reflection is 2 times slower than simulated compiled code. If you look at the Rice benchmark of JBoss, you'll see that with JDK 1.4 the performance hit from reflection was less than 2%, and that's if you're hitting cached beans. If you're using commit-option 'B', for instance, the reflection hit is truly insignificant because the JDBC calls become the source of all evils.

                  > I was thinking if you use byte code engineering you
                  > can avoid the reflection altogether.
                  > It's just an idea and I haven't thought it through
                  > properly. Basically it goes as follows:
                  > If you have a class with 3 methods: doX(), doY() and
                  > doZ(), you could byte code engineer a proxy so that
                  > instead of having to lookup a Method object for, say
                  > doX() and calling invoke() on it, you call a method
                  > myInvoke(int methodNumber, Object[] args) on the
                  > engineered proxy object. The byte code engineered
                  > method myInvoke() would do the same as something like
                  > the following written in java:
                  > public Object myInvoke(int methodNumber, Object[]
                  > args)
                  > {
                  > switch(methodNumber)
                  > {
                  > case 0:
                  > return doX();
                  > case 1:
                  > return doY((String)args[0]);
                  > case 2:
                  > return doZ((Date)args[0], (String)args[1]);
                  > }
                  > // etc.
                  > }

                  Again, with JDK 1.4, the code you recommend above is only 2 times faster than reflection. Try it out.
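                  One crude way to try it out is a micro-benchmark along these lines (a sketch only; the absolute numbers depend heavily on JIT warm-up and JVM version, so treat any output as indicative rather than as the figures quoted above):

```java
import java.lang.reflect.Method;

// Crude micro-benchmark sketch: reflective dispatch vs. the kind of
// switch-based dispatcher quoted above. Timings are indicative only;
// JIT warm-up, inlining and JVM version dominate the results.
class DispatchBench {
    public String doX() { return "x"; }

    public Object myInvoke(int methodNumber) {
        switch (methodNumber) {
            case 0: return doX();
            default: throw new IllegalArgumentException();
        }
    }

    // Returns {reflectiveNanos, switchNanos} for the given iterations.
    static long[] run(int iterations) throws Exception {
        DispatchBench t = new DispatchBench();
        Method m = DispatchBench.class.getMethod("doX");

        long t0 = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            m.invoke(t); // reflective call
        }
        long reflective = System.nanoTime() - t0;

        t0 = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            t.myInvoke(0); // switch dispatch
        }
        long direct = System.nanoTime() - t0;

        return new long[] { reflective, direct };
    }
}
```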

                  > Clearly this is off the top of my head and is not
                  > complete but I think it would let you avoid
                  > reflection altogether.

                  Reflection is not a problem.

                  > Having said all this, as you know reflection in Sun
                  > JVM 1.4+ is supposed to be faster, so maybe not a big

                  Whoops, you know this.

                  > issue, but not everyone is using a Sun JVM.

                  Well fuck them. By the time JBoss 4.0 comes out, JDK 1.4+ will be the norm. Just as JDK 1.3 was the norm when JBoss 3 came out.

                  > Essentially what I'm saying is you could get all the
                  > benefits of, say, weblogic and their precompilation
                  > of stubs (ejbc), avoid reflection and unroll, but

                  I contend that precompilation gives you no benefits. We are in the process of putting together ECPERF benchmark results and you'll see that we kick everybody's ass, even with reflection!

                  > without any of the drawbacks (ie extra step in build
                  > process, not runtime changeable), by essentially
                  > precompiling stubs (aka byte code engineering
                  > proxies) at runtime/changetime/deploytime.
                  > It would all be transparent and allow you hold a
                  > finger to Weblogic etc. on one of the few things they

                  Weblogic is dead. We will rule. Pay attention to what happens in 2003.

                  > CAN criticise you on, and you wouldn't have to lose
                  > any of your runtime configurability in the process.
                  > Anyway, just some thoughts.
                  > Good luck.
                  > BTW I took the PURPLE pill a long time ago, it's to
                  > be recommended. ;)

                  Viagra? That was the blue pill...


                  • 6. Re: Invokers, AOP and performance
                    timfox

                    OK, point taken - basically I'm prematurely suggesting a bunch of detailed optimisations that probably won't make a great deal of difference, especially as most people are using commit options B or C.
                    (The code doesn't have to be unmaintainable, though, if it's encapsulated well.)
                    Yes, the rest I agree with - you've made some good points. It's nice to know all the benchmarks and comparisons have been done.

                    As an aside, on your point about using caches: we're using commit option C in production (with 3.x), with instance-per-transaction configuration. We'd like to use A, but can't handle queued pessimistic locks, mainly due to deadlocks, but also scalability problems.
                    Are there any plans for some type of optimistic locking scheme in 4.x, so we can really use the cache as you and Marc F. want us to? That would be the icing on the cake. (I guess I could stop being lazy and write my own interceptor...)
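                    In outline, the home-grown interceptor idea would compare a version counter at commit time instead of holding a queued pessimistic lock for the whole transaction (a sketch with made-up names, nothing to do with actual JBoss internals):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Outline of the "write my own interceptor" idea: optimistic locking
// via a per-entity version counter checked at commit, instead of a
// pessimistic lock held for the whole transaction. Made-up names only.
class OptimisticVersionStore {
    private final Map<Object, Long> versions = new ConcurrentHashMap<>();

    // Version observed when the transaction first read the entity.
    long read(Object entityKey) {
        return versions.getOrDefault(entityKey, 0L);
    }

    // Returns false if another transaction committed in between; the
    // caller would then roll back and retry instead of having blocked.
    synchronized boolean commit(Object entityKey, long versionSeen) {
        long current = versions.getOrDefault(entityKey, 0L);
        if (current != versionSeen) {
            return false; // conflict: we lost the race
        }
        versions.put(entityKey, current + 1);
        return true;
    }
}
```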

                    Anyway, good luck again, and I look forward to seeing the first cut of 4.0.

                    BTW, it's not Valium either, although sometimes I think I need it... (I didn't know Viagra was purple - who told you that? ;) )

                    • 7. Re: Invokers, AOP and performance

                      > Again, with JDK 1.4, the above code you recommend is
                      > only 2 times faster than reflection. Try it out.

                      This is relevant if your plan is to move the AOP framework outside EJB into generic object use - in other words, into invocations that do not involve serialization, just normal object-to-object calls. Only 2 times faster, or twice as slow: it depends which way you look at it.


                      • 8. Re: Invokers, AOP and performance
                        bill.burke

                        > Ok, point taken, basically I'm prematurely suggesting
                        > a bunch of detailed optimisations that probably won't
                        > make a great deal of difference, specially as most
                        > people are using commit options B or C.
                        > (Code doesn't have to be unmaintainable though if
                        > encapsulated well.)
                        > Yes, the rest I agree with, you've made some good
                        > points. It's nice to know all the benchmarks and
                        > comparisons have been done.
                        >
                        > As an aside, on your point about using caches, we're
                        > using commit option C in production (with 3.x), with
                        > instance per transaction coniguration - we'd like to
                        > use A, but can't handle queuedpessimistic locks, due
                        > to deadlock mainly, but also scaleability problems.

                        Are you using C for everything? If you are, you probably don't need to. Analyze each bean.

                        Have you read the documentation on locking policies? You can define read-only methods.

                        If you have high write concurrency, optimistic locking won't help you much anyway and will probably hurt you.


                        • 9. Re: Invokers, AOP and performance
                          timfox


                          > Are you using C for everything? If you are, you
                          > probably don't need to. Analyze each bean.

                          We have some read-only entities, which we mark as read-only and use D for, but yes, we use C for the rest.

                          >
                          > Have you read the documentation on locking policies?
                          > You can define read-only methods.

                          ... an interesting point and something I might revisit, considering the read-mostly nature of the app. We did play around with this a while back - to be honest I can't remember why we didn't use the functionality... hmmmm... looks like an avenue worth exploring - thanks for the advice.

                          >
                          > If you have high write concurrency, optimistic
                          > locking won't help you much anyways and will probably
                          > hurt you.
                          >
                          >
                          The app is 95% read-mostly, so optimistic locking is not a problem, but yes, I get your point.

                          Am I right in saying the whole caching layer is getting rewritten in 4.x on top of JavaGroups anyway?

                          • 10. Re: Invokers, AOP and performance
                            marc.fleury

                            Interesting thread. I am telling you guys, these forums are ripe for public opening. Even if timfox is wrong in some of the assumptions he makes, it is still a kick-ass thread.

                            Tim, first on the price of "stacks of interceptors". I do mention it in the blue paper.
                            1- As Bill/Juha said, the interceptors aren't really expensive; we are talking about *straight Java calls*. On a blackboard it looks big, but in performance terms it is negligible. Removing interceptors won't really change the bottom line. In fact we see more people adding than removing. The "lightweight entity" from Dan is bullshit (seriously).

                            2- The price of reflection. As pointed out, the final reflective call is more expensive, but not really in JDK 1.4, and anyway it is one of 60 calls that are going on. WE DON'T see significant problems with the current stacks, and we have good ECperf numbers.

                            3- WYSIWYG development. Tim, you make the point yourself. Maintainability and readability are the key here. We can write Java code that is easily debugged, and that will be easily written by end users. Meaning that the dream of AOP is easily given to the masses, as opposed to an API on top of Javassist generation.

                            Since the performance isn't really the issue, point 3 becomes overwhelming. Even if it were heavy I would argue that we still want it, emphatically; I mean, that is how we already implement the whole EJB stack as outlined in blue, with mature AOP technology.

                            I am sure we want to generalize this stack.
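Marc's point 1 - that an interceptor stack is just straight Java calls - can be sketched roughly as follows. This is a minimal illustration, not JBoss's actual API; the names (`Invocation`, `Interceptor`, `invokeNext`) are hypothetical:

```java
import java.util.Iterator;
import java.util.List;

// Hypothetical sketch of chain-of-interceptors dispatch.
interface Interceptor {
    Object invoke(Invocation inv) throws Exception;
}

class Invocation {
    private final Iterator<Interceptor> chain;

    Invocation(List<Interceptor> interceptors) {
        this.chain = interceptors.iterator();
    }

    // Each interceptor hands off to the next one: a plain Java call,
    // with no reflection anywhere in the chain itself.
    Object invokeNext() throws Exception {
        return chain.next().invoke(this);
    }
}

class NoOpInterceptor implements Interceptor {
    public Object invoke(Invocation inv) throws Exception {
        return inv.invokeNext(); // do nothing, pass the call along
    }
}

class TargetInterceptor implements Interceptor {
    public Object invoke(Invocation inv) {
        return "result"; // the last interceptor dispatches to the real target
    }
}

public class ChainDemo {
    public static void main(String[] args) throws Exception {
        Invocation inv = new Invocation(List.of(
                new NoOpInterceptor(), new NoOpInterceptor(), new TargetInterceptor()));
        System.out.println(inv.invokeNext()); // prints "result"
    }
}
```

Each hop down the chain is one virtual method call, which is why adding or removing an interceptor barely moves the bottom line.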

                            Finally, on BCEL vs. Dynamic Proxies. DPs are a JDK standard, which means we can simply ship them around, as straight Java serialization already deals with DPs natively. This is not the case if we were to use BCEL. The bottom line is that you would get ClassCastExceptions on the client (actually it wouldn't really work at all). I think I remember making the point you make to Bill and Dain, and both of them pointing out the problem in a jboss-group discussion (yet another example of why we must bring the development online).

                            So I say we use DPs for everything that has an interface. For straight POJOs and the cache/persistence implementation, we still intercept the class for field following, but the instances, and really the proxies, are not supposed to be travelling at all.
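Marc's serialization argument can be demonstrated with the standard `java.lang.reflect.Proxy` API: a dynamic proxy whose `InvocationHandler` is `Serializable` survives a round trip through Java serialization, because the JDK regenerates the proxy class on the receiving side from the interface list. A minimal sketch (the `Greeter` interface and handler are invented for illustration):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface Greeter {
    String greet(String name);
}

// The handler must be Serializable for the proxy to travel.
class GreeterHandler implements InvocationHandler, Serializable {
    public Object invoke(Object proxy, Method method, Object[] args) {
        return "hello, " + args[0];
    }
}

public class ProxyDemo {
    public static void main(String[] args) throws Exception {
        Greeter g = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                new GreeterHandler());

        // Round-trip through serialization: the JDK handles Proxy natively,
        // so no generated class files need to exist on the other side.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(g);
        oos.flush();
        ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()));
        Greeter copy = (Greeter) ois.readObject();
        System.out.println(copy.greet("world")); // prints "hello, world"
    }
}
```

A BCEL-generated class, by contrast, would have to already be on the client's classpath before deserialization could succeed, which is the failure mode Marc describes.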

                            • 11. Re: Invokers, AOP and performance
                              cepage

                              Tim,

                              JBoss already ships with an extremely optimistic locking policy -- NoLockPolicy. In some cases, I have found this to be an acceptable solution when the QueuedPessimisticLockPolicy is killing me.

                              My experience has been that the read-only method stuff is flaky, and corrupts the data under certain scenarios. Perhaps you will have better luck with it...

                              • 12. Re: Invokers, AOP and performance
                                timfox

                                corby-
                                I had a look at NoLockPolicy - but without instance-per-transaction, and in commit option A, all transactions are looking at the same instance. So when an optimistic conflict is detected, I need to roll back BOTH transactions (rather than the usual ONE), since this configuration will have lost the Isolation out of ACID, which would leave the system in an inconsistent state.
                                I take your point though: in some app scenarios rolling back both transactions may be preferable to the load hit of the instance-per-transaction config...
                                BTW, do you think this should be taken to a different forum? I think we have gone off-topic...



                                • 13. Re: Invokers, AOP and performance
                                  cepage

                                  Sorry about that, I will try to get back on-topic.

                                  Here are Rickard's blog comments on the performance issue.

                                  I saw some statement over at the JBoss forums that using interceptor chains produces overhead that is not insignificant. Since we use interceptor chains in our AOP implementation, I decided to test this hypothesis and added 50 no-op interceptors to a method, then invoked it 100k times to see what the actual overhead was.

                                  Without those interceptors the overhead is about 0.009 ms/call with JRockit 7 on my system (1.7 GHz). With the interceptors the overhead is about 0.012 ms/call, so it's an increase of 0.003 ms/call. I guess it depends on how paranoid one is about performance, but to me that's quite acceptable. Typically there are about 3-10 interceptors on any given method, so the overhead is going to be even less in the normal case. What really can make things go slow is if those interceptors are poorly coded, so that what they do takes a lot of time. That's the real (potential) problem AFAICT.
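Rickard's experiment is easy to reproduce in spirit. Below is a rough sketch of this kind of no-op interceptor micro-benchmark (the names and structure are mine, not Rickard's code); absolute numbers will vary with the JVM, hardware, and warm-up:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical micro-benchmark: time a chain of 50 no-op interceptors.
public class InterceptorBench {
    interface Interceptor { Object invoke(int next) throws Exception; }

    static final List<Interceptor> chain = new ArrayList<>();

    // Walk the chain with plain Java calls; the end of the chain is the "target".
    static Object invokeChain(int index) throws Exception {
        if (index == chain.size()) return "done";
        return chain.get(index).invoke(index + 1);
    }

    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 50; i++) {
            chain.add(InterceptorBench::invokeChain); // 50 no-op interceptors
        }
        for (int i = 0; i < 10_000; i++) invokeChain(0); // JIT warm-up
        long start = System.nanoTime();
        int calls = 100_000;
        for (int i = 0; i < calls; i++) invokeChain(0);
        long perCall = (System.nanoTime() - start) / calls;
        System.out.println("overhead ~" + perCall + " ns/call with 50 no-op interceptors");
    }
}
```

The warm-up loop matters: without it the JIT compiles the chain mid-measurement, which is one way naive versions of this benchmark mislead.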


                                  I am assuming that Rickard's implementation is also done with Dynamic Proxies, so this argues heavily in favor of using a DP-only approach.

                                  Also, benefiting from Rickard's experience, since he has already had a go at this, he warns that the potential performance bottleneck is in the instantiation of the Invokers:

                                  Try running a simple testcase that creates a thousand objects and invoke them. How many objects are created as a side-effect of those invokes? It needs to be close to the number of invocations, e.g. as in my case only the method argument array is created. Otherwise the GC goes crazy and the whole thing will stutter badly.

                                  Is it practical to create and maintain a queue of pooled invokers, to avoid endless instantiation? Does JBoss already do something like this?

                                  • 14. Re: Invokers, AOP and performance
                                    timfox

                                    Hi Corby - yes, I saw Rickard's comments.

                                    Interesting results - I agree the overhead appears small. I assume compiler optimisations were turned off for the tests, since I suspect HotSpot (dunno about JRockit) would inline something as simple as a null-interceptor invokeHome() call, which would clearly skew the results.

                                    I don't know if pooling is such a great idea, especially with the new JVMs that use generational GC, since effectively they do object pooling themselves - in fact Sun actively recommends against object pooling in code running on HotSpot.
                                    http://java.sun.com/docs/hotspot/PerformanceFAQ.html#15
                                    It seems to me that you can remove most of the GC thrashing by tuning your GC properly (again, I am assuming HotSpot - I can't speak for JRockit or IBM). I think there's a whole debate to be had on this subject alone.

                                    AFAICT 50 Java virtual method invocations (with the same params) are going to be 50 times slower than one Java virtual method invocation - I guess it all just depends on what your app is doing. If it is doing JDBC and remote calls, then who cares; if it is doing fine-grained calls on entity bean getX() methods in some tight loop, then it becomes an issue. But then again, if you're doing number crunching with entity beans, that's probably a bad design decision in the first place.

                                    Rickard's comment that "it all boils down to how the interceptors are written" seems clear IMO - the main danger is the code people put in interceptors, rather than the interceptor architecture per se.

                                    1 2 Previous Next