> As I understand the speed trials at the moment
> invoking methods on dynamic mbeans is somewhere
> around 10x faster than standard mbeans.
> Is this going to continue to be the case with jdk 1.4? (I saw
> a claim that 1.4 has tremendously sped up invocation
> through reflection).
I haven't tried 1.4 yet, but yes, standard MBean invocation speed depends on reflection performance, for now.
Hmm, I guess I should see if the claims are true...
> The reason I am asking is... xdoclet has a task for
> generating standard mbeans from annotated java files.
> I'm wondering if it is worth writing a (considerably
> more complicated) way of generating a dynamic mbean
> from a (more heavily annotated, with descriptions)
> java file. Is there another way to get the same
> effect (fast invocation, although the descriptions
> are nice too).
For now, the only way to get a fast invocation is to write an implementation of the invoke() method that doesn't rely on reflection.
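To make the idea concrete, here is a minimal sketch of a DynamicMBean whose invoke() dispatches by comparing the operation name directly instead of going through Method.invoke(). The class and operation names (CounterMBean, increment, reset) are illustrative, not from the thread:

```java
import javax.management.*;

// Sketch: a DynamicMBean with a hand-written, reflectionless invoke().
public class CounterMBean implements DynamicMBean {
    private int count;

    public Object invoke(String actionName, Object[] params, String[] signature)
            throws MBeanException, ReflectionException {
        // Reflectionless dispatch: map the operation name straight to a call.
        if ("increment".equals(actionName)) {
            return Integer.valueOf(++count);
        }
        if ("reset".equals(actionName)) {
            count = 0;
            return null;
        }
        throw new ReflectionException(new NoSuchMethodException(actionName));
    }

    public Object getAttribute(String name) throws AttributeNotFoundException {
        if ("Count".equals(name)) return Integer.valueOf(count);
        throw new AttributeNotFoundException(name);
    }
    public void setAttribute(Attribute a) {}
    public AttributeList getAttributes(String[] names) { return new AttributeList(); }
    public AttributeList setAttributes(AttributeList list) { return new AttributeList(); }
    public MBeanInfo getMBeanInfo() {
        // Metadata elided for brevity; a real MBean would describe its
        // attributes and operations here.
        return new MBeanInfo(getClass().getName(), "counter", null, null, null, null);
    }
}
```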
> In particular, would the model mbean help?
As far as speed is concerned, Model MBeans have two options. You can either rely on the default automated and reflection based invocation, or override it by providing the implementation of the invoke() method that doesn't rely on reflection.
>(so far I don't understand what it is for).
The Model MBean metadata classes extend the Dynamic MBean classes with the DescriptorAccess interface. This provides a somewhat standardized mechanism for configuring behavioral properties on the MBean (presentation, logging, caching, etc.) or adding any other metadata that might be useful. A Model MBean implementation may also help you with generating the metadata classes so you can avoid hand-coding the getMBeanInfo() method.
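As a small illustration of what such a descriptor looks like, here's a sketch using the standard DescriptorSupport class. The operation name is hypothetical; "name", "descriptorType" and "currencyTimeLimit" (a caching policy, in seconds) are standard Model MBean descriptor fields:

```java
import javax.management.Descriptor;
import javax.management.modelmbean.DescriptorSupport;

// Sketch: attaching behavioural metadata to a Model MBean operation.
public class DescriptorExample {
    public static Descriptor operationDescriptor() {
        Descriptor d = new DescriptorSupport();
        d.setField("name", "getTemperature");   // hypothetical operation name
        d.setField("descriptorType", "operation");
        d.setField("currencyTimeLimit", "10");  // cache the result for 10 seconds
        return d;
    }

    public static void main(String[] args) {
        System.out.println(operationDescriptor().getFieldValue("name"));
    }
}
```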
A doclet for Dynamic MBeans would work, but you're somewhat restricted, as you can't make use of the descriptors in the metadata objects.
Running the test.performance suite on JBossMX shows that for the hardest reflection based calls (mixed args):
JDK1.3 - 1231ms / 100000 invocations
JDK1.4 - 981ms / 100000 invocations
So, yes it's improved, but not by much. Compare it to <100ms per 100000 invocations on a dynamic mbean.
Juha, I've been keeping something under my hat but I might as well say it now. One of the reasons why I came up with *Providers was:
1 - move all meta-to-Method binding overhead to registration time.
2 - allow *Providers to decide how to dispatch a call.
Number 2 is the most interesting. What I *really* want to do is see if BCEL can generate Providers that do reflectionless dispatch of calls to stdMBeans. Any BCEL overhead could be absorbed at MBean registration time, obviously.
> Number 2 is the most interesting. What I *really*
> want to do is see if BCEL can generate Providers that
> do reflectionless dispatch of calls to stdMBeans.
> Any BCEL overhead could be absorbed at MBean
> registration time, obviously.
Hehe... you just keep reading my mind; that's why I was saying '...for now'.
YES, DO IT!!! :) :)
I was thinking of the exact same thing. I don't care much about standard MBeans, but I'd like that in a Model MBean implementation, so that I have to implement neither getMBeanInfo() (can do already without BCEL) nor invoke() (need BCEL here), and still get good performance and can play with the descriptors.
Of course with Providers that benefit should automatically come to both std and model mbeans.
By the way,
in case David or anyone else was interested, running the test.performance suite against the RI gives the following result for the mixed args reflection based calls:
JDK1.3 - 7890ms / 100000 invocations
JDK1.4 - 4524ms / 100000 invocations
Not having examined the RI code to see how it dispatches invoke() on standard MBeans, I'd guess that they are reflecting in the invoke().
Given that the RI also dispatches calls to DynamicMBeans in circa 100ms I'd say that David's impression that DynamicMBeans are 10x faster than StdMBeans is ever so slightly off the mark :D
I'm holding back on the feeling that "jboss-mx rocks" 'till we are feature complete. Still, it makes me grin to see those numbers...
there I said it
and I feel B-)
1. Soon, due to BCEL + Trevor, standard mbeans will be as fast as dynamic in jbossmx, so there's no pressing reason to write an xdoclet dynamic mbean generator. Is there some descriptor info that the ModelMBean could use that could be generated by xdoclet? If so, what is it?
2. What is missing to run jboss on jbossmx?
Ok, "as fast as dynamic" will not be the case, because we'll be comparing apples to oranges. I.e. it will depend on how a given DMBean dispatches its calls compared to how we dispatch calls.
The most I'm hoping for is removal of the overhead for Method.invoke(). Blah blah blah.
WRT 2 - what's missing? Adrian Brock has said jboss-mx is barfing on something to do with SingleJBoss. Sorry but I can't tell you more than this.
Development tasks should all be listed on the sourceforge tasklist under JBossMX.
> Is there some descriptor info that the
> ModelMBean could use that could be generated by
> xdoclet? If so, what is it?
Well, it might be useful to be able to generate the XML that defines the management interface for an MBean via Xdoclet. Similar to how you generate the EJB XML descriptors based on javadoc tags, you could possibly generate the MBean XML that conforms to the XMBean DTD (http://www.jboss.org/xmbean.dtd ).
This way you'd no longer need to create XXXMBean interfaces for all MBeans.
However, I'm not sure which will be more convenient: the javadoc approach, or extending the current ModelMBean to reflect on a class and look for a predefined method name (say 'getDescriptors()') that returns an array of descriptor objects that the MBean server can then match to each operation and attribute.
Both require a recompile (javadoc vs. javac) if you want to change the descriptor (to use a different policy for example).
Hmm, although you could just run javadoc once, get the XML and then hand-edit simple policy changes inside the XML. That would be the no compile approach to changing a policy.
So I guess the XML doclet generation might be useful.
Ok, more on 1.3 to 1.4 comparisons.
to recap the jboss-mx figures:
JDK1.3 - 1231ms / 100000 invocations
JDK1.4 - 981ms / 100000 invocations
It would appear that only 150ms of the difference can be attributed to improvements in Method.invoke().
However, if you strip our test down to a direct method call vs. Method.invoke() you get:
JDK1.3 - 3.1ms / 100000 direct invocations
JDK1.3 - 190ms / 100000 reflected invocations
JDK1.4 - 1.6ms / 100000 direct invocations
JDK1.4 - 45ms / 100000 reflected invocations
So, to put this into perspective, if we do non-reflective dispatch the absolute most(ish) we can expect to save is:
JDK1.3 - 185ms / 100000 invocations
JDK1.4 - 40ms / 100000 invocations
In absolute times we are talking about:
JDK1.3 - 1046ms / 100000 invocations
JDK1.4 - 941ms / 100000 invocations
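For reference, a stripped-down microbenchmark of the kind behind those direct-vs-reflected numbers looks roughly like this (class and method names are illustrative; absolute timings will of course vary by JDK and machine):

```java
import java.lang.reflect.Method;

// Sketch: time n direct calls vs. n Method.invoke() calls on the same method.
public class ReflectBench {
    public int add(int a, int b) { return a + b; }

    // Returns { directMillis, reflectedMillis } for n invocations each.
    public static long[] bench(int n) throws Exception {
        ReflectBench target = new ReflectBench();
        Method m = ReflectBench.class.getMethod("add", int.class, int.class);
        Object[] params = new Object[] { Integer.valueOf(1), Integer.valueOf(2) };

        long t0 = System.currentTimeMillis();
        int sum = 0;
        for (int i = 0; i < n; i++) sum += target.add(1, 2);      // direct call
        long direct = System.currentTimeMillis() - t0;

        t0 = System.currentTimeMillis();
        for (int i = 0; i < n; i++) m.invoke(target, params);     // reflected call
        long reflected = System.currentTimeMillis() - t0;

        return new long[] { direct, reflected };
    }

    public static void main(String[] args) throws Exception {
        long[] r = bench(100000);
        System.out.println("direct: " + r[0] + "ms, reflected: " + r[1] + "ms");
    }
}
```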
Ok, so you're asking why it can't be 100ms like a DynamicMBean? Well, it turns out that our tests are skewed (apples/oranges). The DynamicMBeans we test don't bother trying to figure out how to dispatch the invoke(), while the standard MBeans do.
For a given management bean I assert that a DynamicMBean implementation that is equally feature-complex as a StandardMBean implementation will only be marginally faster. In JDK1.4 probably as little as 5% faster.
To put it another way, a DMBean that actually routes the invoke()s would probably show closer to 1000ms rather than the current 100ms.
Bear in mind this excludes any actual business logic - it just measures the time required to dispatch an invoke() to a relevant method. If you take the time spent on business logic into account, the reflection vs non-reflection argument begins to look less important - at least in JDK1.4.
Maybe David's "10x faster" in the RI is actually spot-on. In jboss-mx that factor is different - I admit that I'm not 100% sure just what the difference is.
As it stands, jboss-mx has the potential to knock the socks off the RI without resorting to clever hacks like BCEL. IMHO time is better spent getting JBoss to use jboss-mx - I'm going to let BCEL wait 'till we've gone gold.
> To put it another way, a DMBean that actually routes
> the invoke()s would probably show closer to 1000ms
> rather than the current 100ms.
Not following... routes to where? To a different object? Matches metadata before invoke?
I've fixed the problem with SingleJBoss :-)
ObjectName on = createMBean(className, null, loader).getObjectName()
is starting to be used a fair amount in JBoss for
getting object names.
It didn't work correctly in JBoss-MX.
The ObjectInstance was created twice, the one
returned had the original (null) object name :-(
Now I've got some
invoke(blah, "getAttribute", new Object, new String)
to sort out :-)
> > To put it another way, a DMBean that actually routes
> > the invoke()s would probably show closer to 1000ms
> > rather than the current 100ms.
> Not following... routes to where? To a different
> object? Matches metadata before invoke?
I.e. resolve the invoke() args (operation name and String signature) to some sort of action and pass the Object[] args to the code responsible for that action.
Ok, my assertions may be coloured by the idea of doing something straightforward - BCEL generating Providers. For that we're just talking about removing the Method.invoke() overhead. That still leaves the overhead of brute-force selecting a provider based on the entire (String, String) args.
Let me think on this a bit more. Perhaps if I BCEL something one level higher in the call chain... Something which used a divide and conquer heuristic on the (String, String) args in order to route the invoke().
I wonder how many times I can flip-flop on this in one day.
I'm not sure it would be worth the trouble to set up, but ternary search tries would probably be extremely fast for decoding the signature.
perhaps hashing and a case statement would be fast enough;-)
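The hashing-plus-case-statement idea might look something like this sketch: resolve the (operation name, signature) pair to a small integer via a hash lookup, then dispatch with a switch. The operation names and table contents are illustrative; a generated provider could emit bytecode equivalent to the switch:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: hash the (operationName, signature) pair to an int, then switch.
public class HashDispatcher {
    private static final int OP_START = 0, OP_STOP = 1;
    private final Map<String, Integer> table = new HashMap<String, Integer>();

    public HashDispatcher() {
        // Key is the operation name plus its flattened signature.
        table.put("start()", Integer.valueOf(OP_START));
        table.put("stop(java.lang.String)", Integer.valueOf(OP_STOP));
    }

    static String key(String op, String[] sig) {
        StringBuilder b = new StringBuilder(op).append('(');
        for (int i = 0; i < sig.length; i++) {
            if (i > 0) b.append(',');
            b.append(sig[i]);
        }
        return b.append(')').toString();
    }

    public Object invoke(String op, Object[] args, String[] sig) {
        Integer id = table.get(key(op, sig));
        if (id == null) throw new IllegalArgumentException("no such operation: " + op);
        switch (id.intValue()) {
            case OP_START: return "started";               // direct call would go here
            case OP_STOP:  return "stopped " + args[0];    // ...and here
            default:       throw new IllegalStateException();
        }
    }
}
```

A ternary search trie would replace the HashMap lookup; whether that beats hashing the key string is exactly the open question above.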
> 1. Soon, due to BCEL
proto should work now.