I'm glad you liked the TST. I thought it was really cool as soon as I saw it, but my first attempt to implement something based on it was vetoed by management;-)
I think your hashCode-based tree is pretty good for something that can be written quickly. I was actually thinking of branching on the characters in the strings, which requires more setup time but no hashCode computations. Given how quick invocation already is, though, it's probably not worth the extra complexity of a finer-grained TST.
I originally wanted to go char-by-char.
One of the first things I tested was the overhead of charAt() or toCharArray() on the input strings. The charAt call seemed faster on average but also more sensitive to the length of the input string.
Neither's performance was inspiring.
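For what it's worth, that kind of comparison can be sketched like this (a rough micro-benchmark with invented names, not the harness actually used; absolute timings will vary by VM and string length):

```java
// Rough sketch: summing chars via charAt() versus copying the string
// into a char[] first. toCharArray() pays an array allocation and copy
// per call, which is why it tends to hurt on longer inputs.
public class CharAccessBench {
    static long viaCharAt(String s, int reps) {
        long sum = 0;
        for (int r = 0; r < reps; r++)
            for (int i = 0; i < s.length(); i++)
                sum += s.charAt(i);
        return sum;
    }

    static long viaToCharArray(String s, int reps) {
        long sum = 0;
        for (int r = 0; r < reps; r++) {
            char[] cs = s.toCharArray(); // copies the backing chars every call
            for (int i = 0; i < cs.length; i++)
                sum += cs[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        String s = "someOperationName";
        long t0 = System.currentTimeMillis();
        viaCharAt(s, 100000);
        long t1 = System.currentTimeMillis();
        viaToCharArray(s, 100000);
        long t2 = System.currentTimeMillis();
        System.out.println("charAt: " + (t1 - t0) + "ms, toCharArray: " + (t2 - t1) + "ms");
    }
}
```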
In the end it was the fact that String caches the value of its hashcode (and that our callers are likely to reuse our inputs) which convinced me to go this route.
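A minimal sketch of why the cached hash matters (names are illustrative, not the actual adapter code): a map lookup calls opname.hashCode(), and String computes that value once and caches it, so callers that reuse the same key Strings never rescan characters after the first lookup.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative dispatch table keyed by operation name. Because String
// caches its hashCode after the first call, repeated lookups on a
// reused key String cost only a field read plus bucket probing.
public class OpTable {
    private final Map<String, Integer> table = new HashMap<>();

    public void register(String opname, int opIndex) {
        table.put(opname, Integer.valueOf(opIndex));
    }

    public int lookup(String opname) {
        // get() uses opname.hashCode(), which is cached inside the String.
        Integer idx = table.get(opname);
        return idx == null ? -1 : idx.intValue();
    }
}
```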
Very interesting what turns out to be faster... I would never have guessed. There's nothing like measuring, is there?
OK, one question:
when you generate the new code, why do you still need to do a lookup on the operation providers? At bytecode-creation time you already know all the allowed MBean operations, so you should be able to hard-code the logic in the invoke method.
Looking at the code, the approach I had in mind was to have MBeanCapability call the static create() of StandardMBeanAdapter, and in that method generate a subclass of StandardMBeanAdapter specific to each MBean registered. The generated subclass would override invoke() to call defaultTarget.someMethod directly, in an 'if - else' block covering all the operations found on the standard MBean's interface.
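The generated subclass described above might look roughly like this in source form (all class and method names are invented for illustration; a real generator would emit bytecode, not Java source):

```java
// Hypothetical shape of a per-MBean generated adapter: invoke() goes
// straight to the target in an if-else chain, one branch per operation
// on the standard MBean's interface, all fixed at generation time.
public class GeneratedAdapterSketch {
    public interface SomeService {
        void start();
        int getCount();
    }

    private final SomeService defaultTarget;

    public GeneratedAdapterSketch(SomeService target) {
        this.defaultTarget = target;
    }

    public Object invoke(String opname, Object[] params, String[] signature) {
        // Every allowed operation is known when the class is emitted,
        // so these branches are hard-coded rather than looked up.
        if ("start".equals(opname) && signature.length == 0) {
            defaultTarget.start();
            return null;
        } else if ("getCount".equals(opname) && signature.length == 0) {
            return Integer.valueOf(defaultTarget.getCount());
        }
        throw new IllegalArgumentException("unknown operation: " + opname);
    }
}
```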
In this case, why is the operation provider lookup needed at all?
> defaultTarget.someMethod in an 'if - else' block that
> contains all the operations that can be found from
> the standard MBean's interface.
I'm approaching it this way for 2 reasons:
1) I think I can get reflected invokes down to 200-300ms per 100000 - it'll get better in JDK 1.4, and to be honest I don't think we'll care about paying 45ms per 100000.
2) If we do go non-reflected, this approach to identifying the opKEY can be altered so that it resolves to an int and Object pair (instead of Method/Object), so the int can be used in a switch statement. The important thing is that we can keep the opKEY-resolving code in a superclass and not have to write a generator for it.
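Point 2 could look roughly like this (all names invented; the plain map here stands in for whatever TST/hash structure actually does the resolving, and the string returns stand in for calls on the target):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: shared opKEY-resolving code yields an int, so a generated
// subclass can dispatch with a switch instead of a reflected invoke.
public class IntDispatch {
    private final Map<String, Integer> opKeys = new HashMap<>();

    public IntDispatch() {
        opKeys.put("start", Integer.valueOf(0));
        opKeys.put("stop", Integer.valueOf(1));
    }

    // This resolving logic lives in the superclass in the scheme above,
    // so the generator never has to emit it.
    private int resolveOpKey(String opname) {
        Integer k = opKeys.get(opname);
        return k == null ? -1 : k.intValue();
    }

    // What the generated invoke() could reduce to.
    public String invoke(String opname) {
        switch (resolveOpKey(opname)) {
            case 0:  return "started"; // would be defaultTarget.start()
            case 1:  return "stopped"; // would be defaultTarget.stop()
            default: return "no such op: " + opname;
        }
    }
}
```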
> invoke's arguments include String opname, String
> signature. Together they form what I'll call an
> opKEY. In the case of a reflected (std/model) mbean
> we are mapping an opKEY to a Method.
> The problem is that if you want to use an opKEY as
> the key for a hashmap (which it currently does) you
> need to assemble all of the opKEY bits into a single
I've run some tests too, and found that if you want to keep a fair interface for the TST (one that takes opname and signature separately), then OptimizeIt says I spend 50% of the time normalizing the key, i.e. doing this:
String[] key = new String[signature.length + 1];
key[0] = opname;
System.arraycopy(signature, 0, key, 1, signature.length);
With that I end up being 5 times faster than the StringBuffer version, instead of the 10 times you got.
I just wanted to know whether you used an interface for search() like
search(String opname, String[] signature)
or whether you were able to normalize the key while keeping the performance.
I approached this from the perspective that the worst thing I could do was create intermediate objects.
So no, I don't normalise the opKEY because it's too expensive.
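One way to avoid normalising, as a sketch (not the actual MX4J code): fold opname and each signature element into a single int hash, and store it in an int-keyed, open-addressed table, so no intermediate array or StringBuffer is allocated per lookup. Note this sketch trusts the combined hash and doesn't verify the full key on a collision, which a real table would have to.

```java
// Allocation-free opKEY lookup sketch: the key is never materialised,
// only hashed piecewise. keys[] holds combined hashes, vals[] the
// dispatch targets; collisions are resolved by linear probing.
public class NoAllocOpTable {
    private final int[] keys;
    private final Object[] vals;
    private final boolean[] used;

    public NoAllocOpTable(int capacity) {
        keys = new int[capacity];
        vals = new Object[capacity];
        used = new boolean[capacity];
    }

    static int hash(String opname, String[] signature) {
        int h = opname.hashCode(); // cached inside String after first call
        for (int i = 0; i < signature.length; i++)
            h = h * 31 + signature[i].hashCode();
        return h;
    }

    public void put(String opname, String[] signature, Object target) {
        int h = hash(opname, signature);
        int i = (h & 0x7fffffff) % keys.length;
        while (used[i] && keys[i] != h)
            i = (i + 1) % keys.length;
        used[i] = true;
        keys[i] = h;
        vals[i] = target;
    }

    public Object get(String opname, String[] signature) {
        int h = hash(opname, signature);
        int i = (h & 0x7fffffff) % keys.length;
        while (used[i]) {
            if (keys[i] == h)
                return vals[i];
            i = (i + 1) % keys.length;
        }
        return null;
    }
}
```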
By the way Simone,
are you on the expert group for either 1.1 or 1.5? ]:)
> By the way Simone,
> are you on the expert group for either 1.1 or 1.5?
For JSR 160. I don't know which version that will be: 1.5, 2.0, or something else.
Why the devil :) ?
> Why the devil :) ?
Urm, well, *cough*. I'm curious about JSR 160, especially the proposed interceptors and client context.
The ]:) is just me turning coy: I figured you might be the one representing Compaq in the EG, and, like I said (without presuming anything), there are things I'm very curious about...