Where the state lives is a good topic. There are cases where we need the state to be given by the client and cases where we need the state in the interceptor.
Ex: the READ-AHEAD interceptor for CMP
The scenario is simple: the read-ahead value is dictated by use cases, meaning that a given client may want to display 20 lines, another one 50, etc. This is clearly how the "bucket" is done in many frameworks. Right now this data is only given through the static configuration of the interceptor in the container.
We need to have that information passed in the Invocation. Bill, I believe there are two programmatic ways for the client to do that.
1- the client casts his proxy to the generic (ClientProxy) interface. This interface needs to be added to the dynamic proxy at construction time. That interface then supports a generic setter for this kind of per-invocation state.
The implementation of ClientProxy lives in the first client interceptor, or maybe in its own client interceptor, and it does the invocation.setValue() call with the client-supplied value.
This value is sent to the server, or just passed down the chain through the Invocation. The server-side CMP implementation receives the Invocation, can then retrieve the value and do its work with it. So really it means that while we would maintain the configuration as is, meaning that we would still support the basic read-ahead setting as it is today, it allows that property to be overridden BY THE CLIENT.
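To make point 1 concrete, here is a rough sketch. Only ClientProxy, invocation.setValue() and the read-ahead idea come from the discussion above; the map-backed Invocation, the getValue() method, the "READ_AHEAD" key and the class names are my assumptions for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Generic interface the dynamic proxy would expose at construction time.
interface ClientProxy {
    void setValue(Object key, Object value);   // stash per-call state
}

// Stand-in for the real Invocation: a payload travelling down the chain.
class Invocation {
    private final Map<Object, Object> payload = new HashMap<>();
    public void setValue(Object key, Object value) { payload.put(key, value); }
    public Object getValue(Object key) { return payload.get(key); }
}

// First client interceptor: implements ClientProxy and copies the
// client-supplied values into each outgoing Invocation.
class ClientInterceptor implements ClientProxy {
    private final Map<Object, Object> clientState = new HashMap<>();
    public void setValue(Object key, Object value) { clientState.put(key, value); }

    public Invocation invoke(Invocation inv) {
        clientState.forEach(inv::setValue);    // pass state down the chain
        return inv;
    }
}

// Server side: prefer the client-supplied value, fall back to the
// statically configured read-ahead.
class ServerCmpLayer {
    static final int DEFAULT_READ_AHEAD = 20;  // static container config

    public int readAhead(Invocation inv) {
        Object v = inv.getValue("READ_AHEAD");
        return v != null ? (Integer) v : DEFAULT_READ_AHEAD;
    }
}
```

The client would then do something like `((ClientProxy) myProxy).setValue("READ_AHEAD", 50)` before calling the finder.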
2- The interceptor exposes an interface. That interface is the (ReadAhead) interface, which has setReadAhead(int value) in its signature. I believe Hiram already has support for that kind of interface declaration. The difference here is that we do need 2 interceptors when we go distributed client-server: the client-side interceptor does the invocation.setValue() call, and the server-side one is the same as in point 1, meaning it retrieves the value from the Invocation itself.
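A sketch of this second, typed style. setReadAhead(int value) is the signature given above; the simplified Invocation, the "READ_AHEAD" key and the interceptor classes are illustrative assumptions:

```java
import java.util.HashMap;
import java.util.Map;

// The typed interface the interceptor contributes to the proxy.
interface ReadAhead {
    void setReadAhead(int value);
}

// Stand-in for the real Invocation payload.
class Invocation {
    private final Map<Object, Object> payload = new HashMap<>();
    public void setValue(Object key, Object value) { payload.put(key, value); }
    public Object getValue(Object key) { return payload.get(key); }
}

// Client-side interceptor: implements the typed interface and records
// the value so it can do invocation.setValue() on each call.
class ReadAheadClientInterceptor implements ReadAhead {
    private Integer readAhead;                 // null = use server default
    public void setReadAhead(int value) { this.readAhead = value; }

    public Invocation invoke(Invocation inv) {
        if (readAhead != null) inv.setValue("READ_AHEAD", readAhead);
        return inv;
    }
}

// Server-side interceptor: identical retrieval to point 1.
class ReadAheadServerInterceptor {
    public int readAhead(Invocation inv, int configured) {
        Object v = inv.getValue("READ_AHEAD");
        return v != null ? (Integer) v : configured;
    }
}
```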
The state I am talking about here is almost "configuration" state. The state you mention in the clustering case, Bill, is clearly state that comes from the system. Whether we keep that state in the interceptor or in a central hashmap is irrelevant in this case; it is a matter of coding style, and I believe both coding styles (field variable vs. invocation retrieval) are needed.
When the client passes the value (not applicable to the clustering stuff), then the Invocation needs to carry it. When we are talking about the clustering values, I am OK with both coding practices.
That being said, I like the idea of having the configuration centrally visible on a running system. We assemble the container and we keep a central hashmap of all the values. We can put that hashmap on the JMX bus for that logical container and modify/monitor it centrally, which would be nice even for programmatic reasons. You can then modify the configuration without having to hold a direct reference to it; think administration purposes.
Let me give you an example: commit options A/B/C/D. Let's say we want to modify the option, and maybe even modify the period of refresh for D. Today this is very static. That variable lives in an interceptor today (it is the commit-option field in the cache interceptor), but the point is HOW DO WE GET AT THAT INTERCEPTOR in a standardized way.
Here is a thought. I believe we can separate the 2 issues so that they become orthogonal. Let's assume we store the configuration for the whole stack of interceptors in a central hashmap. It will mean that we can change the commit option of the container through the administration interface. If you look at that configuration it will show an entry like "commitOption" with its current value.
Then that map has an adapter that retrieves the setters from the interceptors. So the cache interceptor has setCommitOption(); the adapter knows that the interceptor wants the "commitOption" call, so when the value changes, the adapter that holds the pointer to the interceptor (that adapter is built at deployment time) calls setCommitOption() on the interceptor with the new value.
The mapping of names to methods needs to be something dumb: remove the "set" prefix, lowercase the first letter, and use that as the key.
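Roughly, such an adapter could be built with plain reflection. Everything here except setCommitOption() and the "dumb" name mapping described above is an illustrative assumption:

```java
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

// Per-interceptor adapter sitting between the central config hashmap
// and the interceptor's setters. Built at deployment time.
class ConfigAdapter {
    private final Object interceptor;
    private final Map<String, Method> setters = new HashMap<>();

    ConfigAdapter(Object interceptor) {
        this.interceptor = interceptor;
        // Index every public setXxx(one arg) under the key "xxx":
        // strip "set", lowercase the first letter.
        for (Method m : interceptor.getClass().getMethods()) {
            String n = m.getName();
            if (n.startsWith("set") && n.length() > 3 && m.getParameterCount() == 1) {
                String key = Character.toLowerCase(n.charAt(3)) + n.substring(4);
                setters.put(key, m);
            }
        }
    }

    // Called when a value in the central hashmap changes.
    void push(String key, Object value) {
        Method m = setters.get(key);
        if (m == null) return;                 // no matching setter, ignore
        try {
            m.invoke(interceptor, value);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("failed to push " + key, e);
        }
    }
}

// Example target: the cache interceptor with its commit-option field.
class CacheInterceptor {
    private String commitOption = "A";
    public void setCommitOption(String o) { this.commitOption = o; }
    public String getCommitOption() { return commitOption; }
}
```

Changing "commitOption" in the central map then just means calling `adapter.push("commitOption", "B")`, whatever coding style the interceptor itself uses internally.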
This way we solve the coding-style problem while retaining the centralized approach to configuration. This is uber powerful. One application: if we have a cluster running, we can push configuration to the whole cluster easily.
It is good infrastructure.
Do we see eye to eye on this one?
Configuring JUST the datasource?
1)configure the DataSource
from jboss/docs/examples, copy the oracle-ds.xml to your jboss/server/default/deploy directory
2) modify oracle-ds.xml to point to your Oracle database (IP, instance, username, password)
-take note of what the JNDI name is; you need it for step 3
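For reference, the relevant part of oracle-ds.xml looks roughly like this (host, SID, username and password are placeholders you must replace with your own values):

```xml
<datasources>
  <local-tx-datasource>
    <jndi-name>OracleDS</jndi-name>
    <connection-url>jdbc:oracle:thin:@yourhost:1521:yoursid</connection-url>
    <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
    <user-name>youruser</user-name>
    <password>yourpassword</password>
  </local-tx-datasource>
</datasources>
```

The `<jndi-name>` here is what step 3 refers to; it is bound under the `java:/` namespace, so "OracleDS" becomes "java:/OracleDS".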
3) modify jboss/server/default/conf/standardjbosscmp-jdbc.xml like so:
Use the JNDI name from step 2 (e.g. "java:/OracleDS").
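The change amounts to pointing the defaults at your datasource, roughly like this (the exact type-mapping name depends on your Oracle version; check the `<type-mapping>` entries already defined in that file):

```xml
<jbosscmp-jdbc>
  <defaults>
    <datasource>java:/OracleDS</datasource>
    <datasource-mapping>Oracle9i</datasource-mapping>
  </defaults>
</jbosscmp-jdbc>
```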
I hope you consider this "in detail". I'm sure I could break it down even more, but that would be somewhat of an overkill for a forum reply. There are other ways to define the CMP-DB mapping, but this should work for a start.