1. 3856577
norbert Nov 23, 2004 4:25 AM (in response to nine_mirrors) Hi,
I am using the JBoss AOP final release. Is there any way I can intercept arrays? I tried using a field interceptor, but I get an error stating that interception failed during deployment.
I am also using AOP for logging in my project. For now I don't want AOP to handle exceptions, but I do need to log errors when an exception occurs. I handled this by adding a try block inside the advice, but that doesn't feel like a good solution. Is there a better way to do this?
Help me, please.
Thanks in advance -
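The log-and-rethrow pattern described above (catch inside the advice, log, then propagate) can be sketched in plain Java. This is not the JBoss AOP API; the class and method names here (LoggingAdvice, invoke, the Callable standing in for the joinpoint) are illustrative assumptions showing only the control flow:

```java
import java.util.concurrent.Callable;
import java.util.logging.Logger;

public class LoggingAdvice {
    private static final Logger LOG = Logger.getLogger(LoggingAdvice.class.getName());

    // Wraps the intercepted invocation: log the failure, then rethrow
    // so the advice never actually handles the exception itself.
    public static <T> T invoke(Callable<T> joinpoint) throws Exception {
        try {
            return joinpoint.call();
        } catch (Exception e) {
            LOG.severe("invocation failed: " + e);
            throw e; // rethrow unchanged; the caller's handling is unaffected
        }
    }
}
```

The key point is the rethrow: the advice observes the exception for logging but leaves exception handling entirely to the caller.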
2. Re: Possible newbie question but still a problem
norbert Nov 23, 2004 5:23 AM (in response to nine_mirrors) You are right: as of JBossCache 1.1, get(Fqn, Key) is actually only called by the JUnit test cases. With respect to TreeCache itself it is (as yet) redundant.
Instead of returning null, your CacheLoader should return a prefilled Map of key/value pairs when get(Fqn) is called.
Have a look at the CacheLoaders that come with the JBossCache source (e.g. look at how org.jboss.cache.loader.FileCacheLoader implements get(Fqn)). -
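A minimal sketch of that idea, with the actual CacheLoader interface and the backing store simplified away (the real method is Map get(Fqn name) on org.jboss.cache.loader.CacheLoader; RowCacheLoader and the in-memory store map are assumptions for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class RowCacheLoader {
    // Stand-in for the backing database: node Fqn -> that node's attributes.
    private final Map<String, Map<String, Object>> store = new HashMap<>();

    public void insertRow(String fqn, Map<String, Object> row) {
        store.put(fqn, new HashMap<>(row));
    }

    // get(Fqn) returns the prefilled key/value map for the whole node,
    // or null when the node does not exist in the backing store.
    public Map<String, Object> get(String fqn) {
        Map<String, Object> row = store.get(fqn);
        return row == null ? null : new HashMap<>(row);
    }
}
```

The cache then serves individual get(Fqn, Key) lookups out of the map returned by one get(Fqn) call, so the store is hit once per node rather than once per key.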
3. Re: Possible newbie question but still a problem
nine_mirrors Nov 23, 2004 6:38 AM (in response to nine_mirrors) "norbert" wrote:
You are right: as of JBossCache 1.1, get(Fqn, Key) is actually only called by the JUnit test cases. With respect to TreeCache itself it is (as yet) redundant.
Instead of returning null, your CacheLoader should return a prefilled Map of key/value pairs when get(Fqn) is called.
This is exactly NOT what I want to do. The table is far too large to be moved from the database into the cache, and the individual entries are not guaranteed to be live either, which means we would be moving unnecessary data from the database to the cache. Since the whole point is to cut down on database accesses, this does not make sense.
Also, if data is added to the database, I would have to fetch the whole table again every time I get a cache miss. -
4. Re: Possible newbie question but still a problem
norbert Nov 23, 2004 7:00 AM (in response to nine_mirrors) It's very inefficient to store a whole table in a single node: transactional locking, eviction, and cache loading work at node level, not at key/value-pair level. As a result, every put(Fqn, key, value) would lock your entire data set if the Fqn being used is a constant.
You are better off mapping the primary keys of your table to different Fqns (i.e. nodes in the tree), and the values from the rows of your table to the attributes stored in each node.
This way, accessing a node will load a single row from the database.
You can then specify appropriate timeouts for the eviction policy, so that data from rarely accessed rows is dropped by evicting the corresponding nodes. -
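The per-row mapping described above can be sketched as follows. TreeCache itself is replaced here by a plain map (the real calls would be cache.put(fqn, key, value) and cache.get(fqn, key) on org.jboss.cache.TreeCache); the table name "mytable" and the RowMapping class are illustrative assumptions:

```java
import java.util.HashMap;
import java.util.Map;

public class RowMapping {
    // Stand-in for the tree: one node (attribute map) per row Fqn.
    private final Map<String, Map<String, Object>> nodes = new HashMap<>();

    // Map a primary key to its own Fqn, e.g. 12 -> "/mytable/12",
    // so locking and eviction apply to a single row, not the whole table.
    public static String fqnFor(String table, Object primaryKey) {
        return "/" + table + "/" + primaryKey;
    }

    // put(Fqn, key, value): touches only this one node (row).
    public void put(String fqn, String column, Object value) {
        nodes.computeIfAbsent(fqn, k -> new HashMap<>()).put(column, value);
    }

    public Object get(String fqn, String column) {
        Map<String, Object> node = nodes.get(fqn);
        return node == null ? null : node.get(column);
    }
}
```

With this layout, a cache miss on "/mytable/12" lets the CacheLoader issue a SELECT for exactly one row, and evicting that node discards exactly one row.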
5. Re: Possible newbie question but still a problem
nine_mirrors Nov 23, 2004 7:33 AM (in response to nine_mirrors) "norbert" wrote:
It's very inefficient to store a whole table in a single node: transactional locking, eviction, and cache loading work at node level, not at key/value-pair level. As a result, every put(Fqn, key, value) would lock your entire data set if the Fqn being used is a constant.
You are better off mapping the primary keys of your table to different Fqns (i.e. nodes in the tree), and the values from the rows of your table to the attributes stored in each node.
This way, accessing a node will load a single row from the database.
The database has at least 500k entries, and at least 70% of them are live. Imagine what that will do to the node hierarchy. Also, it seems inefficient to me to store just one value in each HashMap.
I don't find one node per row to be an attractive solution.