Hi! Since no answer has been posted for your request, I'll try some suggestions to see if we can find a solution together (I'm very interested in your example).
My first thought was to have first-level local isolated caches with second-level replicated caches on each node. But the problem is that the first-level caches are not updated.
My second thought was to have a complex eviction policy between the first- and second-level caches, with a flag in a shared memory space. But that's not a very clean solution.
It seems that a cache policy like the one you propose is not feasible!
If you find an answer to this problem, please post it here!
I will post back if I find a solution.
I have an idea based on your thoughts. Unfortunately it doesn't eliminate backbone traffic, but it should save memory if it works. The idea is this: the cache will be partitioned by node; each node will have a region in the cache.
The eviction policy uses the node's identifier to either
a) evict all entities not in this node's region, or
b) defer to the LRU algorithm for the rest of the cache.
Do you think this (admittedly sketchy) solution will work?
I should clarify. When I say node, I mean cluster node, not cache node.
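The decision step above can be sketched in plain Java, independent of any cache library. The class and method names here are made up for illustration; the region-naming convention (each region prefixed with the owning cluster node's id) is an assumption:

```java
// Sketch of the eviction decision described above: given the local cluster
// node's id and a cached entry's region name, evict immediately if the entry
// belongs to another node's region; otherwise defer to normal LRU handling.
// NodeAwareEvictionDecider and Decision are illustrative names, not part of
// any real cache API.
public class NodeAwareEvictionDecider {
    public enum Decision { EVICT, DEFER_TO_LRU }

    private final String localNodeId;

    public NodeAwareEvictionDecider(String localNodeId) {
        this.localNodeId = localNodeId;
    }

    /** Region names are assumed to look like "/node1/customers", i.e. prefixed with the owning node's id. */
    public Decision decide(String regionName) {
        if (regionName.startsWith("/" + localNodeId + "/")) {
            return Decision.DEFER_TO_LRU;  // our own region: normal LRU applies
        }
        return Decision.EVICT;             // another node's region: evict eagerly
    }
}
```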
I don't understand how this will save memory, because you will still have the 2000 objects replicated... hmm. Were you thinking of using the 2nd-level cache in combination with this region separation?
Can you tell me whether you are using a specific application, or is it a custom-developed one?
This is a custom application.
Yes, I intend to use the 2nd-level cache in combination with the region separation. I had deadlocking problems with the custom eviction policy, and it's not obvious why that was happening. So now I'm trying to dynamically manipulate the region configuration during startup based on the node name.
The idea is that the configuration file would have a region setting for each cluster node. The time-to-live would be set to 1 second for every server-node region whose name is not the same as the local cluster node's name. It's a bit of a hack, but...
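In treecache.xml that per-node region setup would look roughly like this (region names, node counts, and TTL values are illustrative; the TTLs get overridden at startup as described):

```xml
<attribute name="EvictionPolicyConfig">
   <config>
      <attribute name="wakeUpIntervalSeconds">5</attribute>
      <!-- one region per cluster node; on startup, every region whose name
           does not match the local node gets its TTL forced down to 1 second -->
      <region name="/node1">
         <attribute name="maxNodes">5000</attribute>
         <attribute name="timeToLiveSeconds">1800</attribute>
      </region>
      <region name="/node2">
         <attribute name="maxNodes">5000</attribute>
         <attribute name="timeToLiveSeconds">1800</attribute>
      </region>
   </config>
</attribute>
```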
This did work. I was able to define regions by server node in treecache.xml and dynamically control their properties at configuration time in a derived LRUPolicy class. Expiration for regions other than the one whose prefix matched the server node's name was set to 1 second.
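The configuration-time rule can be shown on its own, outside any JBoss Cache class. In the real solution this logic lives in a class derived from LRUPolicy; the standalone class and method names below are illustrative only:

```java
// Sketch of the configuration-time rule described above: the region whose
// name matches the local server node keeps its configured TTL; every other
// node's region gets a 1-second TTL so its replicated copies expire almost
// immediately. RegionTtlConfigurer is an illustrative name, not a real
// JBoss Cache or Hibernate class.
public class RegionTtlConfigurer {
    private final String localNode;
    private final int normalTtlSeconds;

    public RegionTtlConfigurer(String localNode, int normalTtlSeconds) {
        this.localNode = localNode;
        this.normalTtlSeconds = normalTtlSeconds;
    }

    /** Returns the TTL (in seconds) to apply to a region named like "/node1". */
    public int ttlFor(String regionName) {
        return regionName.equals("/" + localNode) ? normalTtlSeconds : 1;
    }
}
```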
I also overrode Hibernate's TreeCache class so that updates would do a remove across remote regions, i.e. invalidation.
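The remove-across-remote-regions behavior can be modeled with plain maps standing in for TreeCache regions. The real change was an override of Hibernate's TreeCache provider class; this self-contained model only shows the invalidation pattern, and all names in it are made up:

```java
import java.util.HashMap;
import java.util.Map;

// Model of the update-as-invalidation idea above: an update writes into the
// writer's own region and removes the now-stale copy from every other region,
// so remote nodes reload fresh state on their next miss. Plain maps stand in
// for TreeCache regions; InvalidatingRegionStore is an illustrative name.
public class InvalidatingRegionStore {
    private final Map<String, Map<String, Object>> regions = new HashMap<>();

    public InvalidatingRegionStore(String... regionNames) {
        for (String r : regionNames) {
            regions.put(r, new HashMap<>());
        }
    }

    /** An update from writerRegion stores locally and invalidates every other region. */
    public void update(String writerRegion, String key, Object value) {
        for (Map.Entry<String, Map<String, Object>> e : regions.entrySet()) {
            if (e.getKey().equals(writerRegion)) {
                e.getValue().put(key, value);   // refresh the writer's own copy
            } else {
                e.getValue().remove(key);       // remove (invalidate) stale remote copies
            }
        }
    }

    public Object get(String region, String key) {
        Map<String, Object> r = regions.get(region);
        return r == null ? null : r.get(key);
    }
}
```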