Evictions are local to each instance and timestamps are not replicated across a cluster. This is because different instances in a cluster may access different nodes in the cache at different rates, so timestamps are only really relevant to a single cache instance.
What you can do, if you know which node needs to be "kept alive" beyond the timeout, is to use markNodeCurrentlyInUse(fqn, timeout).
This will prevent the node from being evicted until the timeout expires or until you call unmarkNodeCurrentlyInUse(fqn).
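For example, a minimal sketch (the Fqn and the 60-second timeout are illustrative, and the exact region lookup should be checked against your release's javadoc):

    import org.jboss.cache.Fqn;
    import org.jboss.cache.TreeCache;
    import org.jboss.cache.eviction.Region;

    public class KeepAliveExample {
        public static void pinWhileWorking(TreeCache cache) throws Exception {
            Fqn fqn = Fqn.fromString("/orders/123");
            // getRegion(Fqn) is assumed here - some releases key eviction
            // regions by String instead, so verify against your javadoc.
            Region region = cache.getEvictionRegionManager().getRegion(fqn);

            // Shield the node from local eviction for up to 60 seconds.
            region.markNodeCurrentlyInUse(fqn, 60000L);
            try {
                // ... work with the node ...
            } finally {
                // Make the node evictable again.
                region.unmarkNodeCurrentlyInUse(fqn);
            }
        }
    }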
(follow up from same dev group as szkazmi)
What versions of JBoss / JBossCache are required to make this call?
We are currently using JBoss 4.0.3SP1 and JBossCache 1.4.0SP1.
Currently we are using the TreeCacheMBean mechanism for interfacing with JBossCache.
Does the markNodeCurrentlyInUse call asynchronously update the nodes across the cluster?
We would like to be able to reset the TTL timestamp across the cluster, but ideally we wouldn't want our transaction to wait for this update to complete.
All eviction logic, queues, and timers are local to a single cache instance and are not shared across a cluster. Marking a node as in use likewise only affects a single cache instance.
Since you have your own eviction policy that actually does a remove instead of an evict, that's where you may run into a problem. Perhaps you could make your own RPC call across the cluster to have all caches call markNodeCurrentlyInUse()? Why do you need a custom eviction policy that does a remove anyway - why not just a standard eviction policy? If you don't use a cache loader, an evict has almost the same effect as a remove, and it won't affect the entire cluster.
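A rough sketch of that RPC idea, assuming a dedicated JGroups channel for the control messages (ClusterInUseMarker, markInUse, and broadcastMarkInUse are made-up names, and the region-manager calls are assumptions to verify against your javadoc):

    import org.jboss.cache.Fqn;
    import org.jboss.cache.TreeCache;
    import org.jgroups.JChannel;
    import org.jgroups.blocks.GroupRequest;
    import org.jgroups.blocks.RpcDispatcher;

    public class ClusterInUseMarker {
        private final TreeCache cache;
        private final RpcDispatcher dispatcher;

        public ClusterInUseMarker(TreeCache cache, JChannel channel) {
            this.cache = cache;
            // 'this' is the server object: peers invoke markInUse() on it.
            this.dispatcher = new RpcDispatcher(channel, null, null, this);
        }

        // Invoked on every member (including this one, since a channel
        // delivers its own messages by default); touches only the local cache.
        public void markInUse(String fqnStr, long timeout) throws Exception {
            Fqn fqn = Fqn.fromString(fqnStr);
            cache.getEvictionRegionManager().getRegion(fqn)
                 .markNodeCurrentlyInUse(fqn, timeout);
        }

        // GET_NONE makes the call fire-and-forget, so the caller's
        // transaction never waits for the cluster-wide update to complete.
        public void broadcastMarkInUse(String fqnStr, long timeout) throws Exception {
            dispatcher.callRemoteMethods(null, "markInUse",
                    new Object[]{fqnStr, new Long(timeout)},
                    new Class[]{String.class, long.class},
                    GroupRequest.GET_NONE, 0);
        }
    }

Since the broadcast is asynchronous, this would also address the earlier concern about not making the transaction wait.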
Oh and FYI this API call is available in 1.4.X - you just need to get a hold of the TreeCache, not just the TreeCacheMBean, and call getEvictionRegionManager().
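For instance, from inside the same server VM, something along these lines should get you from the MBean to the region manager (jboss.cache:service=TreeCache is a placeholder for whatever ObjectName your cache is deployed under):

    import javax.management.MBeanServer;
    import org.jboss.cache.Fqn;
    import org.jboss.cache.TreeCache;
    import org.jboss.cache.TreeCacheMBean;
    import org.jboss.mx.util.MBeanProxyExt;
    import org.jboss.mx.util.MBeanServerLocator;

    public class MarkInUseViaMBean {
        public static void markInUse(String fqnStr, long timeout) throws Exception {
            // Locate the JBoss MBean server and proxy the cache's MBean.
            MBeanServer server = MBeanServerLocator.locateJBoss();
            TreeCacheMBean mbean = (TreeCacheMBean) MBeanProxyExt.create(
                    TreeCacheMBean.class, "jboss.cache:service=TreeCache", server);

            // Drop down from the MBean to the TreeCache itself, since the
            // MBean interface does not expose the eviction region manager.
            TreeCache cache = mbean.getInstance();

            Fqn fqn = Fqn.fromString(fqnStr);
            cache.getEvictionRegionManager().getRegion(fqn)
                 .markNodeCurrentlyInUse(fqn, timeout);
        }
    }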
Another not-too-elegant approach would be to start a transaction and read the node, then mark the node as in use. This way,
1) local evictions won't remove your node because you have marked it as in use.
2) remote evictions won't remove your node since your tx has a lock on it. Remote evictions will in fact fail and be put on a recycle queue if you are using synchronous replication, or fail quietly if you are using async replication.
When you are done with your update, call unmarkNodeCurrentlyInUse() and commit your transaction to release locks.
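A sketch of that sequence, assuming a JTA UserTransaction looked up from JNDI (the Fqn, key, and timeout values are illustrative, and the region API is assumed as above):

    import javax.naming.InitialContext;
    import javax.transaction.UserTransaction;
    import org.jboss.cache.Fqn;
    import org.jboss.cache.TreeCache;

    public class PinnedUpdate {
        public static void update(TreeCache cache, Object newValue) throws Exception {
            UserTransaction tx = (UserTransaction)
                    new InitialContext().lookup("UserTransaction");
            Fqn fqn = Fqn.fromString("/orders/123");

            tx.begin();
            try {
                // The read acquires a lock held until the tx ends, which is
                // what blocks eviction-driven removes from other instances.
                cache.get(fqn, "status");

                // The mark blocks local eviction.
                cache.getEvictionRegionManager().getRegion(fqn)
                     .markNodeCurrentlyInUse(fqn, 60000L);

                cache.put(fqn, "status", newValue);   // the actual update

                cache.getEvictionRegionManager().getRegion(fqn)
                     .unmarkNodeCurrentlyInUse(fqn);
                tx.commit();                          // releases the locks
            } catch (Exception e) {
                tx.rollback();
                throw e;
            }
        }
    }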