2 Replies Latest reply on Jan 18, 2008 12:59 PM by Nolan Johnson

    CacheLoaderInterceptor behavior

    Nolan Johnson Newbie

We're using a custom CacheLoader in a somewhat non-standard way. Our CacheLoader is read-only, and it's designed to fetch a node from a different server if the node isn't on the current server (the difference between it and the TcpDelegatingCacheLoader is that our cache loader figures out which server to go to).

      The behavior of the CacheLoaderInterceptor that we don't want is this: when we put an object into the cache and that node doesn't exist in the local cache, it goes to the CacheLoader to fetch the node before proceeding with the put. I understand the reasoning behind this - if the CacheLoader is being used as a backing store, then you want the node back in memory before adding another key/value pair. However, we're not using it this way. I won't get into the details of why, but what we'd like is for the CacheLoader's "get" to be invoked only on an explicit "get", and never on a "put".

      Functionally, this won't make a difference, but there will be a performance difference.

      Any suggestions on how this can be accomplished?
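      To make the desired behavior concrete, here is a minimal self-contained sketch in plain Java (the class and method names are mine, not the JBoss Cache API): the loader is consulted only on an explicit get() miss, and put() never triggers it.

      ```java
      import java.util.HashMap;
      import java.util.Map;
      import java.util.function.Function;

      // Hypothetical sketch: a cache whose loader runs only on a get() miss.
      class LoaderBackedCache<K, V> {
          private final Map<K, V> local = new HashMap<>();
          private final Function<K, V> loader; // stands in for the remote CacheLoader

          LoaderBackedCache(Function<K, V> loader) {
              this.loader = loader;
          }

          // put() touches only the local map; the loader is never invoked here.
          void put(K key, V value) {
              local.put(key, value);
          }

          // get() falls back to the loader only when the key is missing locally.
          V get(K key) {
              V v = local.get(key);
              if (v == null) {
                  v = loader.apply(key);   // the only place the loader runs
                  if (v != null) {
                      local.put(key, v);   // cache the loaded value
                  }
              }
              return v;
          }
      }
      ```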

        • 1. Re: CacheLoaderInterceptor behavior
          Elias Ross Master

          Take a look at the AsyncCacheLoader. I implemented a configuration key that changed the default behavior of the "put" and "remove" operations.

          If you can come up with a decent patch, create a JIRA issue, attach the patch and link the issue to this forum URL. You may also be asked to provide some documentation.

          Probably "put(k,v)" and "remove" should lose their return values; if the old value is of interest, a user can call "get(k)" anyway.
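          As a rough illustration of that idea, here is a hypothetical sketch (all names are mine, not the JBoss Cache API) of a configuration flag that decides whether put() fetches the old value through the loader, as the interceptor does today, or skips the loader entirely and returns null:

          ```java
          import java.util.HashMap;
          import java.util.Map;
          import java.util.function.Function;

          // Hypothetical sketch: a flag controls whether put() pre-loads
          // the old value (expensive) or ignores the loader (fast).
          class ConfigurableCache<K, V> {
              private final Map<K, V> local = new HashMap<>();
              private final Function<K, V> loader;       // stands in for the CacheLoader
              private final boolean returnOldValueOnPut; // assumed configuration key

              ConfigurableCache(Function<K, V> loader, boolean returnOldValueOnPut) {
                  this.loader = loader;
                  this.returnOldValueOnPut = returnOldValueOnPut;
              }

              // With the flag off, put() never touches the loader; callers who
              // want the previous value call get(key) themselves.
              V put(K key, V value) {
                  V old = null;
                  if (returnOldValueOnPut) {
                      old = local.containsKey(key) ? local.get(key) : loader.apply(key);
                  }
                  local.put(key, value);
                  return old;
              }

              V get(K key) {
                  V v = local.get(key);
                  return (v != null) ? v : loader.apply(key);
              }
          }
          ```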

          • 2. Re: CacheLoaderInterceptor behavior
            Nolan Johnson Newbie

            Hmmm. Interesting. I see the general idea of what you're doing. We'll look at adding that functionality for synchronous gets. We may also end up deciding that the performance penalty for our expected use case is small enough to live with as is. Thanks.