7 Replies Latest reply on Jul 21, 2005 11:22 AM by wynne_b

    poor man's cache invalidation solution?

    wynne_b

      Use case:

      We have an ASP solution and approx. 2000 concurrent users. All users belong to an organization, but most of the 2000 users will be in different organizations, for example roughly 1900 distinct organizations per 2000 users. Organization is a large POJO, frequently referenced during a session, and it can be updated by the user. In those cases where it is updated, we'd like any other user to have the copy on their node invalidated or updated.

      Options:

      What we don't want is 1900 orgs on every node. So what we'd like to do is add only locally, but update or remove globally across all nodes. Or, alternatively, detect that a cache add is a remote call and, for this type, skip the add. Is either of these solutions possible in the current API? If it is, I'm just too unfamiliar with the code base to see the solution, and I apologize for that.
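      The second alternative could be sketched roughly as follows. This is a minimal, hypothetical illustration of the idea, not any real cache API: it assumes the cache can tell whether a put originated locally or arrived via replication, and simply drops replicated adds for this object type.

      ```java
      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;

      // Hypothetical sketch: a cache that stores only locally originated puts.
      // "originLocal" is an assumed flag; a real cache listener may expose the
      // call's origin differently.
      class RemoteAwareCache {
          private final Map<String, Object> store = new ConcurrentHashMap<>();

          // Called for both local puts and replicated puts from other nodes.
          void onPut(String key, Object value, boolean originLocal) {
              if (!originLocal) {
                  return; // ignore replicated adds: keep only locally created entries
              }
              store.put(key, value);
          }

          Object get(String key) {
              return store.get(key);
          }
      }
      ```

      With this behavior, each node holds only the organizations its own users touched, while removes could still be allowed to propagate.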

      Bill

        • 1. Re: poor man's cache invalidation solution?
          dnielben

          Hi! Since no answer has been sent in reply to your request, I will try some suggestions to see if we can find a solution together! (I am very interested in your example.)

          My first thought was to have first-level local isolated caches with second-level replicated caches on each node. But the problem is that the 1st-level caches are not updated.

          My second thought was to have a complex eviction policy between the 1st- and 2nd-level caches, with some flag in a shared memory space. But it is not a very clean solution.

          It seems that it is not feasible to have a cache policy like the one you propose!

          If you find an answer to this problem, please post it here!

          Daniel

          • 2. Re: poor man's cache invalidation solution?
            wynne_b

            I will post back if I find a solution.

            • 3. Re: poor man's cache invalidation solution?
              wynne_b

              Daniel,

              I have an idea based on your thoughts. Unfortunately it does not eliminate backbone traffic, but it will save memory if it works. The idea is this: the cache will be partitioned by node; each node will have a region in the cache.

              The eviction policy uses the node's identifier to either

              a) evict all entities not in this node's region, or
              b) defer to the LRU algorithm for the rest of the cache.
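              The decision step of such a policy could be sketched like this. The class and method names are illustrative assumptions, not a real eviction-policy interface: entries replicated in from another node's region are flagged for immediate eviction, and everything else falls through to normal LRU handling.

              ```java
              // Hypothetical sketch of the region-aware eviction decision.
              class RegionAwareEvictionPolicy {
                  private final String localRegion; // e.g. this cluster node's name

                  RegionAwareEvictionPolicy(String localRegion) {
                      this.localRegion = localRegion;
                  }

                  // True when the entry belongs to another node's region and should
                  // be evicted right away; false means "let plain LRU decide".
                  boolean evictImmediately(String entryRegion) {
                      return !localRegion.equals(entryRegion);
                  }
              }
              ```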


              Do you think this (admittedly sketchy) solution will work?

              • 4. Re: poor man's cache invalidation solution?
                wynne_b

                I should clarify. When I say node, I mean cluster node, not cache node.

                • 5. Re: poor man's cache invalidation solution?
                  dnielben

                  Ok,

                  I did not understand how this will save memory, because you will still have the 2000 objects replicated... hmm... were you thinking of using the 2nd-level cache in combination with this region separation?

                  Can you tell me whether you are using some specific application or a custom-developed application?

                  Daniel

                  • 6. Re: poor man's cache invalidation solution?
                    wynne_b

                    This is a custom application.

                    Yes. I intend to use the 2nd level cache in combination with the region separation. I had deadlocking problems with the custom eviction policy. It's unintuitive why this was happening. So I'm now trying to dynamically manipulate the Region configuration during startup based on the node name.

                    The idea is that the configuration file would have a region setting for each cluster node. The time-to-live would be set to 1 second for each server-node region name that does not match the cluster node's name. It's a bit of a hack, but...
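                    The startup-time TTL assignment described above could be sketched as follows. This is a hypothetical helper, not part of any cache API; the region names, the "0 means never expire" convention for the home region, and the 1-second foreign TTL are assumptions drawn from the description.

                    ```java
                    import java.util.LinkedHashMap;
                    import java.util.Map;

                    // Hypothetical sketch: at startup, assign each configured region a
                    // time-to-live based on whether it matches this node's name.
                    class RegionTtlConfigurer {
                        static Map<String, Integer> ttlSeconds(String[] regions, String nodeName,
                                                               int homeTtl, int foreignTtl) {
                            Map<String, Integer> ttl = new LinkedHashMap<>();
                            for (String region : regions) {
                                // Home region keeps entries; foreign regions expire fast.
                                ttl.put(region, region.equals(nodeName) ? homeTtl : foreignTtl);
                            }
                            return ttl;
                        }
                    }
                    ```

                    A node named "nodeB" would then keep its own region's entries while entries replicated into the "nodeA" and "nodeC" regions expire after one second.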

                    • 7. Re: poor man's cache invalidation solution?
                      wynne_b

                      This did work. I was able to define regions by server node in treecache.xml and dynamically control their properties at configuration time in a derived LRUPolicy class. Expiration for regions other than the one whose prefix matched the server node's name was set to 1 second.

                      I also overrode Hibernate's TreeCache class so that updates would do a remove across remote regions, i.e. invalidation.
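                      The update-as-invalidation step could look roughly like this. It is a self-contained sketch of the idea rather than the actual TreeCache override: the Peer interface and all names here are assumptions standing in for the real remote-cache calls.

                      ```java
                      import java.util.List;
                      import java.util.Map;
                      import java.util.concurrent.ConcurrentHashMap;

                      // Hypothetical sketch: an update stores the new value locally and
                      // removes the key on every other node (invalidation), so peers
                      // reload the organization from the database on next access.
                      class InvalidatingCache {
                          interface Peer { void removeLocal(String key); }

                          private final Map<String, Object> local = new ConcurrentHashMap<>();
                          private final List<Peer> peers;

                          InvalidatingCache(List<Peer> peers) { this.peers = peers; }

                          Object get(String key) { return local.get(key); }

                          // Invoked when a remove arrives from another node.
                          void removeLocal(String key) { local.remove(key); }

                          // Update = local put + remote remove, not remote replication.
                          void update(String key, Object value) {
                              local.put(key, value);
                              for (Peer p : peers) {
                                  p.removeLocal(key);
                              }
                          }
                      }
                      ```

                      The memory saving comes from the combination: foreign regions expire quickly, and explicit updates actively remove stale copies rather than pushing full replicas around.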