5 Replies Latest reply on Nov 23, 2004 4:25 AM by norbert

    Possible newbie question but still a problem

    nine_mirrors

      Howdy all!

      background: In our db we have a rather large table (>500k entries). Said table grows over time, but not all entries are live. We are aware of this, so we prune the table about once a week. The only entries we know to be live at run-time are those actually requested by the client systems, so what I want to do is cache only the requested entries.
      I do not want to lift the entire table into the cache.

      I therefore implemented a rudimentary cache loader, and when my test app calls get(Fqn, Key) I expected CacheLoader.get(Fqn, Key) to be called. Instead, get(Fqn) is called. I return an empty hash map since I can't tell which entry is actually requested.

      The actual call sequence seems to be:

      TreeCache.get(Fqn,Key)
      CacheLoader.exists(Fqn) (I return true)
      CacheLoader.get(Fqn) (return an empty hash map)

      And then TreeCache.get(Fqn,Key) returns null.

      I've been fiddling with the various options in the config file to no avail, and now I'm at my wit's end.

      I'm grateful for any help.

      cheers

      Erik

      The relevant entries in the config file:

      false
      false
      false
      com.ongame.naps.cache.NapsCacheLoader
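      (Note: the forum software appears to have stripped the XML tags from the config snippet above, leaving only the attribute values. A hypothetical reconstruction of what such a CacheLoader section might look like in a JBossCache 1.x service file is shown below; only the class name is taken from the post, and the attribute names are the usual 1.x ones, which may not match what the original file actually contained:)

```xml
<!-- Hypothetical reconstruction: attribute names are the usual
     JBossCache 1.x ones and may not match the original post. -->
<attribute name="CacheLoaderShared">false</attribute>
<attribute name="CacheLoaderFetchTransientState">false</attribute>
<attribute name="CacheLoaderFetchPersistentState">false</attribute>
<attribute name="CacheLoaderClass">com.ongame.naps.cache.NapsCacheLoader</attribute>
```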


      My code:

      public class Cache {

          public Cache() {
              // ...
              fCache = new TreeCache();
              PropertyConfigurator pc = new PropertyConfigurator();
              pc.configure(fCache, conf_file);
              // the cache loader is set in the config file

              fCache.setFetchStateOnStartup(false);
              fCache.create();
              fCache.startService();
              // ...
          }

          public Entry get(Fqn domain, long key) {
              // ...
              // TreeCache.get returns Object, so the result needs a cast
              Entry entry = (Entry) fCache.get(domain, new Long(key));
              // ...
          }

      }

        • 1. 3856577
          norbert

          Hi,

          I am using the JBoss-AOP final release. Is there any way I can intercept arrays? I tried using a field interceptor, but I get an error stating that interception failed while deploying.

          I am also using AOP for logging in my project. At this point I don't want AOP to handle exceptions, but I still need to log errors when an exception occurs. I handled this by adding a try block within the advice, but I don't feel this is a good idea. Is there a way around this?

          Please help.

          Thanks in advance

          • 2. Re: Possible newbie question but still a problem
            norbert

            You are right: as of JBossCache 1.1, get(Fqn,Key) is actually only called by the JUnit test cases. As far as TreeCache itself is concerned, it is (as yet) redundant.

            Instead of returning null, your CacheLoader should return a prefilled Map of key/value pairs when get(Fqn) is called.

            Have a look at the CacheLoaders coming with the JBossCache source (e.g. look at how org.jboss.cache.loader.FileCacheLoader implements get(Fqn)).
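            To illustrate the point, here is a minimal, self-contained sketch of a loader whose get(Fqn) returns the node's full attribute map. The Fqn is modelled as a plain String and the database as an in-memory Map so the sketch compiles on its own; a real loader would implement org.jboss.cache.loader.CacheLoader and run a query against the db instead (all names here are made up for illustration).

```java
import java.util.HashMap;
import java.util.Map;

// Self-contained sketch: "Fqn" is a String and the database is an in-memory
// Map, so this compiles without JBossCache. A real loader would implement
// org.jboss.cache.loader.CacheLoader and issue a JDBC query instead.
public class LoaderSketch {

    // Stand-in database: node path -> attributes stored at that node.
    static final Map<String, Map<String, Object>> DB = new HashMap<>();
    static {
        Map<String, Object> row = new HashMap<>();
        row.put("name", "Erik");
        row.put("live", Boolean.TRUE);
        DB.put("/entries/42", row);
    }

    // get(Fqn) must return ALL key/value pairs of the node (or null if the
    // node is unknown), never an empty map; TreeCache then answers the
    // original get(Fqn, key) call from this map.
    public static Map<String, Object> get(String fqn) {
        Map<String, Object> attrs = DB.get(fqn);
        return attrs == null ? null : new HashMap<>(attrs);
    }

    public static void main(String[] args) {
        Map<String, Object> node = get("/entries/42");
        System.out.println(node.get("name")); // prints Erik
    }
}
```

            The key design point is that the loader works at node granularity: TreeCache caches the whole returned map under the Fqn, so subsequent get(Fqn, key) calls hit the cache without going back to the loader.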

            • 3. Re: Possible newbie question but still a problem
              nine_mirrors


              "norbert" wrote:
              You are right: as of JBossCache 1.1, get(Fqn,Key) is actually only called by the JUnit test cases. As far as TreeCache itself is concerned, it is (as yet) redundant.

              Instead of returning null, your CacheLoader should return a prefilled Map of key/value pairs when get(Fqn) is called.


              This is exactly NOT what I want to do. The table is far too large to be moved from the db into the cache, and the individual entries are not guaranteed to be live either, which means we would be moving unnecessary data from the db to the cache. Since the whole point is to cut down on db accesses, this does not make sense.
              Also, if data is added to the db, I would have to fetch the whole table again every time I get a cache miss.

              • 4. Re: Possible newbie question but still a problem
                norbert

                It's absolutely inefficient to store a whole table in a single node - transactional locking, eviction and cache loading work at node level, not key/value-pair level. As a result, every put(Fqn,key,value) would lock your entire data if the Fqn being used is a constant.

                You are better off mapping the primary keys of your table to different Fqns (i.e. nodes in the tree) and the values from the rows of your table to the attributes stored in the node.

                This way, access to a node will load a single row from the db.

                You can then specify appropriate timeouts in the eviction policy so that rarely accessed rows are dropped by evicting the corresponding nodes.
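                As a sketch of this mapping (the /mytable path and the column names are made up for illustration, and the Fqn is again modelled as a plain String; a real JBossCache app would construct an org.jboss.cache.Fqn from the path): each primary key gets its own node path, and one row becomes one small attribute map that is loaded on demand.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the per-row mapping: one node per primary key, the row's
// columns as that node's attributes. "/mytable" and the column names are
// hypothetical; a real app would build an org.jboss.cache.Fqn from the path.
public class PerRowMapping {

    // Each primary key maps to its own node, so locking, eviction and
    // cache loading all happen at single-row granularity.
    public static String fqnForRow(long primaryKey) {
        return "/mytable/" + primaryKey;
    }

    // One row becomes one small attribute map stored under its node.
    public static Map<String, Object> rowToAttributes(long pk, String name, boolean live) {
        Map<String, Object> attrs = new HashMap<>();
        attrs.put("pk", pk);
        attrs.put("name", name);
        attrs.put("live", live);
        return attrs;
    }

    public static void main(String[] args) {
        System.out.println(fqnForRow(42)); // prints /mytable/42
        System.out.println(rowToAttributes(42, "x", true).size()); // prints 3
    }
}
```

                With this layout a cache miss on one key pulls exactly one row from the db, and the eviction policy can expire individual rows by evicting their nodes.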

                • 5. Re: Possible newbie question but still a problem
                  nine_mirrors


                  "norbert" wrote:
                  It's absolutely inefficient to store a whole table in a single node - transactional locking, eviction and cache loading work at node level, not key/value-pair level. As a result, every put(Fqn,key,value) would lock your entire data if the Fqn being used is a constant.

                  You are better off mapping the primary keys of your table to different Fqns (i.e. nodes in the tree) and the values from the rows of your table to the attributes stored in the node.

                  This way, access to a node will load a single row from the db.


                  The database has at least 500k entries, and at least 70% of them are live. Imagine what that will do to the node hierarchy. Also, it seems inefficient to me to store just one value in each hash map.
                  I don't find one node per row to be an attractive solution.