7 Replies Latest reply on Mar 30, 2006 9:22 AM by Ben Wang

    CacheLoader efficiency

    Ben Wang Master

      A customer is experiencing cache loader inefficiency in their load-test environment. Manik has also opened this JIRA for it: http://jira.jboss.com/jira/browse/JBCACHE-513

      He has also changed the synchronization in CacheStoreInterceptor from "this" to "loader", so a get that can be served from the in-memory copy won't block. That helps.
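      A minimal sketch of that change, using hypothetical names (the real CacheStoreInterceptor code differs): synchronising on the loader object instead of on "this" means a get served from the in-memory copy takes no monitor at all, while store access is still serialised.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch, not the actual JBoss Cache code: the interceptor
// synchronises on the loader rather than on itself, so in-memory hits
// never contend with threads that are reading from or writing to the store.
class InterceptorSketch {
    private final Map<String, Object> memory = new ConcurrentHashMap<String, Object>();
    private final Object loader = new Object(); // stands in for the CacheLoader

    Object get(String fqn) {
        Object v = memory.get(fqn);
        if (v != null) {
            return v;               // in-memory hit: no lock taken at all
        }
        synchronized (loader) {     // only a miss serialises on the loader
            Object loaded = "loaded:" + fqn; // stub for loader.get(fqn)
            memory.put(fqn, loaded);
            return loaded;
        }
    }

    void put(String fqn, Object value) {
        synchronized (loader) {     // puts still serialise on the loader
            memory.put(fqn, value); // a write-through to the store would go here
        }
    }
}
```

      The point of the change is the fast path in get(): previously every get queued behind any in-flight store operation, even when the value was already in memory.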

      However, the problem persists if there are many cache put/remove operations: each put or remove still has to block on the loader. (And we need that blocking to prevent a phantom "get".) As a result, the loader becomes the bottleneck again. (Of course, one could argue that a mostly-write workload defeats the whole purpose of caching anyway.)

      The solution, as Bela suggested in a private email, is to map each operation into an Fqn-keyed map so that it blocks only on that Fqn (instead of on the whole tree). This is messy to implement, but I just don't see another way out right now.

        • 1. Re: CacheLoader efficiency
          Manik Surtani Master

          Thanks for starting up this thread, Ben.

          Yes, I do see this as a critical inefficiency with the CLI synchronisation and would very much like to clean it up. I won't mark it as critical for 1.3.0, though, unless I can implement a fix quickly and easily enough, with minimal risk.

          I too think the best way to deal with this is to maintain a map of locks keyed on Fqns. Messy, but not really that bad - we do something similar anyway when accessing the tree directly.

          • 2. Re: CacheLoader efficiency
            Manik Surtani Master

            I've got a quick fix in place at the moment - let me know what you think of it.

            Instead of synchronising the whole block, the start and end of the block now call obtainLoaderLock(Fqn) and releaseLoaderLock(Fqn). These methods simply put the Fqn in a lock table if it is not already there; otherwise they block until the Fqn is released from the table. Not very sophisticated - it could do better by taking isolation levels, and the method call itself, into consideration - but this has already improved concurrency and performance very significantly (see org.jboss.cache.loader.InterceptorSynchronizationTest, contributed by a user).
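            For readers following along, here is a rough sketch of what such a per-Fqn lock table could look like. This is hypothetical illustration code, not the actual implementation in HEAD; the method names follow the description above, and Fqns are represented as plain Strings for simplicity.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a per-Fqn lock table as described in the post;
// the real code in HEAD may differ.
class LoaderLockTable {
    // Fqn -> owning thread; the map's own monitor guards all access.
    private final Map<String, Thread> locks = new HashMap<String, Thread>();

    // Claim the Fqn, waiting while another thread holds it.
    void obtainLoaderLock(String fqn) {
        synchronized (locks) {
            while (locks.containsKey(fqn)) {
                try {
                    locks.wait();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
            locks.put(fqn, Thread.currentThread());
        }
    }

    // Release the Fqn and wake any waiters.
    void releaseLoaderLock(String fqn) {
        synchronized (locks) {
            locks.remove(fqn);
            locks.notifyAll();
        }
    }

    // True if some thread currently holds the lock for this Fqn.
    boolean isLocked(String fqn) {
        synchronized (locks) {
            return locks.containsKey(fqn);
        }
    }
}
```

            With this scheme, two threads loading different Fqns proceed in parallel; only threads touching the same Fqn serialise, instead of everyone queueing on a single loader monitor.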

            Can anyone see a good reason not to include this fix into 1.3.0 GA? It is in HEAD at the moment.


            • 3. Re: CacheLoader efficiency
              Manik Surtani Master

              I know it is a very short timescale and doesn't give the feature enough time in public hands (no betas or CRs with it), but I'd really like to stick to the schedule of releasing 1.3.0 this Friday, the 31st of March. What do you think?

              • 4. Re: CacheLoader efficiency
                Ben Wang Master

                Since this only comes into play when a cache loader is used, I think the scope is well defined. Therefore, I am not against releasing it. Maybe we should attach the jar to let the sf user try it out?

                • 5. Re: CacheLoader efficiency
                  Bela Ban Master

                  So for /a/b/c, does this place the following elements into the lock table:
                  /a/b and
                  /a/b/c ?

                  • 6. Re: CacheLoader efficiency
                    Manik Surtani Master

                    No, I'd imagine just /a/b/c. It isn't as though we're modifying in-memory data and need to lock out concurrent access; we're just trying to improve efficiency so that two threads don't try to load the same Fqn from the loader unnecessarily.

                    In-memory locking (/a, /a/b, /a/b/c) will happen anyway when loading a node into memory.

                    • 7. Re: CacheLoader efficiency
                      Ben Wang Master

                      Sorry, but have you also addressed the synchronization block in CacheStoreInterceptor? There, we still seem to synchronize on "loader" for every put/remove. So in a mostly-write case this will still be slow.