13 Replies Latest reply on Jan 10, 2007 7:36 AM by manik

    PojoCache uses a special node factory

    brian.stansberry

      Quick brain dump...

      I think PojoCache has some issues with its _JBossInternal_ node(s) -- things like data versioning changes with optimistic locking, potential lock contention due to the large number of child nodes, etc.

      One thing I noticed is the Configuration.getRuntimeConfig() object now exposes the NodeFactory as a property. This means PojoCache can set the NodeFactory to something specific to PojoCache. This opens up the possibility of having a specialized factory create specialized nodes for _JBossInternal_. For example, one whose data version is always 0, or that has a more complex structure for holding child nodes to avoid lock conflicts, etc.

        • 1. Re: PojoCache uses a special node factory
          manik

           



          One thing I noticed is the Configuration.getRuntimeConfig() object now exposes the NodeFactory as a property.



          I added this so that the NodeFactory is constructed and registered when the Cache is created. It gets a reference to CacheImpl at construction time, so methods that call createNode() need not worry about passing in details such as node type (which is gleaned from the configuration attached to the cache) or a cache reference.


          This opens up the possibility of having a specialized factory create specialized nodes for _JBossInternal_. For example, one whose data version is always 0, or that has a more complex structure for holding child nodes to avoid lock conflicts, etc.


          This possibility always was there, even the way the NodeFactory was used before - and still is in JBC 1.x.y - as a Singleton. That's the whole purpose of a factory.

          The first step would be to identify how a _JBossInternal_ node would differ from normal nodes, with respect to concurrency under optimistic and pessimistic locking, data versioning, and buddy replication (BR) as well.

          • 2. Re: PojoCache uses a special node factory
            brian.stansberry

             

            "manik.surtani@jboss.com" wrote:

            This possibility always was there, even the way the NodeFactory was used before - and still is in JBC 1.x.y - as a Singleton. That's the whole purpose of a factory.


            I'm not sure how it can be done in 1.x, as NodeFactory doesn't expose any way to change the singleton.

            The first step would be to identify how a _JBossInternal_ node would differ from normal nodes, with respect to concurrency under optimistic and pessimistic locking, data versioning, and buddy replication (BR) as well.


            Yep. Assuming Ben finds the concept useful :-)

            • 3. Re: PojoCache uses a special node factory
              manik

               

              "bstansberry@jboss.com" wrote:
              "manik.surtani@jboss.com" wrote:

              This possibility always was there, even the way the NodeFactory was used before - and still is in JBC 1.x.y - as a Singleton. That's the whole purpose of a factory.


              I'm not sure how it can be done in 1.x, as NodeFactory doesn't expose any way to change the singleton.


              You'd have to:

              1) Subclass Node to add more specialised behaviour, maps, locks, etc. tuned for this (InternalNode?). Perhaps even add more specialised checks so that "user" methods like get() and put() from the Node interface throw exceptions, so end-users don't mess with these regions?

              2) The factory would have to instantiate the appropriate class, based on the Fqn requested.

              Changing the NodeFactory wouldn't help anyway without 1) and 2) above, and wouldn't really add much specific benefit given how few InternalNodes would ever be created (just /_JBossInternal_ and /_BuddyBackup_? I don't think sub-nodes under /_JBossInternal_ and /_BuddyBackup_ need any further special behaviour?)
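For illustration, here is a minimal sketch of what steps 1) and 2) might look like. The class and method names (PlainNode, InternalNode, SketchNodeFactory) are hypothetical, not the real JBC API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical, simplified node type -- not the real JBC Node class.
class PlainNode {
    protected final Map<Object, Object> data = new ConcurrentHashMap<Object, Object>();
    public Object put(Object k, Object v) { return data.put(k, v); }
    public Object get(Object k) { return data.get(k); }
}

// 1) Specialised node for internal regions: "user" methods are blocked
// so end-users cannot tamper with internal state.
class InternalNode extends PlainNode {
    @Override
    public Object put(Object k, Object v) {
        throw new UnsupportedOperationException("internal region");
    }
    @Override
    public Object get(Object k) {
        throw new UnsupportedOperationException("internal region");
    }
}

// 2) A factory that instantiates the appropriate class based on the
// requested Fqn (here represented as a plain String for brevity).
class SketchNodeFactory {
    static final String INTERNAL = "/_JBossInternal_";

    public PlainNode createNode(String fqn) {
        return fqn.startsWith(INTERNAL) ? new InternalNode() : new PlainNode();
    }
}
```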



              • 4. Re: PojoCache uses a special node factory

                This is an interesting proposal, Brian. :-)

                But I am still pondering whether it will really help. I have been grappling with this problem for a while now, since I encountered it in the "stress" test env.

                Basically, in release 2.0, since we moved to the flat-space approach, all the real POJO attachment happens at the "__JBossInternal__" node. E.g., we will store the real POJO under, say, the "__JBossInternal__/e4xx99ssjswi" node.

                This makes the Fqn "__JBossInternal__" a bottleneck, as whenever I need to map another POJO, I need to obtain a WL (write lock) on "__JBossInternal__" first. And note that a WL is needed for both attach and detach (i.e., remove) as well.

                So the question is: what options do we have to improve concurrency while maintaining correctness? I am not finding a lot, other than using the Region concept to improve it somewhat.

                Can a specialized Node for "__JBossInternal__" help? During creation and removal of child nodes, I can actually forgo the interceptor chain and just rely on the Node's ConcurrentHashMap to provide synchronization. But I will run into problems when I need to roll back either attach or detach operations (unless I still go through the interceptor stack).

                Any thoughts?

                • 5. Re: PojoCache uses a special node factory
                  manik

                  Just a thought: a hash bucket approach (which I suggested to Hibernate a while back) to reduce contention on a parent node may help here as well. This could be encapsulated in an 'InternalNode' or wired manually in PojoCache code (for now, maybe).

                  The basic idea is that when you need to create
                  /_JBossInternal_/Node1
                  /_JBossInternal_/Node2
                  /_JBossInternal_/Node10
                  /_JBossInternal_/Node11

                  you actually create:

                  /_JBossInternal_/Bucket0-9/Node1
                  /_JBossInternal_/Bucket0-9/Node2
                  /_JBossInternal_/Bucket10-19/Node10
                  /_JBossInternal_/Bucket10-19/Node11

                  which will reduce the contention on _JBossInternal_ as a direct parent. Perhaps this is behaviour we could add (in the 3.0 timeframe?) to JBoss Cache's core Node impls, so all user data gets to benefit from this as well?
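A rough sketch of how such a bucket segment could be derived. The naming scheme, bucket count, and helper below are invented for illustration, not actual JBoss Cache code:

```java
// Interpose a hash-derived bucket segment between _JBossInternal_ and
// the child node, so concurrent child creation contends on many bucket
// nodes instead of a single parent.
class BucketFqn {
    static final int NUM_BUCKETS = 16; // assumed bucket count

    static String bucketFor(String childName) {
        // Math.floorMod avoids negative indices from negative hash codes.
        int bucket = Math.floorMod(childName.hashCode(), NUM_BUCKETS);
        return "/_JBossInternal_/bucket" + bucket + "/" + childName;
    }
}
```

The same child name always lands in the same bucket, so lookups stay deterministic while writes spread across buckets.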

                  Cheers,
                  Manik

                  • 6. Re: PojoCache uses a special node factory
                    genman

                    One thing that I took a look at was the GUID used for some Pojo node data. The GUID itself can be divided into two parts. One is the machine/VM instance ID and the other is the per machine unique ID. Admittedly, this wouldn't help contention...

                    NodeFactory should probably come from the RegionManager.

                    • 7. Re: PojoCache uses a special node factory
                      manik

                       


                      NodeFactory should probably come from the RegionManager.


                      This only makes sense post-3.0, when we regionalise the entire cache. At the moment it won't make much sense.

                      • 8. Re: PojoCache uses a special node factory

                         

                        "manik.surtani@jboss.com" wrote:
                        you actually create:

                        /_JBossInternal_/Bucket0-9/Node1
                        /_JBossInternal_/Bucket0-9/Node2
                        /_JBossInternal_/Bucket10-19/Node10
                        /_JBossInternal_/Bucket10-19/Node11

                        which will reduce the contention on _JBossInternal_ as a direct parent.


                        This will certainly help. As you mentioned, this is more a problem of Cache usage itself. But it will still create some contention, and worse, it can cause lock timeouts, say, when the transaction that created "/_JBossInternal_/Bucket0-9/Node1" tries to create another sub-node at "/_JBossInternal_/Bucket10-19/Node10", while the transaction that created "/_JBossInternal_/Bucket10-19/Node11" tries to create a sub-node at "/_JBossInternal_/Bucket0-9/Node2".

                        I am thinking another solution (in addition to the above option) is to allow user-specified buckets. For example, if you are using the core Cache, want to reduce lock contention, and have the freedom to organize your Fqns, then the following will have high concurrency:

                        // pre-create the nodes "/a" and "/b" first

                        // from thread 1
                        for (int i = 1; i <= 100; i++)
                        {
                           cache.put("/a/1", k1, o1);
                        }

                        // from thread 2
                        for (int i = 1; i <= 100; i++)
                        {
                           cache.put("/b/1", k1, o1);
                        }


                        Granted, not every use case has the leeway to specify Fqns like this. But when you can, this solution can perform well. By the same token, I am thinking of allowing an option for the PojoCache user to pre-create the sub-tree in "__JBossInternal__". Let's take an example:

                        pojoCache.attach("hr/joe", joe);
                        pojoCache.attach("eng/ben", ben);

                        // from thread 1
                        for (int i = 0; i < 100; i++)
                        {
                           pojoCache.attach("hr/joe", joe);
                        }

                        // from thread 2
                        for (int i = 0; i < 100; i++)
                        {
                           pojoCache.attach("eng/ben", ben);
                        }


                        Now, when we map them into "__JBossInternal__", we will map them as:
                        cache.put("/__JBossInternal__/hr/joe", xxx)

                        and
                        cache.put("/__JBossInternal__/eng/ben", xxx)

                        respectively. That is, we add a prefix such as "hr/joe" or "eng/ben" under the internal Fqn. In this case, except during the pre-creation stage, there won't be any write lock contention. Of course, we pay the penalty of creating an extra Fqn hierarchy.

                        Thoughts?
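For illustration, the id-as-prefix mapping described above could be sketched like this. The class and method names are hypothetical:

```java
// Map a user-supplied PojoCache id such as "hr/joe" onto an internal
// Fqn by reusing the id itself as the path under __JBossInternal__.
class InternalFqnMapper {
    static final String INTERNAL_ROOT = "/__JBossInternal__";

    static String toInternalFqn(String userId) {
        // e.g. "hr/joe" -> "/__JBossInternal__/hr/joe"
        return INTERNAL_ROOT + "/" + userId;
    }
}
```

Because distinct id prefixes map to distinct pre-created sub-trees, attaches to "hr/..." and "eng/..." never contend for the same parent write lock.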


                        • 9. Re: PojoCache uses a special node factory
                          manik

                           



                          This will certainly help. As you mentioned, this is more a problem of Cache usage itself. But it will still create some contention, and worse, it can cause lock timeouts, say, when the transaction that created "/_JBossInternal_/Bucket0-9/Node1" tries to create another sub-node at "/_JBossInternal_/Bucket10-19/Node10", while the transaction that created "/_JBossInternal_/Bucket10-19/Node11" tries to create a sub-node at "/_JBossInternal_/Bucket0-9/Node2".



                          This same deadlock can occur without buckets, even if nodes 1, 2, 10 and 11 had the same direct parent.

                          • 10. Re: PojoCache uses a special node factory
                            manik

                            Rather than allowing users to pre-create buckets in an internal node (makes me shudder!), how about allowing users to provide information on which bucket a node should belong to, a bit like Object.hashCode()?

                            We would probably want to use hashCode() on the Fqn as the primary way of determining which bucket the node goes in, but perhaps a mechanism of allowing the user to add a 'hint' to this would help.
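A minimal sketch of that hint mechanism, assuming a fixed bucket count; the names below are made up for illustration:

```java
// Pick a bucket from the Fqn's hash code by default, but let the user
// supply an optional hint (a bit like Object.hashCode()) that overrides it.
class HintedBucketChooser {
    static final int NUM_BUCKETS = 16; // assumed bucket count

    static int chooseBucket(String fqn, Integer userHint) {
        int h = (userHint != null) ? userHint : fqn.hashCode();
        // floorMod keeps the bucket index non-negative.
        return Math.floorMod(h, NUM_BUCKETS);
    }
}
```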

                            • 11. Re: PojoCache uses a special node factory

                               

                              "manik.surtani@jboss.com" wrote:

                              This same deadlock can occur without buckets, even if nodes 1, 2, 10 and 11 had the same direct parent.


                              Sorry, but I don't get it here. The first thread to access would have the WL on the parent, and therefore creating a subsequent child node will go ahead because of the same transaction context. So the other thread will just block until the first transaction is completed. Where is the deadlock here?


                              • 12. Re: PojoCache uses a special node factory

                                 

                                "manik.surtani@jboss.com" wrote:
                                Rather than allowing users to pre-create buckets in an internal node (makes me shudder!) how about doing something like allowing users to somehow provide information on the bucket a node should belong to, a bit like Object.hashCode()?

                                We would probably want to use hashCode() on the Fqn as the primary way of determining which bucket the node goes in, but perhaps a mechanism of allowing the user to add a 'hint' to this would help.


                                Relax. :-) I am not saying the user will create internal nodes directly. If you read my post, I was suggesting a flag to use the id field as a prefix under "__JBossInternal__".

                                A hash code is fine, but that again belongs to the Cache layer, distributing nodes into hashed buckets.

                                Two different ways of optimizing node creation, for the user to pick from.


                                • 13. Re: PojoCache uses a special node factory
                                  manik

                                   



                                  Sorry, but I don't get it here. The first thread to access would have the WL on the parent, and therefore creating a subsequent child node will go ahead because of the same transaction context. So the other thread will just block until the first transaction is completed. Where is the deadlock here?



                                  Ok, I was assuming Nodes 1 and 2 were being read, not written. E.g.,

                                  Tx1
                                  ___
                                  Read Node1
                                  Write Node10

                                  Tx2
                                  ___
                                  Read Node2
                                  Write Node11

                                  That would deadlock.
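For what it's worth, the read-to-write upgrade at the heart of this deadlock can be demonstrated with plain java.util.concurrent.locks (JBoss Cache's own lock implementation differs; this is only an illustration). A holder of a read lock cannot obtain the write lock while any read lock is outstanding, so two transactions that each hold a read lock on the parent and then both request the write lock would block each other indefinitely:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Demonstrates that a read lock cannot be upgraded to a write lock:
// writeLock().tryLock() fails while any read lock (even our own) is held.
// With two transactions, each holding a read lock and waiting for the
// write lock, neither can ever proceed -- the deadlock described above.
class UpgradeDeadlockSketch {
    static boolean canUpgrade() {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        lock.readLock().lock();              // Tx reads the parent node
        try {
            // Tx now wants to write (create a child): request the WL.
            return lock.writeLock().tryLock();
        } finally {
            lock.readLock().unlock();
        }
    }
}
```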