24 Replies. Latest reply on Dec 19, 2011 2:21 PM by jamat
      • 15. Re: 2nd level cache for JPA connector and LargeValues
        rhauch

        Thanks for providing these traces. It does show that the OOM exception happens when a large value is being loaded from the database, and it is through the RequestProcessor.process(Request) method, which we were discussing earlier. OOM is an Error (not a RuntimeException), and at that point there's little hope of recovery, as the JVM has no more memory on the heap.

         

        My hunch that the indexing logic is inadvertently holding onto the subgraph read requests was correct, so I've logged this as a blocking bug in MODE-1350. I'm already testing a fix, so this particular issue should be addressed in 2.7.

        • 16. Re: 2nd level cache for JPA connector and LargeValues
          rhauch

          Both MODE-1349 and MODE-1350 have been fixed in the codebase for the upcoming 2.7 release. Feel free to try the improvements by obtaining the 'master' branch and building locally. If you do this, please let us know what you find.

          • 17. Re: 2nd level cache for JPA connector and LargeValues
            jamat

            Randall Hauch wrote:

             

            Thanks for providing these traces. It does show that the OOM exception happens when a large value is being loaded from the database, and it is through the RequestProcessor.process(Request) method, which we were discussing earlier. OOM is an Error (not a RuntimeException), and at that point there's little hope of recovery, as the JVM has no more memory on the heap.

            I do agree with that. My only complaint was that ModeShape could be more vocal, by which I mean it should emit at least some log output. That would have helped me a lot.
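            To illustrate the kind of logging I have in mind, here is a minimal sketch. The class name, threshold, and loader signature below are illustrative assumptions, not ModeShape API; the point is just to leave a trace before a large value is materialized, so that if the JVM later runs out of heap there is at least a clue about what was being loaded.

```java
import java.util.function.Function;
import java.util.logging.Logger;

// Hypothetical guard around loading a large value from the database.
// If an OutOfMemoryError follows, the log line identifies the culprit.
public class LargeValueLoader {
    private static final Logger LOG = Logger.getLogger(LargeValueLoader.class.getName());
    private static final long LOG_THRESHOLD_BYTES = 10L * 1024 * 1024; // 10 MB, arbitrary

    // 'fetch' stands in for the actual database read.
    public byte[] load(String uuid, long declaredLength, Function<String, byte[]> fetch) {
        if (declaredLength >= LOG_THRESHOLD_BYTES) {
            LOG.warning("Loading large value " + uuid + " (" + declaredLength + " bytes) from the database");
        }
        return fetch.apply(uuid);
    }
}
```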

            • 18. Re: 2nd level cache for JPA connector and LargeValues
              jamat

              Randall Hauch wrote:

               

              Both MODE-1349 and MODE-1350 have been fixed in the codebase for the upcoming 2.7 release. Feel free to try the improvements by obtaining the 'master' branch and building locally. If you do this, please let us know what you find.

              OK I will do it on Monday.

               

              Thank you for looking into this so quickly.

              • 19. Re: 2nd level cache for JPA connector and LargeValues
                jamat

                I have tried and it did not help in my case.

                • 20. Re: 2nd level cache for JPA connector and LargeValues
                  jamat

                  One more thing. I was curious to know why we were loading all those binaries every time I deleted a node that was totally unrelated.

                  I added some logging in LuceneSearchEngine.java, and what I noticed is that when we delete a node, we receive a ChangeRequest of type DELETE_BRANCH. In that case ModeShape reindexes the whole workspace! This is clearly not good.

                   

                  And by the way, this may explain why some deletions take a long time. It does not matter whether the indexing is synchronous or not: even when it is asynchronous, we are blocked waiting for the indexing to finish.
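                  What I would have expected instead is something like the following sketch, where the scope of reindexing depends on the change type and only the affected subtree is touched. All names here (ChangeType, reindexPath, etc.) are assumptions for the example, not ModeShape code:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: dispatch a change by type and reindex only the
// affected subtree rather than the whole workspace.
public class IndexUpdater {
    enum ChangeType { DELETE_BRANCH, UPDATE_PROPERTIES }

    final List<String> reindexed = new ArrayList<>();

    void onChange(ChangeType type, String path) {
        switch (type) {
            case DELETE_BRANCH:
                // Reindex only the parent of the deleted branch, not "/".
                reindexPath(parentOf(path));
                break;
            case UPDATE_PROPERTIES:
                reindexPath(path);
                break;
        }
    }

    void reindexPath(String path) { reindexed.add(path); }

    static String parentOf(String path) {
        int idx = path.lastIndexOf('/');
        return idx <= 0 ? "/" : path.substring(0, idx);
    }
}
```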

                   

                  • 21. Re: 2nd level cache for JPA connector and LargeValues
                    rhauch

                    When we're deleting a branch, we'd like to push the delete to the database, but JPA doesn't support "DELETE FROM table WHERE ..." type statements. So we have to do a significant amount of work: we're building a temporary table with the UUIDs of the nodes that are to be removed, and then removing the nodes, and removing the temporary table. This is faster on some databases, and slower on others. (And again, JPA is getting in our way in this regard, too.) I suspect that the process of building the temp tables is somehow causing the LargeValueEntity records to be loaded into Hibernate rather than just getting the UUIDs of the nodes.
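                    The traversal step of that "collect then delete" approach can be sketched roughly as below, using an in-memory parent-to-children map in place of the database. In the real connector the collected UUIDs would go into the temporary table and the delete would be issued against it; this only shows how the branch's UUIDs are gathered without loading the node contents:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch: collect the UUIDs of every node in a branch (the branch root
// plus all descendants) so they can be deleted in bulk. Only identifiers
// are touched; no LargeValueEntity contents need to be loaded.
public class BranchCollector {
    static Set<String> collectBranch(String rootUuid, Map<String, List<String>> children) {
        Set<String> toDelete = new LinkedHashSet<>();
        Deque<String> stack = new ArrayDeque<>();
        stack.push(rootUuid);
        while (!stack.isEmpty()) {
            String uuid = stack.pop();
            if (toDelete.add(uuid)) {
                for (String child : children.getOrDefault(uuid, List.of())) {
                    stack.push(child);
                }
            }
        }
        return toDelete;
    }
}
```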

                     

                    As for reindexing, what gets re-indexed should depend on the node that was deleted. For example, because of same-name-siblings, we are reindexing the parent of the deleted node, so if the deleted node is near the top of your hierarchy, this could in fact result in reindexing most (if not all) of the workspace.

                    • 22. Re: 2nd level cache for JPA connector and LargeValues
                      jamat

                      Some questions.

                      Why doesn't the 'same name sibling' reason also apply when we add a node, then?

                      And I really had the impression that the whole workspace was being traversed/reindexed, not only the parent. (I have tried it again and I still have the same impression.)

                      • 23. Re: 2nd level cache for JPA connector and LargeValues
                        rhauch

                        The same thing does have to happen when an application moves a node (e.g., using the "orderBefore" method on Node), but I don't think applications use that very much.

                         

                        I know it doesn't help 2.x, but in 3.0 we've found ways of minimizing the amount of reindexing we have to do. But again, a lot of these kinds of improvements were made because we intentionally designed it to not have the shortcomings we experienced with 2.x.

                        • 24. Re: 2nd level cache for JPA connector and LargeValues
                          jamat

                          OK. Thank you.

                          I hope that the 3.0 version will come really soon.
