
    ModeShape 2.6 batch job ...

    clau_babau

      Hi,

       

        Environment: JBoss 6.1 Final configured with ModeShape 2.6, using the JPA connector against an Oracle 11g server.

         With JBoss logging set to DEBUG, we see that from time to time (roughly every 30 seconds) Hibernate runs a batch of SQL statements, apparently to load repository data into memory (at least that is what we think it does).

        Some SQL statements are listed below:

       


      2011-12-06 21:39:35,784 DEBUG [org.hibernate.SQL] (pool-10-thread-2) select * from ( select nodeentity0_.ID as ID4_, nodeentity0_.ALLOWS_SNS as ALLOWS2_4_, nodeentity0_.CHILD_NAME_LOCAL as CHILD3_4_, nodeentity0_.CHILD_NAME_NS_ID as CHILD12_4_, nodeentity0_.COMPRESSED as COMPRESSED4_, nodeentity0_.DATA as DATA4_, nodeentity0_.CHILD_INDEX as CHILD6_4_, nodeentity0_.NODE_UUID as NODE7_4_, nodeentity0_.PARENT_ID as PARENT13_4_, nodeentity0_.NUM_PROPS as NUM8_4_, nodeentity0_.ENFORCEREFINTEG as ENFORCER9_4_, nodeentity0_.SNS_INDEX as SNS10_4_, nodeentity0_.WORKSPACE_ID as WORKSPACE11_4_ from MODE_SIMPLE_NODE nodeentity0_ where nodeentity0_.WORKSPACE_ID=? and nodeentity0_.NODE_UUID=? ) where rownum <= ?
       
      2011-12-06 21:39:35,785 DEBUG [org.hibernate.SQL] (pool-10-thread-2) select children0_.PARENT_ID as PARENT13_4_1_, children0_.ID as ID1_, children0_.ID as ID4_0_, children0_.ALLOWS_SNS as ALLOWS2_4_0_, children0_.CHILD_NAME_LOCAL as CHILD3_4_0_, children0_.CHILD_NAME_NS_ID as CHILD12_4_0_, children0_.COMPRESSED as COMPRESSED4_0_, children0_.DATA as DATA4_0_, children0_.CHILD_INDEX as CHILD6_4_0_, children0_.NODE_UUID as NODE7_4_0_, children0_.PARENT_ID as PARENT13_4_0_, children0_.NUM_PROPS as NUM8_4_0_, children0_.ENFORCEREFINTEG as ENFORCER9_4_0_, children0_.SNS_INDEX as SNS10_4_0_, children0_.WORKSPACE_ID as WORKSPACE11_4_0_ from MODE_SIMPLE_NODE children0_ where children0_.PARENT_ID=? order by children0_.CHILD_INDEX asc
       
      2011-12-06 21:39:35,796 DEBUG [org.hibernate.SQL] (pool-10-thread-2) insert into MODE_SUBGRAPH_NODES ( ID, QUERY_ID, UUID, DEPTH, PARENT_NUM, CHILD_NUM ) select hibernate_sequence.nextval, subgraphno1_.QUERY_ID as col_0_0_, nodeentity0_.NODE_UUID as col_1_0_, subgraphno1_.DEPTH+1 as col_2_0_, subgraphno1_.CHILD_NUM as col_3_0_, nodeentity0_.CHILD_INDEX as col_4_0_ from MODE_SIMPLE_NODE nodeentity0_, MODE_SUBGRAPH_NODES subgraphno1_, MODE_SIMPLE_NODE nodeentity2_ where nodeentity0_.PARENT_ID=nodeentity2_.ID and nodeentity0_.WORKSPACE_ID=? and nodeentity2_.NODE_UUID=subgraphno1_.UUID and subgraphno1_.QUERY_ID=? and subgraphno1_.DEPTH=?
       
      2011-12-06 21:39:35,800 DEBUG [org.hibernate.SQL] (pool-10-thread-2) select nodeentity0_.ID as ID4_, nodeentity0_.ALLOWS_SNS as ALLOWS2_4_, nodeentity0_.CHILD_NAME_LOCAL as CHILD3_4_, nodeentity0_.CHILD_NAME_NS_ID as CHILD12_4_, nodeentity0_.COMPRESSED as COMPRESSED4_, nodeentity0_.DATA as DATA4_, nodeentity0_.CHILD_INDEX as CHILD6_4_, nodeentity0_.NODE_UUID as NODE7_4_, nodeentity0_.PARENT_ID as PARENT13_4_, nodeentity0_.NUM_PROPS as NUM8_4_, nodeentity0_.ENFORCEREFINTEG as ENFORCER9_4_, nodeentity0_.SNS_INDEX as SNS10_4_, nodeentity0_.WORKSPACE_ID as WORKSPACE11_4_ from MODE_SIMPLE_NODE nodeentity0_, MODE_SUBGRAPH_NODES subgraphno1_ where nodeentity0_.WORKSPACE_ID=? and nodeentity0_.NODE_UUID=subgraphno1_.UUID and subgraphno1_.QUERY_ID=? and subgraphno1_.DEPTH>=? and subgraphno1_.DEPTH<=? order by subgraphno1_.DEPTH, subgraphno1_.PARENT_NUM, subgraphno1_.CHILD_NUM
       
      2011-12-06 21:39:35,806 DEBUG [org.hibernate.SQL] (pool-10-thread-2) delete from MODE_SUBGRAPH_QUERIES where ID=?
      
      
      
      

       

        We are storing a lot of binary data (large files, e.g. around 5 MB each), and it is frustrating to watch the used heap space climb from 1 GB to 2 GB within 30 seconds, drop back to 1 GB, and then repeat the cycle over and over ...
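
       For reference, the binary content is stored more or less like the sketch below. This is a minimal example against the standard JCR 2.0 API that ModeShape 2.6 implements; the "files" parent node and the storeFile helper are only illustrative names, not our real code:

      import java.io.InputStream;
      import javax.jcr.Binary;
      import javax.jcr.Node;
      import javax.jcr.Session;

      // Minimal sketch of how we upload a large file into the repository.
      // The parent path "files" and the method name are only illustrative.
      public class FileStore {

          public static Node storeFile(Session session, String fileName, InputStream data)
                  throws Exception {
              Node parent = session.getRootNode().getNode("files");
              Node file = parent.addNode(fileName, "nt:file");
              Node content = file.addNode("jcr:content", "nt:resource");

              // The ~5 MB payload ends up in the jcr:data property, which the
              // JPA connector persists through Hibernate into Oracle.
              Binary binary = session.getValueFactory().createBinary(data);
              content.setProperty("jcr:data", binary);

              session.save();
              return file;
          }
      }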

         Another problem is that when we delete nodes containing binary data while this job's SQL statements are running, we get an Oracle exception:

       

      Caused by: java.sql.SQLException: ORA-00060: deadlock detected while waiting for resource

       

      and, of course, the nodes are not removed.
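
       The delete itself is plain JCR code along the lines of the sketch below. The retry/back-off loop is only an illustration of a workaround we are considering, not something we have verified avoids the deadlock:

      import javax.jcr.Node;
      import javax.jcr.RepositoryException;
      import javax.jcr.Session;

      // Simplified sketch of removing a node that holds binary content.
      // The retry with back-off is hypothetical; session.save() is presumably
      // where the ORA-00060 deadlock surfaces while the periodic job runs.
      public class NodeRemover {

          public static void removeWithRetry(Session session, String absPath, int maxAttempts)
                  throws RepositoryException, InterruptedException {
              for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                  try {
                      Node node = session.getNode(absPath);
                      node.remove();
                      session.save();
                      return;
                  } catch (RepositoryException e) {
                      session.refresh(false); // discard the failed changes
                      if (attempt == maxAttempts) {
                          throw e;
                      }
                      Thread.sleep(1000L * attempt); // back off before retrying
                  }
              }
          }
      }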

         Is there a way to disable this job? We do not see any benefit in having it.

       

      Regards,

        Claudiu