4 Replies Latest reply on Oct 31, 2014 2:20 PM by Richard Lucas

    Potential issue with concurrent writes resulting in node loss

    Richard Lucas Apprentice

      I am seeing an issue with node loss when performing concurrent writes to the same parent node using the ModeShape 4.0.0.Final subsystem in Wildfly 8.1.


      I currently have the following node structure which I create at startup:

       

           /default/jobs

       

      I then add 10 child nodes concurrently under jobs using multiple threads via REST API calls (approximately 3 threads writing at once). Only 5 to 6 of the child nodes end up being created; the remaining ones appear to be lost.
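
       

      To illustrate just the access pattern (not the repository code itself), here is a minimal, self-contained Java sketch of the test setup: 10 child-node additions fanned out over roughly 3 worker threads. The names ConcurrentChildNodeDemo, createChildren, and the "job-" prefix are hypothetical stand-ins; against a plain concurrent set all 10 children survive, whereas against the repository only 5 to 6 do.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrentChildNodeDemo {

    // Simulate adding `count` child nodes under one shared parent from
    // `threads` concurrent writers; each task stands in for one REST call,
    // i.e. one session/transaction in the real setup.
    public static Set<String> createChildren(int count, int threads)
            throws InterruptedException {
        Set<String> jobs = ConcurrentHashMap.newKeySet(); // stand-in for /default/jobs
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < count; i++) {
            final int id = i;
            pool.submit(() -> jobs.add("job-" + id)); // one "child node" per task
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return jobs;
    }

    public static void main(String[] args) throws InterruptedException {
        // All 10 children are present here; against the repository only 5-6 are.
        System.out.println("children created: " + createChildren(10, 3).size());
    }
}
```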

       

      Each child node is added in its own session/transaction using an EJB. No errors occur when adding the nodes. Below is my Infinispan configuration:

       

                <cache-container name="modeshape" default-cache="my-repo" module="org.modeshape">
                      <transport lock-timeout="60000"/>
                      <replicated-cache name="my-repo" mode="SYNC">
                          <transaction mode="NON_XA" locking="PESSIMISTIC"/>
                          <string-keyed-jdbc-store shared="true" preload="false" passivation="false" purge="false" datasource="java:jboss/datasources/MyDS">
                              <string-keyed-table prefix="JDG_MC_SK">
                                  <id-column name="id" type="VARCHAR(200)"/>
                                  <data-column name="datum" type="LONGBLOB"/>
                                  <timestamp-column name="version" type="BIGINT"/>
                              </string-keyed-table>
                          </string-keyed-jdbc-store>
                      </replicated-cache>
                  </cache-container>
      

       

      Looking through previous posts/issues, it appears a similar problem was addressed and fixed in ModeShape 3.x a couple of years ago, with the caveat that the Infinispan cache must be configured to use PESSIMISTIC locking. Is this still the case, or is additional configuration needed?