
    Lock Exception using Infinispan as Directory Provider

    grigor.tonkov

Hello Infinispan Lucene gurus!

       

We use the following configuration: JBoss 6.1.0, Hibernate Search 3.4.1.Final, Lucene (lucene-core-3.1.0.jar) and Infinispan (infinispan-core-4.2.1.FINAL.jar).

       

In persistence.xml the following is configured:

       

  <!-- add infinispan as cache provider -->
  <property name="hibernate.session_factory_name" value="SessionFactories/infinispan" />
  <property name="hibernate.cache.region_prefix" value="infinispan" />
  <property name="hibernate.cache.region.factory_class" value="org.hibernate.cache.infinispan.InfinispanRegionFactory" />
  <property name="hibernate.search.default.directory_provider" value="infinispan"/>
  <property name="hibernate.search.infinispan.configuration_resourcename" value="infinispan-configs-lucene.xml"/>
  <!--<property name="hibernate.search.infinispan.cachemanager_jndiname" value="java:CacheManager/entity"/>-->

  <!-- THIS IS VERY IMPORTANT: in cluster mode only one node may be the master for Hibernate Search;
       otherwise we get locks and a broken index.
       The value should be set depending on whether the JBoss instance runs in master mode (at runtime). -->
  <!--
  <property name="hibernate.search.worker.backend" value="jgroupsSlave"/>
  <property name="hibernate.search.worker.backend" value="jgroupsMaster"/>
  -->
        

       

       

       

Our infinispan-configs-lucene.xml:

       

       

<?xml version="1.0" encoding="UTF-8"?>
<infinispan
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="urn:infinispan:config:4.2 http://www.infinispan.org/schemas/infinispan-config-4.2.xsd"
    xmlns="urn:infinispan:config:4.2">

   <!-- working example from http://community.jboss.org/thread/177148?tstart=0 -->
   <global>
      <globalJmxStatistics enabled="false"/>

      <!--
      <transport clusterName="${jboss.partition.name:DefaultPartition}-HAPartition"
                 distributedSyncTimeout="50000"
                 transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport">
         <properties>
            <property name="configurationFile" value="jgroups-s3_ping-aws.xml"/>
         </properties>
      </transport>
      -->

      <transport clusterName="${jboss.partition.name:DefaultPartition}-HAPartition" distributedSyncTimeout="17500">
         <properties>
            <property name="stack" value="${jboss.default.jgroups.stack:tcp}"/>
         </properties>
      </transport>

      <shutdown hookBehavior="DONT_REGISTER"/>
   </global>

   <!-- *************************** -->
   <!-- Default "template" settings -->
   <!-- *************************** -->
   <default>
      <locking
            lockAcquisitionTimeout="20000"
            writeSkewCheck="false"
            concurrencyLevel="500"
            useLockStriping="false" />

      <!-- Invocation batching is required for use with the Lucene Directory -->
      <invocationBatching enabled="true" />

      <!-- This element specifies that the cache is clustered. Supported modes: distribution (d),
           replication (r) or invalidation (i). Don't use invalidation to store Lucene indexes (as
           with the Hibernate Search DirectoryProvider). Replication is recommended for best
           performance of Lucene indexes, but make sure you have enough memory to store the index
           in your heap. Also, distribution scales much better than replication as the number of
           nodes in the cluster grows. -->
      <clustering mode="distribution">

         <!-- Prefer loading all data at startup rather than later -->
         <stateRetrieval
             timeout="20000"
             logFlushTimeout="30000"
             fetchInMemoryState="false"
             alwaysProvideInMemoryState="true" />

         <!-- Network calls are synchronous by default -->
         <sync replTimeout="20000" />
      </clustering>

      <jmxStatistics enabled="false" />

      <eviction maxEntries="-1" strategy="NONE" />

      <expiration maxIdle="-1" />
   </default>

   <!-- *************************************** -->
   <!--  Cache to store Lucene's file metadata  -->
   <!-- *************************************** -->
   <namedCache name="LuceneIndexesMetadata">
      <loaders passivation="false" shared="true">
         <loader class="org.infinispan.loaders.file.FileCacheStore" fetchPersistentState="true">
            <properties>
               <property name="location" value="${jboss.server.data.dir}${/}lucene"/>
            </properties>
         </loader>
      </loaders>
      <clustering mode="distribution">
         <stateRetrieval fetchInMemoryState="false" logFlushTimeout="30000" />
         <sync replTimeout="25000" />
      </clustering>
   </namedCache>

   <!-- **************************** -->
   <!--  Cache to store Lucene data  -->
   <!-- **************************** -->
   <namedCache name="LuceneIndexesData">
      <loaders passivation="false" shared="true">
         <loader class="org.infinispan.loaders.file.FileCacheStore" fetchPersistentState="true">
            <properties>
               <property name="location" value="${jboss.server.data.dir}${/}lucene"/>
            </properties>
         </loader>
      </loaders>
      <clustering mode="distribution">
         <stateRetrieval fetchInMemoryState="false" logFlushTimeout="30000" />
         <sync replTimeout="25000" />
      </clustering>
   </namedCache>

   <!-- ***************************** -->
   <!--  Cache to store Lucene locks  -->
   <!-- ***************************** -->
   <namedCache name="LuceneIndexesLocking">
      <loaders passivation="false" shared="true">
         <loader class="org.infinispan.loaders.file.FileCacheStore" fetchPersistentState="true">
            <properties>
               <property name="location" value="${jboss.server.data.dir}${/}lucene"/>
            </properties>
         </loader>
      </loaders>
      <clustering mode="distribution">
         <stateRetrieval fetchInMemoryState="false" logFlushTimeout="30000" />
         <sync replTimeout="25000" />
      </clustering>
   </namedCache>
</infinispan>

       

       

       

      The Problem:

       

When we run 2 nodes (clustered), we get exceptions on one of the nodes when writing to the index (see the exceptions below).

In some very bad cases the index is broken and Lucene cannot find some of the index files.

       

How can we solve this? The Hibernate Search docs mention something about a master/slave configuration.

       

       

      11:46:29,163 INFO  [org.jboss.bootstrap.impl.base.server.AbstractServer] Stopped: JBossAS [6.1.0.Final "Neo"] in 6s:203ms

    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) [:1.6.0_29]

          at java.util.concurrent.FutureTask.run(FutureTask.java:138) [:1.6.0_29]

          at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [:1.6.0_29]

          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [:1.6.0_29]

          at java.lang.Thread.run(Thread.java:662) [:1.6.0_29]

      Caused by: java.lang.NullPointerException

          at org.hibernate.search.backend.impl.lucene.works.DeleteExtWorkDelegate.performWork(DeleteExtWorkDelegate.java:72) [:3.4.1.Final]

          ... 7 more

       

      2012-02-14 11:43:26,519 (Hibernate Search: Directory writer-1) WARN  [org.hibernate.search.backend.Workspace] going to force release of the IndexWriter lock

      2012-02-14 11:43:26,503 (Hibernate Search: Directory writer-1) ERROR [org.hibernate.search.backend.impl.lucene.PerDPQueueProcessor] Unexpected error in Lucene Backend: : org.hibernate.search.SearchException: Unable to remove class com.agimatec.nucleus.persistence.model.ParcelAnnouncement#804 from index.

          at org.hibernate.search.backend.impl.lucene.works.DeleteExtWorkDelegate.performWork(DeleteExtWorkDelegate.java:77) [:3.4.1.Final]

          at org.hibernate.search.backend.impl.lucene.PerDPQueueProcessor.run(PerDPQueueProcessor.java:106) [:3.4.1.Final]

          at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) [:1.6.0_29]

          at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) [:1.6.0_29]

          at java.util.concurrent.FutureTask.run(FutureTask.java:138) [:1.6.0_29]

          at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [:1.6.0_29]

          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [:1.6.0_29]

          at java.lang.Thread.run(Thread.java:662) [:1.6.0_29]

      Caused by: java.lang.NullPointerException

          at org.hibernate.search.backend.impl.lucene.works.DeleteExtWorkDelegate.performWork(DeleteExtWorkDelegate.java:72) [:3.4.1.Final]

          ... 7 more

       

      2012-02-14 11:43:26,519 (org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor#1d0ac2b-9) INFO  [com.agimatec.utility.TransactionUtils] ** transaction committed **

      2012-02-14 11:43:26,535 (Hibernate Search: Directory writer-1) ERROR [org.hibernate.search.exception.impl.LogErrorHandler] Exception occurred org.hibernate.search.SearchException: Unable to remove class com.agimatec.nucleus.persistence.model.ParcelAnnouncement#804 from index.

      Primary Failure:

          Entity com.agimatec.nucleus.persistence.model.ParcelAnnouncement  Id 804  Work Type  org.hibernate.search.backend.DeleteLuceneWork

      Subsequent failures:

          Entity com.agimatec.nucleus.persistence.model.ParcelAnnouncement  Id 804  Work Type  org.hibernate.search.backend.AddLuceneWork

          Entity com.agimatec.nucleus.persistence.model.ParcelAnnouncement  Id 804  Work Type  org.hibernate.search.backend.AddLuceneWork

      : org.hibernate.search.SearchException: Unable to remove class com.agimatec.nucleus.persistence.model.ParcelAnnouncement#804 from index.

          at org.hibernate.search.backend.impl.lucene.works.DeleteExtWorkDelegate.performWork(DeleteExtWorkDelegate.java:77) [:3.4.1.Final]

          at org.hibernate.search.backend.impl.lucene.PerDPQueueProcessor.run(PerDPQueueProcessor.java:106) [:3.4.1.Final]

          at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) [:1.6.0_29]

          at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) [:1.6.0_29]

          at java.util.concurrent.FutureTask.run(FutureTask.java:138) [:1.6.0_29]

          at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [:1.6.0_29]

          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [:1.6.0_29]

          at java.lang.Thread.run(Thread.java:662) [:1.6.0_29]

      Caused by: java.lang.NullPointerException

          at org.hibernate.search.backend.impl.lucene.works.DeleteExtWorkDelegate.performWork(DeleteExtWorkDelegate.java:72) [:3.4.1.Final]

          ... 7 more

       

      2012-02-14 11:43:26,535 (Hibernate Search: Directory writer-1) WARN  [org.hibernate.search.backend.Workspace] going to force release of the IndexWriter lock

      2012-02-14 11:43:26,581 (org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor#1d0ac2b-10) INFO  [org.jboss.resource.connectionmanager.TxConnectionManager] throwable from unregister connection: java.lang.IllegalStateException: Trying to return an unknown connection2! org.jboss.resource.adapter.jdbc.jdk6.WrappedConnectionJDK6@1353d85

          at org.jboss.resource.connectionmanager.CachedConnectionManager.unregisterConnection(CachedConnectionManager.java:330) [:6.1.0.Final]

          at org.jboss.resource.connectionmanager.TxConnectionManager$TxConnectionEventListener.connectionClosed(TxConnectionManager.java:787) [:6.1.0.Final]

          at org.jboss.resource.adapter.jdbc.BaseWrapperManagedConnection.closeHandle(BaseWrapperManagedConnection.java:364) [:6.1.0.Final]

          at org.jboss.resource.adapter.jdbc.WrappedConnection.close(WrappedConnection.java:165) [:6.1.0.Final]

          at org.hibernate.connection.DatasourceConnectionProvider.closeConnection(DatasourceConnectionProvider.java:97) [:3.6.6.Final]

          at org.hibernate.jdbc.ConnectionManager.closeConnection(ConnectionManager.java:474) [:3.6.6.Final]

          at org.hibernate.jdbc.ConnectionManager.aggressiveRelease(ConnectionManager.java:429) [:3.6.6.Final]

          at org.hibernate.jdbc.ConnectionManager.afterStatement(ConnectionManager.java:304) [:3.6.6.Final]

          at org.hibernate.jdbc.ConnectionManager.flushEnding(ConnectionManager.java:503) [:3.6.6.Final]

          at com.agimatec.dbhistory.HibernateFlushEventListener._performExecutions(HibernateFlushEventListener.java:66) [:]

          at com.agimatec.dbhistory.HibernateFlushEventListener.performExecutions(HibernateFlushEventListener.java:29) [:]

          at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51) [:3.6.6.Final]

          at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1216) [:3.6.6.Final]

          at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:383) [:3.6.6.Final]

          at org.hibernate.transaction.synchronization.CallbackCoordinator.beforeCompletion(CallbackCoordinator.java:117) [:3.6.6.Final]

          at org.hibernate.transaction.synchronization.HibernateSynchronizationImpl.beforeCompletion(HibernateSynchronizationImpl.java:51) [:3.6.6.Final]

          at com.arjuna.ats.internal.jta.resources.arjunacore.SynchronizationImple.beforeCompletion(SynchronizationImple.java:97) [:6.1.0.Final]

          at com.arjuna.ats.arjuna.coordinator.TwoPhaseCoordinator.beforeCompletion(TwoPhaseCoordinator.java:274) [:6.1.0.Final]

          at com.arjuna.ats.arjuna.coordinator.TwoPhaseCoordinator.end(TwoPhaseCoordinator.java:94) [:6.1.0.Final]

          at com.arjuna.ats.arjuna.AtomicAction.commit(AtomicAction.java:159) [:6.1.0.Final]

          at com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionImple.commitAndDisassociate(TransactionImple.java:1158) [:6.1.0.Final]

          at com.arjuna.ats.internal.jta.transaction.arjunacore.BaseTransaction.commit(BaseTransaction.java:119) [:6.1.0.Final]

          at com.arjuna.ats.jbossatx.BaseTransactionManagerDelegate.commit(BaseTransactionManagerDelegate.java:75) [:6.1.0.Final]

          at org.jboss.tm.usertx.client.ServerVMClientUserTransaction.commit(ServerVMClientUserTransaction.java:162) [:6.1.0.Final]

          at com.agimatec.utility.TransactionUtils.commit(TransactionUtils.java:108) [:]

          at com.agimatec.nucleus.esb.service.RuleBaseServiceBean.commitAndBeginTransaction(RuleBaseServiceBean.java:1357) [:]

          at com.agimatec.nucleus.esb.service.RuleBaseServiceBean.boxAction(RuleBaseServiceBean.java:1300) [:]

          at com.agimatec.nucleus.esb.Rule_BoxAction_0.consequence(Rule_BoxAction_0.java:15)

          at com.agimatec.nucleus.esb.Rule_BoxAction_0ConsequenceInvoker.evaluate(Rule_BoxAction_0ConsequenceInvoker.java:24)

          at org.drools.common.DefaultAgenda.fireActivation(DefaultAgenda.java:554) [:4.0.7]

          at org.drools.common.DefaultAgenda.fireNextItem(DefaultAgenda.java:518) [:4.0.7]

          at org.drools.common.AbstractWorkingMemory.fireAllRules(AbstractWorkingMemory.java:475) [:4.0.7]

          at org.drools.common.AbstractWorkingMemory.fireAllRules(AbstractWorkingMemory.java:439) [:4.0.7]

          at com.agimatec.messageflow.nucleus.drools.DroolsEndpoint$1.execute(DroolsEndpoint.java:110) [:]

          at com.agimatec.messageflow.nucleus.Transactor.executeWithTransaction(Transactor.java:25) [:]

          at com.agimatec.messageflow.nucleus.drools.DroolsEndpoint.send(DroolsEndpoint.java:107) [:]

          at com.agimatec.messageflow.components.FlowEndpoint.process(FlowEndpoint.java:25) [:]

          at com.agimatec.messageflow.DefaultFlowContainer.process(DefaultFlowContainer.java:191) [:]

          at com.agimatec.messageflow.DefaultFlowContainer.process(DefaultFlowContainer.java:186) [:]

          at com.agimatec.messageflow.DefaultFlowContainer.processAndWait(DefaultFlowContainer.java:164) [:]

          at com.agimatec.messageflow.FlowClient.sendSync(FlowClient.java:72) [:]

          at com.agimatec.messageflow.nucleus.TanResequencer.process(TanResequencer.java:202) [:]

          at com.agimatec.messageflow.nucleus.TanResequencer.processExchanges(TanResequencer.java:164) [:]

          at com.agimatec.messageflow.nucleus.TanResequencer.resequence(TanResequencer.java:109) [:]

          at com.agimatec.nucleus.jbi.ExchangeServiceBean.resequence(ExchangeServiceBean.java:91) [:]

          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [:1.6.0_29]

          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) [:1.6.0_29]

          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [:1.6.0_29]

          at java.lang.reflect.Method.invoke(Method.java:597) [:1.6.0_29]

          at org.jboss.aop.joinpoint.MethodInvocation.invokeTarget(MethodInvocation.java:122) [jboss-aop.jar:2.2.2.GA]

          at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:111) [jboss-aop.jar:2.2.2.GA]

          at org.jboss.ejb3.interceptors.container.ContainerMethodInvocationWrapper.invokeNext(ContainerMethodInvocationWrapper.java:72) [:1.1.3]

          at org.jboss.ejb3.interceptors.aop.InterceptorSequencer.invoke(InterceptorSequencer.java:76) [:1.1.3]

          at org.jboss.ejb3.interceptors.aop.InterceptorSequencer.aroundInvoke(InterceptorSequencer.java:62) [:1.1.3]

          at sun.reflect.GeneratedMethodAccessor467.invoke(Unknown Source) [:1.6.0_29]

          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [:1.6.0_29]

          at java.lang.reflect.Method.invoke(Method.java:597) [:1.6.0_29]

          at org.jboss.aop.advice.PerJoinpointAdvice.invoke(PerJoinpointAdvice.java:174) [jboss-aop.jar:2.2.2.GA]

          at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) [jboss-aop.jar:2.2.2.GA]

          at org.jboss.ejb3.interceptors.aop.InvocationContextInterceptor.fillMethod(InvocationContextInterceptor.java:74) [:1.1.3]

          at org.jboss.aop.advice.org.jboss.ejb3.interceptors.aop.InvocationContextInterceptor_z_fillMethod_10509251.invoke(InvocationContextInterceptor_z_fillMethod_10509251.java) [:]

          at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) [jboss-aop.jar:2.2.2.GA]

          at org.jboss.ejb3.interceptors.aop.InvocationContextInterceptor.setup(InvocationContextInterceptor.java:90) [:1.1.3]

          at org.jboss.aop.advice.org.jboss.ejb3.interceptors.aop.InvocationContextInterceptor_z_setup_10509251.invoke(InvocationContextInterceptor_z_setup_10509251.java) [:]

          at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) [jboss-aop.jar:2.2.2.GA]

          at org.jboss.ejb3.async.impl.interceptor.AsynchronousServerInterceptor.invoke(AsynchronousServerInterceptor.java:128) [:1.7.21]

          at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) [jboss-aop.jar:2.2.2.GA]

          at org.jboss.ejb3.connectionmanager.CachedConnectionInterceptor.invoke(CachedConnectionInterceptor.java:62) [:1.7.21]

          at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) [jboss-aop.jar:2.2.2.GA]

          at org.jboss.ejb3.entity.TransactionScopedEntityManagerInterceptor.invoke(TransactionScopedEntityManagerInterceptor.java:56) [:1.7.21]

          at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) [jboss-aop.jar:2.2.2.GA]

          at org.jboss.ejb3.AllowedOperationsInterceptor.invoke(AllowedOperationsInterceptor.java:47) [:1.7.21]

          at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) [jboss-aop.jar:2.2.2.GA]

          at org.jboss.ejb3.tx.NullInterceptor.invoke(NullInterceptor.java:42) [:1.0.4]

          at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) [jboss-aop.jar:2.2.2.GA]

          at org.jboss.ejb3.stateless.StatelessInstanceInterceptor.invoke(StatelessInstanceInterceptor.java:68) [:1.7.21]

          at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) [jboss-aop.jar:2.2.2.GA]

          at org.jboss.ejb3.core.context.SessionInvocationContextAdapter.proceed(SessionInvocationContextAdapter.java:95) [:1.7.21]

          at org.jboss.ejb3.tx2.impl.CMTTxInterceptor.invokeInCallerTx(CMTTxInterceptor.java:223) [:0.0.2]

          at org.jboss.ejb3.tx2.impl.CMTTxInterceptor.required(CMTTxInterceptor.java:353) [:0.0.2]

          at org.jboss.ejb3.tx2.impl.CMTTxInterceptor.invoke(CMTTxInterceptor.java:209) [:0.0.2]

          at org.jboss.ejb3.tx2.aop.CMTTxInterceptorWrapper.invoke(CMTTxInterceptorWrapper.java:52) [:0.0.2]

          at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) [jboss-aop.jar:2.2.2.GA]

          at org.jboss.aspects.tx.TxPropagationInterceptor.invoke(TxPropagationInterceptor.java:76) [:1.0.0.GA]

          at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) [jboss-aop.jar:2.2.2.GA]

          at org.jboss.ejb3.tx.NullInterceptor.invoke(NullInterceptor.java:42) [:1.0.4]

          at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) [jboss-aop.jar:2.2.2.GA]

          at org.jboss.ejb3.security.Ejb3AuthenticationInterceptorv2.invoke(Ejb3AuthenticationInterceptorv2.java:182) [:1.7.21]

          at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) [jboss-aop.jar:2.2.2.GA]

          at org.jboss.ejb3.ENCPropagationInterceptor.invoke(ENCPropagationInterceptor.java:41) [:1.7.21]

          at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) [jboss-aop.jar:2.2.2.GA]

          at org.jboss.ejb3.BlockContainerShutdownInterceptor.invoke(BlockContainerShutdownInterceptor.java:67) [:1.7.21]

          at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) [jboss-aop.jar:2.2.2.GA]

          at org.jboss.ejb3.core.context.CurrentInvocationContextInterceptor.invoke(CurrentInvocationContextInterceptor.java:47) [:1.7.21]

          at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) [jboss-aop.jar:2.2.2.GA]

          at org.jboss.aspects.currentinvocation.CurrentInvocationInterceptor.invoke(CurrentInvocationInterceptor.java:67) [:1.0.1]

          at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) [jboss-aop.jar:2.2.2.GA]

          at org.jboss.ejb3.interceptor.EJB3TCCLInterceptor.invoke(EJB3TCCLInterceptor.java:86) [:1.7.21]

          at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) [jboss-aop.jar:2.2.2.GA]

          at org.jboss.ejb3.session.SessionSpecContainer.invoke(SessionSpecContainer.java:333) [:1.7.21]

          at org.jboss.ejb3.session.SessionSpecContainer.invoke(SessionSpecContainer.java:390) [:1.7.21]

          at sun.reflect.GeneratedMethodAccessor466.invoke(Unknown Source) [:1.6.0_29]

          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [:1.6.0_29]

          at java.lang.reflect.Method.invoke(Method.java:597) [:1.6.0_29]

          at org.jboss.ejb3.proxy.impl.handler.session.SessionLocalProxyInvocationHandler$LocalContainerInvocation.invokeTarget(SessionLocalProxyInvocationHandler.java:184) [:1.0.11]

          at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:111) [jboss-aop.jar:2.2.2.GA]

          at org.jboss.ejb3.async.impl.interceptor.AsynchronousClientInterceptor.invoke(AsynchronousClientInterceptor.java:143) [:1.7.21]

          at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:102) [jboss-aop.jar:2.2.2.GA]

          at org.jboss.ejb3.proxy.impl.handler.session.SessionLocalProxyInvocationHandler$LocalInvokableContextHandler.invoke(SessionLocalProxyInvocationHandler.java:159) [:1.0.11]

          at $Proxy359.invoke(Unknown Source)    at org.jboss.ejb3.proxy.impl.handler.session.SessionProxyInvocationHandlerBase.invoke(SessionProxyInvocationHandlerBase.java:185) [:1.0.11]

          at $Proxy403.resequence(Unknown Source)    at com.agimatec.messageflow.nucleus.ResequencerEndpoint$1.execute(ResequencerEndpoint.java:68) [:]

          at com.agimatec.messageflow.nucleus.ResequencerEndpoint$1.execute(ResequencerEndpoint.java:66) [:]

          at com.agimatec.messageflow.nucleus.Transactor.executeWithTransaction(Transactor.java:25) [:]

          at com.agimatec.messageflow.nucleus.ResequencerEndpoint.send(ResequencerEndpoint.java:66) [:]

          at com.agimatec.messageflow.components.FlowEndpoint.process(FlowEndpoint.java:25) [:]

          at com.agimatec.messageflow.DefaultFlowContainer.process(DefaultFlowContainer.java:191) [:]

          at com.agimatec.messageflow.DefaultFlowContainer.process(DefaultFlowContainer.java:186) [:]

          at com.agimatec.messageflow.DefaultFlowContainer.processTargetOf(DefaultFlowContainer.java:174) [:]

          at com.agimatec.messageflow.components.AsyncFlowExecutor$FlowTask.run(AsyncFlowExecutor.java:97) [:]

          at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [:1.6.0_29]

          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [:1.6.0_29]

          at java.lang.Thread.run(Thread.java:662) [:1.6.0_29]

       

      2012-02-14 11:43:27,066 (Hibernate Search: Directory writer-1) ERROR [org.hibernate.search.exception.impl.LogErrorHandler] Exception occurred java.io.FileNotFoundException: Error loading medatada for index file: _9.fdt|M|com.agimatec.nucleus.persistence.model.ParcelDetail

      : java.io.FileNotFoundException: Error loading medatada for index file: _9.fdt|M|com.agimatec.nucleus.persistence.model.ParcelDetail

          at org.infinispan.lucene.InfinispanDirectory.openInput(InfinispanDirectory.java:300) [:4.2.1.FINAL]

          at org.apache.lucene.index.CompoundFileWriter.copyFile(CompoundFileWriter.java:218) [:3.1.0 1085809 - 2011-03-26 17:59:57]

          at org.apache.lucene.index.CompoundFileWriter.close(CompoundFileWriter.java:188) [:3.1.0 1085809 - 2011-03-26 17:59:57]

          at org.apache.lucene.index.DocumentsWriter.flush(DocumentsWriter.java:571) [:3.1.0 1085809 - 2011-03-26 17:59:57]

          at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3331) [:3.1.0 1085809 - 2011-03-26 17:59:57]

          at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3296) [:3.1.0 1085809 - 2011-03-26 17:59:57]

          at org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:3159) [:3.1.0 1085809 - 2011-03-26 17:59:57]

          at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3232) [:3.1.0 1085809 - 2011-03-26 17:59:57]

          at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3214) [:3.1.0 1085809 - 2011-03-26 17:59:57]

          at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3198) [:3.1.0 1085809 - 2011-03-26 17:59:57]

          at org.hibernate.search.backend.Workspace.commitIndexWriter(Workspace.java:220) [:3.4.1.Final]

          at org.hibernate.search.backend.impl.lucene.PerDPQueueProcessor.run(PerDPQueueProcessor.java:109) [:3.4.1.Final]

          at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) [:1.6.0_29]

          at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) [:1.6.0_29]

          at java.util.concurrent.FutureTask.run(FutureTask.java:138) [:1.6.0_29]

          at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [:1.6.0_29]

          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [:1.6.0_29]

          at java.lang.Thread.run(Thread.java:662) [:1.6.0_29]

        • 1. Re: Lock Exception using Infinispan as Directory Provider
          sannegrinovero

          Hello Grigor,

          thanks for all the details.

           

As you say, the Hibernate Search docs mention a master/slave configuration; you can't ignore that unless you have a single node writing to the index.

The master/slave configuration in Hibernate Search makes sure that only one node will open an IndexWriter; if the other nodes (the slaves) need to apply changes, they send a message to the master, either using JGroups or using a JMS queue.

If you have both nodes writing to the same index at the same time, you will get the index corruption you are experiencing.

           

Finally, I'd suggest upgrading to JBoss 7.1, Infinispan 5.1.1 and Hibernate Search 4.1, as we have already fixed several issues present in the older versions you're deploying.

If you can't upgrade: in particular, the JGroups backend included in Hibernate Search won't work with the version of JGroups required by Infinispan, so you'll have to patch the Hibernate Search code (trivial, as it's a single line) or use JMS instead. https://hibernate.onjira.com/browse/HSEARCH-975
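For illustration, a minimal sketch of the backend selection in persistence.xml; jgroupsMaster/jgroupsSlave are the actual Hibernate Search 3.4 backend names, while the JMS JNDI names below are placeholders you would adapt:

  <!-- on the single node allowed to write the index -->
  <property name="hibernate.search.worker.backend" value="jgroupsMaster"/>

  <!-- on every other node -->
  <property name="hibernate.search.worker.backend" value="jgroupsSlave"/>

  <!-- or, instead of JGroups, the JMS backend on the slaves (the master consumes the queue) -->
  <property name="hibernate.search.worker.backend" value="jms"/>
  <property name="hibernate.search.worker.jms.connection_factory" value="ConnectionFactory"/> <!-- placeholder JNDI name -->
  <property name="hibernate.search.worker.jms.queue" value="queue/hibernatesearch"/> <!-- placeholder queue name -->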

          • 2. Re: Lock Exception using Infinispan as Directory Provider
            grigor.tonkov

            Hello Sanne,

            thank you for the fast reply.

If possible, we would like to keep using this Hibernate Search configuration.

             

What exactly should be patched in the Hibernate Search code?

When I use the master/slave configuration, the distribution of the index seems to work properly, but the index is not persisted or not loaded properly.

             

I configured one node as master and the other node as slave, as shown below (persistence.xml):

             

  <!-- on the slave node: -->
  <property name="hibernate.search.worker.backend" value="jgroupsSlave"/>
  <!-- on the master node: -->
  <property name="hibernate.search.worker.backend" value="jgroupsMaster"/>

             

But this setting causes my index not to be reloaded when I start all the JBoss instances.

If I do not use this setting, the index is loaded as expected.

             

             

            Could you help?

             

            Thank you!

            Grigor

            • 3. Re: Lock Exception using Infinispan as Directory Provider
              sannegrinovero

What do you mean by "index is not reloaded"?

               

              Regarding the issue with JGroups compatibility:

please check out the sources from git://github.com/hibernate/hibernate-search.git ; you'll see that the Maven project depends on JGroups 2.8.x to compile the main core (the last version compatible with JDK 5, which we require for the core module) and on JGroups 2.12.x for the Infinispan module (as needed by Infinispan, which is supported only on Java 6 anyway).

               

If you just remove the older JGroups version and compile the core against 2.12.x as well, you'll see that a single JGroups method changed its signature. I don't remember exactly which method or how it should be changed, but it should be easy to spot, as the code won't compile otherwise. And once you fix it, there are tests to tell you whether the fix is correct.
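For illustration only, forcing the newer JGroups version in the checked-out build might look like this in the relevant pom.xml (the exact version is an assumption; use whatever Infinispan declares):

  <!-- pin the JGroups version Infinispan needs; the 2.12.1.Final version here is an assumption -->
  <dependency>
      <groupId>org.jgroups</groupId>
      <artifactId>jgroups</artifactId>
      <version>2.12.1.Final</version>
  </dependency>

After that, fix the single compilation error in the JGroups backend and run the tests.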

              • 4. Re: Lock Exception using Infinispan as Directory Provider
                sannegrinovero

                Hi Grigor,

sorry, I received a notification of a reply from you, but I'm not able to read it here. If you deleted it, just ignore me; otherwise, could you please post it again?

                • 5. Re: Lock Exception using Infinispan as Directory Provider
                  grigor.tonkov

                  Hello Sanne,

yes, I deleted my last post. Sorry about that.

The trouble we have now is with the master/slave configuration.

The master is working as expected: it gets all events from the slave and keeps the index up to date.

The trouble is that the slave is not indexing at all (or very late; I still don't know when).

If I update data on the slave, the master's index is updated correctly, but not the slave node's.

                   

Here is the configuration we have (persistence.xml; the values are set in run.conf.bat and appear to be applied properly on both master and slave):

                   

                   

    <property name="hibernate.search.worker.backend" value="${nucleus.search.backend}"/>
    <property name="hibernate.search.worker.execution" value="async"/>

    <property name="hibernate.search.default.directory_provider" value="${nucleus.search.directory_provider}"/>
    <!--
    <property name="hibernate.search.default.directory_provider" value="org.hibernate.search.store.FSDirectoryProvider"/>
    <property name="hibernate.search.default.directory_provider" value="filesystem-master" />
    -->
    <property name="hibernate.search.default.sourceBase" value="${nucleus.search.sourceBase}" />
    <property name="hibernate.search.default.refresh" value="1800" />
                   

                   

Any idea?

                  • 6. Re: Lock Exception using Infinispan as Directory Provider
                    sannegrinovero

What are the values of the nucleus.search.xxx variables?

                     

When using the Infinispan DirectoryProvider, you need to set the JGroups or JMS backend, but you should not set the master/slave configuration on the directory provider: you either sync the indexes using filesystem sharing or using Infinispan; you can't apply both.
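For clarity, a sketch of the two consistent combinations (the sourceBase path below is a placeholder):

  <!-- Option A: Infinispan directory + JGroups worker backend; no filesystem master/slave -->
  <property name="hibernate.search.default.directory_provider" value="infinispan"/>
  <property name="hibernate.search.worker.backend" value="jgroupsMaster"/> <!-- jgroupsSlave on the other nodes -->

  <!-- Option B: filesystem master/slave replication; no Infinispan directory -->
  <property name="hibernate.search.default.directory_provider" value="filesystem-master"/> <!-- filesystem-slave on the slaves -->
  <property name="hibernate.search.default.sourceBase" value="/some/shared/path"/>
  <property name="hibernate.search.default.refresh" value="1800"/>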

                    • 7. Re: Lock Exception using Infinispan as Directory Provider
                      grigor.tonkov

                      Hello Again!

So we managed to set up the master/slave configuration. That's good news. Thank you very much for the advice; it really helped!

One, hopefully last, question:

Is there a possibility to get the updates onto the slaves faster than waiting for the index to be copied from the master to the slaves?

We use a copy job to copy the changed master copy to the slaves. Is there any better alternative to speed up index freshness on the slave machines?

                       

You can find our copy job here:

                       

/usr/local/bin/rsync -vlgotr --delete /home/jboss/server/nucleus/data/lucene/master/ jboss@123.123.123.123:/home/jboss/server/nucleus/data/lucene/master/

                      • 8. Re: Lock Exception using Infinispan as Directory Provider
                        sannegrinovero

                        Hi Grigor,

your last post confused me. Didn't you say that you were using the Infinispan DirectoryProvider? In that case index replication would be instantly available to all slaves, without the need for rsync.

                         

If you're using rsync, a valid solution is to execute the rsync job more frequently.
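For instance, a sketch of a crontab entry that runs the same job every minute (paths exactly as in the job posted above):

  * * * * * /usr/local/bin/rsync -vlgotr --delete /home/jboss/server/nucleus/data/lucene/master/ jboss@123.123.123.123:/home/jboss/server/nucleus/data/lucene/master/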

                        • 9. Re: Lock Exception using Infinispan as Directory Provider
                          grigor.tonkov

                          Hello Sanne,

                          sorry for confusing you.

                           

we changed from the Infinispan DirectoryProvider to the master/slave filesystem directory provider.

The admins don't like the NFS share, therefore we use rsync for copying the master copy of the index.

                           

We could try Infinispan again, but in my opinion there are two problems:

- The index is saved in Infinispan cache format, not Lucene format. That means it is not readable by Luke (a tool for inspecting the index, e.g. when it is corrupted).

- Locking problems could appear if the index is updated from the other instances and Infinispan at the same time.

                          • 10. Re: Lock Exception using Infinispan as Directory Provider
                            sannegrinovero
"we changed from the Infinispan DirectoryProvider to the master/slave filesystem directory provider."

                            Ah ok, now I understand the configuration.

                             

"The admins don't like the NFS share, therefore we use rsync for copying the master copy of the index."

Looks like a good idea; I like rsync much better myself as well. Any form of sharing will do; we suggest a shared filesystem only because that's the common approach.

                             

                             

"The index is saved in Infinispan cache format, not Lucene format. That means it is not readable by Luke (a tool for inspecting the index, e.g. when it is corrupted)."

                             

That's a very valid point. I don't expect the index to be of any use if corrupted, but Luke is a major tool for debugging the applications.

                             

Starting an InfinispanDirectory directly (i.e. without Hibernate Search) is trivial, and while I haven't looked into the Luke sources, I guess it works against a Directory instance. It wouldn't take long to make such a tool by patching/extending Luke. Even better, one could have the tool connect to the cluster while running from the dev machine; that sounds very useful.
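A rough, speculative sketch of that starting point, assuming the single-cache InfinispanDirectory constructor; the configuration posted above splits metadata/data/locks over three caches, so the corresponding multi-cache constructor would be needed for that layout, and the index name below is an assumption (it must match the index name Hibernate Search uses, typically the indexed entity name):

  // Minimal sketch: open the Infinispan-stored index without Hibernate Search.
  import org.apache.lucene.index.IndexReader;
  import org.infinispan.Cache;
  import org.infinispan.lucene.InfinispanDirectory;
  import org.infinispan.manager.DefaultCacheManager;

  public class InfinispanIndexInspector {
      public static void main(String[] args) throws Exception {
          // Reuse the application's Infinispan configuration so the tool joins the same cluster.
          DefaultCacheManager cacheManager = new DefaultCacheManager("infinispan-configs-lucene.xml");
          Cache<Object, Object> cache = cacheManager.getCache("LuceneIndexesData");
          // Index name is an assumption; it must match the one Hibernate Search uses.
          InfinispanDirectory directory =
                  new InfinispanDirectory(cache, "com.agimatec.nucleus.persistence.model.ParcelAnnouncement");
          IndexReader reader = IndexReader.open(directory); // Lucene 3.x API
          System.out.println("documents in index: " + reader.numDocs());
          reader.close();
          cacheManager.stop();
      }
  }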

                             

Could you contribute such a tool? Those who need a tool are always in the best position to make it a good one. I'm willing to help in case of trouble, but it would be nice if you could lead this, and I really think it won't take you long.

                             

                             

"Locking problems could appear if the index is updated from the other instances and Infinispan at the same time."

                            Let's explore this concern.

1. With a master and several slaves, only the master ever writes.
2. The index is still locked, by default by an org.infinispan.lucene.locking.BaseLockFactory, but because of the master/slave setup I think the lock could even be removed. I've left it there as an additional guarantee, since people might make a mistake in the configuration and boot multiple masters: in that case - and only that case - some nodes will time out waiting for the lock. So you should get locking problems only with incorrect configurations; it should not happen at runtime for a correctly configured system, even under high load. (See the sketch below for checking and clearing a stale lock with plain Lucene APIs.)
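As a sketch, checking for and force-clearing a stale write lock with standard Lucene 3.x APIs (IndexWriter.isLocked / IndexWriter.unlock); only safe when you are certain no master is currently writing:

  import java.io.IOException;
  import org.apache.lucene.index.IndexWriter;
  import org.apache.lucene.store.Directory;

  public final class WriteLockCheck {
      // Works with any Directory, including an InfinispanDirectory backed by its BaseLockFactory.
      public static void clearStaleLock(Directory directory) throws IOException {
          if (IndexWriter.isLocked(directory)) {
              // Forcibly releases the write lock; dangerous if an IndexWriter is still alive.
              IndexWriter.unlock(directory);
          }
      }
  }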

                             

Does this clarify things, or are there other locking problems?
