2 Replies Latest reply on Jun 14, 2011 5:12 AM by Nicolas Bittner

    Infinispan in Lucene : Unable to acquire lock with modified LuceneDemo.jar

    Nicolas Bittner Newbie

      I am currently testing the InfinispanDirectory for Lucene with a modified infinispan-lucene-demo.jar (shipped inside infinispan-4.2.1.FINAL-all.zip).

      I have added one method to DemoAction.java:

       

      public void addNewDocument(int numIte, int length, int delay)
              throws IOException, InterruptedException
      {
          ArrayList<String> array = new ArrayList<String>();
          for (int i = 0; i < numIte; i++)
          {
              array.add(randomString(length));
          }

          long deb = System.nanoTime(); // start of benchmark
          IndexWriter iw = new IndexWriter(index, analyzer, MaxFieldLength.UNLIMITED);
          try
          {
              for (int i = 0; i < numIte; i++)
              {
                  Document doc = new Document();
                  Field field = new Field(MAIN_FIELD, array.get(i), Store.YES, Index.ANALYZED);
                  doc.add(field);
                  iw.addDocument(doc);
                  iw.commit(); // one commit per document
                  Thread.sleep(delay);
              }
          }
          finally
          {
              iw.close();
          }

          long duree = (System.nanoTime() - deb) / 1000000; // elapsed time in ms
          System.out.println(String.valueOf(duree));
      }

       

      This method is very similar to the original addNewDocument method; I just added a commit inside the loop for each document, a delay, and a benchmark.

      I use the default configuration for the demo :

       

      <?xml version="1.0" encoding="UTF-8"?>
      <infinispan
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="urn:infinispan:config:4.0 http://www.infinispan.org/schemas/infinispan-config-4.0.xsd"
         xmlns="urn:infinispan:config:4.0">
      
         <global>
            <transport clusterName="luceneDemoCluster" />
            <globalJmxStatistics enabled="true"/>
         </global>
      
         <default>
         <locking useLockStriping="false" />
            <jmxStatistics enabled="true"/>
            <clustering mode="distribution">
               <l1 enabled="true" lifespan="60000" />
               <hash numOwners="2" rehashRpcTimeout="120000" />
               <sync />
            </clustering>
            <invocationBatching enabled="true" />
         </default>
      
      </infinispan>
      


      I started 3 instances of the LuceneDemo with runLuceneDemo.sh on 3 nodes.

      Each node saw 3 members.

      But when I tested the method above with 100 iterations (= 100 commits), 1000 bytes of data, and 100 ms between iterations, this exception occurred:

       

      2011-06-09 17:19:17,758 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (main) Execution error: 
      org.infinispan.util.concurrent.TimeoutException: Unable to acquire lock after [10 seconds] on key [*|] for requestor [GlobalTransaction:<NB01VMLX-14600>:114:remote]! Lock held by [GlobalTransaction:<NB01VMLX-14600>:113:remote]
          at org.infinispan.container.EntryFactoryImpl.acquireLock(EntryFactoryImpl.java:228)
          at org.infinispan.container.EntryFactoryImpl.wrapEntryForWriting(EntryFactoryImpl.java:155)
          at org.infinispan.container.EntryFactoryImpl.wrapEntryForWriting(EntryFactoryImpl.java:107)
          at org.infinispan.interceptors.LockingInterceptor.visitPutKeyValueCommand(LockingInterceptor.java:296)
          at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:80)
          at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118)
          at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:132)
          at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:58)
          at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:80)
          at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118)
          at org.infinispan.interceptors.TxInterceptor.visitPrepareCommand(TxInterceptor.java:94)
          at org.infinispan.interceptors.DistTxInterceptor.visitPrepareCommand(DistTxInterceptor.java:59)
          at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:120)
          at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118)
          at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:132)
          at org.infinispan.commands.AbstractVisitor.visitPrepareCommand(AbstractVisitor.java:106)
          at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:120)
          at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118)
          at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:95)
          at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:62)
          at org.infinispan.commands.AbstractVisitor.visitPrepareCommand(AbstractVisitor.java:106)
          at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:120)
          at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118)
          at org.infinispan.interceptors.BatchingInterceptor.handleDefault(BatchingInterceptor.java:60)
          at org.infinispan.commands.AbstractVisitor.visitPrepareCommand(AbstractVisitor.java:106)
          at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:120)
          at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:273)
          at org.infinispan.commands.tx.PrepareCommand.perform(PrepareCommand.java:111)
          at org.infinispan.remoting.InboundInvocationHandlerImpl.handleInternal(InboundInvocationHandlerImpl.java:142)
          at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithWaitForBlocks(InboundInvocationHandlerImpl.java:156)
          at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithRetry(InboundInvocationHandlerImpl.java:266)
          at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:129)
          at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommand(CommandAwareRpcDispatcher.java:159)
          at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:144)
          at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:577)
          at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:488)
          at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:364)
          at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:770)
          at org.jgroups.JChannel.up(JChannel.java:1484)
          at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1074)
          at org.jgroups.protocols.pbcast.FLUSH.up(FLUSH.java:429)
          at org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER.up(STREAMING_STATE_TRANSFER.java:263)
          at org.jgroups.protocols.FRAG2.up(FRAG2.java:189)
          at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
          at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
          at org.jgroups.protocols.pbcast.GMS.up(GMS.java:891)
          at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:246)
          at org.jgroups.protocols.UNICAST.handleDataReceived(UNICAST.java:575)
          at org.jgroups.protocols.UNICAST.up(UNICAST.java:294)
          at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:703)
          at org.jgroups.protocols.BARRIER.up(BARRIER.java:119)
          at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:177)
          at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:275)
          at org.jgroups.protocols.MERGE2.up(MERGE2.java:209)
          at org.jgroups.protocols.Discovery.up(Discovery.java:291)
          at org.jgroups.protocols.PING.up(PING.java:66)
          at org.jgroups.protocols.TP.passMessageUp(TP.java:1102)
          at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1658)
          at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1640)
          at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
          at java.lang.Thread.run(Thread.java:662)
      2011-06-09 17:19:17,764 ERROR [org.infinispan.transaction.xa.TransactionXaAdapter] (main) Error while processing PrepareCommand
      

       

      After this exception, the lock is sometimes released; otherwise, insertions/deletions are blocked on every node.

       

      Why does this exception occur? I just wanted to run multiple commit iterations in Lucene...

      Can you tell me if there are any configuration tweaks to add ?

       

      You can test my functions with the attached jar (replace it in the modules\lucene-directory-demo directory).

       

      Thanks

        • 1. Re: Infinispan in Lucene : Unable to acquire lock with modified LuceneDemo.jar
          Sanne Grinovero Master

          I've tried to reproduce this error, but I could only get it when forcing the "benchmark" to write concurrently on multiple nodes. Is that how you were testing it?

          Consider that while it's safe to run searches in parallel on multiple nodes, you still have to make sure that only a single application opens an IndexWriter; this is a Lucene limitation: multiple concurrent readers, but a single writer on the same index. There are many patterns around this; we can discuss some if that's what you're looking for. A lock is put in place to make sure you don't write from multiple nodes at the same time, but as your benchmark keeps the IndexWriter open for a long time, the lock that other nodes are attempting to acquire times out.
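          If brief contention is expected, the 10-second timeout reported in the exception can also be raised on the cache's locking element. A sketch against the Infinispan 4.x schema (worth double-checking the attribute name and default for your exact version), keeping in mind that a longer timeout only masks the single-writer contention rather than removing it:

```xml
<!-- inside the <default> section of the demo configuration;
     lockAcquisitionTimeout is in milliseconds (10000 is the usual default) -->
<locking useLockStriping="false" lockAcquisitionTimeout="30000" />
```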

           

          I've created a similar test which constantly writes to the shared index from one node while other nodes are added and removed dynamically as they all perform search operations. You can find this code in the Hibernate Search project, as we're using this Infinispan Lucene Directory to share the index across Hibernate Search nodes:

          https://github.com/hibernate/hibernate-search/blob/master/hibernate-search-infinispan/src/test/java/org/hibernate/search/infinispan/LiveRunningTest.java

           

          Still, it runs a single IndexWriter; also it's not suited as a benchmark, as it's constantly adding and removing nodes, which is an expensive replication operation. If you're interested in running performance tests, I'd suggest having a look at PerformanceCompareStressTest in the testsuite of the infinispan-lucene-directory module; just make sure you load more data than what's shown in the committed version.

           

          About your performance test, to get some high throughput you would need to set some IndexWriter options and make sure you add multiple documents for each commit (adding a document is cheap, committing is not); with an average laptop you should be able to fly around 10,000 documents per second (as order of magnitude, YMMV depending on actual parameters and hardware), assuming they are as simple as your benchmark is using.
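          To put rough numbers on "adding a document is cheap, committing is not", here is a JDK-only back-of-the-envelope sketch; the per-operation costs are purely hypothetical placeholders chosen for illustration, not measurements of any real index:

```java
public class CommitBatchingEstimate {
    // Hypothetical costs, for illustration only: a commit is typically
    // orders of magnitude more expensive than buffering one document.
    static final double ADD_MS = 0.1;     // assumed cost of one addDocument()
    static final double COMMIT_MS = 50.0; // assumed cost of one commit()

    /** Documents indexed per second when committing every batchSize documents. */
    static double docsPerSecond(int batchSize) {
        double msPerBatch = batchSize * ADD_MS + COMMIT_MS;
        return batchSize / msPerBatch * 1000.0;
    }

    public static void main(String[] args) {
        // Committing after every document pays the full commit cost each time;
        // batching amortizes it across the whole batch.
        System.out.printf("commit per doc : %.0f docs/s%n", docsPerSecond(1));
        System.out.printf("commit per 100 : %.0f docs/s%n", docsPerSecond(100));
    }
}
```

          Under these assumed costs, batching 100 documents per commit improves throughput by roughly two orders of magnitude, which is why a commit-per-document benchmark looks so slow.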

           

          Also, your test uses Thread.sleep() inside the timed section; what's the idea of this sleep? Sleeping is not a precise operation at all, so it will skew your results even if you subtract the nominal sleep time; I'd suggest removing it.
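          If pacing between commits is really wanted, one JDK-only way to keep the sleep out of the measurement is to time each unit of work individually and accumulate only that; the busy-spin below is a hypothetical stand-in for the indexing work:

```java
public class TimerWithoutSleep {
    /** Runs work numIte times with a pacing delay between iterations,
     *  accumulating only the time spent inside work itself. */
    static long timeWorkOnlyMs(int numIte, long delayMs, Runnable work)
            throws InterruptedException {
        long workedNanos = 0;
        for (int i = 0; i < numIte; i++) {
            long t0 = System.nanoTime();
            work.run();                        // the part we want to measure
            workedNanos += System.nanoTime() - t0;
            Thread.sleep(delayMs);             // pacing, excluded from the total
        }
        return workedNanos / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        long ms = timeWorkOnlyMs(5, 20, () -> {
            // stand-in for addDocument() + commit(): busy-spin for ~1 ms
            long end = System.nanoTime() + 1_000_000;
            while (System.nanoTime() < end) { }
        });
        System.out.println("work-only time: " + ms + " ms");
    }
}
```

          Even so, the per-call nanoTime bookkeeping adds a little overhead of its own, so dropping the sleep entirely remains the cleaner benchmark.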

          • 2. Re: Infinispan in Lucene : Unable to acquire lock with modified LuceneDemo.jar
            Nicolas Bittner Newbie

            Thanks for your reply,

             

            Indeed, this test works with 100 ms between commits, but not below 30 ms. If you want to run my benchmark, type "6" in the menu, followed by the values 100, 1000, and 10 respectively.


            I've tried to reproduce this error, but I could only get it when forcing the "benchmark" to write concurrently on multiple nodes. Is that how you were testing it?

            Consider that while it's safe to run searches in parallel on multiple nodes, you still have to make sure that only a single application opens an IndexWriter; this is a Lucene limitation: multiple concurrent readers, but a single writer on the same index. There are many patterns around this; we can discuss some if that's what you're looking for. A lock is put in place to make sure you don't write from multiple nodes at the same time, but as your benchmark keeps the IndexWriter open for a long time, the lock that other nodes are attempting to acquire times out.

             

            I was writing only from one node (one IndexWriter opened) while the other two were sitting in the test menu (neither writing to nor reading the index).


            I've created a similar test which constantly writes to the shared index from one node while other nodes are added and removed dynamically as they all perform search operations. You can find this code in the Hibernate Search project, as we're using this Infinispan Lucene Directory to share the index across Hibernate Search nodes:

            https://github.com/hibernate/hibernate-search/blob/master/hibernate-search-infinispan/src/test/java/org/hibernate/search/infinispan/LiveRunningTest.java

             

            I will try it, but I wanted a very simple benchmark using only Infinispan and Lucene.


            About your performance test, to get some high throughput you would need to set some IndexWriter options and make sure you add multiple documents for each commit (adding a document is cheap, committing is not); with an average laptop you should be able to fly around 10,000 documents per second (as order of magnitude, YMMV depending on actual parameters and hardware), assuming they are as simple as your benchmark is using.


            Why couldn't I commit on each document, given that it's slower than committing multiple documents at once? Is there any pool or queue to buffer commit operations? Which setting in IndexWriterConfig should I modify to run my benchmark successfully? The point of one commit per document is to be able to see modifications in the index in real time.


            Also, your test uses Thread.sleep() inside the timed section; what's the idea of this sleep? Sleeping is not a precise operation at all, so it will skew your results even if you subtract the nominal sleep time; I'd suggest removing it.

             

            Yes, indeed the timing result doesn't matter; the sleep is there to delay the commits. Maybe there is another way to do that correctly.

             

            Best regards