21 Replies. Latest reply on Oct 13, 2010 5:39 PM by Franck Garcia

    copying Lucene FSDirectory to InfinispanDirectory issues

    Franck Garcia Newbie

      I'm in the process of establishing a proof of concept that aims to replace my current Lucene FSDirectory with the InfinispanDirectory implementation.

       

      My current index holds around 7 million documents and is 2 GB on the file system.

       

      I first wanted to try with an index snapshot of around 850,000 documents (200 MB on disk).

       

      To accomplish that, my first step is to dump my current Lucene index into Infinispan, with the data in the grid backed by
      a JDBC store (XML config at the end of this post).

       

      I configured Infinispan for data distribution across the grid, but at this stage I intend to use a single node.

       

      So I created a simple standalone Java app that uses the Lucene Directory.copy API method with the FSDirectory as the source and the InfinispanDirectory
      as the target. I set the heap size to 2 GB (-Xmx2048m).
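      In case it helps, this is roughly what my standalone app does (a minimal sketch; the config file name and index path are placeholders):

```java
import java.io.File;

import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.infinispan.Cache;
import org.infinispan.lucene.InfinispanDirectory;
import org.infinispan.manager.DefaultCacheManager;

public class LuceneDirectoryDump {
    public static void main(String[] args) throws Exception {
        // start a CacheManager from the XML configuration at the end of this post
        DefaultCacheManager manager = new DefaultCacheManager("infinispan-config.xml");
        Cache<Object, Object> cache = manager.getCache("luceneIndex");
        try {
            Directory source = FSDirectory.open(new File("/path/to/fs/index"));
            Directory target = new InfinispanDirectory(cache, "luceneIndex");
            // copy every file of the source index into the grid,
            // closing the source directory when done
            Directory.copy(source, target, true);
        } finally {
            cache.stop();
            manager.stop();
        }
    }
}
```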

       

      1) Right after the copy I stop my cache using the cache.stop() method (is this the right way to shut down a grid?)
      and I expect the remaining in-memory data to be flushed to my DB store.
      The C3P0 connection pool does not seem happy about this and issues a WARN message:

       

      2010-07-07 10:29:58,364 WARN  (com.mchange.v2.resourcepool.BasicResourcePool)[CoalescedAsyncStore-2:] 
      com.mchange.v2.resourcepool.BasicResourcePool@150f0a7 
      -- an attempt to checkout a resource was interrupted, and the pool is still live: some other thread must have either interrupted the Thread attempting checkout!
      java.lang.InterruptedException
          at java.lang.Object.wait(Native Method)
          at com.mchange.v2.resourcepool.BasicResourcePool.awaitAvailable(BasicResourcePool.java:1315)
          at com.mchange.v2.resourcepool.BasicResourcePool.prelimCheckoutResource(BasicResourcePool.java:557)
          at com.mchange.v2.resourcepool.BasicResourcePool.checkoutResource(BasicResourcePool.java:477)
          at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:525)
          at com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource.getConnection(AbstractPoolBackedDataSource.java:128)
          at org.infinispan.loaders.jdbc.connectionfactory.PooledConnectionFactory.getConnection(PooledConnectionFactory.java:102)
          at org.infinispan.loaders.jdbc.binary.JdbcBinaryCacheStore.loadBucket(JdbcBinaryCacheStore.java:213)
          at org.infinispan.loaders.bucket.BucketBasedCacheStore.storeLockSafe(BucketBasedCacheStore.java:59)
          at org.infinispan.loaders.LockSupportCacheStore.store(LockSupportCacheStore.java:147)
          at org.infinispan.loaders.decorators.AbstractDelegatingStore.store(AbstractDelegatingStore.java:46)
          at org.infinispan.loaders.decorators.AsyncStore.applyModificationsSync(AsyncStore.java:180)
          at org.infinispan.loaders.decorators.AsyncStore$AsyncProcessor.put(AsyncStore.java:386)
          at org.infinispan.loaders.decorators.AsyncStore$AsyncProcessor.run0(AsyncStore.java:370)
          at org.infinispan.loaders.decorators.AsyncStore$AsyncProcessor.run(AsyncStore.java:312)
          at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
          at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
          at java.util.concurrent.FutureTask.run(FutureTask.java:138)
          at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
          at java.lang.Thread.run(Thread.java:619)
      

       


      The main thread then waits forever (JGroups communication?). After a while I have to kill the process to stop the JVM.

       

      Nevertheless I did the reverse operation (copying the just-created InfinispanDirectory, backed by the MySQL DB, into another brand-new FSDirectory)
      to verify that no documents were missing (checked with Luke).
      Everything is there (this time cache.stop() does not generate any errors and the JVM exits properly from my main thread with System.exit;
      the JGroups transport also logs a clean disconnection).

       

      2) I then run the exact same process with the entire Lucene index (7 million documents) and get an OutOfMemoryError:

      Exception in thread "luceneIndex-JdbcBinaryCacheStore-0" java.lang.OutOfMemoryError: Java heap space
          at com.mysql.jdbc.Buffer.getBytes(Buffer.java:124)
          at com.mysql.jdbc.Buffer.readLenByteArray(Buffer.java:282)
          at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:947)
          at com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:293)
          at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:1239)
          at com.mysql.jdbc.Connection.execSQL(Connection.java:2051)
          at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:1496)
          at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeQuery(NewProxyPreparedStatement.java:76)
          at org.infinispan.loaders.jdbc.binary.JdbcBinaryCacheStore.purgeInternal(JdbcBinaryCacheStore.java:280)
          at org.infinispan.loaders.AbstractCacheStore$2.run(AbstractCacheStore.java:84)
          at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
          at java.lang.Thread.run(Thread.java:619)
      2010-07-07 10:56:41,807 ERROR (com.fedextnc.hscode.index.Main)[main:] Error caught in main method 
      java.lang.OutOfMemoryError: Java heap space
          at org.infinispan.lucene.InfinispanIndexIO$InfinispanIndexOutput.newChunk(InfinispanIndexIO.java:217)
          at org.infinispan.lucene.InfinispanIndexIO$InfinispanIndexOutput.writeBytes(InfinispanIndexIO.java:240)
          at org.apache.lucene.store.IndexOutput.writeBytes(IndexOutput.java:43)
          at org.apache.lucene.store.Directory.copy(Directory.java:197)
          at com.fedextnc.hscode.index.tools.LuceneDirectoryCopy.copy(LuceneDirectoryCopy.java:34)
          at com.fedextnc.hscode.index.Main.main(Main.java:100)
      
      

      I understood (maybe wrongly) that Infinispan manages to avoid running out of memory by relying on its eviction strategy.
      I use -1 as the maxEntries value for maximum performance; I also tried to specify a maxEntries limit, without success.
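      The bounded variant I tried looked roughly like this (the maxEntries value here is just an illustrative guess, not a tuned number):

```xml
<!-- keep at most 5000 chunk entries on the heap; evicted entries
     are still available from the JDBC cache store -->
<eviction wakeUpInterval="5000" maxEntries="5000" strategy="LRU" />
```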

       

      Can someone tell me what is wrong with my approach?

       

      Env:

      Linux Ubuntu, 32-bit
      Java 1.6.0_18 HotSpot
      Infinispan 4.1.0.BETA2

       

       

      Here is my XML configuration:

      <?xml version="1.0" encoding="UTF-8"?>
      <infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="urn:infinispan:config:4.0 http://www.infinispan.org/schemas/infinispan-config-4.1.xsd"
          xmlns="urn:infinispan:config:4.0">
      
          <global>
              <transport clusterName="lucene-cluster"
                  transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport" />
          </global>
      
          <namedCache name="luceneIndex">
              <loaders passivation="false" shared="true" preload="true">
                  <loader
                      fetchPersistentState="false" ignoreModifications="false"
                      purgeOnStartup="false">
                      <properties>
                          <property name="bucketTableNamePrefix" value="xx_finder" />
                          <property name="idColumnName" value="ID_COLUMN" />
                          <property name="dataColumnName" value="DATA_COLUMN" />
                          <property name="timestampColumnName" value="TIMESTAMP_COLUMN" />
                          <property name="timestampColumnType" value="BIGINT" />
                          <property name="connectionFactoryClass"
                              value="org.infinispan.loaders.jdbc.connectionfactory.PooledConnectionFactory" />
                          <property name="connectionUrl" value="jdbc:mysql:///infinispan" />
                          <property name="userName" value="infinispan" />
                          <property name="driverClass" value="com.mysql.jdbc.Driver" />
                          <property name="idColumnType" value="VARCHAR(256)" />
                          <property name="dataColumnType" value="BLOB" />
                          <property name="dropTableOnExit" value="false" />
                          <property name="createTableOnStart" value="true" />
                      </properties>
                      <async enabled="true" flushLockTimeout="15000" threadPoolSize="3" />                
                  </loader>
              </loaders>
      
              <eviction wakeUpInterval="5000" maxEntries="-1" strategy="UNORDERED" />
      
              <clustering mode="distribution">
                  <l1 enabled="true" lifespan="600000" />
                  <hash numOwners="2" />
                  <sync />
              </clustering>
              
              <invocationBatching enabled="true" />
              <transaction syncCommitPhase="true" syncRollbackPhase="true"
                  transactionManagerLookupClass="org.infinispan.transaction.lookup.JBossStandaloneJTAManagerLookup"
                  useEagerLocking="true" />
          </namedCache>
      </infinispan>
      
        • 1. Re: copying Lucene FSDirectory to InfinispanDirectory issues
          Sanne Grinovero Master

          Hello. About stopping the cache: do you also stop the CacheManager? If not, you should register a shutdown hook in the global section.
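          Something like this (from memory, per the 4.1 configuration schema, so double-check the element name):

```xml
<global>
    <transport clusterName="lucene-cluster"
        transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport" />
    <!-- register a JVM shutdown hook so the CacheManager is always stopped cleanly -->
    <shutdown hookBehavior="DEFAULT" />
</global>
```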

          Also, your configuration is using an async store; that's great for performance, but before shutting down you should make sure that the store has finished storing all data. AFAIK it should be enough to stop the CacheManager, which should block until all data is stored, but I'll have to check with the experts.

           

          Did you set the SerialMergeScheduler on the IndexWriter, as warned on the wiki? The default merge scheduler spawns secondary threads; it shouldn't be the cause of the shutdown problem assuming you closed the IndexWriter, but just check that, since as you pointed out you have no problems when only reading.
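          A sketch of that setting, assuming the Lucene 2.9/3.0 API of that era:

```java
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.SerialMergeScheduler;

public class SerialMergeSetup {
    /** Force merges to run on the calling thread instead of background merge threads. */
    static void useSerialMerges(IndexWriter writer) {
        writer.setMergeScheduler(new SerialMergeScheduler());
    }
}
```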

           

          About the OOM: did you optimize the index?

          What chunk size did you select?

          Keep in mind that a 2 GB index won't fit in a 2 GB heap, as memory is needed for other purposes too and there's some overhead; so you're right, you should be tuning the maxEntries setting.

           

          Which version are you using? Make sure to try out 4.1.0.CR1, released yesterday, as it includes many improvements relevant to Lucene (and more are coming for CR2). Also, transactions are no longer needed, while it's mandatory to enable batching; sorry, I'll update the wiki later today.

           

          thanks for all the feedback, keep it going

          • 2. Re: copying Lucene FSDirectory to InfinispanDirectory issues
            Manik Surtani Master

            A few questions -

             

            • Why JDBC as a CacheStore?  For storing byte-array chunks (fragments of an index) this may be less than efficient, given that you'd frequently need several contiguous chunks.  Perhaps a FileCacheStore or BdbjeCacheStore would be more efficient?
            • You mentioned the main thread blocking due to JGroups communication.  Do you have logs or a thread dump to demonstrate this?  Even though you have clustering enabled, with only one server in the cluster most clustering codepaths are bypassed.
            • You definitely want to set a maxEntries limit in your eviction configuration otherwise your in-memory cache can grow indefinitely and you will OOM.  Eviction is pretty efficient in 4.1.x and has minimal overhead over a no-eviction configuration thanks to bounded internal containers.
            • 3. Re: copying Lucene FSDirectory to InfinispanDirectory issues
              Franck Garcia Newbie

              Thank you guys for the responses. I'll definitely try CR1 if there are changes for Lucene.

              1) Manik, I don't use FileCacheStore because the wiki mentions it should not be used in production at all.

              I might try BDBJE if I have performance issues at query time.

              2) I've just rerun the step that blocks the JVM and, as suggested, I attach a JVM thread dump in case it helps.

              3) I don't know (yet) what a bounded internal container is, but I trust you on that one.

              I'll do my homework and let you know.

              • 4. Re: copying Lucene FSDirectory to InfinispanDirectory issues
                Sanne Grinovero Master

                hi Franck, did you try the latest version?

                I did a lot of tests on the Infinispan Directory last month, especially using an async store on a database, and fixed some issues; they're all part of Infinispan 4.1.0.FINAL, which is working great here.

                Also, in some of the CRs a new feature was added to improve performance when using a JDBC store: it's now possible to use org.infinispan.loaders.jdbc.stringbased.JdbcStringBasedCacheStore, as a special-purpose key2StringMapperClass is now included in the lucene-directory module:

                 

                 

                <namedCache name="ScarletIndexes">
                      <clustering mode="replication">
                         <stateRetrieval fetchInMemoryState="true" />
                         <async useReplQueue="true" replQueueInterval="300" asyncMarshalling="false" />
                      </clustering>
                      <invocationBatching enabled="true" />
                      <jmxStatistics enabled="true" />
                      <loaders passivation="false" shared="true" preload="true">
                         <loader class="org.infinispan.loaders.jdbc.stringbased.JdbcStringBasedCacheStore" fetchPersistentState="true" ignoreModifications="false" purgeOnStartup="false">
                
                            <!-- See the documentation for more configuration examples and flags. -->
                            <properties>
                               <property name="key2StringMapperClass" value="org.infinispan.lucene.LuceneKey2StringMapper" />
                               <property name="createTableOnStart" value="true" />
                
                               <!-- Settings for MySQL: -->
                               <property name="datasourceJndiLocation" value="java:comp/env/jdbc/JiraDS" />
                               <property name="connectionFactoryClass" value="org.infinispan.loaders.jdbc.connectionfactory.ManagedConnectionFactory" />
                               <property name="dataColumnType" value="BLOB" />
                
                               <property name="idColumnType" value="VARCHAR(256)" />
                               <property name="idColumnName" value="idCol" />
                               <property name="dataColumnName" value="dataCol" />
                               <property name="stringsTableNamePrefix" value="SCARLET" />
                
                               <property name="timestampColumnName" value="timestampCol" />
                               <property name="timestampColumnType" value="BIGINT" />
                            </properties>
                            <async enabled="true" flushLockTimeout="25000" shutdownTimeout="7200" threadPoolSize="5" />
                         </loader>
                      </loaders>
                      <eviction maxEntries="-1" strategy="NONE" />
                      <expiration maxIdle="-1" />
                   </namedCache>
                
                • 5. Re: copying Lucene FSDirectory to InfinispanDirectory issues
                  Franck Garcia Newbie

                  Sanne, thanks for the notification. I will definitely give it a try; it's on my TODO list. I was out for a while and work has accumulated... I'll let you know how it goes. Rgds,

                  • 6. Re: copying Lucene FSDirectory to InfinispanDirectory issues
                    Franck Garcia Newbie

                    I retried copying the Lucene index snapshot to Infinispan with version 4.1.0.FINAL (keeping the exact same configuration I had in 4.1.0.BETA2), backed by a MySQL table managed asynchronously.

                     

                    I tried this step several times: no error on the first try, then an exception on the second (when stopping the cache manager):

                     

                    2010-09-15 17:25:27,287 ERROR (org.infinispan.loaders.jdbc.DataManipulationHelper)[luceneIndex-JdbcBinaryCacheStore-0:] Failed clearing JdbcBinaryCacheStore
                    java.sql.SQLException: Operation not allowed after ResultSet closed
                        at com.mysql.jdbc.ResultSet.checkClosed(ResultSet.java:3604)
                        at com.mysql.jdbc.ResultSet.next(ResultSet.java:2469)
                        at com.mchange.v2.c3p0.impl.NewProxyResultSet.next(NewProxyResultSet.java:2859)
                        at org.infinispan.loaders.jdbc.binary.JdbcBinaryCacheStore.purgeInternal(JdbcBinaryCacheStore.java:281)
                        at org.infinispan.loaders.AbstractCacheStore$2.run(AbstractCacheStore.java:84)
                        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
                        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
                        at java.lang.Thread.run(Thread.java:619)
                    2010-09-15 17:25:27,295 ERROR (org.infinispan.loaders.AbstractCacheStore)[luceneIndex-JdbcBinaryCacheStore-0:] Problems encountered while purging expired
                    org.infinispan.loaders.CacheLoaderException: Failed clearing JdbcBinaryCacheStore
                        at org.infinispan.loaders.jdbc.DataManipulationHelper.logAndThrow(DataManipulationHelper.java:249)
                        at org.infinispan.loaders.jdbc.binary.JdbcBinaryCacheStore.purgeInternal(JdbcBinaryCacheStore.java:298)
                        at org.infinispan.loaders.AbstractCacheStore$2.run(AbstractCacheStore.java:84)
                        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
                        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
                        at java.lang.Thread.run(Thread.java:619)
                    Caused by: java.sql.SQLException: Operation not allowed after ResultSet closed
                        at com.mysql.jdbc.ResultSet.checkClosed(ResultSet.java:3604)
                        at com.mysql.jdbc.ResultSet.next(ResultSet.java:2469)
                        at com.mchange.v2.c3p0.impl.NewProxyResultSet.next(NewProxyResultSet.java:2859)
                        at org.infinispan.loaders.jdbc.binary.JdbcBinaryCacheStore.purgeInternal(JdbcBinaryCacheStore.java:281)
                        ... 4 more

                     

                    And another one on the third try:

                    2010-09-15 17:31:41,285 ERROR (org.infinispan.loaders.jdbc.DataManipulationHelper)[CoalescedAsyncStore-0:] sql failure while inserting bucket: Bucket{entries={_9.cfs|M|fxe_ca=ImmortalCacheEntry{cacheValue=ImmortalCacheValue{value=FileMetadata{lastModified=1284586301284, size=212992}}} ImmortalCacheEntry{key=_9.cfs|M|fxe_ca}}, bucketName='-1169968817'}
                    java.sql.SQLException: null,  message from server: "Duplicate entry '-1169968817' for key 'PRIMARY'"
                        at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:1876)
                        at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1098)
                        at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:1192)
                        at com.mysql.jdbc.Connection.execSQL(Connection.java:2051)
                        at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:1680)
                        at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:1527)
                        at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeUpdate(NewProxyPreparedStatement.java:105)
                        at org.infinispan.loaders.jdbc.binary.JdbcBinaryCacheStore.insertBucket(JdbcBinaryCacheStore.java:166)
                        at org.infinispan.loaders.bucket.BucketBasedCacheStore.storeLockSafe(BucketBasedCacheStore.java:67)
                        at org.infinispan.loaders.LockSupportCacheStore.store(LockSupportCacheStore.java:151)
                        at org.infinispan.loaders.decorators.AbstractDelegatingStore.store(AbstractDelegatingStore.java:46)
                        at org.infinispan.loaders.decorators.AsyncStore.applyModificationsSync(AsyncStore.java:204)
                        at org.infinispan.loaders.decorators.AsyncStore$AsyncProcessor.put(AsyncStore.java:360)
                        at org.infinispan.loaders.decorators.AsyncStore$AsyncProcessor.innerRun(AsyncStore.java:344)
                        at org.infinispan.loaders.decorators.AsyncStore$AsyncProcessor.run(AsyncStore.java:269)
                        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
                        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
                        at java.lang.Thread.run(Thread.java:619)

                     

                    The positive note is that the number of rows in my table is consistent between tries (10505)...

                     

                    After that I tried the reverse operation (dumping the Infinispan directory back to a Lucene file system to check nothing is missing)... but the process froze on startup (when preloading from the async store). I attach a dump of the locked JVM.

                    (Note that the only time I was successful with these two operations, without any errors, was with version 4.1.0.BETA2 and this patch from this discussion: https://jira.jboss.org/browse/ISPN-545.)

                    Thanks,

                    • 7. Re: copying Lucene FSDirectory to InfinispanDirectory issues
                      Sanne Grinovero Master

                      Hi Franck,

                      thanks for reporting; about the errors:

                       

                      1) The first one doesn't seem important: it failed to perform a cleanup of expired entries. It's bad that it shows a stack trace, but not really something to worry about, as the Lucene directory doesn't use expiring entries. I assume this is a bug in the order in which services are stopped, but in any case no data would be lost, and even if you had expired entries they would be cleaned up in the future.

                       

                      2) The second one is a bad thing: apparently this store is in a race condition with itself. Is this a multi-node setup, and in that case did you enable shared="true"?

                      As it's a shared database, enabling shared is mandatory, or every node will try to store the same values, which could be the reason for this duplicate key.

                       

                      3) About the thread dump: there are at least three threads alive waiting for a connection to the database, so it was still busy loading all state from the database. Maybe your thread pool had exhausted all connections, or the size of the pool was too small, or the database is refusing more connections?

                      I've been testing it with a huge database; of course my application appears to freeze when loading initially, as I've configured it to preload all the data.

                       

                      Generally speaking, with version 4.1.0.FINAL it's now possible to use the

                      JdbcStringBasedCacheStore

                      instead of the binary one; this is recommended. Could you please test with the configuration I posted above? That one uses the new LuceneKey2StringMapper.

                      • 8. Re: copying Lucene FSDirectory to InfinispanDirectory issues
                        Franck Garcia Newbie

                        Thanks Sanne, I used your configuration and it's working fine (no errors); it is also very fast (I removed the transaction section). The problem with the connection pooling was due to the fact that the MySQL DB was updated with my Linux distro recently: the default storage engine had been reset to MyISAM, so I set it back to InnoDB and the problem disappeared. I now plan to dump my whole index to Infinispan and, if I don't have any issues, then test the search phase in my distributed grid... I'll keep you posted. Thanks again,
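                        For anyone hitting the same: the engine can be checked and fixed per table (the table name here follows the prefix_cacheName pattern from my earlier config and may differ in yours):

```sql
-- check which storage engine the Infinispan store table uses
SHOW TABLE STATUS LIKE 'xx_finder_luceneIndex';
-- convert it to InnoDB if it was created as MyISAM
ALTER TABLE xx_finder_luceneIndex ENGINE=InnoDB;
```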

                        • 9. Re: copying Lucene FSDirectory to InfinispanDirectory issues
                          Franck Garcia Newbie

                          I have some problems when copying my current 2 GB Lucene FS index to Infinispan.

                           

                          I'm using the org.apache.lucene.store.Directory.copy(Directory src, Directory dest, boolean closeDirSrc) method.

                           

                          Here is the configuration I'm using:

                           

                          <?xml version="1.0" encoding="UTF-8"?>
                          <infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                              xsi:schemaLocation="urn:infinispan:config:4.0 http://www.infinispan.org/schemas/infinispan-config-4.1.xsd"
                              xmlns="urn:infinispan:config:4.0">
                          
                              <global>
                                  <transport clusterName="lucene-cluster"
                                      transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport" />
                              </global>
                          
                              <namedCache name="luceneIndex">
                                  <loaders passivation="false" shared="true" preload="true">
                                      <loader class="org.infinispan.loaders.jdbc.stringbased.JdbcStringBasedCacheStore"
                                          fetchPersistentState="false" ignoreModifications="false"
                                          purgeOnStartup="false">
                                          <properties>
                                              <property name="stringsTableNamePrefix" value="POC_CA_IDX" />
                                              <property name="key2StringMapperClass" value="org.infinispan.lucene.LuceneKey2StringMapper" />
                                              <property name="idColumnName" value="ID_COLUMN" />
                                              <property name="dataColumnName" value="DATA_COLUMN" />
                                              <property name="timestampColumnName" value="TIMESTAMP_COLUMN" />
                                              <property name="timestampColumnType" value="BIGINT" />
                                              <property name="connectionFactoryClass"
                                                  value="org.infinispan.loaders.jdbc.connectionfactory.PooledConnectionFactory" />
                                              <property name="connectionUrl" value="jdbc:mysql:///infinispan" />
                                              <property name="userName" value="infinispan" />
                                              <property name="driverClass" value="com.mysql.jdbc.Driver" />
                                              <property name="idColumnType" value="VARCHAR(256)" />
                                              <property name="dataColumnType" value="BLOB" />
                                              <property name="dropTableOnExit" value="false" />
                                              <property name="createTableOnStart" value="true" />
                                          </properties>
                                          <async enabled="true" flushLockTimeout="15000" threadPoolSize="10" />                
                                      </loader>
                                  </loaders>
                          
                                  <eviction wakeUpInterval="2000" maxEntries="1000" strategy="UNORDERED" />
                          
                                  <clustering mode="distribution">
                                      <l1 enabled="true" lifespan="600000" />
                                      <hash numOwners="2" />
                                      <sync />
                                  </clustering>
                                  
                                  <invocationBatching enabled="true" />
                              </namedCache>
                          </infinispan>
                          

                           

                          After a while the process raises exceptions (but still goes on):

                           

                          2010-09-17 09:45:21,779 ERROR (org.infinispan.loaders.jdbc.stringbased.JdbcStringBasedCacheStore)[CoalescedAsyncStore-2:] Error while storing string keys to database
                          java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction
                              at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1075)
                              at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3566)
                              at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3498)
                              at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1959)
                              at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2113)
                              at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2568)
                              at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2113)
                              at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2409)
                              at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2327)
                              at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2312)
                              at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeUpdate(NewProxyPreparedStatement.java:105)
                              at org.infinispan.loaders.jdbc.stringbased.JdbcStringBasedCacheStore.storeLockSafe(JdbcStringBasedCacheStore.java:205)
                              at org.infinispan.loaders.LockSupportCacheStore.store(LockSupportCacheStore.java:151)
                              at org.infinispan.loaders.decorators.AbstractDelegatingStore.store(AbstractDelegatingStore.java:46)
                              at org.infinispan.loaders.decorators.AsyncStore.applyModificationsSync(AsyncStore.java:204)
                              at org.infinispan.loaders.decorators.AsyncStore$AsyncProcessor.put(AsyncStore.java:360)
                              at org.infinispan.loaders.decorators.AsyncStore$AsyncProcessor.innerRun(AsyncStore.java:344)
                              at org.infinispan.loaders.decorators.AsyncStore$AsyncProcessor.run(AsyncStore.java:269)
                              at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
                              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
                              at java.lang.Thread.run(Thread.java:619)
                          

                           

                          I get dozens of errors like this before my main thread finally shuts down the cache manager and exits properly.

                          The issue is that I have 125508 rows in my database, but some chunk ids (e.g. '_at.cfs|130000|poc_ca') exceed this number, which makes me think that some chunks were not dumped into the database...

                          • 10. Re: copying Lucene FSDirectory to InfinispanDirectory issues
                            Sanne Grinovero Master

                            Hi Franck, this means you have more than 130,000 chunks in a single segment!

                            That's a very high number; I deduce from it that you're using the default chunk size.

                             

                            Please set a bigger buffer size for each chunk, so it will be more manageable: http://docs.jboss.org/infinispan/4.1/apidocs/org/infinispan/lucene/InfinispanDirectory.html#InfinispanDirectory%28org.infinispan.Cache,%20java.lang.String,%20int%29

                            The default is very small so that it always works; in your case it seems you're overwhelming the database with too high a load, so you could help it by reducing the number of keys.

                             

                            Query performance is also best when the chunk size is bigger: set it as big as possible, but low enough that the index is spread around the different nodes and you avoid out-of-memory issues. I've had good results with sizes between 10MB and 100MB.
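[Editor's note] To make the numbers concrete, here is a back-of-the-envelope sketch (not from the thread; `ChunkMath` is a hypothetical helper, and the 16 KB figure is an assumption about the default buffer size): a 2 GiB segment at 16 KB per chunk splits into 131072 keys, consistent with the chunk ids above 130,000 observed earlier, while at 30 MB per chunk the same file needs only about 69 keys. The chunk size itself is passed as the third argument of the `InfinispanDirectory(Cache, String, int)` constructor linked above.

```java
// Sketch: how many chunk keys a file occupies for a given chunk size.
public class ChunkMath {

    // Chunks needed to hold fileSize bytes when each chunk holds chunkSize bytes.
    static long chunkCount(long fileSize, long chunkSize) {
        return (fileSize + chunkSize - 1) / chunkSize; // ceiling division
    }

    public static void main(String[] args) {
        long twoGiB = 2L * 1024 * 1024 * 1024;
        System.out.println(chunkCount(twoGiB, 16 * 1024));         // 131072 keys at 16 KB
        System.out.println(chunkCount(twoGiB, 30L * 1024 * 1024)); // 69 keys at 30 MB
    }
}
```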

                             

                            Keep in mind that MySQL won't accept such big packet sizes; if you didn't reconfigure the default MySQL settings you'll get another warning. Sorry, I don't remember the setting right now, but the exception is self-explanatory and will mention exactly what you need to set.

                            • 11. Re: copying Lucene FSDirectory to InfinispanDirectory issues
                              Franck Garcia Newbie

                              1 - As suggested, I increased the chunk size to 30MB [for the record, MySQL --> "max_allowed_packet=64M", and the DATA_COLUMN type becomes LONGBLOB].
                              But I still get the "java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction" exception.
                              I have the following 3 Lucene files that I try to dump into Infinispan:
                              - segments_8    --> 59 bytes
                              - segments.gen  --> 20 bytes
                              - _at.cfs       -->  2 GB
                              From what I understood from the source code (org.apache.lucene.store.IndexOutput), each data chunk (30MB) plus an update of the file metadata (key="_at.cfs|M|myLucene") is put into Infinispan as a single batch (i.e. transaction).

                              I suspect there is contention (a deadlock?) on the file metadata when several batches are flushed to the db at the same time. I also tried increasing the MySQL innodb_lock_wait_timeout to 120 seconds but still get the exception. I ran the simulation several times, and in the end I consistently have 74 records in my db (68 of them data, which seems to match 68*30MB ≈ 2GB).

                               

                              Note that when I run the simulation with the async store disabled I get no exceptions at all (however, it took 2.5 times longer).

                               

                              2 - When I do the opposite operation (dumping the InfinispanDirectory to an FSDirectory), I run into an OOM exception (I have a single node).
                              I really don't understand how the eviction feature works.

                                  <eviction wakeUpInterval="1000" maxEntries="10" strategy="UNORDERED" />

                              I also tried setting the preload attribute to "false" on the loader component, without success (if preload is true, is eviction disabled?).

                              • 12. Re: copying Lucene FSDirectory to InfinispanDirectory issues
                                Sanne Grinovero Master

                                1 - I'm using MySQL too, with InnoDB, and have no such issues. You're using the async store, right? In that case Infinispan will acquire a lock on the container for each key it intends to write to the store, so no node should be writing to the same database key. Also, each change is a different operation; the async persister uses neither batches nor transactions, so I doubt it could deadlock. I think this is a MySQL configuration issue.

                                Also, when several batches of updates are sent to the store only the latest version of the file metadata is sent, so you shouldn't have high contention on it; some contention is still possible if the database is extremely slow. Could you try configuring a single thread in the async store's threadPoolSize? Having 10 threads there might overrun your database's write capabilities.
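[Editor's note] A sketch of what the single-thread suggestion might look like as a loader configuration fragment (only the store class name comes from the stack trace earlier in the thread; the surrounding element and attribute layout is assumed, and the connection/table properties from the original config are elided):

```xml
<loaders>
   <loader class="org.infinispan.loaders.jdbc.stringbased.JdbcStringBasedCacheStore">
      <!-- ... connection and table properties as in the original config ... -->
      <async enabled="true" threadPoolSize="1" />
   </loader>
</loaders>
```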

                                 

                                Also, while Infinispan protects a single key from being written by more than one thread, as far as I remember only InnoDB supports row-locking in MySQL (what are you using?); if you're not using row-locking, this might result in timeouts while writing to a different key.

                                 

                                2 - We should ask some of the eviction experts. My guess is that the wakeUpInterval could need some tuning, so you might occasionally have more than 10 entries in memory, which means up to 300MB plus a bit of overhead. I don't fully understand it; even with quite a delay it should fit in your memory. You mentioned a 2GB heap in the first post; did you check what your effective heap size is? Today on the mailing list it was mentioned that Infinispan seems not to evict entries; it's not confirmed yet, but maybe this is related.
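[Editor's note] A quick way to answer the effective-heap question (a minimal sketch of my own, not from the thread): the JVM reports the maximum heap it will actually use, which can differ from the -Xmx value you think you passed, e.g. if the flag was mistyped or overridden.

```java
// Sketch: print the effective maximum heap size, to verify that the
// -Xmx2048m flag really reached the JVM.
public class HeapCheck {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("max heap: %d MB%n", maxBytes / (1024 * 1024));
    }
}
```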

                                • 13. Re: copying Lucene FSDirectory to InfinispanDirectory issues
                                  Franck Garcia Newbie

                                  I finally had some time available to work a bit on this.

                                   

                                  1 - (dumping FSDirectory to InfinispanDirectory) I used the InnoDB engine, which provides row-locking support, but for some reason I noticed that having a LONGBLOB field kills the performance of my db. I switched to DB2 (iSeries) and this problem disappeared.

                                   

                                  2 - (OOM Exception when copying InfinispanDirectory back to FSDirectory).

                                  The OOM occurs when the close() method is called on the InfinispanIndexInput instance. I suspected the eviction manager to be the problem, so I attached a listener for entry eviction, and it's working well. Actually, it's working so well that it evicts the FileReadLockKey that is put in the cache when my huge segment file is opened, to protect it against concurrent deletes.

                                   

                                  Because this key is no longer in the cache (nor in the store, i.e. Flag.SKIP_CACHE_STORE) at close time, a delete of my entire segment (2GB) is scheduled in a single batch (transaction). So all the chunks are loaded back into memory, but the OOM occurs before the chunks are physically removed from the store. Your thoughts?

                                  • 14. Re: copying Lucene FSDirectory to InfinispanDirectory issues
                                    Sanne Grinovero Master

                                    Hi Franck,

                                    1 - Great; I'm using MEDIUMBLOB, so I should check whether I'm suffering from bad database performance too.

                                     

                                    2 - You might have pointed me in the right direction, thank you for the great analysis. As you see, I'm using .withFlags(Flag.SKIP_REMOTE_LOOKUP).removeAsync(chunkKey) to remove the segments; looking into the RemoveCommand, I'm getting the idea that it might fail to respect SKIP_REMOTE_LOOKUP. I need to create a unit test for this and talk to the others.

                                    While I look into this, could you please try commenting out the startBatch()/endBatch() calls? That should avoid the OOM, if this is indeed the problem.
