1 Reply Latest reply on Oct 22, 2012 3:44 AM by Horia Chiorean

    Open File Leak?

    Brian Wallis Master

      This may be a bug or something wrong with the way my code works. If it is a bug, I'll create a ticket with a test case attached. I'm running against 3.0.0.CR2 and am not overriding any dependency versions (i.e. Infinispan et al. are as specified by the ModeShape POM); CR3 is not yet available in the Maven repositories. The Java version is 1.6.0_37.

       

      If I run the following function

       

      {code}

          /**

           * Create a nt:folder node called testRoot under the root node and then create

           * "parentcount" nt:folder nodes under that node and then create

           * "sibscount" nt:file nodes under each of those folder nodes and create a

           * nt:resource node under each child with a jcr:data binary value of size "binarysize".

           * So this ends up 4 levels deep:

           * <pre>

           * nt:folder(testRoot) -> nt:folder(parent{n}) -> nt:file(child{m}) -> nt:resource(jcr:content)

           * </pre>

           * where

           * <dl>

           * <dt>{n}</dt><dd>goes from 0 to parentcount-1</dd>

           * <dt>{m}</dt><dd>goes from 0 to sibscount-1</dd>

           * </dl>

           * Note: the binary data is randomized so that each one is likely to be unique.

           *

           * @param parentcount number of parent nt:folder nodes to create under testRoot

           * @param sibscount number of child nt:file nodes to create under each parent node

           * @param binarysize size of the binary attribute in each nt:resource node that is created under each nt:file node, can be 0

           * @return a map from each parent node's identifier to the list of its child nodes' identifiers

           */

          public Map<String, List<String>> create(int parentcount,

                                                  int sibscount,

                                                  int binarysize)

          {

              Session                   session = null;

              Map<String, List<String>> nodeIds = new HashMap<String, List<String>>(parentcount);

              byte[]                    bytes   = new byte[binarysize];

       

              Arrays.fill(bytes, (byte) 'x');

       

              Random rand = new Random();

       

              try

              {

                  session = repository.login();

       

                  ValueFactory valueFactory = session.getValueFactory();

       

                  Node testRoot = session.getRootNode().addNode("testRoot", "nt:folder");

       

                  for(int i = 0; i < parentcount; i++)

                  {

                      nodeIds.put(testRoot.addNode("parent" + i, "nt:folder").getIdentifier(), new ArrayList<String>(sibscount));

                  }

                  session.save();

                 

                  for(Map.Entry<String, List<String>> entry : nodeIds.entrySet())

                  {

                      List<String> childrenIDs = entry.getValue();

                      Node         parent      = session.getNodeByIdentifier(entry.getKey());

       

                      for(int c = 0; c < sibscount; c++)

                      {

                          Node child = parent.addNode("child" + c, "nt:file");

                          childrenIDs.add(child.getIdentifier());

       

                          Node subchild = child.addNode("jcr:content", "nt:resource");

                          // randomise the data a little so each binary property is different

                          for(int i = 0; i < Math.min(30, binarysize); i++)

                          {

                              bytes[i] = (byte) rand.nextInt(256);

                          }

       

                          Binary data = valueFactory.createBinary(new ByteArrayInputStream(bytes));

                          subchild.setProperty("jcr:data", data);

                          data.dispose();

       

                          subchild.setProperty("jcr:mimeType", "application/binary" + c);

                          subchild.setProperty("jcr:lastModified", Calendar.getInstance());

                         

                          session.save();

                      }

                      session.save();

                  }

                  session.save();

              }

              catch(LoginException e)

              {

                  throw new RuntimeException("Failed to login to repository", e);

              }

              catch(RepositoryException e)

              {

                  throw new RuntimeException("Error using repository", e);

              }

              finally

              {

                  if(session != null)

                  {

                      session.logout();

                  }

              }

       

              return nodeIds;

          }

      {code}

       

      three times with the parameters (10, 200, 0), then I get an error from Infinispan (configuration shown at the end) whose root cause appears to be too many open files. It looks as if Lucene is not closing files correctly.

       

      {code}

      125 [main] INFO au.com.infomedix.mode_1678.LeakTest  - Start create cycle 0

      672 [main] INFO org.infinispan.factories.GlobalComponentRegistry  - ISPN000128: Infinispan version: Infinispan 'Brahma' 5.1.2.FINAL

      2643 [main] INFO org.hibernate.search.Version  - HSEARCH000034: Hibernate Search 4.1.1.Final

      2664 [main] INFO org.hibernate.annotations.common.Version  - HCANN000001: Hibernate Commons Annotations {4.0.1.Final}

      2732 [main] WARN org.hibernate.search.store.impl.DirectoryProviderHelper  - HSEARCH000041: Index directory not found, creating: '/users/bwallis/InfoMedix/JBoss/ModeShape/testworkspace/1678-Test/DataRepository/indexes'

      2786 [main] WARN org.hibernate.search.store.impl.DirectoryProviderHelper  - HSEARCH000041: Index directory not found, creating: '/users/bwallis/InfoMedix/JBoss/ModeShape/testworkspace/1678-Test/DataRepository/indexes/nodeinfo'

      3020 [main] INFO org.hibernate.search.indexes.serialization.avro.impl.AvroSerializationProvider  - HSEARCH000079: Serialization protocol version 1.0

      100256 [main] INFO au.com.infomedix.mode_1678.LeakTest  - create returned 10 create results

      100256 [main] INFO au.com.infomedix.mode_1678.LeakTest  - Start create cycle 1

      187864 [main] INFO au.com.infomedix.mode_1678.LeakTest  - create returned 10 create results

      187864 [main] INFO au.com.infomedix.mode_1678.LeakTest  - Start create cycle 2

      243310 [Hibernate Search: Index updates queue processor for index nodeinfo-1] ERROR org.hibernate.search.exception.impl.LogErrorHandler  - HSEARCH000058: Exception occurred org.hibernate.search.SearchException: Unable to add to Lucene index: class org.modeshape.jcr.query.lucene.basic.NodeInfo#86cfc447505d648f42bc28-a095-4cee-9b02-94b8f19eb7dd

      Primary Failure:

                Entity org.modeshape.jcr.query.lucene.basic.NodeInfo  Id 86cfc447505d6467bd26cf-919d-42bb-a680-08a47fb1344e  Work Type  org.hibernate.search.backend.AddLuceneWork

      Subsequent failures:

                Entity org.modeshape.jcr.query.lucene.basic.NodeInfo  Id 86cfc447505d6467bd26cf-919d-42bb-a680-08a47fb1344e  Work Type  org.hibernate.search.backend.AddLuceneWork

                Entity org.modeshape.jcr.query.lucene.basic.NodeInfo  Id 86cfc447505d64febda6f0-b222-4c31-9612-09681275c156  Work Type  org.hibernate.search.backend.UpdateLuceneWork

                Entity org.modeshape.jcr.query.lucene.basic.NodeInfo  Id 86cfc447505d648f42bc28-a095-4cee-9b02-94b8f19eb7dd  Work Type  org.hibernate.search.backend.AddLuceneWork

                Entity org.modeshape.jcr.query.lucene.basic.NodeInfo  Id 86cfc447505d64febda6f0-b222-4c31-9612-09681275c156  Work Type  org.hibernate.search.backend.UpdateLuceneWork

                Entity org.modeshape.jcr.query.lucene.basic.NodeInfo  Id 86cfc447505d648f42bc28-a095-4cee-9b02-94b8f19eb7dd  Work Type  org.hibernate.search.backend.AddLuceneWork

       

      org.hibernate.search.SearchException: Unable to add to Lucene index: class org.modeshape.jcr.query.lucene.basic.NodeInfo#86cfc447505d648f42bc28-a095-4cee-9b02-94b8f19eb7dd

                at org.hibernate.search.backend.impl.lucene.works.AddWorkDelegate.performWork(AddWorkDelegate.java:74)

                at org.hibernate.search.backend.impl.lucene.SingleTaskRunnable.run(SingleTaskRunnable.java:48)

                at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)

                at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)

                at java.util.concurrent.FutureTask.run(FutureTask.java:138)

                at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

                at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

                at java.lang.Thread.run(Thread.java:680)

      Caused by: java.io.FileNotFoundException: /Users/bwallis/InfoMedix/JBoss/ModeShape/testworkspace/1678-Test/DataRepository/indexes/nodeinfo/_49j.fdt (Too many open files)

                at java.io.RandomAccessFile.open(Native Method)

                at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)

                at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:441)

                at org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:306)

                at org.apache.lucene.index.FieldsWriter.<init>(FieldsWriter.java:83)

                at org.apache.lucene.index.StoredFieldsWriter.initFieldsWriter(StoredFieldsWriter.java:65)

                at org.apache.lucene.index.StoredFieldsWriter.finishDocument(StoredFieldsWriter.java:108)

                at org.apache.lucene.index.StoredFieldsWriter$PerDoc.finish(StoredFieldsWriter.java:152)

                at org.apache.lucene.index.DocumentsWriter$WaitQueue.writeDocument(DocumentsWriter.java:1404)

                at org.apache.lucene.index.DocumentsWriter$WaitQueue.add(DocumentsWriter.java:1424)

                at org.apache.lucene.index.DocumentsWriter.finishDocument(DocumentsWriter.java:1043)

                at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:772)

                at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:2066)

                at org.hibernate.search.backend.impl.lucene.works.AddWorkDelegate.performWork(AddWorkDelegate.java:70)

                ... 7 more

      243313 [Hibernate Search: Index updates queue processor for index nodeinfo-1] WARN org.hibernate.search.backend.impl.lucene.IndexWriterHolder  - HSEARCH000052: Going to force release of the IndexWriter lock

      243504 [Hibernate Search: Index updates queue processor for index nodeinfo-1] ERROR org.hibernate.search.exception.impl.LogErrorHandler  - HSEARCH000058: HSEARCH000117: IOException on the IndexWriter

      java.io.FileNotFoundException: /Users/bwallis/InfoMedix/JBoss/ModeShape/testworkspace/1678-Test/DataRepository/indexes/nodeinfo/_41n_2.del (Too many open files)

                at java.io.RandomAccessFile.open(Native Method)

                at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)

                at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput$Descriptor.<init>(SimpleFSDirectory.java:70)

                at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.<init>(SimpleFSDirectory.java:97)

                at org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.<init>(NIOFSDirectory.java:92)

                at org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:79)

                at org.apache.lucene.store.FSDirectory.openInput(FSDirectory.java:345)

                at org.apache.lucene.util.BitVector.<init>(BitVector.java:266)

                at org.apache.lucene.index.SegmentReader.loadDeletedDocs(SegmentReader.java:159)

                at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:119)

                at org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:705)

                at org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:680)

                at org.apache.lucene.index.BufferedDeletesStream.applyDeletes(BufferedDeletesStream.java:245)

                at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3651)

                at org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:3417)

                at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3524)

                at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3506)

                at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3490)

                at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.commitIndexWriter(IndexWriterHolder.java:139)

                at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.commitIndexWriter(IndexWriterHolder.java:152)

                at org.hibernate.search.backend.impl.lucene.ExclusiveIndexWorkspaceImpl.afterTransactionApplied(ExclusiveIndexWorkspaceImpl.java:44)

                at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:138)

                at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:67)

                at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)

                at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)

                at java.util.concurrent.FutureTask.run(FutureTask.java:138)

                at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

                at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

                at java.lang.Thread.run(Thread.java:680)

      243525 [Hibernate Search: Index updates queue processor for index nodeinfo-1] ERROR org.hibernate.search.exception.impl.LogErrorHandler  - HSEARCH000058: HSEARCH000117: IOException on the IndexWriter

      java.io.FileNotFoundException: /Users/bwallis/InfoMedix/JBoss/ModeShape/testworkspace/1678-Test/DataRepository/indexes/nodeinfo/_49q.frq (Too many open files)

                at java.io.RandomAccessFile.open(Native Method)

                at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)

                at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:441)

                at org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:306)

                at org.apache.lucene.index.FormatPostingsDocsWriter.<init>(FormatPostingsDocsWriter.java:47)

                at org.apache.lucene.index.FormatPostingsTermsWriter.<init>(FormatPostingsTermsWriter.java:33)

                at org.apache.lucene.index.FormatPostingsFieldsWriter.<init>(FormatPostingsFieldsWriter.java:51)

                at org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)

                at org.apache.lucene.index.TermsHash.flush(TermsHash.java:113)

                at org.apache.lucene.index.DocInverter.flush(DocInverter.java:70)

                at org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:60)

                at org.apache.lucene.index.DocumentsWriter.flush(DocumentsWriter.java:581)

                at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3623)

                at org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:3417)

                at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3524)

                at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3506)

                at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3490)

                at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.commitIndexWriter(IndexWriterHolder.java:139)

                at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.commitIndexWriter(IndexWriterHolder.java:152)

                at org.hibernate.search.backend.impl.lucene.ExclusiveIndexWorkspaceImpl.afterTransactionApplied(ExclusiveIndexWorkspaceImpl.java:44)

                at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:138)

                at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:67)

                at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)

                at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)

                at java.util.concurrent.FutureTask.run(FutureTask.java:138)

                at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

                at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

                at java.lang.Thread.run(Thread.java:680)

      243538 [Hibernate Search: Index updates queue processor for index nodeinfo-1] ERROR org.hibernate.search.exception.impl.LogErrorHandler  - HSEARCH000058: Exception occurred org.hibernate.search.SearchException: Unable to add to Lucene index: class org.modeshape.jcr.query.lucene.basic.NodeInfo#86cfc447505d6438e4d1bc-b95e-4181-9f0e-20db0bd657d5

      Primary Failure:

                Entity org.modeshape.jcr.query.lucene.basic.NodeInfo  Id 86cfc447505d64febda6f0-b222-4c31-9612-09681275c156  Work Type  org.hibernate.search.backend.UpdateLuceneWork

      Subsequent failures:

                Entity org.modeshape.jcr.query.lucene.basic.NodeInfo  Id 86cfc447505d64febda6f0-b222-4c31-9612-09681275c156  Work Type  org.hibernate.search.backend.UpdateLuceneWork

                Entity org.modeshape.jcr.query.lucene.basic.NodeInfo  Id 86cfc447505d64b2493e3f-b583-46c7-8967-18496aa63382  Work Type  org.hibernate.search.backend.AddLuceneWork

                Entity org.modeshape.jcr.query.lucene.basic.NodeInfo  Id 86cfc447505d6438e4d1bc-b95e-4181-9f0e-20db0bd657d5  Work Type  org.hibernate.search.backend.AddLuceneWork

                Entity org.modeshape.jcr.query.lucene.basic.NodeInfo  Id 86cfc447505d64b2493e3f-b583-46c7-8967-18496aa63382  Work Type  org.hibernate.search.backend.AddLuceneWork

                Entity org.modeshape.jcr.query.lucene.basic.NodeInfo  Id 86cfc447505d6438e4d1bc-b95e-4181-9f0e-20db0bd657d5  Work Type  org.hibernate.search.backend.AddLuceneWork

       

      org.hibernate.search.SearchException: Unable to add to Lucene index: class org.modeshape.jcr.query.lucene.basic.NodeInfo#86cfc447505d6438e4d1bc-b95e-4181-9f0e-20db0bd657d5

                at org.hibernate.search.backend.impl.lucene.works.AddWorkDelegate.performWork(AddWorkDelegate.java:74)

                at org.hibernate.search.backend.impl.lucene.SingleTaskRunnable.run(SingleTaskRunnable.java:48)

                at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)

                at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)

                at java.util.concurrent.FutureTask.run(FutureTask.java:138)

                at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

                at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

                at java.lang.Thread.run(Thread.java:680)

      Caused by: java.io.FileNotFoundException: /Users/bwallis/InfoMedix/JBoss/ModeShape/testworkspace/1678-Test/DataRepository/indexes/nodeinfo/_49t.fdt (Too many open files)

                at java.io.RandomAccessFile.open(Native Method)

                at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)

                at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:441)

                at org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:306)

                at org.apache.lucene.index.FieldsWriter.<init>(FieldsWriter.java:83)

                at org.apache.lucene.index.StoredFieldsWriter.initFieldsWriter(StoredFieldsWriter.java:65)

                at org.apache.lucene.index.StoredFieldsWriter.finishDocument(StoredFieldsWriter.java:108)

                at org.apache.lucene.index.StoredFieldsWriter$PerDoc.finish(StoredFieldsWriter.java:152)

                at org.apache.lucene.index.DocumentsWriter$WaitQueue.writeDocument(DocumentsWriter.java:1404)

                at org.apache.lucene.index.DocumentsWriter$WaitQueue.add(DocumentsWriter.java:1424)

                at org.apache.lucene.index.DocumentsWriter.finishDocument(DocumentsWriter.java:1043)

                at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:772)

                at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:2066)

                at org.hibernate.search.backend.impl.lucene.works.AddWorkDelegate.performWork(AddWorkDelegate.java:70)

                ... 7 more

      243540 [Hibernate Search: Index updates queue processor for index nodeinfo-1] WARN org.hibernate.search.backend.impl.lucene.IndexWriterHolder  - HSEARCH000052: Going to force release of the IndexWriter lock

      243539 [DataRepository-FileCacheStore-0] ERROR org.infinispan.loaders.file.FileCacheStore  - ISPN000062: Error while reading from file: /users/bwallis/InfoMedix/JBoss/ModeShape/testworkspace/1678-Test/DataRepository/storage/DataRepository/-1997204480

      java.io.FileNotFoundException: DataRepository/storage/DataRepository/-1997204480 (Too many open files)

                at java.io.FileInputStream.open(Native Method)

                at java.io.FileInputStream.<init>(FileInputStream.java:120)

                at org.infinispan.loaders.file.FileCacheStore.loadBucket(FileCacheStore.java:311)

                at org.infinispan.loaders.file.FileCacheStore.doPurge(FileCacheStore.java:254)

                at org.infinispan.loaders.file.FileCacheStore.purgeInternal(FileCacheStore.java:233)

                at org.infinispan.loaders.AbstractCacheStore$2.run(AbstractCacheStore.java:106)

                at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

                at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

                at java.lang.Thread.run(Thread.java:680)

      243541 [Hibernate Search: Index updates queue processor for index nodeinfo-1] ERROR org.hibernate.search.exception.impl.LogErrorHandler  - HSEARCH000058: HSEARCH000117: IOException on the IndexWriter

      org.apache.lucene.store.LockReleaseFailedException: Cannot forcefully unlock a NativeFSLock which is held by another indexer component: DataRepository/indexes/nodeinfo/write.lock

                at org.apache.lucene.store.NativeFSLock.release(NativeFSLockFactory.java:294)

                at org.apache.lucene.index.IndexWriter.unlock(IndexWriter.java:4654)

                at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.forceLockRelease(IndexWriterHolder.java:187)

                at org.hibernate.search.backend.impl.lucene.ExclusiveIndexWorkspaceImpl.afterTransactionApplied(ExclusiveIndexWorkspaceImpl.java:40)

                at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:138)

                at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:67)

                at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)

                at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)

                at java.util.concurrent.FutureTask.run(FutureTask.java:138)

                at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

                at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

                at java.lang.Thread.run(Thread.java:680)

      243545 [DataRepository-FileCacheStore-0] WARN org.infinispan.loaders.file.FileCacheStore  - ISPN000060: Problems purging file DataRepository/storage/DataRepository/-1997204480

      org.infinispan.loaders.CacheLoaderException: Error while reading from file

                at org.infinispan.loaders.file.FileCacheStore.loadBucket(FileCacheStore.java:317)

                at org.infinispan.loaders.file.FileCacheStore.doPurge(FileCacheStore.java:254)

                at org.infinispan.loaders.file.FileCacheStore.purgeInternal(FileCacheStore.java:233)

                at org.infinispan.loaders.AbstractCacheStore$2.run(AbstractCacheStore.java:106)

                at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

                at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

                at java.lang.Thread.run(Thread.java:680)

      Caused by: java.io.FileNotFoundException: DataRepository/storage/DataRepository/-1997204480 (Too many open files)

                at java.io.FileInputStream.open(Native Method)

                at java.io.FileInputStream.<init>(FileInputStream.java:120)

                at org.infinispan.loaders.file.FileCacheStore.loadBucket(FileCacheStore.java:311)

                ... 6 more

       

      {code}
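
      To confirm that file descriptors really are accumulating between cycles, the JVM's own descriptor count can be logged around each create() run. This is only a sketch: the cast to com.sun.management.UnixOperatingSystemMXBean is a Sun/Oracle JDK, Unix-only assumption, and FdCountDriver and the commented-out create() call are hypothetical names standing in for my test driver.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

public class FdCountDriver
{
    /** Returns this process's open file descriptor count, or -1 if unavailable. */
    static long openFdCount()
    {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if(os instanceof com.sun.management.UnixOperatingSystemMXBean)
        {
            return ((com.sun.management.UnixOperatingSystemMXBean) os).getOpenFileDescriptorCount();
        }
        return -1; // non-Sun JVM or non-Unix platform
    }

    public static void main(String[] args)
    {
        for(int cycle = 0; cycle < 3; cycle++)
        {
            System.out.println("Open FDs before cycle " + cycle + ": " + openFdCount());
            // create(10, 200, 0);   // the method shown above
        }
        System.out.println("Open FDs after all cycles: " + openFdCount());
    }
}
```

      If the count climbs steadily across cycles without the Lucene/Infinispan files ever being released, that would line up with the "Too many open files" failures above.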

       

      My config is

       

      {code}

      DataRepository.json:

       

      {

          "name" : "DataRepository",

          "transactionMode" : "auto",

          "monitoring" : {

              "enabled" : true

          },

          "workspaces" : {

              "default" : "default",

              "allowCreation" : false,

              "cacheConfiguration" : "workspace_cache_config.xml"

          },

          "storage" : {

              "cacheName" : "DataRepository",

              "cacheConfiguration" : "infinispan_configuration.xml",

              "transactionManagerLookup" : "org.infinispan.transaction.lookup.DummyTransactionManagerLookup",

              "binaryStorage" : {

                  "type" : "file",

                  "directory" : "DataRepository/binaries",

                  "minimumBinarySizeInBytes" : 4096

              }

          },

          "query" : {

              "enabled" : true,

              "rebuildUponStartup" : "if_missing",

              "indexStorage" : {

                  "type" : "filesystem",

                  "location" : "DataRepository/indexes",

                  "lockingStrategy" : "native",

                  "fileSystemAccessType" : "auto"

              }

          }

      }

       

      infinispan_configuration.xml:

       

      <infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"

          xsi:schemaLocation="urn:infinispan:config:5.1 http://www.infinispan.org/schemas/infinispan-config-5.1.xsd"

          xmlns="urn:infinispan:config:5.1">

       

          <global>

          </global>

       

          <default>

          </default>

       

          <namedCache name="DataRepository">

              <eviction strategy="LIRS" maxEntries="1000" />       

              <loaders passivation="false" shared="false" preload="false">

                <loader class="org.infinispan.loaders.file.FileCacheStore"

                        fetchPersistentState="false"

                        purgeOnStartup="false">

                   <properties>

                      <property name="location" value="DataRepository/storage"/>

                   </properties>

                </loader>

              </loaders>

              <transaction

                  transactionManagerLookupClass="org.infinispan.transaction.lookup.DummyTransactionManagerLookup"

                  transactionMode="TRANSACTIONAL" lockingMode="OPTIMISTIC" />

          </namedCache>

      </infinispan>

       

      workspace_cache_config.xml:

       

      <?xml version="1.0" encoding="UTF-8"?>

      <infinispan

              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"

              xsi:schemaLocation="urn:infinispan:config:5.1 http://www.infinispan.org/schemas/infinispan-config-5.1.xsd"

              xmlns="urn:infinispan:config:5.1">

       

          <global/>

         

          <default>

              <clustering mode="LOCAL"/>

              <eviction maxEntries="100" strategy="LIRS"/>

              <expiration lifespan="120000" maxIdle="60000"/> 

          </default>

         

      </infinispan>

      {code}