18 Replies · Latest reply on Jan 27, 2010 12:46 PM by manik

    Load data from store only from cold start (only member in a cluster)

    donguidou

      Hi,

       

      I configured a replicated cluster with a persistent file store.  During application startup, I would like to load the data from the store only when no other member is available in the cluster.  If the cluster is already up, the data is fetched from the other members of the cluster.

       

      What is the most reliable way to implement this feature?
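
      To make this concrete, something along these lines is what I have in mind (only a sketch; it assumes the CacheManager exposes the cluster view via getMembers(), the config file name is illustrative, and loadFromStore() is a hypothetical helper of mine):

      {code}
      import java.io.IOException;
      import java.util.List;

      import org.infinispan.Cache;
      import org.infinispan.manager.CacheManager;
      import org.infinispan.manager.DefaultCacheManager;
      import org.infinispan.remoting.transport.Address;

      public class ColdStartLoader {
         public static void main(String[] args) throws IOException {
            // keep preload="false" in the XML; the decision is made here instead
            CacheManager manager = new DefaultCacheManager("infinispan.xml");
            Cache<String, String> cache = manager.getCache();

            // a cluster view of size 1 means this node started cold and alone
            List<Address> members = manager.getMembers();
            if (members == null || members.size() == 1) {
               loadFromStore(cache);
            }
         }

         private static void loadFromStore(Cache<String, String> cache) {
            // application-specific: read the persisted entries and put() them back
         }
      }
      {code}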

       

      Huu-Dong Quach

        • 1. Re: Load data from store only from cold start (only member in a cluster)
          manik
          Do you mean fetching lazily as data is needed, or at the time of cache startup as a pre-load?
          • 2. Re: Load data from store only from cold start (only member in a cluster)
            donguidou

            Yes, I would like to pre-load the data but only when the cluster contains one and only one member.

             

            I have some issues with the current implementation of FileCacheStore.  If I set the preload attribute of <loaders> to true, the first member in the cluster pre-loads the data correctly.

            But if a second member joins the cluster after being offline for some time, the data it loads from its own store is out of sync and overrides the correct data from the first member.

             

            Example (from a cold start):

             

            1. Cache A starts and pre-loads the data from the store (store is empty);
            2. Put key01 and value01 in cache A (the data persists in the store);
            3. Cache B starts, pre-loads the data from the store (store is empty) and joins the cluster;
            4. Both caches are now replicated;
            5. Cache B shuts down;
            6. Update key01 to modifiedValue in cache A.  Cache B's store is now out of sync;
            7. Cache B starts and pre-loads the data from its store (key01, value01), which overrides the previously updated data from cache A.

             

            I plan to analyse and modify the current implementation of FileCacheStore.

            • 3. Re: Load data from store only from cold start (only member in a cluster)
              manik

              donguidou wrote:

              Yes, I would like to pre-load the data but only when the cluster contains one and only one member. [...] But if a second member joins the cluster after being offline for some time, the data it loads from its own store is out of sync and overrides the correct data from the first member.

              What you need is the fetchPersistentState attribute of the loader config element.  Set this to true, and node B's FileCacheStore will be sync'd with node A's store.
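
              On the loader element it would look something like this (a sketch; the store location is just an example):

              {code:xml}
              <loaders passivation="false" shared="false" preload="true">
                 <loader class="org.infinispan.loaders.file.FileCacheStore"
                         fetchPersistentState="true">
                    <properties>
                       <property name="location" value="/path/to/store" />
                    </properties>
                 </loader>
              </loaders>
              {code}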

              • 4. Re: Load data from store only from cold start (only member in a cluster)
                donguidou

                I will try fetchPersistentState.  I did try that attribute with the demo from CR3 (after modifying the startup script a bit), but it didn't work.  I got the following exception:

                 

                {code}

                INFO: Trying to fetch state from hqpphdq02-14246
                12-Jan-2010 10:40:29 AM org.infinispan.remoting.transport.jgroups.JGroupsTransport setState
                SEVERE: Caught while requesting or applying state
                org.infinispan.statetransfer.StateTransferException: java.io.EOFException: Read past end of file
                        at org.infinispan.statetransfer.StateTransferManagerImpl.assertDelimited(StateTransferManagerImpl.java:381)
                        at org.infinispan.statetransfer.StateTransferManagerImpl.applyState(StateTransferManagerImpl.java:307)
                        at org.infinispan.remoting.InboundInvocationHandlerImpl.applyState(InboundInvocationHandlerImpl.java:73)
                        at org.infinispan.remoting.transport.jgroups.JGroupsTransport.setState(JGroupsTransport.java:556)
                        at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.handleUpEvent(MessageDispatcher.java:789)
                        at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:849)
                        at org.jgroups.JChannel.up(JChannel.java:1413)
                        at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:828)
                        at org.jgroups.protocols.pbcast.FLUSH.up(FLUSH.java:502)
                        at org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER.connectToStateProvider(STREAMING_STATE_TRANSFER.java:526)
                        at org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER.handleStateRsp(STREAMING_STATE_TRANSFER.java:465)
                        at org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER.up(STREAMING_STATE_TRANSFER.java:230)
                        at org.jgroups.protocols.FRAG2.up(FRAG2.java:189)
                        at org.jgroups.protocols.FC.up(FC.java:481)
                        at org.jgroups.protocols.pbcast.GMS.up(GMS.java:892)
                        at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:241)
                        at org.jgroups.protocols.UNICAST.handleDataReceived(UNICAST.java:582)
                        at org.jgroups.protocols.UNICAST.up(UNICAST.java:275)
                        at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:700)
                        at org.jgroups.protocols.BARRIER.up(BARRIER.java:121)
                        at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:180)
                        at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:270)
                        at org.jgroups.stack.Protocol.up(Protocol.java:345)
                        at org.jgroups.protocols.Discovery.up(Discovery.java:283)
                        at org.jgroups.protocols.PING.up(PING.java:67)
                        at org.jgroups.protocols.TP.passMessageUp(TP.java:1012)
                        at org.jgroups.protocols.TP.access$100(TP.java:53)
                        at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1516)
                        at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1498)
                        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
                        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
                        at java.lang.Thread.run(Thread.java:619)
                Caused by: java.io.EOFException: Read past end of file
                        at org.jboss.marshalling.AbstractUnmarshaller.eofOnRead(AbstractUnmarshaller.java:184)
                        at org.jboss.marshalling.AbstractUnmarshaller.readUnsignedByteDirect(AbstractUnmarshaller.java:312)
                        at org.jboss.marshalling.AbstractUnmarshaller.readUnsignedByte(AbstractUnmarshaller.java:280)
                        at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:207)
                        at org.jboss.marshalling.AbstractUnmarshaller.readObject(AbstractUnmarshaller.java:85)
                        at org.infinispan.marshall.jboss.JBossMarshaller.objectFromObjectStream(JBossMarshaller.java:207)
                        at org.infinispan.marshall.VersionAwareMarshaller.objectFromObjectStream(VersionAwareMarshaller.java:171)
                        at org.infinispan.statetransfer.StateTransferManagerImpl.assertDelimited(StateTransferManagerImpl.java:379)
                        ... 31 more
                12-Jan-2010 10:40:29 AM org.infinispan.remoting.rpc.RpcManagerImpl retrieveState
                WARNING: Could not find available peer for state, backing off and retrying
                12-Jan-2010 10:40:30 AM org.infinispan.util.logging.AbstractLogImpl info
                INFO: Trying to fetch state from hqpphdq02-14246
                12-Jan-2010 10:40:30 AM org.infinispan.remoting.transport.jgroups.JGroupsTransport setState
                SEVERE: Caught while requesting or applying state
                org.infinispan.statetransfer.StateTransferException: Provider cannot provide state!
                        at org.infinispan.statetransfer.StateTransferManagerImpl.applyState(StateTransferManagerImpl.java:315)
                        at org.infinispan.remoting.InboundInvocationHandlerImpl.applyState(InboundInvocationHandlerImpl.java:73)
                        at org.infinispan.remoting.transport.jgroups.JGroupsTransport.setState(JGroupsTransport.java:556)
                        at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.handleUpEvent(MessageDispatcher.java:789)
                        at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:849)
                        at org.jgroups.JChannel.up(JChannel.java:1413)
                        at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:828)
                        at org.jgroups.protocols.pbcast.FLUSH.up(FLUSH.java:502)
                        at org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER.connectToStateProvider(STREAMING_STATE_TRANSFER.java:526)
                        at org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER.handleStateRsp(STREAMING_STATE_TRANSFER.java:465)
                        at org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER.up(STREAMING_STATE_TRANSFER.java:230)
                        at org.jgroups.protocols.FRAG2.up(FRAG2.java:189)
                        at org.jgroups.protocols.FC.up(FC.java:481)
                        at org.jgroups.protocols.pbcast.GMS.up(GMS.java:892)
                        at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:241)
                        at org.jgroups.protocols.UNICAST.handleDataReceived(UNICAST.java:582)
                        at org.jgroups.protocols.UNICAST.up(UNICAST.java:275)
                        at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:700)
                        at org.jgroups.protocols.BARRIER.up(BARRIER.java:121)
                        at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:180)
                        at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:270)
                        at org.jgroups.stack.Protocol.up(Protocol.java:345)
                        at org.jgroups.protocols.Discovery.up(Discovery.java:283)
                        at org.jgroups.protocols.PING.up(PING.java:67)
                        at org.jgroups.protocols.TP.passMessageUp(TP.java:1012)
                        at org.jgroups.protocols.TP.access$100(TP.java:53)
                        at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1516)
                        at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1498)
                        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
                        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
                        at java.lang.Thread.run(Thread.java:619)
                12-Jan-2010 10:40:30 AM org.infinispan.remoting.rpc.RpcManagerImpl retrieveState
                WARNING: Could not find available peer for state, backing off and retrying
                12-Jan-2010 10:40:32 AM org.infinispan.util.logging.AbstractLogImpl info
                INFO: Trying to fetch state from hqpphdq02-14246
                12-Jan-2010 10:40:32 AM org.infinispan.remoting.transport.jgroups.JGroupsTransport setState
                SEVERE: Caught while requesting or applying state
                org.infinispan.statetransfer.StateTransferException: Provider cannot provide state!
                        at org.infinispan.statetransfer.StateTransferManagerImpl.applyState(StateTransferManagerImpl.java:315)
                        at org.infinispan.remoting.InboundInvocationHandlerImpl.applyState(InboundInvocationHandlerImpl.java:73)
                        at org.infinispan.remoting.transport.jgroups.JGroupsTransport.setState(JGroupsTransport.java:556)
                        at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.handleUpEvent(MessageDispatcher.java:789)
                        at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:849)
                        at org.jgroups.JChannel.up(JChannel.java:1413)
                        at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:828)
                        at org.jgroups.protocols.pbcast.FLUSH.up(FLUSH.java:502)
                        at org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER.connectToStateProvider(STREAMING_STATE_TRANSFER.java:526)
                        at org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER.handleStateRsp(STREAMING_STATE_TRANSFER.java:465)
                        at org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER.up(STREAMING_STATE_TRANSFER.java:230)
                        at org.jgroups.protocols.FRAG2.up(FRAG2.java:189)
                        at org.jgroups.protocols.FC.up(FC.java:481)
                        at org.jgroups.protocols.pbcast.GMS.up(GMS.java:892)
                        at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:241)
                        at org.jgroups.protocols.UNICAST.handleDataReceived(UNICAST.java:582)
                        at org.jgroups.protocols.UNICAST.up(UNICAST.java:275)
                        at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:700)
                        at org.jgroups.protocols.BARRIER.up(BARRIER.java:121)
                        at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:180)
                        at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:270)
                        at org.jgroups.stack.Protocol.up(Protocol.java:345)
                        at org.jgroups.protocols.Discovery.up(Discovery.java:283)
                        at org.jgroups.protocols.PING.up(PING.java:67)
                        at org.jgroups.protocols.TP.passMessageUp(TP.java:1012)
                        at org.jgroups.protocols.TP.access$100(TP.java:53)
                        at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1516)
                        at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1498)
                        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
                        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
                        at java.lang.Thread.run(Thread.java:619)
                12-Jan-2010 10:40:32 AM org.infinispan.remoting.rpc.RpcManagerImpl retrieveState
                WARNING: Could not find available peer for state, backing off and retrying

                {code}

                 

                The configurations used in the demo are 3.xml and 4.xml.

                I will try this configuration in my application with today's snapshot build from Maven.

                • 5. Re: Load data from store only from cold start (only member in a cluster)
                  donguidou

                  When I set the fetchPersistentState attribute to true in my application, it generates the same exception as shown in the previous post.

                   

                  I will increase the logging level to trace to see if I can get any useful information.
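
                  For reference, assuming log4j is the logging backend, that is a one-liner:

                  {code}
                  # log4j.properties -- raise Infinispan logging to TRACE
                  log4j.logger.org.infinispan=TRACE
                  {code}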

                  • 6. Re: Load data from store only from cold start (only member in a cluster)
                    donguidou

                    I tried the following configuration to have a better understanding of the fetchPersistentState attribute.

                     

                    {code:xml}

                    <?xml version="1.0" encoding="UTF-8"?>
                    <infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:infinispan:config:4.0">
                        <global>
                            <transport clusterName="infinispan-test-cluster" />
                        </global>
                        <default>
                            <clustering mode="replication">
                                <async useReplQueue="false" asyncMarshalling="true" />
                                <stateRetrieval fetchInMemoryState="false" />
                            </clustering>
                            <loaders passivation="false" shared="false" preload="false">
                                <loader class="org.infinispan.loaders.file.FileCacheStore"
                                        fetchPersistentState="true" ignoreModifications="false"
                                        purgeOnStartup="false">
                                    <properties>
                                        <property name="location" value="U:/lib/infinispan-4.0.0.CR3/data/1" />
                                    </properties>
                                    <async enabled="true" />
                                </loader>
                            </loaders>
                        </default>
                    </infinispan>

                    {code}

                     

                    I expected the store to fetch the data from another member of the cluster, but that's not the case.  When I restart the member that I stopped, the data I deleted while it was down is still present.  If the fetchPersistentState attribute is set to true, I expect such stale entries to be removed before the cache becomes available.

                    • 7. Re: Load data from store only from cold start (only member in a cluster)
                      manik

                      donguidou wrote:

                       

                      I expected the store to fetch the data from another member of the cluster, but that's not the case.  When I restart the member that I stopped, the data I deleted while it was down is still present.  If the fetchPersistentState attribute is set to true, I expect such stale entries to be removed before the cache becomes available.

                      This is possibly a bug then.  You should report it in JIRA.  As a workaround, you could purge the cache store on startup (there is a flag for this) so that the local store is purged before persistent state is retrieved from a neighbour.
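
                      The flag is purgeOnStartup on the loader element; combined with fetchPersistentState, a sketch based on the config you posted above:

                      {code:xml}
                      <loader class="org.infinispan.loaders.file.FileCacheStore"
                              fetchPersistentState="true" purgeOnStartup="true">
                         <properties>
                            <property name="location" value="U:/lib/infinispan-4.0.0.CR3/data/1" />
                         </properties>
                      </loader>
                      {code}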

                      • 8. Re: Load data from store only from cold start (only member in a cluster)
                        donguidou

                        I reported this issue as https://jira.jboss.org/jira/browse/ISPN-335.

                         

                        Also, I tried your suggestion but it doesn't work.  The purgeOnStartup attribute does clear the store, but the fetchPersistentState attribute still generates an exception.

                         

                        After some debugging, I noticed that state transfer happens after the preloading phase.

                         

                        Correct me if I'm wrong, but the initialization of the cache should be:

                        1. Purge the store if purgeOnStartup attribute is set.
                        2. Fetch persistent state if fetchPersistentState attribute is set.
                        3. Pre-load the data from the store if the preload attribute is set.
                        • 9. Re: Load data from store only from cold start (only member in a cluster)
                          manik

                          Yes, I can see a bug in that the preload happens *before* any state is transferred.  I have just checked a fix into trunk; if you feel like it, check out the source tree from Subversion, build it and try again.

                           

                          http://fisheye.jboss.org/changelog/Infinispan/trunk?cs=1389
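
                          Roughly (a sketch; check the project site for the exact repository URL, here I'm assuming the usual JBoss anonymous SVN layout and a Maven build):

                          {code}
                          svn co http://anonsvn.jboss.org/repos/infinispan/trunk infinispan
                          cd infinispan
                          mvn install -DskipTests
                          {code}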

                          • 10. Re: Load data from store only from cold start (only member in a cluster)
                            donguidou

                            Ok, I've tested from trunk (revision 1389) and the preload phase now happens *after* the state is transferred.

                            But the state transfer phase still generates an exception.

                            Also, if I set fetchInMemoryState to false in <stateRetrieval /> and fetchPersistentState to true in <loader />, the persistent state transfer never happens (the code in StateTransferManagerImpl.java is never reached in the debugger).

                            If I set fetchInMemoryState to true in <stateRetrieval /> and fetchPersistentState to true in <loader />, the persistent state transfer generates an exception.
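
                            For clarity, the two combinations differ only in this fragment of the config I posted earlier:

                            {code:xml}
                            <!-- combination 1: persistent state transfer never happens -->
                            <stateRetrieval fetchInMemoryState="false" />

                            <!-- combination 2: persistent state transfer throws an exception -->
                            <stateRetrieval fetchInMemoryState="true" />

                            <!-- in both cases the loader element keeps fetchPersistentState="true" -->
                            {code}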

                            • 11. Re: Load data from store only from cold start (only member in a cluster)
                              galder.zamarreno

                              Also, if I set fetchInMemoryState to false in <stateRetrieval /> and fetchPersistentState to true in <loader />, the persistent state transfer never happens (the code in StateTransferManagerImpl.java is never reached in the debugger).

                               

                              I've started a thread in the dev list to discuss this.

                              If I set fetchInMemoryState to true in <stateRetrieval /> and fetchPersistentState to true in <loader />, the persistent state transfer generates an exception.

                              This should be fixed now: http://fisheye.jboss.org/changelog/Infinispan/?cs=1405

                              • 12. Re: Load data from store only from cold start (only member in a cluster)
                                donguidou

                                Thanks Galder, the in-memory and persistent state are now transferred properly.

                                 

                                However, if I set preload to true in <loaders>, the pre-loading phase generates an exception.


                                {code}
                                2010-01-22 12:53:13,578 DEBUG {main} [o.i.s.StateTransferManagerImpl:16] State transfer process completed in 67 milliseconds
                                2010-01-22 12:53:13,579 DEBUG {main} [o.i.l.CacheLoaderManagerImpl:16] Preloading transient state from cache loader org.infinispan.loaders.decorators.AsyncStore@111b8b76 
                                2010-01-22 12:53:13,579 TRACE {main} [o.i.l.LockSupportCacheStore:105] loadAll() 
                                2010-01-22 12:53:13,589 TRACE {main} [o.i.l.f.FileCacheStore:195] Found bucket file: 'U:\lib\infinispan-4.0.0.CR3\data\2\isd-source\94744607' 
                                2010-01-22 12:53:13,592 TRACE {main} [o.i.m.VersionAwareMarshaller:12] Read version 400 
                                2010-01-22 12:53:13,594 TRACE {main} [o.i.l.LockSupportCacheStore:111] Exit loadAll() 
                                2010-01-22 12:53:13,594 TRACE {main} [o.i.i.InvocationContextInterceptor:40] Invoked with command PutKeyValueCommand{key=cle03, value=value03, putIfAbsent=false, lifespanMillis=-1, maxIdleTimeMillis=-1} and InvocationContext [NonTxInvocationContext{flags=[SKIP_CACHE_STATUS_CHECK]}] 
                                2010-01-22 12:53:13,594 TRACE {main} [o.i.c.EntryFactoryImpl:12] Key cle03 is not in context, fetching from container. 
                                2010-01-22 12:53:13,595 TRACE {main} [o.i.c.EntryFactoryImpl:119] Exists in context. 
                                2010-01-22 12:53:13,595 TRACE {main} [o.i.u.c.l.LockManagerImpl:12] Attempting to lock cle03 with acquisition timeout of 10000 millis 
                                2010-01-22 12:53:13,595 TRACE {main} [o.i.c.EntryFactoryImpl:207] Successfully acquired lock! 
                                2010-01-22 12:53:13,595 TRACE {main} [o.i.i.CallInterceptor:69] Executing command: PutKeyValueCommand{key=cle03, value=value03, putIfAbsent=false, lifespanMillis=-1, maxIdleTimeMillis=-1}. 
                                2010-01-22 12:53:13,596 TRACE {main} [o.i.r.r.RpcManagerImpl:12] node 2-29486 broadcasting call PutKeyValueCommand{key=cle03, value=value03, putIfAbsent=false, lifespanMillis=-1, maxIdleTimeMillis=-1} to recipient list null 
                                2010-01-22 12:53:13,596 TRACE {main} [o.i.r.t.j.JGroupsTransport:12] dests=null, command=SingleRpcCommand{cacheName='isd-source', command=PutKeyValueCommand{key=cle03, value=value03, putIfAbsent=false, lifespanMillis=-1, maxIdleTimeMillis=-1}}, mode=ASYNCHRONOUS, timeout=15000 
                                2010-01-22 12:54:13,605 ERROR {main} [o.i.r.r.RpcManagerImpl:111] unexpected error while replicating java.util.concurrent.TimeoutException: Timed out waiting for a cluster-wide sync to be released. (timeout = 60 seconds)
                                     at org.infinispan.remoting.transport.jgroups.JGroupsDistSync.blockUntilReleased(JGroupsDistSync.java:51) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:397) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:100) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:124) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:229) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:216) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.remoting.rpc.RpcManagerImpl.broadcastRpcCommand(RpcManagerImpl.java:199) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.remoting.rpc.RpcManagerImpl.broadcastRpcCommand(RpcManagerImpl.java:192) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.interceptors.ReplicationInterceptor.handleCrudMethod(ReplicationInterceptor.java:114) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.interceptors.ReplicationInterceptor.visitPutKeyValueCommand(ReplicationInterceptor.java:78) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:76) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.interceptors.LockingInterceptor.visitPutKeyValueCommand(LockingInterceptor.java:198) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:76) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.interceptors.CacheStoreInterceptor.visitPutKeyValueCommand(CacheStoreInterceptor.java:194) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:76) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.interceptors.CacheLoaderInterceptor.visitPutKeyValueCommand(CacheLoaderInterceptor.java:78) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:76) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:132) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:57) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:76) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.interceptors.TxInterceptor.enlistWriteAndInvokeNext(TxInterceptor.java:185) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.interceptors.TxInterceptor.visitPutKeyValueCommand(TxInterceptor.java:132) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:76) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.interceptors.CacheMgmtInterceptor.visitPutKeyValueCommand(CacheMgmtInterceptor.java:113) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:76) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:118) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:48) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:34) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:57) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:76) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:269) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.CacheDelegate.put(CacheDelegate.java:433) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.loaders.CacheLoaderManagerImpl.preload(CacheLoaderManagerImpl.java:126) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [na:1.6.0_16]
                                     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) [na:1.6.0_16]
                                     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [na:1.6.0_16]
                                     at java.lang.reflect.Method.invoke(Method.java:597) [na:1.6.0_16]
                                     at org.infinispan.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:170) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:852) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.factories.AbstractComponentRegistry.internalStart(AbstractComponentRegistry.java:672) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:574) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.factories.ComponentRegistry.start(ComponentRegistry.java:148) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.CacheDelegate.start(CacheDelegate.java:310) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:390) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:354) [infinispan-core-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
                                     at ca.qc.hydro.hqp.mesi.infinispan.InfinispanTest.testMultipleCacheManager(InfinispanTest.java:45) [test-classes/:na]
                                     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [na:1.6.0_16]
                                     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) [na:1.6.0_16]
                                     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [na:1.6.0_16]
                                     at java.lang.reflect.Method.invoke(Method.java:597) [na:1.6.0_16]
                                     at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) [junit-4.7.jar:na]
                                     at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) [junit-4.7.jar:na]
                                     at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) [junit-4.7.jar:na]
                                     at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) [junit-4.7.jar:na]
                                     at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76) [junit-4.7.jar:na]
                                     at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) [junit-4.7.jar:na]
                                     at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) [junit-4.7.jar:na]
                                     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) [junit-4.7.jar:na]
                                     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) [junit-4.7.jar:na]
                                     at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) [junit-4.7.jar:na]
                                     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) [junit-4.7.jar:na]
                                     at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) [junit-4.7.jar:na]
                                     at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) [junit-4.7.jar:na]
                                     at org.junit.runners.ParentRunner.run(ParentRunner.java:236) [junit-4.7.jar:na]
                                     at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:46) [.cp/:na]
                                     at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) [.cp/:na]
                                     at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467) [.cp/:na]
                                     at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683) [.cp/:na]
                                     at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390) [.cp/:na]
                                     at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197) [.cp/:na]
                                {code}
                                
                                
                                
                                • 13. Re: Load data from store only from cold start (only member in a cluster)
                                  galder.zamarreno
                                  Hmmm, it seems like the replication call timed out. Do you have the log from the other node in the cluster? Can you check whether there are any error/warn messages in that log?
                                  • 14. Re: Load data from store only from cold start (only member in a cluster)
                                    manik

                                    Preload calls should be local only.  Galder brought this up on the dev list here:

                                     

                                         http://lists.jboss.org/pipermail/infinispan-dev/2010-January/002294.html
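
                                    Conceptually, the preload put now has to stay on the local node, along these lines (a sketch of the idea only, not the actual fix; the withFlags() API shown is from later Infinispan releases, and cache/key/value are assumed to be in scope):

                                    {code}
                                    import org.infinispan.context.Flag;

                                    // a preload put marked local-only so it is never replicated
                                    cache.getAdvancedCache()
                                         .withFlags(Flag.CACHE_MODE_LOCAL) // keep this write on the local node
                                         .put(key, value);
                                    {code}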

                                     

                                    I have fixed this in trunk (and have uploaded a 4.0.0-SNAPSHOT to the Maven repository), if you care to try it out.

                                     

                                    Cheers

                                    Manik
