
    replication timeout

      Hello, it's me again ;)

      I have another problem using Infinispan. I bet there is a simple solution.
      I use the following cfg.xml file:

      <?xml version="1.0" encoding="UTF-8"?>
      <infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xmlns="urn:infinispan:config:4.0">
         <global>
            <transport clusterName="demoCluster"/>
         </global>

         <default>
            <clustering mode="distribution">
               <l1 enabled="true" lifespan="10000"/>
               <hash numOwners="2"/>
               <sync/>
            </clustering>
         </default>
      </infinispan>
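
      For completeness, this is roughly how I start the cache from that file
      (a sketch; the file name and variable names are mine):

       import org.infinispan.Cache;
       import org.infinispan.manager.CacheManager;
       import org.infinispan.manager.DefaultCacheManager;

       // Start an embedded cache manager from the cfg.xml shown above
       // (the constructor throws IOException, handled elsewhere).
       CacheManager manager = new DefaultCacheManager("cfg.xml");
       // customCache is simply the default cache from that configuration.
       Cache<String, String> customCache = manager.getCache();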
      



      When I do this in my program:

       Map<String, String> rand = new HashMap<String, String>();
       int i = 0;
       // amount = how many entries to insert; number = key offset
       while (rand.size() < amount) {
           rand.put("a" + (i + number), "b" + (i + number));
           i++;
       }

       customCache.putAll(rand);



      For larger amounts of data, such as 100,000 entries and more, I get a replication timeout:


      2009-09-08 08:33:34,921 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (main) real_dests=[esbopen-24761]
      2009-09-08 08:33:49,937 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (main) responses: [sender=esbopen-24761, retval=null, received=false, suspected=false]
      
      2009-09-08 08:33:49,937 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (main) replication exception:
      org.infinispan.util.concurrent.TimeoutException: Replication timeout for esbopen-24761
       at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:325)
      


      So I tried putting single keys in a loop instead of a whole map, but that didn't work either.

      for (int i = 0; i < amount; i++) {
          customCache.putIfAbsent("a" + (i + number), "b" + (i + number));
          System.out.print(".");
      }
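
      A middle ground I might try is inserting in fixed-size chunks, so that no
      single putAll has to replicate the whole map in one call (a sketch; the
      chunk size is an arbitrary guess):

       // Insert in bounded batches so each putAll replicates a limited
       // amount of data instead of all 100,000 entries at once.
       final int CHUNK = 1000; // arbitrary batch size
       Map<String, String> batch = new HashMap<String, String>(CHUNK);
       for (int i = 0; i < amount; i++) {
           batch.put("a" + (i + number), "b" + (i + number));
           if (batch.size() == CHUNK) {
               customCache.putAll(batch);
               batch.clear();
           }
       }
       if (!batch.isEmpty()) {
           customCache.putAll(batch); // flush the remainder
       }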
      


      Maybe it is because I use synchronous communication, or maybe I should set a bigger replication timeout?
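
      If it is just the timeout, I suppose the <sync> element can take a longer
      replTimeout, or <sync/> could be replaced with <async/> so puts do not
      block on remote acknowledgements (a sketch against the 4.0 schema; the
      value is a guess):

       <clustering mode="distribution">
          <l1 enabled="true" lifespan="10000"/>
          <hash numOwners="2"/>
          <!-- Option 1: keep synchronous calls but wait longer (ms). -->
          <sync replTimeout="60000"/>
          <!-- Option 2: use <async/> instead of <sync/>; the two are
               mutually exclusive. -->
       </clustering>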

      Anyway, it still works rather slowly. Is that normal? With 2 instances on one machine, adding 100,000 keys (a0->b0, a1->b1, ...) takes more or less 17 seconds, but over the LAN I am not able to insert that amount of data at all.

      And one last question. Is it normal that, when I use the cfg.xml file above and start 3 instances of my application, adding different amounts of data to each node, every node ends up with the same amount of data? I would expect that in replication mode, I guess, not in distribution mode.
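
      For comparison, this is what I understand a replication-mode setup would
      look like, where every node really is supposed to hold every entry (a
      sketch):

       <default>
          <clustering mode="replication">
             <!-- Every node keeps a full copy of the data set, so equal
                  amounts on all 3 nodes would be the expected behaviour. -->
             <sync/>
          </clustering>
       </default>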



      Best regards,
      Martin

        • 1. Re: replication timeout

          I also get an error like this one:

          2009-09-08 11:19:36,109 INFO [org.infinispan.distribution.DistributionManagerImpl] (Incoming-9,osoz-fd029efaa5-15458) Detected a veiw change. Member list changed from [esbopen-624, osoz-fd029efaa5-9048, esbopen-50869, osoz-fd029efaa5-15458] to [esbopen-624, esbopen-50869, osoz-fd029efaa5-15458]
          2009-09-08 11:19:36,109 INFO [org.infinispan.distribution.DistributionManagerImpl] (Incoming-9,osoz-fd029efaa5-15458) This is a LEAVE event! Node osoz-fd029efaa5-9048 has just left
          2009-09-08 11:19:36,109 INFO [org.infinispan.distribution.DistributionManagerImpl] (Incoming-9,osoz-fd029efaa5-15458) Starting transaction logging; expecting state from someone!
          2009-09-08 11:19:36,109 INFO [org.infinispan.distribution.DistributionManagerImpl] (Incoming-9,osoz-fd029efaa5-15458) Need to rehash
          2009-09-08 11:19:37,906 TRACE [org.infinispan.distribution.JoinTask] (Rehasher-osoz-fd029efaa5-15458) Requesting old consistent hash from coordinator
          2009-09-08 11:19:37,906 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Rehasher-osoz-fd029efaa5-15458) dests=[esbopen-624], command=RehashControlCommand{type=JOIN_REQ, sender=osoz-fd029efaa5-15458, state=null, consistentHash=null}, mode=SYNCHRONOUS, timeout=600000
          2009-09-08 11:19:37,906 TRACE [org.infinispan.marshall.VersionAwareMarshaller] (Rehasher-osoz-fd029efaa5-15458) Wrote version 400
          2009-09-08 11:19:37,906 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (Rehasher-osoz-fd029efaa5-15458) real_dests=[esbopen-624]
          2009-09-08 11:19:40,750 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Thread-1) Disconnecting and closing JGroups Channel
          2009-09-08 11:19:42,921 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsDistSync] (Incoming-9,osoz-fd029efaa5-15458) Releasing ReclosableLatch [State = 1, empty queue] gate
          2009-09-08 11:19:42,921 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (Rehasher-osoz-fd029efaa5-15458) responses: [sender=esbopen-624, retval=null, received=false, suspected=true]
          
          2009-09-08 11:19:42,921 TRACE [org.infinispan.remoting.rpc.RpcManagerImpl] (Rehasher-osoz-fd029efaa5-15458) replication exception:
          org.infinispan.remoting.transport.jgroups.SuspectException: Suspected member: esbopen-624
           at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:322)
           at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:88)
           at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:107)
           at org.infinispan.distribution.JoinTask.performRehash(JoinTask.java:82)
           at org.infinispan.distribution.RehashTask.call(RehashTask.java:52)
           at org.infinispan.distribution.RehashTask.call(RehashTask.java:30)
           at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
           at java.util.concurrent.FutureTask.run(Unknown Source)
           at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
           at java.lang.Thread.run(Unknown Source)