distributedSyncTimeout setting
cbo_ May 10, 2010 4:41 PM

I'm still struggling with some issues. I have not heard much on my earlier posting about multicast discovery with multiple clusters. As I try to understand why I am seeing strangeness in my testing, I am now focusing on the transport setting distributedSyncTimeout.
I am using the ALPHA3 distribution. I have 2 JVMs in a replicated cluster. I load some 100,000 entries into one of the JVMs before bringing up the 2nd JVM on another machine. When I start the 2nd JVM, it receives the 100,000 entries since I have fetchInMemoryState set to true. That works. However, on occasion new entries written to the first JVM do not get propagated to the cache on the 2nd; other times propagation works fine. Additionally, when simulating a failover I have seen a situation where, on bringing the failed JVM back up, it hangs in the getCache call. A stack trace is included at the bottom.
First, can someone give an idea what that setting actually does?
And is my assumption correct that the unit is milliseconds?
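For reference, my configuration is along these lines. This is only a sketch of the 4.x XML schema as I understand it; the cluster name and timeout values shown here are illustrative, not my exact settings:

```xml
<infinispan>
   <global>
      <!-- distributedSyncTimeout is an attribute of the global transport element;
           the value here (60000) is just a placeholder -->
      <transport clusterName="demoCluster" distributedSyncTimeout="60000"/>
   </global>
   <default>
      <clustering mode="replication">
         <!-- fetchInMemoryState=true makes a joining node pull existing entries
              from the cluster, which is how the 2nd JVM gets the 100,000 entries -->
         <stateRetrieval fetchInMemoryState="true"/>
         <sync/>
      </clustering>
   </default>
</infinispan>
```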
"main" prio=3 tid=0x0000000000440030 nid=0x2 waiting on condition [0xfffffd7ffe1fc000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0xfffffd7fc0ff6580> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2054)
at org.jgroups.util.Promise.doWait(Promise.java:117)
at org.jgroups.util.Promise._getResultWithTimeout(Promise.java:73)
at org.jgroups.util.Promise.getResultWithTimeout(Promise.java:42)
at org.jgroups.util.Promise.getResult(Promise.java:104)
at org.jgroups.protocols.pbcast.ClientGmsImpl.joinInternal(ClientGmsImpl.java:142)
at org.jgroups.protocols.pbcast.ClientGmsImpl.join(ClientGmsImpl.java:38)
at org.jgroups.protocols.pbcast.GMS.down(GMS.java:924)
at org.jgroups.protocols.FC.down(FC.java:431)
at org.jgroups.protocols.FRAG2.down(FRAG2.java:154)
at org.jgroups.protocols.pbcast.STREAMING_STATE_TRANSFER.down(STREAMING_STATE_TRANSFER.java:325)
at org.jgroups.protocols.pbcast.FLUSH.handleConnect(FLUSH.java:303)
at org.jgroups.protocols.pbcast.FLUSH.down(FLUSH.java:264)
at org.jgroups.stack.ProtocolStack.down(ProtocolStack.java:862)
at org.jgroups.JChannel.downcall(JChannel.java:1659)
at org.jgroups.JChannel.connect(JChannel.java:417)
- locked <0xfffffd7fc19a2240> (a org.jgroups.JChannel)
at org.jgroups.JChannel.connect(JChannel.java:380)
- locked <0xfffffd7fc19a2240> (a org.jgroups.JChannel)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.startJGroupsChannelIfNeeded(JGroupsTransport.java:167)
at org.infinispan.remoting.transport.jgroups.JGroupsTransport.start(JGroupsTransport.java:156)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.infinispan.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:170)
at org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:852)
at org.infinispan.factories.AbstractComponentRegistry.internalStart(AbstractComponentRegistry.java:672)
at org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:574)
at org.infinispan.factories.GlobalComponentRegistry.start(GlobalComponentRegistry.java:134)
at org.infinispan.factories.ComponentRegistry.start(ComponentRegistry.java:135)
at org.infinispan.CacheDelegate.start(CacheDelegate.java:290)
at org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:446)
at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:409)
at com.cboe.infrastructureServices.cacheService.JCacheFactoryInfinispanImpl.getCache(JCacheFactoryInfinispanImpl.java:49)