0 Replies Latest reply on Jun 19, 2015 6:21 AM by Maksym Pisotskyi

    Infinispan 6 "Trying to set a topology with invalid members for cache" bug

    Maksym Pisotskyi Newbie

      Hello, dear Infinispan community.

      Recently I found an issue with Infinispan 6 that is reproducible under high load, when a large number of nodes are started and stopped concurrently. Once this issue occurs, new members are no longer able to join the cluster.

      Let me describe it on the following scenario:

      0) Some Infinispan nodes are already running (one of them is the coordinator);

      1) We start a large number of new nodes (for example, 100). Each node, after some time, restarts its cache using the stop() and start() methods (org.infinispan.Cache / org.infinispan.commons.api.Lifecycle);

      2) Sometimes the coordinator node fails with the exception:

      java.lang.IllegalStateException: Trying to set a topology with invalid members for cache cache0: members = [...], numSegments=60, primaryOwners=[0, 11, 8, 6, 8, 4, 3, 0, 10, 3, 4, 0, 1, 2, 8, 11, 6, 0, 1, 1, 12, 2, 3, 12, 4, 9, 5, 1, 7, 6, 4, 7, 11, 10, 3, 9, 4, 5, 6, 0, 1, 7, 7, 2, 5, 2, 5, 6, 5, 12, 8, 10, 9, 9, 2, 7, 10, 12, 11, 3]}, pendingCH=null}

      3) After that, new nodes are unable to start and fail with the exception:

      Exception in thread "main" org.infinispan.commons.CacheException: Unable to invoke method public void org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete() throws java.lang.InterruptedException on object of type StateTransferManagerImpl

              at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:185)

              at org.infinispan.factories.AbstractComponentRegistry$PrioritizedMethod.invoke(AbstractComponentRegistry.java:869)

              at org.infinispan.factories.AbstractComponentRegistry.invokeStartMethods(AbstractComponentRegistry.java:638)

              at org.infinispan.factories.AbstractComponentRegistry.internalStart(AbstractComponentRegistry.java:627)

              at org.infinispan.factories.AbstractComponentRegistry.start(AbstractComponentRegistry.java:530)

              at org.infinispan.factories.ComponentRegistry.start(ComponentRegistry.java:216)

              at org.infinispan.CacheImpl.start(CacheImpl.java:675)

              at org.infinispan.manager.DefaultCacheManager.wireAndStartCache(DefaultCacheManager.java:553)

              at org.infinispan.manager.DefaultCacheManager.createCache(DefaultCacheManager.java:516)

              at org.infinispan.manager.DefaultCacheManager.getCache(DefaultCacheManager.java:398)

              at Main.main(Main.java:40)

      Caused by: org.infinispan.commons.CacheException: Initial state transfer timed out for cache cache0 on xzur0346dap-45732

              at org.infinispan.statetransfer.StateTransferManagerImpl.waitForInitialStateTransferToComplete(StateTransferManagerImpl.java:202)

              at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

              at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

              at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

              at java.lang.reflect.Method.invoke(Method.java:601)

              at org.infinispan.commons.util.ReflectionUtil.invokeAccessibly(ReflectionUtil.java:183)

              ... 10 more

      I have investigated it and found the root cause:

      In general, this exception is caused by stopping an Infinispan cache while other caches are joining. When new members join the cluster, Infinispan starts the join process, recalculates the cache topology for the other caches, collects their confirmations, and then finishes the join. But if a member leaves the cluster at exactly the moment when the last member sends its confirmation, this cycle is broken and the event that finishes the join process is never fired. As a result, when other cluster members later try to join, the coordinator never answers them: the coordinator checks whether a join/rebalance is still in progress and, if so, withholds its answer — but the earlier join process was broken and will never finish, so the client gets no response and fails with a timeout.

      The affected code is in the org.infinispan.topology.ClusterTopologyManagerImpl class, which has a race condition.

      The exception "Trying to set a topology with invalid members for cache" breaks the Infinispan lifecycle, because:

      ClusterTopologyManagerImpl code:

      private void endRebalance(String cacheName, ClusterCacheStatus cacheStatus) {
          synchronized (cacheStatus) {
            CacheTopology currentTopology = cacheStatus.getCacheTopology();
            int currentTopologyId = currentTopology.getTopologyId();
            log.debugf("Finished cluster-wide rebalance for cache %s, topology id = %d",
                  cacheName, currentTopologyId);
            int newTopologyId = currentTopologyId + 1;
            ConsistentHash newCurrentCH = currentTopology.getPendingCH();
            CacheTopology newTopology = new CacheTopology(newTopologyId, newCurrentCH, null);
            cacheStatus.updateCacheTopology(newTopology);
            cacheStatus.endRebalance();
          }
      }

      Our exception was thrown inside the call cacheStatus.updateCacheTopology(newTopology);

      therefore the next call, cacheStatus.endRebalance(), was never executed. But exactly that call is responsible for finishing the join process.
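      For context, the failing call is essentially a sanity check: every primary owner named by the new consistent hash must still be a member of the cache. A minimal stand-in for that invariant (simplified names and shapes, not Infinispan's actual implementation) looks like this:

```java
import java.util.List;
import java.util.Set;

// Simplified stand-in for the invariant that updateCacheTopology enforces:
// every primary owner referenced by the new topology must still be a member.
// If a concurrent handleLeave() has already removed one of the owners from
// the member set, this check fails with IllegalStateException.
class TopologyCheck {
    static void checkMembers(Set<String> members, List<String> primaryOwners) {
        for (String owner : primaryOwners) {
            if (!members.contains(owner)) {
                throw new IllegalStateException(
                    "Trying to set a topology with invalid members: " + owner);
            }
        }
    }
}
```

      With a consistent member set the check passes silently; with a member removed underneath it, it throws, which is exactly the failure mode above.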

      However, I see the root cause of this issue in the handleLeave method, which is not synchronized with the endRebalance method:

      ClusterTopologyManagerImpl code:

      @Override
      public void handleLeave(String cacheName, Address leaver, int viewId) throws Exception {
          if (isShuttingDown) {
            log.debugf("Ignoring leave request from %s for cache %s, the local cache manager is shutting down",
                  leaver, cacheName);
            return;
          }

          ClusterCacheStatus cacheStatus = cacheStatusMap.get(cacheName);
          if (cacheStatus == null) {
            // This can happen if we've just become coordinator
            log.tracef("Ignoring leave request from %s for cache %s because it doesn't have a cache status entry");
            return;
          }
          boolean actualLeaver = cacheStatus.removeMember(leaver);
          if (!actualLeaver)
            return;

          onCacheMembershipChange(cacheName, cacheStatus);
      }

      For example:

      Thread0: ClusterTopologyManagerImpl:line:180: handleRebalanceCompleted(...)

      Thread1: ClusterTopologyManagerImpl:line:172: handleLeave(...) > boolean actualLeaver = cacheStatus.removeMember(leaver);

      Thread0: ClusterTopologyManagerImpl:line:400: handleRebalanceCompleted(...) > cacheStatus.updateCacheTopology(newTopology);
                // Exception "Trying to set a topology with invalid members for cache" is thrown

      Thread1: ClusterTopologyManagerImpl:line:176: handleLeave(...) > onCacheMembershipChange(cacheName, cacheStatus);
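      The interleaving above can be replayed deterministically with a small self-contained model (plain Java, no Infinispan dependency; all class and method names are simplified stand-ins for the real ones). Running the four steps in the order shown leaves the status with the rebalance never finished:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of ClusterCacheStatus: holds the member list, validates
// topology updates against it, and tracks whether the rebalance finished.
class CacheStatusModel {
    final List<String> members = new ArrayList<>();
    boolean rebalanceFinished = false;

    void updateCacheTopology(List<String> primaryOwners) {
        for (String owner : primaryOwners) {
            if (!members.contains(owner)) {
                throw new IllegalStateException(
                    "Trying to set a topology with invalid members: " + owner);
            }
        }
    }

    void endRebalance() { rebalanceFinished = true; }
    boolean removeMember(String m) { return members.remove(m); }
}

class RaceReplay {
    // Replays the interleaving from the post in a single thread: the leave
    // slips in between computing the new topology and applying it.
    static CacheStatusModel replay() {
        CacheStatusModel status = new CacheStatusModel();
        status.members.add("A");
        status.members.add("B");

        // Thread0: the pending consistent hash was computed while B was
        // still a member, so it still names B as an owner.
        List<String> pendingOwners = List.of("A", "B");

        // Thread1: handleLeave() removes B before the topology is applied.
        status.removeMember("B");

        // Thread0: updateCacheTopology() now fails, so endRebalance() is
        // never reached and the join process is stuck forever.
        try {
            status.updateCacheTopology(pendingOwners);
            status.endRebalance();
        } catch (IllegalStateException e) {
            // "Trying to set a topology with invalid members ..."
        }
        return status;
    }
}
```

      After the replay, rebalanceFinished is still false — the model's analogue of the coordinator that never fires the "join finished" event.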

      If the handleLeave method contained a synchronization block:

           synchronized (cacheStatus) {
               boolean actualLeaver = cacheStatus.removeMember(leaver);
               ...
               onCacheMembershipChange(cacheName, cacheStatus);
           }

      the issue would not be reproducible, because onCacheMembershipChange would update the cacheTopology object, and the object would then contain the correct member values.
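      The effect of the proposed fix can be sketched with a small self-contained model (plain Java, simplified stand-in names, not a patch against Infinispan itself): if the leave path takes the same cacheStatus monitor that endRebalance already holds, the leave can no longer interleave between the topology computation and its application:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;

// Simplified model: both the rebalance-completion path and the leave path
// synchronize on the same status object, so they execute atomically with
// respect to each other.
class SyncedStatus {
    final List<String> members = new ArrayList<>();
    boolean rebalanceFinished = false;

    // Models endRebalance(): already synchronized on the status object.
    void endRebalance(CountDownLatch entered) throws InterruptedException {
        synchronized (this) {
            entered.countDown();   // let the leaver thread start
            Thread.sleep(50);      // widen the window the leave would need
            // Models the member check in updateCacheTopology(): the member
            // list cannot change while we hold the lock, so it passes.
            for (String owner : new ArrayList<>(members)) {
                if (!members.contains(owner)) {
                    throw new IllegalStateException("invalid members: " + owner);
                }
            }
            rebalanceFinished = true;
        }
    }

    // Models the proposed fix: handleLeave also holds the status monitor.
    void handleLeave(String leaver) {
        synchronized (this) {
            members.remove(leaver);
            // onCacheMembershipChange(...) would recompute the topology
            // here, now against a consistent member list.
        }
    }
}

class FixDemo {
    static SyncedStatus run() throws InterruptedException {
        SyncedStatus status = new SyncedStatus();
        status.members.add("A");
        status.members.add("B");
        CountDownLatch entered = new CountDownLatch(1);

        Thread leaver = new Thread(() -> {
            try {
                entered.await();          // wait until endRebalance holds the lock
                status.handleLeave("B");  // blocks until endRebalance exits
            } catch (InterruptedException ignored) { }
        });
        leaver.start();
        status.endRebalance(entered);
        leaver.join();
        return status;
    }
}
```

      Here the leave is forced to wait for the monitor, so the rebalance always finishes and the member removal is handled afterwards with a consistent view.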

      The issue is easily reproducible, and I have attached Java classes, a JGroups configuration file, and scripts. Please take a look at them.

      To reproduce it, do some preparation (compile the Main class, etc.) and simply run the ./runs.sh script.

      The script starts the required number of members with the described logic. After some time you will see the following exception on the coordinator node:

      "java.lang.IllegalStateException: Trying to set a topology with invalid members for cache"

      After that point in time, new nodes will not be able to join the cluster, because the coordinator node is broken.

      Of course, your machine has to be powerful enough to start 100 Infinispan nodes at the same time.

      I have checked this on Infinispan 7, where it is not reproducible, because exactly the necessary code paths were synchronized there.

      Dear Infinispan community, are you aware of this issue? I think it would be good to fix it in Infinispan 6 as well, because the bug is critical.

      Best Regards,

      Maksym.