I'm afraid our partition handling doesn't work well with just 2 nodes. There is an implicit assumption that you have at least 3 nodes, so that a majority (50% + 1) of the nodes is always running (except when you really do have a partition).
From an implementation perspective, the running node enters DEGRADED_MODE because we don't differentiate enough between a clean shutdown and a crash.
But even from a theoretical perspective, preserving consistency requires a majority of nodes to be available. If you have only 2 nodes, you can stop one node, isolate it from the other, then start it again, and both nodes will be available for writing. Even bigger clusters are vulnerable here, because the starting node doesn't know what the majority should be. (Only since 8.2.Beta2 can you prevent this with the new …
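To make the majority argument concrete, here is a minimal, self-contained sketch (the helper names are mine, not Infinispan's API) of the quorum rule and of why a freshly started node defeats it:

```java
// Quorum math sketch: a partition may keep accepting writes only if it
// holds a strict majority of the last known cluster size.
public class QuorumSketch {

    static int majority(int clusterSize) {
        return clusterSize / 2 + 1;
    }

    static boolean canWrite(int partitionSize, int lastKnownClusterSize) {
        return partitionSize >= majority(lastKnownClusterSize);
    }

    public static void main(String[] args) {
        // 3-node cluster: the 2-node side keeps writing, the 1-node side degrades.
        System.out.println(canWrite(2, 3)); // true
        System.out.println(canWrite(1, 3)); // false

        // 2-node cluster: neither side of a split has a majority...
        System.out.println(canWrite(1, 2)); // false

        // ...but a freshly started node only knows about itself, so it
        // wrongly concludes it holds a majority of a 1-node "cluster".
        System.out.println(canWrite(1, 1)); // true
    }
}
```

The last case is the vulnerability: the restarted node computes its quorum against the wrong cluster size, so both sides of the split accept writes.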
Please create an issue in JIRA if my theory hasn't convinced you, and maybe you'll convince us to ignore the majority rule for clean shutdowns.
Dan, I think it makes perfect sense that when a node leaves voluntarily (as opposed to an abrupt termination of the network connection), the cluster should not enter the degraded state.
I think the troublesome part could be even the current 'clean shutdown'. IIRC, when a node shuts down in stop(), it just says 'Bye, I am leaving', but it does not wait for the rebalance to finish completely (and if another node crashes during the rebalance, data can be lost instead of being served from the leaving-but-not-yet-left node). So a proper clean shutdown should install a topology with a writeCH without the leaving node but a readCH that still includes it, and the leaving node should not actually leave until the next topology, whose readCH no longer contains it. Then the fact that the node left should not make the cluster enter the degraded state. Maybe I was technically incorrect somewhere above (I can't recall how the available-nodes set in the topology is used), but I think the gist is clear.
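A minimal sketch of that two-phase leave, modeling topologies as plain owner sets (Topology, readOwners and writeOwners are illustrative names, not Infinispan's real topology API):

```java
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

// Two-phase clean shutdown sketch:
//  Phase 1: the leaver is dropped from writeCH but kept in readCH, so its
//           data stays readable while the rebalance copies it elsewhere.
//  Phase 2: once the rebalance finishes, the leaver is dropped from readCH
//           too, and only then may it actually stop.
public class CleanShutdownSketch {

    static class Topology {
        final Set<String> readOwners;  // nodes that may serve reads
        final Set<String> writeOwners; // nodes that receive writes

        Topology(Set<String> readOwners, Set<String> writeOwners) {
            this.readOwners = readOwners;
            this.writeOwners = writeOwners;
        }
    }

    static List<Topology> shutdownSequence(Set<String> members, String leaver) {
        Set<String> remaining = new TreeSet<>(members);
        remaining.remove(leaver);
        Topology phase1 = new Topology(new TreeSet<>(members), remaining);
        Topology phase2 = new Topology(remaining, remaining);
        return List.of(phase1, phase2);
    }

    public static void main(String[] args) {
        Set<String> members = new TreeSet<>(Set.of("A", "B", "C"));
        for (Topology t : shutdownSequence(members, "C")) {
            System.out.println("read=" + t.readOwners + " write=" + t.writeOwners);
        }
    }
}
```

The key invariant is that at every point some topology still lists the leaver as a read owner until its data has been rebalanced, so a crash of another node mid-rebalance does not lose data.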
That's good to know: keep partition handling disabled for 2-node clusters, unless DEGRADED_MODE is desired or can be tolerated.
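For reference, in the declarative configuration this is controlled per cache; roughly like this in the Infinispan 8.x schema (check your version's XSD for the exact attributes, and note that partition handling is disabled by default anyway):

```xml
<distributed-cache name="twoNodeCache">
   <!-- leave partition handling off for a 2-node cluster -->
   <partition-handling enabled="false"/>
</distributed-cache>
```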