You're missing the fact that entries are not stored on the node where the put operation is performed, but on the node determined by the consistent hash algorithm. Therefore, when you put something from server B it probably got stored on server A, and that's why both nodes find it, and node A keeps seeing it even when node B leaves the cluster. If you need to keep data local to the node, use the KeyAffinityService.
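To make the ownership behavior above concrete, here is a minimal, self-contained sketch of consistent hashing. This is an illustration of the general technique, not Infinispan's actual implementation; the node and key names (`nodeA`, `nodeB`, `key1`) are made up for the example. Each node occupies a point on a hash ring, and a key is owned by the first node at or after the key's hash, regardless of which node performed the put.

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class ConsistentHashSketch {
    // Map from a node's position on the hash ring to the node name.
    private final SortedMap<Integer, String> ring = new TreeMap<>();

    public void addNode(String node) {
        ring.put(hash(node), node);
    }

    public void removeNode(String node) {
        ring.remove(hash(node));
    }

    // The owner of a key is the first node at or after the key's
    // position on the ring, wrapping around to the start if needed.
    public String ownerOf(String key) {
        SortedMap<Integer, String> tail = ring.tailMap(hash(key));
        int point = tail.isEmpty() ? ring.firstKey() : tail.firstKey();
        return ring.get(point);
    }

    private static int hash(String s) {
        // Spread the bits of hashCode a little and keep the value
        // non-negative; a real implementation would use a stronger
        // hash function such as MurmurHash.
        int h = s.hashCode();
        return (h ^ (h >>> 16)) & 0x7fffffff;
    }

    public static void main(String[] args) {
        ConsistentHashSketch ch = new ConsistentHashSketch();
        ch.addNode("nodeA");
        ch.addNode("nodeB");
        // A put performed on nodeB may still be owned by nodeA,
        // because ownership depends only on the key's hash.
        System.out.println("owner of key1: " + ch.ownerOf("key1"));
    }
}
```

This also shows why, without replication, keys owned by a departing node become unreachable: removing a node from the ring reassigns its keys' positions to the next node, but the entries themselves were stored only on the node that left.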
This is interesting. I think the KeyAffinityService is overkill for me; I just need the key to be the resource id.
If you think my usage in the code and the attached JGroups file supports my requirements (i.e. no replication is done, and any given key is stored on only one node), then I'm good.
If the above is correct, then if I put many keys from node B, should I expect some of them to be missing on node A after I take server B down?