Hi Darrell, with regards to your questions:
Is this expected behavior?
It's a little bit different from what you described. A key/value pair can exist on more than one node. Invalidation mode is optimized for read operations: if you subsequently read X from every node, X ends up cached on every node, but only until one of the nodes writes it. A write operation invalidates X on all the other nodes; once those nodes read X again, they hold the value again.
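The read/write/invalidate cycle above can be sketched with a small toy model (a simulation for illustration only; the class, method names, and the shared backing store here are hypothetical, not the real cache API):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of invalidation mode: each node keeps a local map, a read fills
// the local map from a shared backing store, and a write on one node updates
// the store and removes ("invalidates") the key on every other node.
public class InvalidationDemo {
    static final Map<String, String> store = new HashMap<>(); // shared backing store
    static final List<Map<String, String>> nodes =
            List.of(new HashMap<>(), new HashMap<>(), new HashMap<>());

    // Read on node i: use the local cached copy if present, else load from the store.
    static String read(int i, String key) {
        return nodes.get(i).computeIfAbsent(key, store::get);
    }

    // Write on node i: update the store and the local copy, then invalidate
    // the key on all other nodes (they will re-read it on demand).
    static void write(int i, String key, String value) {
        store.put(key, value);
        nodes.get(i).put(key, value);
        for (int n = 0; n < nodes.size(); n++) {
            if (n != i) nodes.get(n).remove(key);
        }
    }
}
```

After a write on node 0, nodes 1 and 2 no longer hold X locally, but their next read repopulates them from the store.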
Is there any way to turn it off?
AFAIK no, this is the expected behavior of invalidation mode.
If not, then it seems the only way for multiple nodes in a cluster to hold X in their caches is to use replication?
If you really do not want to use invalidation, and not even replication mode (where every key is stored on all nodes), you can try distribution mode with the L1 cache enabled.
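If this is Infinispan (which `putForExternalRead()` suggests), a distributed cache with L1 enabled can be set up programmatically along these lines (a configuration sketch only; check the builder API of your version):

```java
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

// Distribution mode: each key is owned by a subset of nodes. With L1 enabled,
// a node that reads a key it does not own keeps a short-lived local copy,
// so repeated reads stay local while ownership stays distributed.
Configuration cfg = new ConfigurationBuilder()
    .clustering().cacheMode(CacheMode.DIST_SYNC)
    .l1().enable()
    .build();
```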
I found the answer. The trick is to use cache.putForExternalRead() instead of put(). This puts entries into the cache without sending invalidation messages to the rest of the cluster, which is exactly the way our ORM wants to use the cache. So now we can use invalidation caches for tables that change infrequently but are read heavily.
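The difference between the two calls can be sketched with a two-node toy model (again a hypothetical simulation, not the real cache API): put() broadcasts an invalidation so every other node drops its copy, while putForExternalRead() only fills the local cache and sends no cluster traffic.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy contrast between put() and putForExternalRead() on a two-node cluster.
public class ExternalReadDemo {
    static final List<Map<String, String>> nodes =
            List.of(new HashMap<>(), new HashMap<>());

    // put(): store locally AND invalidate the key on every other node.
    static void put(int i, String key, String value) {
        nodes.get(i).put(key, value);
        for (int n = 0; n < nodes.size(); n++) {
            if (n != i) nodes.get(n).remove(key); // invalidation message
        }
    }

    // putForExternalRead(): store locally only; no invalidation is sent,
    // so copies cached on other nodes survive.
    static void putForExternalRead(int i, String key, String value) {
        nodes.get(i).put(key, value);
    }
}
```

With putForExternalRead(), both nodes can cache X at the same time; a later put() on either node evicts the other's copy.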
I should also say thanks to C2B2 consulting for pointing us in this direction .... !