This came up while implementing handling custom data versions with MVCC in 3.0.0.
We currently allow the setting of custom data versions with optimistic locking via the Option API:
cache.getInvocationContext().getOptionOverrides().setDataVersion(customDataVersion);
cache.put("/a/b/c", "k", "v");
Now this is fundamentally broken: only a single custom data version is passed in, and it is assumed to apply to /a/b/c when the transaction commits. This assumption fails if, for example, /a/b does not exist in the cache and needs to be created as well: /a/b is then created with a default data version, which may not be the intention.
Worse, assume /a/b does exist and also carries a custom data version. If lockParentForInsertRemove is set, we would expect the parent's version to be incremented as well. But again, only a single data version is passed in, and it is for the leaf node. What happens in this case is that the parent's version is *not* incremented, breaking lockParentForInsertRemove semantics.
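To make the failure mode concrete, here is a minimal, self-contained sketch of the current behavior. These are illustrative stand-ins (plain Strings and Integers), not JBoss Cache classes:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model only - not JBoss Cache code. Versions are modeled
// as plain Integers keyed by Fqn strings.
class Sketch {
    // Each node's current version.
    static Map<String, Integer> versions = new HashMap<>();

    // Models the current behavior: the single custom version is applied
    // only to the target node. A missing parent is created with a default
    // version, and an existing parent's version is never incremented,
    // even when lockParentForInsertRemove says it should be.
    static void put(String fqn, Integer customVersion) {
        String parent = fqn.substring(0, fqn.lastIndexOf('/'));
        versions.putIfAbsent(parent, 0); // default version if parent is new
        versions.put(fqn, customVersion); // custom version hits the leaf only
    }

    public static void main(String[] args) {
        versions.put("/a/b", 7); // parent already has a custom version
        put("/a/b/c", 42);       // single custom version, meant for the leaf
        System.out.println(versions.get("/a/b"));   // parent stays at 7: not bumped
        System.out.println(versions.get("/a/b/c")); // leaf gets 42
    }
}
```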
I think the problem is really a conceptual one: you can't expect to pass in a single custom data version and have it make sense on an API call that modifies more than one node. I know this API is important for certain use cases (the Hibernate 2nd level cache), so perhaps what we need is a richer API, along these lines:
Map<Fqn, DataVersion> customVersions = new HashMap<Fqn, DataVersion>();
customVersions.put(Fqn.fromString("/a/b"), customVersion1);
customVersions.put(Fqn.fromString("/a/b/c"), customVersion2);
cache.getInvocationContext().getOptionOverrides().setDataVersion(customVersions);
cache.put("/a/b/c", "k", "v");
And with this, we can tell precisely which version applies to which node. Nodes not mentioned in the customVersions map would default to DefaultDataVersion upon creation, or trigger an exception if we attempt to increment a node that carries a custom version but was not given one in the map.
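The resolution rule just described can be sketched as a small, self-contained model. Again, these are illustrative types (Strings, Integers, booleans), not the actual JBoss Cache API:

```java
import java.util.Map;

// Illustrative model of the proposed per-Fqn version resolution rule;
// not JBoss Cache classes.
class ProposedResolution {
    static final int DEFAULT_VERSION = 0; // stand-in for DefaultDataVersion

    // existing: fqn -> true if the node currently carries a custom version
    // supplied: the customVersions map passed in with the call
    // Returns the version to apply, or throws if a node with a custom
    // version is touched without a version being supplied for it.
    static int resolve(String fqn, Map<String, Boolean> existing,
                       Map<String, Integer> supplied) {
        Integer v = supplied.get(fqn);
        if (v != null) return v; // explicitly supplied for this node
        if (Boolean.TRUE.equals(existing.get(fqn)))
            throw new IllegalStateException(
                "Node " + fqn + " has a custom data version but none was supplied");
        return DEFAULT_VERSION; // new or default-versioned nodes
    }

    public static void main(String[] args) {
        Map<String, Boolean> existing = Map.of("/a/b", true);
        Map<String, Integer> supplied = Map.of("/a/b", 7, "/a/b/c", 42);
        System.out.println(resolve("/a/b", existing, supplied));   // 7
        System.out.println(resolve("/a/b/c", existing, supplied)); // 42
        System.out.println(resolve("/a", existing, supplied));     // 0 (default)
    }
}
```

The key design point is the exception branch: silently falling back to a default for a node that already has a custom version would reintroduce exactly the lockParentForInsertRemove breakage described above.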
What do you think? Yes, I know, a more cumbersome API, but more correct IMO.