No; reads never block unless you use the FORCE_WRITE_LOCK flag (as when you call cache.getAdvancedCache().withFlags(FORCE_WRITE_LOCK).get(key)). Could you set up a test case demonstrating this behaviour?
Thanks for the answer. But I didn't mean that the read blocks. I meant: why does the read (calling get on the cache) return null?
So: does a lock/write on a key result in null when get is called on that key at the same time? Shouldn't it return the earlier value from before the lock was obtained?!
Or is the problem that the key is stored on another node AND locked, so that node doesn't deliver the result?
Does a read return null if the entry is on another node, not locally available, and locked at that moment by another write process??
That's my problem/question. I think it's hard to set up a test case with multiple nodes demonstrating this.
Ok, I THINK I found the problem! We have the distributed cache configured a bit strangely.
We are using a distributed cache with 6 nodes and numOwners=6.
So no remote lookup.
So every entry gets distributed over all nodes. But if a write is started on one node, a lock is activated, and then I get null when I try a get on that key.
Does it check locally and find null locally? Why is the previous value not kept, or is the object removed from the cache on the other nodes when I start a lock on one node??
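For reference, a setup like the one described, where numOwners equals the cluster size so every node owns a full copy, might be declared roughly like this (a sketch only: the cache name is a placeholder, and element/attribute names follow a recent Infinispan XML schema and vary between versions):

```xml
<!-- Sketch: every entry is owned by all 6 nodes, so no remote lookup is needed. -->
<!-- "myCache" is a placeholder name; check the schema of your Infinispan version. -->
<cache-container>
    <distributed-cache name="myCache" mode="SYNC" owners="6"/>
</cache-container>
```

With owners equal to the number of nodes this is effectively a replicated cache, so every get() should be answerable locally.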
Sorry, I was really answering something a bit different from what you asked.
get()s are not affected by the SKIP_REMOTE_LOOKUP flag; it wouldn't make much sense to apply it to them. SKIP_REMOTE_LOOKUP only affects the value returned by put(). So, get() should always return the old value.
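To make the flag's effect concrete, here is a plain-Java model (not real Infinispan code, since Infinispan isn't available here; class and method names are made up) of the documented behaviour: put() normally returns the previous value, which in distribution mode may require a remote fetch, while SKIP_REMOTE_LOOKUP skips that fetch and makes the return value unusable (null), and get() is unaffected:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy model of SKIP_REMOTE_LOOKUP semantics (illustrative, not Infinispan internals).
class FlagModelCache {
    private final Map<String, String> store = new ConcurrentHashMap<>();

    // Normal put(): returns the previous value (which may need a remote lookup).
    String put(String key, String value) {
        return store.put(key, value);
    }

    // put() with SKIP_REMOTE_LOOKUP: the old value is not fetched,
    // so the return value is null even if an old value existed.
    String putSkipRemoteLookup(String key, String value) {
        store.put(key, value);
        return null;
    }

    // get() is never affected by the flag: it still returns the current value.
    String get(String key) {
        return store.get(key);
    }
}
```

The point of the model is only the contract of the return values: trading the usefulness of put()'s return value for fewer remote calls.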
My guess is that the transaction that was writing the entry was either not yet completed (committed) or was rolled back. With transactions, the entry is only really written into the cache when the tx commits; before that, all reads return the old value (null if it was not written yet).
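That commit-time visibility rule can be sketched in plain Java (a model of the semantics, not Infinispan's transaction machinery; all names are made up): the writing transaction stages its change in a private buffer, and other readers keep seeing the committed value, null if the key was never committed, until commit() publishes it:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy model of transactional write visibility (illustrative only).
class TxModelCache {
    private final Map<String, String> committed = new ConcurrentHashMap<>();

    class Tx {
        private final Map<String, String> writes = new HashMap<>();

        void put(String key, String value) {
            writes.put(key, value);   // staged: invisible to other readers
        }

        void commit() {
            committed.putAll(writes); // only now do the writes become visible
            writes.clear();
        }

        void rollback() {
            writes.clear();           // staged writes are discarded
        }
    }

    Tx begin() { return new Tx(); }

    // Readers outside the transaction see only committed state.
    String get(String key) {
        return committed.get(key);
    }
}
```

So a concurrent get() returning null is exactly what you would see if the writing tx had not committed yet (or rolled back) and the key had never been committed before.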
Ok, thank you. So if one write was successfully committed on one node, get should definitely return the (old) value when it's called later, even if the key is locked for writing at the same time.
I'm wondering because from the log statement saying "starting method" to "key was null" it's exactly 20 milliseconds. How is Infinispan supposed to check 6 nodes for the value in 20 milliseconds?
It's even less than 20 ms, since there are some other null checks and even one get on another cache in between.
To me it looks like it only checks locally, and there it's null for some reason. My guess at the moment is that when I start a lock for a key on one node in distributed mode, the key gets deleted from the other nodes for some reason and is only available on that node...
...and since the remote lookup does not seem to be done...
But that's exactly the question, whether it really works like this. I'll check in more detail and see if I can find out what's going on.
public static final Flag SKIP_REMOTE_LOOKUP
When used with distributed cache mode, will prevent retrieving a remote value either when executing a get() or exists(), or to provide an overwritten return value for a put() or remove(). This would render return values for some operations (such as BasicCache.remove(Object)) unusable, in exchange for the performance gains of reducing remote calls.
Note that if you want to ignore the return value of put() and you have configured a cache store you should also use the
But why is the key removed locally?!