I'm just a newbie in this jBPM world but have run into problems trying to develop something similar (a workflow with MDB action consumers, where the tasks were modelled as JPA entities in an EJB3 context).
I didn't find a way to perform pessimistic locking on the entities (my impression is that it's not supported), and tests with concurrent threads led me to very odd results. As for conflict detection with optimistic locking, I also think table versioning only detects problems at commit time, when the version number mismatch can be seen.
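To illustrate that point, here is a toy sketch in plain Java (not jBPM or JPA code, all names made up): both "transactions" read version 0, and nothing fails until the second one tries to write its stale version back.

```java
// Minimal sketch of version-column optimistic locking: conflicts only
// surface at commit, when the stored version no longer matches the one
// the transaction read at load time.
class VersionCheckDemo {
    static int storedVersion = 0;   // stands in for the VERSION column

    // returns true if the commit went through, false on version mismatch
    static boolean commit(int versionReadAtLoadTime) {
        if (storedVersion != versionReadAtLoadTime) return false;
        storedVersion++;
        return true;
    }

    public static void main(String[] args) {
        int tx1 = storedVersion;    // both transactions load the entity
        int tx2 = storedVersion;    // concurrently, seeing version 0
        System.out.println(commit(tx1)); // true : first writer wins
        System.out.println(commit(tx2)); // false: stale version detected,
                                         // but only now, at commit time
    }
}
```

Until that final write, both threads happily work on their own copies, which would explain the odd results with concurrent threads.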
Another weird thing is Hypersonic's READ_UNCOMMITTED transaction isolation level mixed with optimistic locking. Switching to an RDBMS with real transaction isolation leads to quite different results when using optimistic locking (since the chances of holding an obsolete version of an entity are bigger than with Hypersonic).
Hope it helps.
ok. i think i'm starting to see the problem.
in case of a join, if 2 txs start around the same time, each sees the other as still active. then both of them will wait in the join, and the parent will not be propagated. both tokens only update themselves, so no hibernate optimistic locking conflict is detected.
afaict, the current implementation will operate correctly with isolation levels TRANSACTION_REPEATABLE_READ and TRANSACTION_SERIALIZABLE, which are highly uncommon.
the solution would be to modify the parent token each time a token enters a join. then modifications to the parent will cause concurrent updates to be detected.
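a toy sketch in plain java (not jbpm classes, names made up) of why the race is invisible today and how touching the parent exposes it. a "commit" succeeds only if the row's version is still the one the transaction read, mimicking hibernate's version column:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the join race and of the proposed fix: without the fix,
// each tx updates only its own child token and both commits succeed;
// with the fix, both txs also modify the shared parent token, so the
// second commit fails the version check and the conflict is detected.
class JoinRaceDemo {
    static Map<String, Integer> versions = new HashMap<>();

    static boolean commit(String row, int versionRead) {
        if (versions.get(row) != versionRead) return false; // stale read
        versions.put(row, versionRead + 1);
        return true;
    }

    public static void main(String[] args) {
        versions.put("parent", 0);
        versions.put("childA", 0);
        versions.put("childB", 0);

        // without the fix: each tx touches only its own child token,
        // so both commits succeed and the race goes undetected
        boolean tx1 = commit("childA", versions.get("childA"));
        boolean tx2 = commit("childB", versions.get("childB"));
        System.out.println(tx1 && tx2); // true: no conflict, join hangs

        // with the fix: both txs also write the parent token, using the
        // parent version each of them read when it entered the join
        int readByTx1 = versions.get("parent");
        int readByTx2 = versions.get("parent"); // both still see 0
        System.out.println(commit("parent", readByTx1)); // true
        System.out.println(commit("parent", readByTx2)); // false: caught
    }
}
```

once the second tx fails, it can be retried and will then see that the other token already arrived, so the join fires.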
In MySQL, the parent token is not propagated under READ UNCOMMITTED, READ COMMITTED, or REPEATABLE READ. I did not try SERIALIZABLE, but as mteira mentions, the higher the isolation level, the more likely each thread is to read stale data, causing the join to fail.
I think the problem lies in not acquiring a write lock, from the beginning, on a token that is loaded for update. This would prevent other transactions from reading it on databases that support the READ COMMITTED level.
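To illustrate the semantics I mean, here is a toy model in plain Java (not Hibernate or jBPM code, names made up). The `ReentrantLock` plays the role of a row lock taken by `SELECT ... FOR UPDATE`, so the second transaction blocks until the first one commits instead of proceeding on a stale copy:

```java
import java.util.concurrent.locks.ReentrantLock;

// Toy model of a pessimistic row lock: whoever loads the parent token
// "for update" holds the row until commit; the other transaction blocks
// instead of working on a stale copy, so no update can be lost.
class RowLockDemo {
    static final ReentrantLock parentTokenRow = new ReentrantLock();
    static int arrivedChildren = 0; // state both transactions must update

    static void signalJoin() {
        parentTokenRow.lock();       // SELECT ... FOR UPDATE on the parent
        try {
            arrivedChildren++;       // read-modify-write, now race-free
        } finally {
            parentTokenRow.unlock(); // COMMIT releases the row lock
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread tx1 = new Thread(RowLockDemo::signalJoin);
        Thread tx2 = new Thread(RowLockDemo::signalJoin);
        tx1.start(); tx2.start();
        tx1.join(); tx2.join();
        System.out.println(arrivedChildren); // 2: both arrivals observed
    }
}
```

With optimistic locking, by contrast, neither transaction blocks and the conflict only shows up (if at all) at commit time.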
I didn't find a way to do that using JPA and entities (the only way to lock an entity seems to be the EntityManager.lock() method, and that just plays with the entity's version; worse, acquiring the entity via EntityManager.find() and locking it are two separate calls, so we cannot do it in an atomic fashion).
I expect that, using Hibernate directly, there should be a way to acquire a persistent object such that the DB query maps to a 'SELECT ... FOR UPDATE' clause or equivalent. But is it still correct to call this 'optimistic locking'?
Hibernate does offer a way to load an object for update:
Session.load(class, id, LockMode.UPGRADE)
This is certainly not optimistic control but pessimistic locking. Tom's solution should do the trick for databases where optimistic control is the only viable approach (e.g. Hypersonic).
My alternative aims at environments where optimistic control is unfavorable due to a high probability of concurrent updates.
session.load(class, id, LockMode.UPGRADE) or session.lock(processInstance, LockMode.UPGRADE) issues a "SELECT ... FOR UPDATE"; on databases that support it, this results in a pessimistic lock.
We already expose the lock in the JbpmContext (or in the persistence service, i don't remember). So users can add pessimistic locks themselves whenever they want to.
But I want to work out a solution/workaround/fix that works by only using optimistic locking.