I have read the paper describing the PVM, and I am wondering about two things:
1. Since the PVM is presented as a workflow model (and is even compared to the underlying relational model of an RDBMS), why bother with performance considerations, object-to-RDBMS mapping, application-level caching, optimistic locking, and so on at that level? Can't we design a robust, well-defined model that implementations then realize as well as they can? Reading the paper, it seems the model was designed the other way around: the PVM tries its best to hide an existing implementation base.
2. The implementation needs persistence for fault tolerance. Right. But I am not sure an RDBMS is the best way to achieve this. Mapping a complete object graph into tables (and back) requires a significant amount of work: whatever the implementation (Oracle or MySQL), it algorithmically requires a minimum number of steps. For the purpose of fault tolerance, why not use a prevalence system such as Prevayler or Space4J? We might keep using a database for its real purpose: querying. But for fault tolerance, we should at least have a look at prevalence systems. Since prevalence systems come with their own constraints, we would have to measure the average number of business objects and the number of operations that occur in real PVM applications.
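To make the prevalence argument concrete, here is a minimal sketch of the idea in plain Java. The class and method names (`ProcessSystem`, `SetState`, `Prevalence`) are hypothetical and do not reflect Prevayler's or Space4J's actual APIs; the point is only that fault tolerance comes from journaling serialized commands against an in-memory object graph, with no object-to-table mapping step at all:

```java
import java.io.*;
import java.util.*;

// A command is a serializable, deterministic mutation of the system.
interface Command extends Serializable {
    void executeOn(ProcessSystem system);
}

// The "prevalent system": the complete object graph, kept in memory.
// (Hypothetical stand-in for a PVM execution's state.)
class ProcessSystem implements Serializable {
    final Map<String, String> processStates = new HashMap<>();
}

// Example command: move one process execution to a new state.
class SetState implements Command {
    final String processId, state;
    SetState(String processId, String state) {
        this.processId = processId;
        this.state = state;
    }
    public void executeOn(ProcessSystem system) {
        system.processStates.put(processId, state);
    }
}

class Prevalence {
    private final ProcessSystem system = new ProcessSystem();
    // Stands in for an append-only journal file on disk.
    private final List<byte[]> journal = new ArrayList<>();

    // Every mutation is serialized to the journal before being applied.
    void execute(Command command) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            new ObjectOutputStream(bytes).writeObject(command);
            journal.add(bytes.toByteArray());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        command.executeOn(system);
    }

    // Recovery after a crash: replay the journal against a fresh system.
    ProcessSystem recover() {
        ProcessSystem recovered = new ProcessSystem();
        try {
            for (byte[] entry : journal) {
                Command command = (Command) new ObjectInputStream(
                        new ByteArrayInputStream(entry)).readObject();
                command.executeOn(recovered);
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
        return recovered;
    }
}
```

Note how the trade-off shows up directly in the code: writes are a single serialize-and-append, but the whole graph must fit in memory, and recovery time grows with journal length (real prevalence systems bound this with periodic snapshots). That is exactly why we would need to measure object counts and operation rates in real PVM applications before committing to this approach.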