
    Strictness of SERIALIZABLE semantics

    brian.stansberry

      Are we being too strict in our SERIALIZABLE semantics? Our semantics are largely based on SQL92. I'm too lazy to look those up right now, but the javadoc for JDBC's TRANSACTION_SERIALIZABLE seems like a reasonable proxy:

      A constant indicating that dirty reads, non-repeatable reads and phantom reads are prevented. This level includes the prohibitions in TRANSACTION_REPEATABLE_READ and further prohibits the situation where one transaction reads all rows that satisfy a WHERE condition, a second transaction inserts a row that satisfies that WHERE condition, and the first transaction rereads for the same condition, retrieving the additional "phantom" row in the second read.
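
      To make the quoted definition concrete, here is a rough JDBC sketch of the phantom read it describes. This is illustration only: the DataSource and the child table/column names are invented, not taken from any real schema.

          import java.sql.*;
          import javax.sql.DataSource;

          public class PhantomReadDemo {
              static void tx1(DataSource ds) throws SQLException {
                  Connection con = ds.getConnection();
                  con.setAutoCommit(false);
                  con.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
                  Statement stmt = con.createStatement();

                  // First read: say this returns 2 rows
                  ResultSet rs = stmt.executeQuery(
                      "SELECT * FROM child WHERE parent_id = 42");

                  // ... meanwhile tx2, on another connection, inserts a row
                  // with parent_id = 42 and commits ...

                  // Second read: under SERIALIZABLE this must NOT return the
                  // new "phantom" row; under REPEATABLE_READ it may
                  rs = stmt.executeQuery(
                      "SELECT * FROM child WHERE parent_id = 42");

                  con.commit();
              }
          }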


      I don't see anything in that definition about one transaction's read blocking another transaction's read.

      To me, a node in TreeCache is analogous to a database row. Finding out information about a node's children is analogous to querying a table with an INNER JOIN to a parent table and a WHERE clause limiting results based on a field in the parent table. A phantom read in TreeCache terms is when a transaction has read a node, and then, while the tx is alive, another transaction adds or removes children of that node (sketched below).
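
      Here is the same interleaving in TreeCache terms. This is a sketch against the 1.x TreeCache API from memory, so treat the exact signatures as approximate:

          import java.util.Set;
          import org.jboss.cache.TreeCache;

          public class TreeCachePhantom {
              static void tx1(TreeCache cache) throws Exception {
                  // Tx1 reads the parent's children -- say there are 2
                  Set children = cache.getChildrenNames("/parent");

                  // Another thread, in a different tx, adds a child and commits
                  cache.put("/parent/newChild", "key", "value");

                  // Tx1 re-reads and now sees 3 children: a phantom read
                  children = cache.getChildrenNames("/parent");
              }
          }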

      Based on this, to me SERIALIZABLE semantics would mean the current REPEATABLE_READ behavior, plus the requirement that inserting or removing a node acquires a write lock on the node's parent; see the sketch below.
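
      Something like this pseudo-code is what I have in mind for the insert path. Note that acquireWriteLock(), createChild(), getCurrentTransaction() and lockTimeout are made-up names for illustration, not the real TreeCache internals:

          // Pure pseudo-code, not the actual TreeCache API
          void addChild(Node parent, Object childName) throws LockingException {
              // Creating a child changes the parent's child set, so under
              // SERIALIZABLE take a WRITE lock on the parent rather than
              // the READ lock REPEATABLE_READ would take
              parent.acquireWriteLock(getCurrentTransaction(), lockTimeout);
              parent.createChild(childName);
          }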

      Unfortunately, I don't see how these semantics could be enforced as part of a LockStrategy impl; this would need to be managed by the code that adds and removes nodes.