I assume you are maintaining this information persistently, i.e., every time you update your timestamp, that update is reflected back into the datastore where the persistent information it "protects" is maintained?
I am using a version field, which is just an int, and a CMP field. When I try to modify an entity, I check the version field of the incoming DTO (every entity has a corresponding DTO, and modify operations are bulk operations) against the entity's existing version field. If there is a mismatch, I throw an OptimisticLockingFailureException; otherwise I increase the entity's version by one. A user has to get a DTO from me in order to update. When he gets the DTO, I give him the existing version, and when he submits the DTO, I check whether the one he got has already been modified by another user. That's simple. Now, shouldn't the container give me consistency?
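The check described above can be sketched roughly like this (all class and field names are my own invention, and the exception is defined inline rather than taken from any framework):

```java
// Illustrative sketch of the DTO version check; all names are assumed.
class OptimisticLockingFailureException extends RuntimeException {
    OptimisticLockingFailureException(String msg) { super(msg); }
}

class AccountDTO {
    final int version;          // version handed to the client on read
    AccountDTO(int version) { this.version = version; }
}

class AccountEntity {
    private int version = 0;    // the int version field on the entity

    int getVersion() { return version; }

    // The client receives a DTO carrying the entity's current version.
    AccountDTO toDTO() { return new AccountDTO(version); }

    // On update, compare the DTO's version against the entity's;
    // a mismatch means another user committed a change in between.
    void modify(AccountDTO dto) {
        if (dto.version != version) {
            throw new OptimisticLockingFailureException(
                "expected version " + version + ", got " + dto.version);
        }
        version++;              // bump the version on success
    }
}
```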
How can the container give you consistency? It knows nothing about the semantics of your data. Even if you were using traditional two-phase locking, with read and write locks, you (or the backend database) would be responsible for setting the type of lock required. With SQL it's fairly straightforward to know whether or not a read lock is needed, for example.
However, it sounds like what you are doing (throwing an exception when a mismatch is found) is going in the right direction. You just need to make sure that the version information is always in the db, so that you get consistency across multiple VMs. The most efficient way of achieving this (at least in terms of db access, which will always be the bottleneck) is to use a volatile (in-memory) timestamp that is a copy of the timestamp on the data field when it is first read, i.e., at the start of the transaction. You need to update this and reflect it back to the db, but only at the very end of the transaction (within the 2PC if possible), and the timestamp should be the first thing read (checked) and then written to the db, assuming the check succeeds. The volatile timestamp is sufficient to guarantee consistency/isolation within the VM; the durable representation is what gives you inter-VM isolation.
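A minimal sketch of that check-at-commit pattern, with invented names (the `durableTimestamp` field here just stands in for the timestamp column in the db):

```java
// Sketch of the check-at-commit pattern described above; names and the
// in-memory stand-in for the database column are assumptions.
class TimestampedRecord {
    long durableTimestamp;        // stands in for the db timestamp column
    private long readTimestamp;   // volatile in-memory copy, taken at read

    // At the start of the transaction: copy the timestamp when the
    // data is first read. No further db access until commit.
    void begin() {
        readTimestamp = durableTimestamp;
    }

    // At the very end of the transaction (inside the 2PC if possible):
    // re-check the durable timestamp first, and only then write the
    // new one back.
    void commit(long newTimestamp) {
        if (durableTimestamp != readTimestamp) {
            throw new IllegalStateException(
                "record modified by another VM since it was read");
        }
        durableTimestamp = newTimestamp;
    }
}
```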
Thank you for your suggestion, but I don't see the point of using a timestamp instead of a simple int. I am increasing the int by one each time an update is performed on the data.
Now, my plan is like this. I would like to use commit option B, so before each business method is performed, the version (int) will be loaded from the database; that way I get a fresh copy of the real version that is in the db. I then compare that version with the version of the incoming DTO I get from the client for the update. (The client got the DTO from me for modification, and I supplied the version the data had at that time.) If the versions match, I increase the version by one; otherwise, I throw an exception. After checking and modifying the version, I perform the other business operations and update the other data. As far as I understand, the container will try to commit the whole thing at the end of the transaction. Now, I want my isolation level to be READ_COMMITTED, but I also want the container to throw an exception if the record was modified by another process. Is that possible to achieve in JBoss? The EJB Design Patterns book refers to this scenario as "READ_COMMITTED with optimistic locking".
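At READ_COMMITTED, the usual way to make the database itself detect the conflict is a conditional update along the lines of `UPDATE account SET ..., version = version + 1 WHERE id = ? AND version = ?`, checking the affected-row count. Here is a self-contained sketch of those semantics; the table is simulated with a map and all names are invented:

```java
import java.util.HashMap;
import java.util.Map;

// Simulates the semantics of the conditional update
//   UPDATE account SET balance = ?, version = version + 1
//   WHERE id = ? AND version = ?
// An affected-row count of zero means another transaction won the race.
class VersionedTable {
    static class Row {
        int version;
        int balance;
    }

    final Map<Integer, Row> rows = new HashMap<>();

    void insert(int id) {
        rows.put(id, new Row());
    }

    // Returns true iff the update applied, i.e. the version matched.
    boolean updateIfVersionMatches(int id, int expectedVersion, int newBalance) {
        Row r = rows.get(id);
        if (r == null || r.version != expectedVersion) {
            return false;       // conflict: caller should throw / roll back
        }
        r.balance = newBalance;
        r.version++;            // increment atomically with the write
        return true;
    }
}
```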
Hope I clarified the point.
Why use a timestamp over an integer? What are you going to do once the integer wraps? Can you guarantee that there isn't some really slow service out there that has a copy of the data and an integer value of 1 from the first time the integer was 1? I know it's unlikely, but it's possible. Where transactions and data integrity are concerned, you shouldn't take chances.
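The wraparound concern is concrete: Java's int overflows silently, so after 2^32 increments a version counter returns to values it has held before, and a sufficiently stale copy would pass the equality check. A 64-bit timestamp (or a long counter) makes that practically impossible:

```java
// Demonstrates silent int overflow: incrementing past MAX_VALUE wraps
// around, so an int version counter eventually repeats earlier values.
class VersionWrap {
    static int bump(int version) {
        return version + 1;     // Java raises no error on int overflow
    }
}
```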
How many servers will be offering access to this information?