Imagine a world where you have near-serializable data integrity in an in-memory, replicated caching product, all while being able to scale, perform, and handle a high degree of concurrency. Smells like optimistic locking in JBossCache? You smell well. Your nose knows. You sniff terrifically. Ok, forget the last one, that was bad.
Locking, whether for reading or writing, has been a necessary evil when dealing with concurrent access to datasets. Locks are currently implemented in a pessimistic fashion in JBossCache: data is locked, whether for reading or writing, for the entire duration of any transaction that accesses it. This can lead to scalability issues and deadlocks.
Optimistic locking assumes that locks aren't necessary for concurrent access, and deals with synchronisation and merging of concurrently accessed data at the time of committing a transaction. Some locking is still necessary, but this is only for a very short duration at the commit boundary of a transaction.
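To make the idea concrete, here is a minimal sketch of the optimistic scheme described above: each transaction works on its own copy of the data, recording the version of everything it touches, and only at commit time does it take a brief lock to validate those versions and apply its writes. The class and method names below are illustrative only, not JBossCache's actual API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of optimistic locking, not JBossCache's real API.
class OptimisticCache {
    private static final class Entry {
        final Object value;
        final long version;
        Entry(Object value, long version) { this.value = value; this.version = version; }
    }

    private final Map<String, Entry> store = new ConcurrentHashMap<>();

    // A transaction's private workspace: reads record the version seen,
    // writes are buffered locally until commit.
    final class Workspace {
        private final Map<String, Long> readVersions = new HashMap<>();
        private final Map<String, Object> writes = new HashMap<>();

        Object get(String key) {
            Entry e = store.get(key);
            readVersions.put(key, e == null ? 0L : e.version);
            return e == null ? null : e.value;
        }

        void put(String key, Object value) {
            if (!readVersions.containsKey(key)) get(key); // record the version we based this write on
            writes.put(key, value);
        }

        // Commit: lock only briefly, validate versions, then apply or abort.
        boolean commit() {
            synchronized (store) { // short lock held only at the commit boundary
                for (Map.Entry<String, Long> r : readVersions.entrySet()) {
                    Entry current = store.get(r.getKey());
                    long v = (current == null) ? 0L : current.version;
                    if (v != r.getValue()) return false; // concurrent change detected: abort
                }
                for (Map.Entry<String, Object> w : writes.entrySet()) {
                    Entry old = store.get(w.getKey());
                    long v = (old == null) ? 0L : old.version;
                    store.put(w.getKey(), new Entry(w.getValue(), v + 1));
                }
                return true;
            }
        }
    }

    Workspace begin() { return new Workspace(); }
}
```

Note that no lock is held while the transactions do their work; two concurrent workspaces can both read and buffer writes freely, and the conflict is only detected (and one transaction rolled back) when the second one tries to commit.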
The long and the short of it: you can now have a high level of data integrity while still maintaining a high degree of concurrency, leading to a very scalable caching product.
Details on obtaining, configuration, design, etc. are on the JBoss Wiki - feedback is very much appreciated!