Version 1

    Some additional musings, ramblings and general brain dumping on the student project 'memory-resident transactional objects', the high-level brief for which is at


    This project will investigate the ways that transactional objects can be used to provide more than just a wrapper around a traditional database.


    'Relational databases are the de facto store for corporate data' : The traditional JEE model for manipulating transactional state is to push much of the work onto an RDBMS. EJBs, specifically entity beans, are essentially a way of doing short-term (transaction-scoped) caching and manipulation of database state in memory using an object-oriented model. This has performance drawbacks, since it involves disk I/O. The only transaction type possible is ACID, since that's all that databases know. Most use locking or MVCC internally; some ORM frameworks layer optimistic concurrency control on top. ref EJB3 spec, docs.
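The optimistic concurrency control that ORM frameworks layer over the database can be sketched in plain Java: each record carries a version number, and a commit succeeds only if the version is unchanged since the record was read. The class and field names below are illustrative, not any particular ORM's API.

```java
import java.util.concurrent.atomic.AtomicReference;

class VersionedAccount {
    // immutable (version, balance) snapshot, swapped atomically on commit
    static final class State {
        final long version, balance;
        State(long version, long balance) { this.version = version; this.balance = balance; }
    }

    private final AtomicReference<State> current =
            new AtomicReference<>(new State(0, 100));

    State read() { return current.get(); }

    // Succeeds only if no concurrent writer bumped the version in between;
    // an ORM would express the same check as UPDATE ... WHERE version = ?
    boolean commit(State readSnapshot, long newBalance) {
        return current.compareAndSet(readSnapshot,
                new State(readSnapshot.version + 1, newBalance));
    }
}
```

A stale writer simply has its commit rejected and must re-read, rather than blocking on a lock.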


    'Memory is the new disk' : Many high-traffic systems are moving to large distributed RAM caches (JBossCache, memcached) to scale. Such systems rely on cache replication for availability - accessing the data from the underlying persistent store is too slow for normal operation and takes place only in exceptional restart situations, such as after a maintenance shutdown or crash. The replicated cache is in effect the persistent store, with the ability to further save/restore from disk for use in extreme situations.
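The 'replicated cache as the store' idea can be sketched minimally: every put is applied to each replica's in-memory map, so any surviving node can serve reads after another crashes. This is a toy illustration of the availability argument, not the API of JBossCache or memcached.

```java
import java.util.*;

class ReplicatedCache {
    // each inner map stands in for one node's RAM
    private final List<Map<String, String>> replicas = new ArrayList<>();

    ReplicatedCache(int replicaCount) {
        for (int i = 0; i < replicaCount; i++) replicas.add(new HashMap<>());
    }

    void put(String key, String value) {
        for (Map<String, String> node : replicas) node.put(key, value); // replicate to all
    }

    // Reads survive node loss: try each live replica in turn.
    String get(String key, Set<Integer> downNodes) {
        for (int i = 0; i < replicas.size(); i++) {
            if (downNodes.contains(i)) continue;   // simulate a crashed node
            String v = replicas.get(i).get(key);
            if (v != null) return v;
        }
        return null;
    }
}
```

A real system would replicate asynchronously over the network and add the configurable disk save/restore discussed later.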


    Even where the cache is more peripheral to the architecture, keeping it in sync with a db or other parts of the system is useful, so the cache should be transactional. Ideally it should be an XA-compliant resource manager, so it can be driven by a standard transaction manager. The same holds for other in-memory state managers, e.g. workflow or business rules engines.
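The shape of what an XA-compliant cache signs up for is the two-phase commit protocol. A real resource manager would implement javax.transaction.xa.XAResource; the toy interface and coordinator below just show the prepare/commit/rollback lifecycle.

```java
import java.util.*;

interface TxParticipant {
    boolean prepare();   // phase 1: vote yes/no
    void commit();       // phase 2: make changes durable/visible
    void rollback();
}

class TwoPhaseCoordinator {
    // Commits only if every participant votes yes; otherwise rolls all back.
    static boolean run(List<TxParticipant> participants) {
        for (TxParticipant p : participants) {
            if (!p.prepare()) {
                participants.forEach(TxParticipant::rollback);
                return false;
            }
        }
        participants.forEach(TxParticipant::commit);
        return true;
    }
}

// An in-memory cache as a participant: buffers writes, applies them on commit.
class TxCache implements TxParticipant {
    final Map<String, String> committed = new HashMap<>();
    private final Map<String, String> pending = new HashMap<>();
    void put(String k, String v) { pending.put(k, v); }
    public boolean prepare() { return true; }  // nothing to flush for pure RAM
    public void commit() { committed.putAll(pending); pending.clear(); }
    public void rollback() { pending.clear(); }
}
```

With this in place the same coordinator could drive the cache and, say, a JDBC resource in one transaction, which is exactly the integration the text is asking for.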


    The traditional ACID properties are sometimes not well suited to the business problem. In particular, we sometimes want to allow a business process (e.g. accept an order) to continue even if a non-essential part of the system (e.g. stock control) is down. For a time the system state may be inconsistent (e.g. item promised to the customer but stock level not updated) but it should eventually be reconciled (e.g. keep retrying the stock update until it works). Coding this logic by hand is a pain; it should be taken care of by the middleware. @TransactionModel(Type.ACID) vs. @TransactionModel(Type.EVENTUALLY_CONSISTENT) would be ideal. ref Vogels' work on eventual consistency
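The 'keep retrying the stock update until it works' reconciliation reduces to a retry loop like the sketch below. In practice the middleware would run this durably and asynchronously off a queue; the names here are illustrative.

```java
import java.util.function.BooleanSupplier;

class EventuallyConsistent {
    // Retries the side effect until it reports success or attempts run out.
    // Returns the attempt number on which the state was finally reconciled.
    static int retryUntilDone(BooleanSupplier sideEffect, int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (sideEffect.getAsBoolean()) return attempt; // reconciled
        }
        throw new IllegalStateException("still inconsistent after " + maxAttempts + " attempts");
    }
}
```

The order is accepted immediately; only the non-essential stock update lives inside the loop, so the temporary inconsistency the text describes is bounded rather than fatal.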

    Software transactional memory (STM) systems provide a way of programming multi-threaded systems that does not require the business programmer to be concerned with difficult manual concurrency control mechanisms (synchronization, locking) as they usually would. This is attractive as systems are getting more cores whilst programmers are not getting any smarter about multithreading. Implementations vary, with both MVCC and locking feasible. Limitations centre on I/O operations and other activity which passes outside the scope of the system and thus can't be rolled back on transaction abort. For Java, implementation approaches include code instrumentation and use of a JVM agent. See Multiverse and Deuce for pure STM, and also XSTM and commons-transaction for loosely related implementations. Existing STM systems don't integrate well with existing transaction systems - you can't update the state of an STM-managed object and e.g. a database in the same tx.
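The optimistic shape of an STM transaction can be shown with a single transactional reference: read a snapshot, compute, commit with compare-and-set, and automatically re-run on conflict instead of taking a lock. Real STMs (Multiverse, Deuce) track many variables per transaction; this one-variable toy just shows the retry loop the programmer never has to write by hand.

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.UnaryOperator;

class TxRef<T> {
    private final AtomicReference<T> value;
    TxRef(T initial) { value = new AtomicReference<>(initial); }
    T get() { return value.get(); }

    // The 'atomic' block: the transaction function is re-executed until
    // its commit succeeds, so it must be side-effect free (the I/O
    // limitation mentioned above).
    T atomic(UnaryOperator<T> transaction) {
        while (true) {
            T snapshot = value.get();
            T updated = transaction.apply(snapshot);
            if (value.compareAndSet(snapshot, updated)) return updated; // commit
            // else: another thread committed first -> re-run the transaction
        }
    }
}
```

The re-execution is also why such an update can't span an STM object and a database: the database write would happen once per retry and can't be undone on abort.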


    ArjunaCore provides Transactional Objects for Java (TxOJ), an object programming framework that provides for Java object state to be manipulated transactionally, in the same managed tx as other resources (e.g. JDBC, JMS). Its model predates Java annotations and AOP techniques (and indeed predates Java itself - no managed runtime is assumed), so the programming model is more invasive to business code than is desirable - classes must subclass/implement existing bases and code must manage locks explicitly. Replication between JVMs is supported, with distributed locking.


    So much for the spread of options currently available and the forces driving them. Now, what to do about it?


    A next generation app server should be capable of offering a container that manages transactional entities in a much more flexible way than the current EJB3 beans.


    Why assume a relational database backend? (To be fair, EJB3 doesn't, although EJB-QL is very SQL-like.) How about such a container backed by a replicated cache, with configurable parameters for how many replicas are required and how often to write the state to disk (pluggable db or other serialization) for backup? How would this differ from existing solutions, e.g. JBossCache?


    Why assume an ACID transaction model? Annotations on beans should allow for flexible behaviour, from ACID to eventually consistent (modify the 'C'), volatile in-memory / replicated (modify the 'D' to varying degrees), perhaps with a detour via compensation-based models (modify the 'I') (ref previous student work on annotations for WS-BA).
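Concretely, the annotation-driven configuration suggested here could look like the sketch below. The annotation and enum names follow the @TransactionModel(Type.ACID) idea from the text and are hypothetical, not an existing API; a container would read the annotation at deploy time and pick the matching transaction machinery.

```java
import java.lang.annotation.*;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface TransactionModel {
    Type value() default Type.ACID;
    // one constant per relaxation of ACID discussed above
    enum Type { ACID, EVENTUALLY_CONSISTENT, COMPENSATION_BASED, VOLATILE_REPLICATED }
}

// The business class just declares its desired semantics.
@TransactionModel(TransactionModel.Type.EVENTUALLY_CONSISTENT)
class OrderService { }
```

The container reads it back with ordinary reflection, so no base class or XML is forced on the business code.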


    How about an STM-backed bean container? Construction techniques of method call interception / AOP / annotations are relevant here. Or update ArjunaCore to support annotations for configuring objects, rather than requiring inheritance/impl of an existing base as at present.
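Method-call interception, one of the construction techniques mentioned, can be demonstrated with a JDK dynamic proxy: the container wraps the bean and brackets every call in a (here trivial) begin/commit/rollback. A real container would use this hook to drive an STM or TxOJ; the names are illustrative.

```java
import java.lang.reflect.*;
import java.util.*;

class TxContainer {
    static final List<String> log = new ArrayList<>();   // records the tx bracket

    @SuppressWarnings("unchecked")
    static <T> T wrap(Class<T> iface, T bean) {
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                new Class<?>[]{iface},
                (proxy, method, args) -> {
                    log.add("begin");                     // tx begin
                    try {
                        Object result = method.invoke(bean, args);
                        log.add("commit");                // tx commit
                        return result;
                    } catch (InvocationTargetException e) {
                        log.add("rollback");              // tx rollback on failure
                        throw e.getCause();
                    }
                });
    }
}

interface Greeter { String greet(String name); }
```

The business class needs no special base type, which is exactly the improvement over the inheritance-based TxOJ model.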


    There are limitations on how an EJB is coded (don't start Threads, etc.). A transactional bean would face similar limitations, particularly with regard to I/O. How about allowing for I/O or other ops that can't be rolled back, provided the bean implements a compensation hook operation?
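The compensation hook could look like the sketch below: a bean performing un-rollback-able work (I/O, sending an email) also supplies a compensate() that semantically undoes it, and on failure the container invokes the hooks in reverse order instead of rolling back. The interface and names are hypothetical.

```java
import java.util.*;

interface Compensatable {
    void doWork();        // may include irreversible I/O
    void compensate();    // semantic undo, e.g. send a cancellation notice
}

class CompensatingRunner {
    // Runs each step, remembering completed ones; on failure, compensates
    // completed steps in reverse order rather than rolling back.
    static void run(List<Compensatable> steps) {
        Deque<Compensatable> done = new ArrayDeque<>();
        try {
            for (Compensatable s : steps) { s.doWork(); done.push(s); }
        } catch (RuntimeException e) {
            while (!done.isEmpty()) done.pop().compensate();
            throw e;
        }
    }
}
```

This is the 'modify the I' model from the annotation discussion: intermediate effects are visible to the outside world until they are compensated.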


    How to wrap and present it in a way that business programmers / non-transaction experts can grasp easily? It took the EJB spec three goes to get there, but many of the pieces, e.g. annotations, are now available and better understood, so we can build on that.