I don't know
We have the capacity to recursively instrument a data tree with get/set interceptors, which means that once we reach the leaves of the tree we know they are native Java data types. In short, we have the capacity to rebuild the tree on the other side.
Pro: we don't need to serialize the whole shebang, so it is uber fast.
Con: the implementation is kind of tricky, as we need to identify all the nodes of the tree (references) with unique identifiers from the VM.
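A minimal sketch of the rebuild idea, under my own assumptions (the `Node` type, the path-keyed flattening, and all method names are hypothetical, not the actual implementation): recurse through the tree until you hit native leaf values, ship only those, and reconstruct the structure on the receiving side.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: walk a tree of nodes, shipping only native leaf
// values so the receiving side can rebuild the structure without full
// Java serialization.
public class TreeReplicator {

    // A node is either a container (children) or a leaf (native value).
    public static class Node {
        final Map<String, Node> children = new LinkedHashMap<>();
        Object leafValue; // non-null only for leaves (String, Integer, ...)

        static Node leaf(Object v) { Node n = new Node(); n.leafValue = v; return n; }
        static Node branch() { return new Node(); }
    }

    // "Serialize" by recursing until we hit native leaf types.
    public static Map<String, Object> flatten(Node root) {
        Map<String, Object> out = new LinkedHashMap<>();
        flatten(root, "", out);
        return out;
    }

    private static void flatten(Node n, String path, Map<String, Object> out) {
        if (n.leafValue != null) {
            out.put(path, n.leafValue); // leaf: ship the raw native value
        } else {
            for (Map.Entry<String, Node> e : n.children.entrySet()) {
                String child = path.isEmpty() ? e.getKey() : path + "." + e.getKey();
                flatten(e.getValue(), child, out);
            }
        }
    }

    // Rebuild the tree "on the other side" from the flat leaf map.
    public static Node rebuild(Map<String, Object> flat) {
        Node root = Node.branch();
        for (Map.Entry<String, Object> e : flat.entrySet()) {
            Node cur = root;
            String[] parts = e.getKey().split("\\.");
            for (int i = 0; i < parts.length - 1; i++) {
                cur = cur.children.computeIfAbsent(parts[i], k -> Node.branch());
            }
            cur.children.put(parts[parts.length - 1], Node.leaf(e.getValue()));
        }
        return root;
    }
}
```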
There are 2 cases: b is shared, or b is unique.
- If b is unique, you need a unique key you generate (a la session bean) and you use it across the cluster.
- If b is shared, it already has a primary key, and that is used to rebuild the tree.
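The two cases above could be handled by a small registry, sketched here with names I made up (`NodeIdRegistry`, `HasPrimaryKey` are assumptions, not the real code): a shared node reuses its existing primary key, a unique node gets a generated cluster-wide key.

```java
import java.util.IdentityHashMap;
import java.util.Map;
import java.util.UUID;

// Hypothetical sketch of the two identification cases: a node unique to
// this tree gets a generated cluster-wide key (a la session bean id),
// while a shared node reuses its existing primary key.
public class NodeIdRegistry {

    // Marker for objects that already carry a primary key (the "shared" case).
    public interface HasPrimaryKey {
        String getPrimaryKey();
    }

    // Identity map: the same VM reference always maps to the same id.
    private final Map<Object, String> ids = new IdentityHashMap<>();

    public String idFor(Object node) {
        return ids.computeIfAbsent(node, n ->
            (n instanceof HasPrimaryKey)
                ? "pk:" + ((HasPrimaryKey) n).getPrimaryKey() // shared: reuse PK
                : "gen:" + UUID.randomUUID());                // unique: generate
    }
}
```

Using an `IdentityHashMap` means identity, not `equals()`, decides whether two references are the same node, which is what you want when tagging VM references.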
Finally, as a 1st iteration I am totally OK with the serialization requirement. But frankly, in a 2nd iteration there is a simple way to optimize all this with the above algorithm.
> Finally, as a 1st iteration I am totally OK with
> the serialization requirement. But frankly, in a
> 2nd iteration there is a simple way to optimize
> all this with the above algorithm.
It's in the works, boss. I have a per-field versioned (ACID) object implementation on my box waiting to commit. I'm currently working on replicated transactional versioned objects.
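The per-field versioning idea might look roughly like this (a sketch under my own assumptions; `VersionedField` and its methods are illustrative names, not the pending commit): every set through the interceptor bumps that field's version, so replication and conflict detection can compare versions per field instead of per object.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of per-field versioning: each field carries its
// own version counter, bumped on every write through the interceptor.
public class VersionedField<T> {
    private volatile T value;
    private final AtomicLong version = new AtomicLong(0);

    public VersionedField(T initial) { this.value = initial; }

    public T get() { return value; }
    public long version() { return version.get(); }

    // Field "set interceptor": write the value and bump the version.
    public void set(T newValue) {
        value = newValue;
        version.incrementAndGet();
    }

    // Optimistic write: apply only if the caller saw the latest version.
    public synchronized boolean compareVersionAndSet(long expectedVersion, T newValue) {
        if (version.get() != expectedVersion) return false; // stale writer
        value = newValue;
        version.incrementAndGet();
        return true;
    }
}
```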
I still have to worry about collections. Collections are a special case, since I have to do the ACID logic within the methods rather than at the field-interception level. I'm pretty sure that I can replace new HashMap with our own implementation at runtime. More on this later.
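To illustrate the collection special case (names and shape are my own assumption, not the actual JBoss class): since we can't intercept fields inside `HashMap`, the versioning logic has to live in the mutating methods of a wrapper, which is the class a `new HashMap` call could be swapped for at runtime.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: a HashMap replacement where the ACID/versioning
// hook sits in the mutating methods (method-level interception) rather
// than at the field level.
public class VersionedHashMap<K, V> {
    private final Map<K, V> delegate = new HashMap<>();
    private final AtomicLong version = new AtomicLong(0);

    public long version() { return version.get(); }

    // Mutators bump the version (the method-level "interception").
    public V put(K key, V value) {
        version.incrementAndGet();
        return delegate.put(key, value);
    }

    public V remove(Object key) {
        version.incrementAndGet();
        return delegate.remove(key);
    }

    // Readers pass straight through to the delegate.
    public V get(Object key) { return delegate.get(key); }
    public int size() { return delegate.size(); }
}
```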
> level. I'm pretty sure that I can replace new
> HashMap with our own implementation at runtime. More
> on this later.
Can't wait, that is exactly the point Adrian and Juha pitched in Paris: overwriting the classpool so that a HashMap is returned as a JBossHM... see the other thread in this forum :) It is so beautiful.