Hi Mark,
I got your message just as I was creating a new thread.
Mark Little wrote:
For distributed timeouts check out what we do with the JTS.
Tom Jenkinson wrote:
NOTES:
Regarding the timeout, I figured the transport would convert this to an absolute time? I can only really see it working with an absolute time and clocks in sync; then, when we add it to the reaper, we can calculate the remaining time. I am also interested to hear more about David's concerns regarding the timeout raised in the original question.
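To make that concrete, here is a minimal sketch of the absolute-time approach; the class and variable names are made up for illustration, and it assumes the synchronized clocks mentioned above:

public class TimeoutPropagationExample {
    public static void main(String[] args) {
        int timeoutSeconds = 300; // relative timeout supplied by the client

        // Originating server: convert the relative timeout to an absolute
        // deadline before flowing the transaction to a remote server.
        long deadlineMillis = System.currentTimeMillis() + timeoutSeconds * 1000L;

        // Remote server (clocks assumed in sync): derive the remaining time
        // and use that when registering the subordinate with the reaper.
        long remainingSeconds = Math.max(0L,
                (deadlineMillis - System.currentTimeMillis()) / 1000L);
        System.out.println("Remaining timeout: " + remainingSeconds + "s");
    }
}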
Just to confirm, the transactions team's side of this work is now complete (as part of https://issues.jboss.org/browse/JBTM-895 and friends).
If you look at the example I have mentioned before (https://svn.jboss.org/repos/labs/labs/jbosstm/branches/JBOSSTS_4_16/atsintegration/examples) you will see how to use the APIs we have exposed to perform this work within the transport tier.
Of particular note are:
1. The transport is responsible for providing a *Serializable* XAResource that is able to proxy calls to remote servers (a minimal sketch of such a proxy appears after this list).
2. Per transaction, only one link should be built to each subordinate. This is relatively trivial to ensure, and the example provides a suggestion based on registering the resource after returning from the remote server, where it is easy to detect an existing subordinate atomic action. Note that this treats the transaction as a global notion, i.e. do not create a subordinate transaction at the originating server even when the transaction flows back into that server; the example (and the registration-guard sketch after this list) shows one way to guard against this.
3. The transport should have no need to persist additional details of transaction state if the pattern illustrated in the example is followed.
4. The transport is *not* responsible for maintaining a list of subordinate identifiers; that requirement has been removed and is now handled by the transaction manager.
5. Per subordinate transaction, a proxy synchronization and a proxy XAResource will need to be registered (see the enlistment sketch after this list).
6. As requested, the node names can be stored as Strings. Each string *must* encode via String::getBytes() to a byte array no longer than 32 bytes (a simple validation is sketched after this list).
7. After you return from a remote server, it is suggested that you check the state of the returned transaction in order to prevent needless propagation of a transaction that has been marked rollback-only (see the status-check sketch after this list).
8. The example shows how transaction timeout is handled; as discussed in a different thread, it should not need "padding".
9. The transport is responsible for providing an org.jboss.tm.XAResourceRecovery to detect orphan subordinate transactions; this is also illustrated in the example, and it *must* be the last registered XAResourceRecovery. The transport's implementation of this class must use a transport-specific mechanism to talk to every server that a transaction may have flowed to in order to discover potential orphan transactions. This is necessary because the remote side may have prepared a transaction and then crashed before the local side prepared, in which case we have no other way to detect it. In the example I hard-code all the remote server nodes beforehand; an alternative is to build this list up incrementally at each node as the transport detects communication with a new remote server. A recovery sketch appears after this list.
10. The call to com.arjuna.ats.jta.common.jtaPropertyManager.getJTAEnvironmentBean().setXaResourceOrphanFilterClassNames() should add a filter for: "com.arjuna.ats.internal.jta.recovery.arjunacore.SubordinateJTAXAResourceOrphanFilter" (see the configuration sketch after this list).
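For point 1, here is a minimal sketch of what a transport-provided Serializable proxy XAResource could look like. This is not the example's ProxyXAResource; the class name, the RemoteServer interface and the lookup are hypothetical placeholders for whatever remote-call mechanism the transport uses:

import java.io.Serializable;
import javax.transaction.xa.XAException;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

public class ProxyXAResourceSketch implements XAResource, Serializable {
    private static final long serialVersionUID = 1L;

    // Just enough serializable state to locate the subordinate again,
    // including after a crash during recovery.
    private final String remoteServerName;

    public ProxyXAResourceSketch(String remoteServerName) {
        this.remoteServerName = remoteServerName;
    }

    // Hypothetical transport hook; a real implementation would perform a
    // transport-specific remote lookup here (EJB remoting, etc.).
    private RemoteServer remote() {
        throw new UnsupportedOperationException(
                "transport-specific lookup of " + remoteServerName);
    }

    @Override public void start(Xid xid, int flags) throws XAException { /* no-op for a proxy */ }
    @Override public void end(Xid xid, int flags) throws XAException { /* no-op for a proxy */ }

    @Override public int prepare(Xid xid) throws XAException { return remote().prepare(xid); }
    @Override public void commit(Xid xid, boolean onePhase) throws XAException { remote().commit(xid, onePhase); }
    @Override public void rollback(Xid xid) throws XAException { remote().rollback(xid); }
    @Override public void forget(Xid xid) throws XAException { remote().forget(xid); }
    @Override public Xid[] recover(int flag) throws XAException { return remote().recover(flag); }

    @Override public boolean isSameRM(XAResource other) throws XAException { return false; }
    @Override public int getTransactionTimeout() throws XAException { return 0; }
    @Override public boolean setTransactionTimeout(int seconds) throws XAException { return false; }

    /** Hypothetical view of the remote subordinate exposed by the transport. */
    interface RemoteServer {
        int prepare(Xid xid) throws XAException;
        void commit(Xid xid, boolean onePhase) throws XAException;
        void rollback(Xid xid) throws XAException;
        void forget(Xid xid) throws XAException;
        Xid[] recover(int flag) throws XAException;
    }
}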
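For point 2, a minimal sketch of one way to ensure a single link per subordinate per transaction; the class and method names are made up, and a real transport would also need to clear entries when the transaction completes (e.g. from an afterCompletion hook):

import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import javax.transaction.Transaction;

public class ProxyRegistrationGuard {
    // Remote servers that already have a proxy enlisted, per transaction.
    private final Map<Transaction, Set<String>> enlisted = new ConcurrentHashMap<>();

    /** Returns true exactly once per (transaction, remote server) pair. */
    public boolean firstContact(Transaction tx, String remoteServerName) {
        return enlisted.computeIfAbsent(tx, t -> ConcurrentHashMap.newKeySet())
                       .add(remoteServerName);
    }

    /** Call when the transaction finishes to avoid leaking entries. */
    public void transactionFinished(Transaction tx) {
        enlisted.remove(tx);
    }
}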
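For point 5, a minimal sketch of the per-subordinate registration, reusing the ProxyXAResourceSketch above; ProxySynchronization is a hypothetical stand-in for a synchronization that relays its callbacks to the subordinate:

import javax.transaction.Synchronization;
import javax.transaction.Transaction;
import javax.transaction.TransactionManager;

public class SubordinateEnlistment {
    // Enlist the proxy XAResource and register a proxy Synchronization with
    // the current transaction, once per subordinate.
    public static void enlist(TransactionManager tm, String remoteServerName)
            throws Exception {
        Transaction tx = tm.getTransaction();
        tx.enlistResource(new ProxyXAResourceSketch(remoteServerName));
        tx.registerSynchronization(new ProxySynchronization(remoteServerName));
    }

    /** Hypothetical Synchronization relaying callbacks to the subordinate. */
    static class ProxySynchronization implements Synchronization {
        private final String remoteServerName;

        ProxySynchronization(String remoteServerName) {
            this.remoteServerName = remoteServerName;
        }

        @Override
        public void beforeCompletion() {
            // Relay beforeCompletion to the remote subordinate over the transport.
        }

        @Override
        public void afterCompletion(int status) {
            // Relay afterCompletion to the remote subordinate over the transport.
        }
    }
}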
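For point 6, the node-name constraint could be checked with a trivial hypothetical helper like this:

public class NodeNameCheck {
    // Enforces the constraint that a node name must String::getBytes()
    // to at most 32 bytes.
    public static String validate(String nodeName) {
        if (nodeName.getBytes().length > 32) {
            throw new IllegalArgumentException(
                    "node name exceeds 32 bytes: " + nodeName);
        }
        return nodeName;
    }
}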
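For point 7, a minimal sketch of the status check; the class and method names are made up:

import javax.transaction.Status;
import javax.transaction.Transaction;

public class PropagationCheck {
    // After returning from a remote server, only keep flowing the
    // transaction if it has not already been marked for rollback.
    public static boolean worthPropagating(Transaction tx) throws Exception {
        int status = tx.getStatus();
        return status != Status.STATUS_MARKED_ROLLBACK
                && status != Status.STATUS_ROLLING_BACK
                && status != Status.STATUS_ROLLEDBACK;
    }
}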
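For point 9, a minimal sketch of a transport-supplied XAResourceRecovery, reusing the proxy sketch above; the hard-coded server list mirrors the example, and the names are otherwise made up:

import javax.transaction.xa.XAResource;
import org.jboss.tm.XAResourceRecovery;

public class ProxyXAResourceRecoverySketch implements XAResourceRecovery {
    // Hard-coded, as in the example; a transport could instead build this
    // list incrementally as new remote servers are contacted.
    private static final String[] KNOWN_SERVERS = { "server1", "server2", "server3" };

    @Override
    public XAResource[] getXAResources() {
        XAResource[] resources = new XAResource[KNOWN_SERVERS.length];
        for (int i = 0; i < KNOWN_SERVERS.length; i++) {
            // Each proxy's recover() asks the remote side for prepared Xids,
            // surfacing subordinates orphaned by a crash before local prepare.
            resources[i] = new ProxyXAResourceSketch(KNOWN_SERVERS[i]);
        }
        return resources;
    }
}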
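For point 10, the registration could look like the following sketch; since this is a setter it replaces the configured list, so you will likely want to include any other orphan filters you rely on in the same call:

import java.util.Arrays;
import com.arjuna.ats.jta.common.jtaPropertyManager;

public class RecoverySetup {
    public static void addSubordinateOrphanFilter() {
        // Add the subordinate orphan filter alongside any existing filters.
        jtaPropertyManager.getJTAEnvironmentBean().setXaResourceOrphanFilterClassNames(
                Arrays.asList(
                        "com.arjuna.ats.internal.jta.recovery.arjunacore.SubordinateJTAXAResourceOrphanFilter"));
    }
}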
When reading the example, one approach would be to look at com.arjuna.jta.distributed.example.server.impl.ProxyXAResource as the local interceptor for (say) an EJB, and to consider its counterpart remote interceptor to be something along the lines of com.arjuna.jta.distributed.example.server.impl.RemoteServerImpl. The rest of the code should fall into place once the responsibilities of these classes are understood, alongside the additional work mentioned in actually bringing the server up in a mode that can recover a transport's ProxyXAResource-esque class.
Any problems, please do let us know; it sounds a lot more complicated than it is in reality! The key part is getting the proxy registered and recoverable, which hopefully the example demonstrates in a relatively straightforward manner. Note that the use of classloaders in the example is to allow multiple transaction managers to run in the same VM; this is for testing, is not a requirement of this feature, and can largely be ignored.