-
15. Re: Remoting Transport Transaction Inflow Design Discussion
dmlloyd Aug 17, 2011 11:56 AM (in response to marklittle)
Mark Little wrote:
"I mean that I haven't addressed the issue of transaction timeout control."
What issues? The timeout is controlled by the coordinator, not the client. Or by "control" do you mean setTimeout calls?
Exactly. It's just a question of how these get propagated, which seems somewhat outside of the core of this solution. It's only mentioned because it's in the SPIs.
Mark Little wrote:
"Keeping in mind that this is nowhere near the only process of this complexity to be tested - and no, don't trot out "it's more complex than you think" unless you want to enumerate specific cases (which will probably then be appropriated into additional tests) - I think we'd follow the same approach we'd follow for testing other things. We'd unit test the protocol of course, and test to ensure that the implementation matches the specification, and verify that the protocol handlers on either "end" forward to the proper APIs."
Go take a look at the QA tests for JBossTS. You'll see that a sh*t load of them are covering recovery. And then take a look at XTS and REST-AT. You'll see that a sh*t load of them are covering recovery. Want to take a wild stab in the dark why that might be the case ;-)? Yes, it's complex. It's got to be fault tolerant, so we have to test all of the cases. There are no edge-cases with transactions: it either works or it fails. Unit tests aren't sufficient for this.
Well, it's always good to have a set of existing projects to draw test scenarios from. But otherwise I don't think this is directly relevant to the discussion - unless you're saying "we must test these 200 different scenarios before I let you type 'git commit'". We need high quality, detailed tests for every subsystem. Having thoroughly tested transactions doesn't do us a lot of good if, for example, our JPA implementation or HornetQ or something is writing corrupt data. I mean everything needs thorough testing. Just the fact that these other projects have lots of tests covering recovery doesn't mean that those tests are necessary, and on the other hand, there may be many scenarios unaccounted for in these tests as well. AS is riddled with highly complex systems that need detailed testing.
If we use an SPI with a documented contract, it is not unreasonable to expect that contract to be met by its implementation. If the contract is not met by the implementation, yeah that's a bug, but saying that it's the responsibility of every project consuming that SPI to verify that its implementation(s) meet the SPI contract is crazy. Yeah we may introduce a test here or there to catch regression in the target project, but even this is not strictly necessary as the target project should be doing this!
In this particular case (solution 2 that is), we're specifying an implementation for XAResource, a transport for it, and an endpoint which controls XATerminator; this says to me that our tests can be limited in scope to testing this mechanism from end to end. As I said if we have other projects we can draw recovery scenarios from, that's fine, and we will do so. I don't know what else to tell you.
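To pin down what "this mechanism" is: a minimal sketch of its client-side half - an XAResource whose calls are forwarded to the endpoint driving the remote XATerminator. Only the javax.transaction.xa types are real; RemoteTerminatorConnection stands in for the actual Remoting channel and is invented for illustration:

    import javax.transaction.xa.XAException;
    import javax.transaction.xa.XAResource;
    import javax.transaction.xa.Xid;

    // Hypothetical stand-in for the Remoting channel to the server endpoint.
    interface RemoteTerminatorConnection {
        Object invoke(String op, Object... args) throws XAException;
    }

    // Enlisted in the local transaction; every 2PC call crosses the wire to
    // the endpoint that drives XATerminator on the remote server.
    class RemotingXAResource implements XAResource {
        private final RemoteTerminatorConnection conn;

        RemotingXAResource(RemoteTerminatorConnection conn) { this.conn = conn; }

        public void start(Xid xid, int flags) throws XAException { conn.invoke("start", xid, flags); }
        public void end(Xid xid, int flags) throws XAException { conn.invoke("end", xid, flags); }
        public int prepare(Xid xid) throws XAException { return (Integer) conn.invoke("prepare", xid); }
        public void commit(Xid xid, boolean onePhase) throws XAException { conn.invoke("commit", xid, onePhase); }
        public void rollback(Xid xid) throws XAException { conn.invoke("rollback", xid); }
        public void forget(Xid xid) throws XAException { conn.invoke("forget", xid); }
        public Xid[] recover(int flag) throws XAException { return (Xid[]) conn.invoke("recover", flag); }
        public boolean isSameRM(XAResource other) throws XAException {
            return other instanceof RemotingXAResource && ((RemotingXAResource) other).conn == conn;
        }
        // This is also where the timeout question from the top of this post lives:
        public boolean setTransactionTimeout(int seconds) throws XAException {
            return (Boolean) conn.invoke("setTransactionTimeout", seconds);
        }
        public int getTransactionTimeout() throws XAException {
            return (Integer) conn.invoke("getTransactionTimeout");
        }
    }

End-to-end tests would then drive exactly this class against a loopback connection and assert that the XATerminator on the far side saw the matching calls.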
Mark Little wrote:
"That's just another way of saying we don't have any special, magical auto-recovery "stuff" that isn't provided by the transaction coordinator (which might well have some magical auto-recovery "stuff"). There might be a better way to express that."
Let me try and rephrase and let me know if I get it wrong: you assume that existing recovery approaches are sufficient for this and nothing new will need to be invented?
Yes, that is my assumption, as we are using existing propagation mechanisms.
Mark Little wrote:
"In case 1, the client has no TM and it uses a remote UserTransaction interface to directly control the remote TM. In case 2, the client is using the local TM to control transactions, and is treating the remote TM as an enrolled resource into the current transaction."
Yeah, so it's interposition. Like I said, these are two different scenarios.
"Case 1 cannot be made to work when a local TM is present without adding some notion in the EE layer to determine whether it should use the local UserTransaction or the remote one. This is possible but is a possibly significant amount of work."
How significant? If we're putting all options on the table then this needs to be there too.
The problem is that we'd need some way to control which kind of UserTransaction is pulled from JNDI and thus injected into EE components. This can depend on what the user intends to do with it; thus we'd need to isolate many use cases and figure out what level this should be done at (deployment? component? server-wide?), and we need to do some analysis to determine where and how the remote server connection(s) should be specified and associate the two somehow. We're basically choosing between TMs on a per-operation basis. This type of configuration is unprecedented as far as I know - I think the analysis would take as long as the implementation, if not longer. Because it is not known exactly how this should look, I can't say how much effort this is going to be other than "lots".
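Purely for illustration of where the choice would bite - every name below except the standard java:comp/UserTransaction binding is invented, precisely because no such configuration contract exists yet:

    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.transaction.UserTransaction;

    class UserTransactionSelector {
        // useRemote would have to come from some yet-to-be-designed deployment,
        // component, or server-wide setting - that analysis is the hard part.
        static UserTransaction forComponent(boolean useRemote) throws NamingException {
            InitialContext ctx = new InitialContext();
            return (UserTransaction) ctx.lookup(useRemote
                    ? "java:comp/RemoteUserTransaction"  // hypothetical remote proxy binding
                    : "java:comp/UserTransaction");      // standard local binding
        }
    }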
Mark Little wrote:
"Theoretically each successive "step" will treat the TM of the subsequent "step" as a participating resource. As to D calling A, that will only work if the TM is clever enough to figure out what's happening (I don't see why it wouldn't as the Xid should, well, identify the transaction so A should recognize its own; but that's why we're having this discussion)."
Please go take a look at what we have to do for interposition in JTS. And it's not because JTS is more complex than it needs to be: interposition is a fundamental concept within distributed transactions and the problems, optimisations, recovery semantics etc. are there no matter what object model or distribution approach you use. Take a look at XTS too, for instance.
Yeah, but keep in mind that we're dealing in a strict hierarchy here; there are no peers. The transaction isn't so much "distributed" as it is "controlled"; caller always dominates callee. Thus if D calls A, the behavior I'd expect would be that A would treat the imported work as a different or subordinate transaction; it need not really have any direct knowledge that the two are related since the D→A relationship is controlled by D, and the C→D relationship is controlled by C, etc. If the D→A outcome is in doubt then it's up to D to resolve that branch, not A. But that's just my ignoramus opinion.
When it comes to reality, this situation is extremely unlikely to occur even in the weirdest situations I've ever heard of. The reason is that if you've got two nodes invoking on each other, it is highly likely that they are within the same "tier", which greatly increases the likelihood that they could simply run JTS and be done.
Here's what I consider to be a likely, real-world scenario:
Host A runs a thin client which uses the "solution 1" mechanism to control the transaction when it talks to Host B.
Host B runs a "front" tier which is isolated by firewall. This tier has one or more local transactional databases or caches, and a local TM. The services running on B also perform EJB invocations on Host C.
Host C is the "rear" tier separated from B by one or more layer of firewall, and maybe even a public network. B talks to C via remoting, using "solution 2" to propagate transactions to it, using the client/server style of invocation.
Host C participates in a peer-to-peer relationship with other services on Hosts D, E, and F in the same tier, using Remoting or IIOP but using JTS to coordinate the transaction at this level since C, D, E, and F all mutually execute operations on one another (and possibly each consume local resources) in a distributed object graph style of invocation.
Note you can substitute A and B with an EIS and everything should be exactly the same (except that recovery processes would be performed by the EIS rather than by B's TM).
Everything I understand about transaction processing (which is definitely at least as much as a "joe user") says that there's no reason this shouldn't "just work". And we should be able to utilize existing transaction recovery mechanisms as well.
-
16. Re: Remoting Transport Transaction Inflow Design Discussion
dmlloyd Aug 17, 2011 12:31 PM (in response to marklittle)
Mark Little wrote:
Could you outline the pros and cons of the current approaches we have in AS5/AS6? I know we've discussed them elsewhere already, but it would be good to capture it all here. For instance, why you believe that IIOP isn't right.
When it comes around to transaction control for the native invocation layer, as far as I am aware AS 5/6 only has the ClientUserTransaction approach (basically equivalent to "solution 1"). This is fine as far as it goes, but as I have said previously, integrating this at a server level is problematic.
The cons of the AS5/6 native approach are obvious: no transaction propagation, limited transaction control. And the problems in the existing transport implementation are well-known.
As far as IIOP goes, I don't believe that it's "not right" per se; I do believe that there are valid use cases which make IIOP (in general) less than optimal, for example in the case where network topology or security environment makes it undesirable (either in terms of security, performance, configurability, etc.; for example to run many different protocols across a single physical connection or to use OTP-style authentication). CORBA is complex no matter how you slice it, sometimes prohibitively so. But that's my opinion which is really irrelevant here; as I said, if we decide as an organization (via the proper channels) to completely ditch the native transport in favor of a full investment in CORBA/IIOP then I will do so happily (I definitely have a lot of other stuff to be working on), though I do personally believe that in this case we would be passing up the opportunity to make something really special which perfectly fits a real need.
The IIOP question is really about distributed object relationships versus client/server hierarchical relationships. This is an old ideological debate of the absolute worst kind which I will not be a part of, apart from saying that I see the merit of both architectures and believe we should support both.
-
17. Re: Remoting Transport Transaction Inflow Design Discussion
jhalliday Aug 17, 2011 2:25 PM (in response to dmlloyd)
>> I'm pretty sure that e.g. support will tell you it is unacceptable to ship a solution that may require manual transaction cleanup. We've had a small number of corner cases in JTA that suffered that limitation and eliminating them and the support load they generate has been a high priority for the transaction development work. Intentionally introducing new ones is definitely in the category of Bad Ideas.
> can you be more specific than "Bad Idea"
Sure, explaining the chain of reasoning behind that one is easy:
Red Hat ships products and offers support on them. That support is fixed price based on SLA, not proportional to the number of tickets filed. On the other hand, support costs scale as a function of the number of issues reported. Thus the less support work we have to do, the lower our cost and the higher our profit. Intentionally building and shipping something we know is going to increase support load is contra-survival and therefore a Bad Idea.
>> [remote UserTransaction] ... behaves in an intuitive fashion only for a very limited, albeit common, set of use cases. For more complex scenarios its inherent limitations manifest in ways that can be confusing to users.
> or "some complex scenarios"
yup, I can give you some of them too:
1) The 'client' is actually another AS instance, either of the same or earlier vintage, doing JNDI lookup of UserTransaction against a remote AS7.
2) The client wants to talk to two remote AS instances in the same tx.
3) The client is an environment that has its own UserTransaction implementation. This is actually just a more general version of case 1), but in which you can't use tricks like patching the client side lookup to return your actual UserTransaction instead of the remote proxy.
4) You want to support load balancing or failover for the client-server connection.
>> JCA inflow was either designed for propagation to leaf nodes only, or incredibly badly thought out.
> or "badly thought out"
yup, although it's really pretty obvious: The JCA inflow API uses an Xid as a poor man's transaction propagation context. Xids were designed only for control flow between a transaction manager and a resource manager, not for use in multi-level trees. The JCA has no provision for allowing subordinates to create new branches in the global transaction. For that it would have to pass in a mask of free bits in the bqual array as well as the Xid to the subordinate. Indeed the JCA expressly prohibits the container handling the inflow from altering the Xid. It has to remain immutable because without any knowledge of which bits can safely be mutated, the container can't guarantee to generate unique Xids, a property which is required by the spec.
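A sketch of the trap, assuming a subordinate naively tried to mint a branch anyway; the class is invented purely to show why it cannot be done safely:

    import java.util.Arrays;
    import javax.transaction.xa.Xid;

    // What a subordinate must NOT do under JCA inflow: derive a new branch from
    // the inflowed Xid. Without knowing which bqual bits the parent considers
    // free, this can collide with branches the parent already handed out.
    class NaiveSubordinateXid implements Xid {
        private final Xid parent;
        private final byte[] bqual;

        NaiveSubordinateXid(Xid parent, byte subBranch) {
            this.parent = parent;
            byte[] pq = parent.getBranchQualifier();
            this.bqual = Arrays.copyOf(pq, pq.length + 1); // may already exceed Xid.MAXBQUALSIZE (64)
            this.bqual[pq.length] = subBranch;             // and is not guaranteed unique in the global tx
        }

        public int getFormatId() { return parent.getFormatId(); }
        public byte[] getGlobalTransactionId() { return parent.getGlobalTransactionId(); }
        public byte[] getBranchQualifier() { return bqual.clone(); }
    }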
> or "not capable enough"
The XA spec expects that each resource manager (give or take its XAResource's isSameRM implementation) gets its own branch i.e. unique Xid. With inflowed Xids you can't generate new Xids to meet that expectation, you have to use the inflowed one verbatim. That causes problems with the state machine for the XA protocol lifecycle, as it's tied to the Xid. For example, if the inflowed tx is used to connect to two resource managers, you can't recover from crashes cleanly as the recovery mechanism is tracking state on the assumption that the Xid belongs to at most one RM and once it has cleaned that one up it's done. Actually on further thought even an upper limit of one is optimistic - the Xid contains the node Id of the originating parent and that parent may connect to the same resource manager, in which case it's going to incorrectly manage the lifecycle because it can't distinguish the XAResource representing the subordinate tx from the one representing the RM as they have the same Xid. That last case is an artifact of our implementation rather than the spec though.
> or "unintuitive behavior"
yup, I can give you one for that too - the afterCompletions run relative to the commit in the local node where they are registered, which may actually be before the commit in another node and not correctly reflect heuristic outcomes or be suitable for triggering subsequent steps in a process that depend on running after commits in the other nodes. Likewise beforeCompletions run relative to the prepare in the local node, thus may run after a prepare in another node. In the best case that's merely inefficient; in the worst case, where resource managers are shared, it causes a flush of cached data to occur after a prepare, which will fail. If that's not complicated enough for you, take the inflowed transaction context and make a transactional call back to the originating parent server. Fun, fun.
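In code terms the caveat looks like this (standard JTA, nothing invented) - both callbacks are ordered only against the local node's prepare and commit, never against any other node's:

    import javax.transaction.Status;
    import javax.transaction.Synchronization;

    class CacheFlushSync implements Synchronization {
        public void beforeCompletion() {
            // Runs relative to the LOCAL prepare; a shared RM may already have
            // been prepared by another node, in which case a flush here fails.
        }
        public void afterCompletion(int status) {
            if (status == Status.STATUS_COMMITTED) {
                // Only means the local branch committed; commits on other nodes
                // may still be in flight, and heuristics there are invisible here.
            }
        }
    }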
> Also - only one resource for inflowed transactions? How is that not a serious deficiency in our implementation?
It's a deficiency in the JCA spec, see above. The spec assumes the inflowed container is a leaf node i.e. RM, not a subordinate coordinator. There are some hacky things we can potentially do to work around that limitation in the spec without outright breaking compliance. They were on my list of things to do in the transactions upstream, but I seem to be a bit busy with AS integration issues instead :-)
> You're basically saying that an MDB can never access more than one resource. That's a major problem in and of itself.
Not at all. MDBs don't normally run in inflowed transactions. The server hosting the MDB container starts a top level transaction, enlists the JMS as a resource manager and additionally enlists any resource managers the MDB calls e.g. a database. It's a flat structure, not a hierarchic one.
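In JTA terms the flat structure looks like this (standard API throughout; message delivery itself is elided):

    import javax.transaction.Transaction;
    import javax.transaction.TransactionManager;
    import javax.transaction.xa.XAResource;

    class MdbDeliverySketch {
        void deliver(TransactionManager tm, XAResource jmsRM, XAResource dbRM) throws Exception {
            tm.begin();                      // top-level tx owned by the MDB container's server
            Transaction tx = tm.getTransaction();
            tx.enlistResource(jmsRM);        // the JMS provider, enlisted as an ordinary RM
            tx.enlistResource(dbRM);         // whatever the MDB touches, e.g. a database
            // ... onMessage() runs here ...
            tm.commit();                     // plain flat 2PC across both RMs, no hierarchy
        }
    }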
> Finally "unacceptable to ship a solution that may require manual transaction cleanup" - you should know that any two-phase transaction system may require manual transaction cleanup; that's the nature of two-phase transactions.
sure, but they are the small number of outcomes that result from one or more of the players not behaving in accordance with the spec e.g. resource managers making autonomous outcome decisions. We don't automatically do anything about those because we simply can't - that's the point at which the spec basically says 'give up, throw a heuristic and let a human deal with the mess'. You're talking about the much more numerous expected failure cases that can be handled automatically under the spec. Indeed, these are exactly the kinds of run-of-the-mill system failures a distributed transaction protocol is designed to protect a user against. Intentionally shipping a non spec compliant XAResource implementation that will result in a support case for many of those common failures is borderline business suicide, see above.
> I'm pretty sure that if someone unplugs the ethernet cable of the transaction coordinator after prepare but before commit, there's going to have to be some manual cleanup.
Really? Got a test case for that? Other than the one a certain competitor wrote and we soundly refuted as FUD? Because I've got an extensive test suite that shows no such outcomes. Well, except for MS SQL Server and MySQL, neither of which is fully XA compliant at present. Ensuring clean transaction completion in crash situations is exactly what the transaction manager is for after all.
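For reference, the automated handling in question is just the standard XA recovery scan; only knownToHaveCommitted(), a stand-in for the TM's log lookup, is invented here:

    import javax.transaction.xa.XAException;
    import javax.transaction.xa.XAResource;
    import javax.transaction.xa.Xid;

    class RecoveryScanSketch {
        void recover(XAResource rm) throws XAException {
            // Ask the RM for branches that prepared but never heard an outcome.
            Xid[] inDoubt = rm.recover(XAResource.TMSTARTRSCAN | XAResource.TMENDRSCAN);
            for (Xid xid : inDoubt) {
                if (knownToHaveCommitted(xid)) {
                    rm.commit(xid, false);   // the decision was logged before the crash
                } else {
                    rm.rollback(xid);        // presumed abort: no log record, roll back
                }
            }
        }
        private boolean knownToHaveCommitted(Xid xid) { return false; /* consult the tx log */ }
    }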
-
18. Re: Remoting Transport Transaction Inflow Design Discussion
marklittle Aug 17, 2011 3:08 PM (in response to dmlloyd)
David Lloyd wrote:
Mark Little wrote:
"I mean that I haven't addressed the issue of transaction timeout control."
What issues? The timeout is controlled by the coordinator, not the client. Or by "control" do you mean setTimeout calls?
Exactly. It's just a question of how these get propagated, which seems somewhat outside of the core of this solution. It's only mentioned because it's in the SPIs.
OK, so let's ignore this for now. In the grand scheme of things it's trivial.
Mark Little wrote:
"Keeping in mind that this is nowhere near the only process of this complexity to be tested - and no, don't trot out "it's more complex than you think" unless you want to enumerate specific cases (which will probably then be appropriated into additional tests) - I think we'd follow the same approach we'd follow for testing other things. We'd unit test the protocol of course, and test to ensure that the implementation matches the specification, and verify that the protocol handlers on either "end" forward to the proper APIs."
Go take a look at the QA tests for JBossTS. You'll see that a sh*t load of them are covering recovery. And then take a look at XTS and REST-AT. You'll see that a sh*t load of them are covering recovery. Want to take a wild stab in the dark why that might be the case ;-)? Yes, it's complex. It's got to be fault tolerant, so we have to test all of the cases. There are no edge-cases with transactions: it either works or it fails. Unit tests aren't sufficient for this.
Well, it's always good to have a set of existing projects to draw test scenarios from. But otherwise I don't think this is directly relevant to the discussion - unless you're saying "we must test these 200 different scenarios before I let you type 'git commit'". We need high quality, detailed tests for every subsystem. Having thoroughly tested transactions doesn't do us a lot of good if, for example, our JPA implementation or HornetQ or something is writing corrupt data. I mean everything needs thorough testing. Just the fact that these other projects have lots of tests covering recovery doesn't mean that those tests are necessary, and on the other hand, there may be many scenarios unaccounted for in these tests as well. AS is riddled with highly complex systems that need detailed testing.
I'm saying that if we are talking about developing a new distributed transaction protocol using JBR instead of CORBA, then I will need to see all of the transaction use cases we have covered in QA pass against this new implementation. Call me overly pessimistic, but even if you think that the scenario is narrowly focussed/self-contained, I like the nice warm fuzzy feeling that passing QA tests brings.
In this particular case (solution 2 that is), we're specifying an implementation for XAResource, a transport for it, and an endpoint which controls XATerminator; this says to me that our tests can be limited in scope to testing this mechanism from end to end. As I said if we have other projects we can draw recovery scenarios from, that's fine, and we will do so. I don't know what else to tell you.
And the A->B->C scenario simply isn't possible?
"Case 1 cannot be made to work when a local TM is present without adding some notion in the EE layer to determine whether it should use the local UserTransaction or the remote one. This is possible but is a possibly significant amount of work."
How significant? If we're putting all options on the table then this needs to be there too.
The problem is that we'd need some way to control which kind of UserTransaction is pulled from JNDI and thus injected into EE components. This can depend on what the user intends to do with it; thus we'd need to isolate many use cases and figure out what level this should be done at (deployment? component? server-wide?), and we need to do some analysis to determine where and how the remote server connection(s) should be specified and associate the two somehow. We're basically choosing between TMs on a per-operation basis. This type of configuration is unprecedented as far as I know - I think the analysis would take as long as the implementation, if not longer. Because it is not known exactly how this should look, I can't say how much effort this is going to be other than "lots".
Interestingly we've had several TS f2f meetings where the discussion has arisen around running local JTA and remote JTA (JTS) in the same container. Jonathan can say more on this, since he was driving those thoughts.
However, let's assume for the sake of argument that initially we decide that any container-to-container interactions that require transactions have to use HTTP, SOAP/HTTP or IIOP, but we want to leave the door open for other approaches later - would we be having a different discussion? We discussed suitable abstractions earlier, which could be independent of any commitment to changes at this stage, so I'm still trying to figure out what all of those abstractions would be.
Mark Little wrote:
"Theoretically each successive "step" will treat the TM of the subsequent "step" as a participating resource. As to D calling A, that will only work if the TM is clever enough to figure out what's happening (I don't see why it wouldn't as the Xid should, well, identify the transaction so A should recognize its own; but that's why we're having this discussion)."
Please go take a look at what we have to do for interposition in JTS. And it's not because JTS is more complex than it needs to be: interposition is a fundamental concept within distributed transactions and the problems, optimisations, recovery semantics etc. are there no matter what object model or distribution approach you use. Take a look at XTS too, for instance.
Yeah, but keep in mind that we're dealing in a strict hierarchy here; there are no peers. The transaction isn't so much "distributed" as it is "controlled"; caller always dominates callee. Thus if D calls A, the behavior I'd expect would be that A would treat the imported work as a different or subordinate transaction; it need not really have any direct knowledge that the two are related since the D→A relationship is controlled by D, and the C→D relationship is controlled by C, etc. If the D→A outcome is in doubt then it's up to D to resolve that branch, not A. But that's just my ignoramus opinion.
Controlling the transaction termination protocol is definitely a parent/child relationship; that much is obvious. However, I still don't see how you can say that A->B->C->D isn't possible (remember that each of these letters represents an AS instance). So the transaction flows between (across) 4 AS instances. It could even be A->B|C->D|E->A, i.e., an (extended) diamond shape if you draw it out.
When it comes to reality, this situation is extremely unlikely to occur even in the weirdest situations I've ever heard of. The reason is that if you've got two nodes invoking on each other, it is highly likely that they are within the same "tier", which greatly increases the likelihood that they could simply run JTS and be done.
"Unlikely" isn't a term I like when thinking about transactions. We're supposed to be working with (probabilistic) guarantees. As I've said a few times, I believe there is a lot more to this problem than first thought, so it needs to be more carefully discussed, designed/architected and, presumably, implemented.
Here's what I consider to be a likely, real-world scenario:
Host A runs a thin client which uses the "solution 1" mechanism to control the transaction when it talks to Host B.
Host B runs a "front" tier which is isolated by firewall. This tier has one or more local transactional databases or caches, and a local TM. The services running on B also perform EJB invocations on Host C.
Host C is the "rear" tier separated from B by one or more layer of firewall, and maybe even a public network. B talks to C via remoting, using "solution 2" to propagate transactions to it, using the client/server style of invocation.
Host C participates in a peer-to-peer relationship with other services on Hosts D, E, and F in the same tier, using Remoting or IIOP but using JTS to coordinate the transaction at this level since C, D, E, and F all mutually execute operations on one another (and possibly each consume local resources) in a distributed object graph style of invocation.
Note you can substitute A and B with an EIS and everything should be exactly the same (except that recovery processes would be performed by the EIS rather than by B's TM).
Everything I understand about transaction processing (which is definitely at least as much as a "joe user") says that there's no reason this shouldn't "just work". And we should be able to utilize existing transaction recovery mechanisms as well.
In this scenario why wouldn't we use something like REST-TX or XTS when bridging the firewall? Then we'd be in the transaction bridging arena that Jonathan and team have been working on for a while.
-
19. Re: Remoting Transport Transaction Inflow Design Discussion
marklittle Aug 17, 2011 3:16 PM (in response to dmlloyd)
David Lloyd wrote:
Mark Little wrote:
Could you outline the pros and cons of the current approaches we have in AS5/AS6? I know we've discussed them elsewhere already, but it would be good to capture it all here. For instance, why you believe that IIOP isn't right.
When it comes around to transaction control for the native invocation layer, as far as I am aware AS 5/6 only has the ClientUserTransaction approach (basically equivalent to "solution 1"). This is fine as far as it goes, but as I have said previously, integrating this at a server level is problematic.
Understood.
The cons of the AS5/6 native approach are obvious: no transaction propagation, limited transaction control. And the problems in the existing transport implementation are well-known.
Humour me and put them here explicitly.
As far as IIOP goes, I don't believe that it's "not right" per se; I do believe that there are valid use cases which make IIOP (in general) less than optimal, for example in the case where network topology or security environment makes it undesirable (either in terms of security, performance, configurability, etc.; for example to run many different protocols across a single physical connection or to use OTP-style authentication). CORBA is complex no matter how you slice it, sometimes prohibitively so. But that's my opinion which is really irrelevant here; as I said, if we decide as an organization (via the proper channels) to completely ditch the native transport in favor of a full investment in CORBA/IIOP then I will do so happily (I definitely have a lot of other stuff to be working on), though I do personally believe that in this case we would be passing up the opportunity to make something really special which perfectly fits a real need.
So what about using one of the existing transports over which transactions are supported, rather than implement yet another one? Of course the performance of, say, HTTP isn't as good as JBR or any binary protocol, but I've yet to see any performance requirements being made against this particular requirement. In fact as far as I can tell, we're talking about the requirement to support a case that we couldn't support in 5/6 based only on the fact that we couldn't support it, not on the fact that we need to support it at N tx/sec.
Yes, SOAP/HTTP is equally as slow as (and can be slower than) plain HTTP, but SOAP/JMS is an option. I suppose if there was enough reason we could even consider SOAP/JBR, though at that point I'd be first in the queue to recommend removing SOAP from the equation entirely!
-
20. Re: Remoting Transport Transaction Inflow Design Discussion
dmlloyd Aug 17, 2011 4:02 PM (in response to jhalliday)
Jonathan Halliday wrote:
>> [remote UserTransaction] ... behaves in an intuitive fashion only for a very limited, albeit common, set of use cases. For more complex scenarios its inherent limitations manifest in ways that can be confusing to users.
> or "some complex scenarios"
yup, I can give you some of them too:
1) The 'client' is actually another AS instance, either of the same or earlier vintage, doing JNDI lookup of UserTransaction against a remote AS7.
2) The client wants to talk to two remote AS instances in the same tx.
3) The client is an environment that has its own UserTransaction implementation. This is actually just a more general version of case 1), but in which you can't use tricks like patching the client side lookup to return your actual UserTransaction instead of the remote proxy.
4) You want to support load balancing or failover for the client-server connection.
Okay, so these basically correspond to the same scenarios which have already been outlined. As far as I know there's no need (in terms of existing functionality or explicit requirement) to support #4 mid-transaction though.
Jonathan Halliday wrote:
>> JCA inflow was either designed for propagation to leaf nodes only, or incredibly badly thought out.
> or "badly thought out"
yup, although it's really pretty obvious: The JCA inflow API uses an Xid as a poor man's transaction propagation context. Xids were designed only for control flow between a transaction manager and a resource manager, not for use in multi-level trees. The JCA has no provision for allowing subordinates to create new branches in the global transaction. For that it would have to pass in a mask of free bits in the bqual array as well as the Xid to the subordinate. Indeed the JCA expressly prohibits the container handling the inflow from altering the Xid. It has to remain immutable because without any knowledge of which bits can safely be mutated, the container can't guarantee to generate unique Xids, a property which is required by the spec.
I didn't find this in the JCA spec (there was a bit about RMs not altering an Xid's data bits in transit, but this is not the same thing), but I see your point about XID generation in a hierarchical system (it'd be fine as long as there are no cycles and you could just patch stuff onto the end of the branch ID, but that's not technically very robust, and could violate the XID "format" if there is one). I'm curious to know how other vendors solve this problem with EIS transaction inflow. I could see a workaround in which additional XAResources are enlisted to the root controller by propagating them back *up* the chain, but this is back into custom SPI territory which I'd just as soon stay out of.
Alternatively the subordinate TM could simply generate a new global transaction ID for its subordinate resources. It'd technically be a lie but it'd cleanly solve this problem at least as far as transaction completion goes - recovery semantics might be hard to work out though.
Jonathan Halliday wrote:
> or "not capable enough"
The XA spec expects that each resource manager (give or take its XAResource's isSameRM implementation) gets its own branch i.e. unique Xid. With inflowed Xids you can't generate new Xids to meet that expectation, you have to use the inflowed one verbatim. That causes problems with the state machine for the XA protocol lifecycle, as it's tied to the Xid. For example, if the inflowed tx is used to connect to two resource managers, you can't recover from crashes cleanly as the recovery mechanism is tracking state on the assumption that the Xid belongs to at most one RM and once it has cleaned that one up it's done. Actually on further thought even an upper limit of one is optimistic - the Xid contains the node Id of the originating parent and that parent may connect to the same resource manager, in which case it's going to incorrectly manage the lifecycle because it can't distinguish the XAResource representing the subordinate tx from the one representing the RM as they have the same Xid. That last case is an artifact of our implementation rather than the spec though.
Again I can't find this in the spec. It clearly says that an XID is used to identify the incoming transaction, but nothing says that the importing container cannot in turn generate different XIDs for its own resources.
As for your latter point though, recalling that we're dealing with a strictly hierarchical relationship here; even if the same transaction recursively flows in to a node into which it had already flowed, it doesn't really have to treat it as another branch of the same transaction, even if it were possible to do so. It's a departure from CORBA-style distribution in that every inflow can be a new level in the transaction hierarchy even if it passes through the same node (which you would not normally do in a hierarchical relationship, by definition, because resources could then be accessed from two wholly different XIDs even if they are logically a part of the same transaction). If true distribution is desired, there's always JTS, after all. That's what this is - you trade away the functionality you don't want anyway when you're in a client/server environment, and in return you get much simpler semantics (and in turn, less overhead) and the benefits of the optimized transport. Choices are good.
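A sketch of what that could look like - SimpleXid, the format id, and the in-memory mapping are all invented, and as noted earlier the mapping would have to be logged for recovery to work:

    import java.nio.ByteBuffer;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.atomic.AtomicLong;
    import javax.transaction.xa.Xid;

    class SubordinateXidMap {
        private static final AtomicLong SEQ = new AtomicLong();
        // Assumes the inflowed Xid implementation provides equals/hashCode.
        private final Map<Xid, Xid> inflowedToLocal = new HashMap<Xid, Xid>();

        // The subordinate answers its parent under the inflowed Xid, but enlists
        // its own RMs under a freshly minted global id - "technically a lie".
        synchronized Xid localXidFor(Xid inflowed, byte[] localNodeId) {
            Xid local = inflowedToLocal.get(inflowed);
            if (local == null) {
                byte[] gtrid = ByteBuffer.allocate(8).putLong(SEQ.incrementAndGet()).array();
                local = new SimpleXid(0x20111, gtrid, localNodeId); // format id is arbitrary here
                inflowedToLocal.put(inflowed, local);               // must be persisted for recovery
            }
            return local;
        }
    }

    class SimpleXid implements Xid {
        private final int formatId;
        private final byte[] gtrid, bqual;
        SimpleXid(int formatId, byte[] gtrid, byte[] bqual) {
            this.formatId = formatId; this.gtrid = gtrid; this.bqual = bqual;
        }
        public int getFormatId() { return formatId; }
        public byte[] getGlobalTransactionId() { return gtrid.clone(); }
        public byte[] getBranchQualifier() { return bqual.clone(); }
    }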
Jonathan Halliday wrote:
> or "unintuitive behavior"
yup, I can give you one for that too - the afterCompletions run relative to the commit in the local node where they are registered, which may actually be before the commit in another node and not correctly reflect heuristic outcomes or be suitable for triggering subsequent steps in a process that depend on running after commits in the other nodes. Likewise beforeCompletions run relative to the prepare in the local node, thus may run after a prepare in another node. In the best case that's merely inefficient; in the worst case, where resource managers are shared, it causes a flush of cached data to occur after a prepare, which will fail. If that's not complicated enough for you, take the inflowed transaction context and make a transactional call back to the originating parent server. Fun, fun.
I wouldn't be worried about the Synchronization stuff in a multi-tier environment - especially if we disallow resource sharing (i.e. treat each node's access to a resource as separate), which seems prudent given my above thoughts about unorthodox XID handling. In my experience, the use cases for the kind of boss/subordinate cascading which we are talking about would generally not rely on that ability (resource sharing) anyway. And if you're not sharing resources then, if you look at the synchronization issues, you'll see that their semantics probably only matter relative to what the local node can see anyway. I think this lack of capability is fair if it saves us implementation effort.
That isn't to say that we couldn't invent some great new SPI which does this all much better. Given unlimited (or less limited) resources, this would be fine by me. Furthermore since all of this XATerminator/XAResource stuff is implementation details, we could do it one way now and change to a different, more feature-rich solution later on. Maybe at the same time we can tackle the XID deficiency in the JCA spec somehow.
Jonathan Halliday wrote:
> You're basically saying that an MDB can never access more than one resource. That's a major problem in and of itself.
Not at all. MDBs don't normally run in inflowed transactions. The server hosting the MDB container starts a top level transaction, enlists the JMS as a resource manager and additionally enlists any resource managers the MDB calls e.g. a database. It's a flat structure, not a hierarchic one.
The purpose is to execute Work in the context of a transaction controlled by an outside party, and delivering messages as part of an imported transaction is allowed and described in the spec as one of the three models (with respect to transactions) in which messages may be delivered.
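That model, expressed in the standard JCA 1.5 SPI calls (only the orchestration around them is simplified):

    import javax.resource.spi.XATerminator;
    import javax.resource.spi.work.ExecutionContext;
    import javax.resource.spi.work.Work;
    import javax.resource.spi.work.WorkManager;
    import javax.transaction.xa.XAResource;
    import javax.transaction.xa.Xid;

    class ImportedTxDelivery {
        // The adapter (or, by analogy, our remote endpoint) runs the delivery
        // Work under an imported Xid, then drives completion via XATerminator.
        void deliver(WorkManager wm, XATerminator xat, Work delivery, Xid inflowed) throws Exception {
            ExecutionContext ec = new ExecutionContext();
            ec.setXid(inflowed);                             // import the transaction
            wm.doWork(delivery, WorkManager.IMMEDIATE, ec, null);
            if (xat.prepare(inflowed) == XAResource.XA_OK) { // XA_RDONLY needs no commit
                xat.commit(inflowed, false);                 // second phase, importer-driven
            }
        }
    }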
In any case, if that API was intended for flat execution then yeah it's an utter failure of an SPI. If it's intended for hierarchical execution then it's only a moderate failure (due to the XID problem), one that's actually workable in practice (in my opinion). Without resources to control, after all, there's not a lot of point to transactional inflow.
Jonathan Halliday wrote:
> Finally "unacceptable to ship a solution that may require manual transaction cleanup" - you should know that any two-phase transaction system may require manual transaction cleanup; that's the nature of two-phase transactions.
sure, but they are the small number of outcomes that result from one or more of the players not behaving in accordance with the spec e.g. resource managers making autonomous outcome decisions. We don't automatically do anything about those because we simply can't - that's the point at which the spec basically says 'give up, throw a heuristic and let a human deal with the mess'. You're talking about the much more numerous expected failure cases that can be handled automatically under the spec. Indeed, these are exactly the kinds of run-of-the-mill system failures a distributed transaction protocol is designed to protect a user against. Intentionally shipping a non spec compliant XAResource implementation that will result in a support case for many of those common failures is borderline business suicide, see above.
The whole idea is predicated on complying with the XAResource contract; we would not intentionally ship a non spec compliant XAResource implementation.
Jonathan Halliday wrote:
> I'm pretty sure that if someone unplugs the ethernet cable of the transaction coordinator after prepare but before commit, there's going to have to be some manual cleanup.
Really? Got a test case for that? Other than the one a certain competitor wrote and we soundly refuted as FUD? Because I've got an extensive test suite that shows no such outcomes. Well, except for MS SQL Server and MySQL, neither of which is fully XA compliant at present. Ensuring clean transaction completion in crash situations is exactly what the transaction manager is for after all.
Okay, great. What I was trying to get across with those requirement items is that we're only going to implement the contracts, and we're not implementing any special recovery semantics beyond what the contracts specify and what the TM does for us. If the TM can handle every crash scenario ever, all the better.
-
21. Re: Remoting Transport Transaction Inflow Design Discussion
dmlloyd Aug 17, 2011 4:30 PM (in response to marklittle)
Mark Little wrote:
David Lloyd wrote:
Mark Little wrote:
Could you outline the pros and cons of the current approaches we have in AS5/AS6? I know we've discussed them elsewhere already, but it would be good to capture it all here. For instance, why you believe that IIOP isn't right.
When it comes around to transaction control for the native invocation layer, as far as I am aware AS 5/6 only has the ClientUserTransaction approach (basically equivalent to "solution 1"). This is fine as far as it goes, but as I have said previously, integrating this at a server level is problematic.
Understood.
The cons of the AS5/6 native approach are obvious: no transaction propagation, limited transaction control. And the problems in the existing transport implementation are well-known.
Humour me and put them here explicitly.
See below.
Mark Little wrote:
As far as IIOP goes, I don't believe that it's "not right" per se; I do believe that there are valid use cases which make IIOP (in general) less than optimal, for example in the case where network topology or security environment makes it undesirable (either in terms of security, performance, configurability, etc.; for example to run many different protocols across a single physical connection or to use OTP-style authentication). CORBA is complex no matter how you slice it, sometimes prohibitively so. But that's my opinion which is really irrelevant here; as I said, if we decide as an organization (via the proper channels) to completely ditch the native transport in favor of a full investment in CORBA/IIOP then I will do so happily (I definitely have a lot of other stuff to be working on), though I do personally believe that in this case we would be passing up the opportunity to make something really special which perfectly fits a real need.
So what about using one of the existing transports over which transactions are supported, rather than implement yet another one? Of course the performance of, say, HTTP isn't as good as JBR or any binary protocol, but I've yet to see any performance requirements being made against this particular requirement. In fact as far as I can tell, we're talking about the requirement to support a case that we couldn't support in 5/6 based only on the fact that we couldn't support it, not on the fact that we need to support it at N tx/sec.
Yes, SOAP/HTTP is equally as slow as (and can be slower than) plain HTTP, but SOAP/JMS is an option. I suppose if there was enough reason we could even consider SOAP/JBR, though at that point I'd be first in the queue to recommend removing SOAP from the equation entirely!
Well that's what I'm talking about: this is the direct functional replacement for the Remoting 2.x-based UnifiedInvoker stuff, and the JRMPInvoker before it. This is the functionality we're carrying over. I can tell you with absolute certainty that porting this stuff over directly is not a viable option. We can discuss the details offline if you like.
This is really bleeding over into another topic at this point. Remoting 3.x is the transport layer that we're using for AS management; the plan was to bring over a JSR-160 implementation plus EJB remote invocation which allows all these things to run over the same channel, and to leverage JBMAR, to get an optimally-performant invocation layer which supports all kinds of security strategies, and works with even the stupidest of firewalls. In other words, replace the old broken stuff with a combination of a bunch of existing good stuff. This in turn gives us a nice springboard for nicely supporting EE app-client, getting this multi-tier transaction stuff for free (or at least, that was the plan) and at the same time giving us a shiny new bullet point to throw up against the competitors who have a similar sort of transport already. Also it gives us a nice path forward to allow EAP 4 and 5 applications (and even applications running on third-party appservers) to talk to EAP 6 applications using this protocol simply by way of a client JAR which is something that has been requested of us more than once. We know that JBMAR outperforms everything else out there which claims any level of compliance to the serialization spec; we can choose to use it, or not.
-
22. Re: Remoting Transport Transaction Inflow Design Discussion
dmlloyd Aug 17, 2011 5:48 PM (in response to marklittle)
Mark Little wrote:
Mark Little wrote:
"Keeping in mind that this is nowhere near the only process of this complexity to be tested - and no, don't trot out "it's more complex than you think" unless you want to enumerate specific cases (which will probably then be appropriated into additional tests) - I think we'd follow the same approach we'd follow for testing other things. We'd unit test the protocol of course, and test to ensure that the implementation matches the specification, and verify that the protocol handlers on either "end" forward to the proper APIs."
Go take a look at the QA tests for JBossTS. You'll see that a sh*t load of them are covering recovery. And then take a look at XTS and REST-AT. You'll see that a sh*t load of them are covering recovery. Want to take a wild stab in the dark why that might be the case ;-)? Yes, it's complex. It's got to be fault tolerant, so we have to test all of the cases. There are no edge-cases with transactions: it either works or it fails. Unit tests aren't sufficient for this.
Well, it's always good to have a set of existing projects to draw test scenarios from. But otherwise I don't think this is directly relevant to the discussion - unless you're saying "we must test these 200 different scenarios before I let you type 'git commit'". We need high quality, detailed tests for every subsystem. Having thoroughly tested transactions doesn't do us a lot of good if, for example, our JPA implementation or HornetQ or something is writing corrupt data. I mean everything needs thorough testing. Just the fact that these other projects have lots of tests covering recovery doesn't mean that those tests are necessary, and on the other hand, there may be many scenarios unaccounted for in these tests as well. AS is riddled with highly complex systems that need detailed testing.
I'm saying that if we are talking about developing a new distributed transaction protocol using JBR instead of CORBA, then I will need to see all of the transaction use cases we have covered in QA pass against this new implementation. Call me overly pessimistic, but even if you think that the scenario is narrowly focussed/self-contained, I like the nice warm fuzzy feeling that passing QA tests brings.
Okay, that's reasonable, but bear in mind that what we're talking about is probably going to be more constrained in some ways, so we may need fewer, or at most a few different, tests. Some of these tests may simply verify that things we don't support in this scenario are explicitly disallowed for example.
Mark Little wrote:
In this particular case (solution 2 that is), we're specifying an implementation for XAResource, a transport for it, and an endpoint which controls XATerminator; this says to me that our tests can be limited in scope to testing this mechanism from end to end. As I said if we have other projects we can draw recovery scenarios from, that's fine, and we will do so. I don't know what else to tell you.
And the A->B->C scenario simply isn't possible?
A->B->C yes is possible, however this is really (A client -> B server -> (a bunch of existing stuff) -> B client -> C server); which is to say that if we verify that we follow the rules for what we implement, then it's up to JBTS to follow the rules for what it implements. Testing multiple steps like this is the task of the aforementioned QA tests, I suppose, for the "warm and fuzzy" quality, but it wouldn't be covered in the unit tests for any of the individual pieces as it is expected that the components that they interact with all behave according to contract.
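For clarity, how that decomposition might look at B, assuming the proxy-XAResource idea from earlier handles the B client -> C server leg; importTransaction() is an invented stand-in for B's inflow plumbing:

    import javax.transaction.Transaction;
    import javax.transaction.TransactionManager;
    import javax.transaction.xa.XAResource;
    import javax.transaction.xa.Xid;

    class ChainRelaySketch {
        // B server side: the A->B transaction is imported as a subordinate;
        // B client side: C is then enlisted as just another resource of B's.
        void handleInvocation(TransactionManager tm, Xid inflowed, XAResource proxyTowardC) throws Exception {
            Transaction subordinate = importTransaction(inflowed); // hypothetical inflow plumbing
            tm.resume(subordinate);
            try {
                subordinate.enlistResource(proxyTowardC);
                // ... invoke the bean; its call to C rides the enlisted proxy ...
            } finally {
                tm.suspend();
            }
        }
        private Transaction importTransaction(Xid xid) {
            throw new UnsupportedOperationException("stand-in for the real import SPI");
        }
    }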
Mark Little wrote:
"Case 1 cannot be made to work when a local TM is present without adding some notion in the EE layer to determine whether it should use the local UserTransaction or the remote one. This is possible but is a possibly significant amount of work."
How significant? If we're putting all options on the table then this needs to be there too.
The problem is that we'd need some way to control which kind of UserTransaction is pulled from JNDI and thus injected into EE components. This can depend on what the user intends to do with it; thus we'd need to isolate many use cases and figure out what level this should be done at (deployment? component? server-wide?), and we need to do some analysis to determine where and how the remote server connection(s) should be specified and associate the two somehow. We're basically choosing between TMs on a per-operation basis. This type of configuration is unprecedented as far as I know - I think the analysis would take as long as the implementation, if not longer. Because it is not known exactly how this should look, I can't say how much effort this is going to be other than "lots".
Interestingly we've had several TS f2f meetings where the discussion has arisen around running local JTA and remote JTA (JTS) in the same container. Jonathan can say more on this, since he was driving those thoughts.
However, let's assume for the sake of argument that initially we decide that any container-to-container interactions that require transactions have to use HTTP, SOAP/HTTP or IIOP, but we want to leave the door open for other approaches later - would we be having a different discussion? We discussed suitable abstractions earlier, which could be independent of any commitment to changes at this stage, so I'm still trying to figure out what all of those abstractions would be.
Well, all of this is predicated on the assumption that we still need some kind of remote control of transactions from server to server when using the Remoting-based transport in order to fulfill our obligations for existing functionality. If we can work it out or establish that this is not the case, then sure, we can revisit this discussion at a later time, assuming that such time would ever arrive. I would very much like the opportunity to be able to work on establishing requirements for such an abstraction. Such a discussion should come hand in hand with fixing the shortcomings of XATerminator in the JCA specification as well though, as these seem to me to be two faces of the same problem.
Mark Little wrote:
Mark Little wrote:
"Theoretically each successive "step" will treat the TM of the subsequent "step" as a participating resource. As to D calling A, that will only work if the TM is clever enough to figure out what's happening (I don't see why it wouldn't as the Xid should, well, identify the transaction so A should recognize its own; but that's why we're having this discussion)."
Please go take a look at what we have to do for interposition in JTS. And it's not because JTS is more complex than it needs to be: interposition is a fundamental concept within distributed transactions and the problems, optimisations, recovery semantics etc. are there no matter what object model or distribution approach you use. Take a look at XTS too, for instance.
Yeah, but keep in mind that we're dealing in a strict hierarchy here; there are no peers. The transaction isn't so much "distributed" as it is "controlled"; caller always dominates callee. Thus if D calls A, the behavior I'd expect would be that A would treat the imported work as a different or subordinate transaction; it need not really have any direct knowledge that the two are related since the D→A relationship is controlled by D, and the C→D relationship is controlled by C, etc. If the D→A outcome is in doubt then it's up to D to resolve that branch, not A. But that's just my ignoramus opinion.
Controlling the transaction termination protocol is definitely a parent/child relationship; that much is obvious. However, I still don't see how you can say that A->B->C->D isn't possible (remember that each of these letters represents an AS instance). So the transaction flows between (across) 4 AS instances. It could even be A->B|C->D|E->A, i.e., an (extended) diamond shape if you draw it out.
I don't think I said it wasn't possible, or at least I didn't intend to. I think supporting trees or linear chains of appservers is fine. I just don't think we necessarily have to support A->B->A kinds of scenarios or other non-tree (especially cyclic) directed graphs as this is not generally a good fit for Remoting in the first place; one would normally choose IIOP or JRMP for this kind of topology. Now we could support this, of course, and if we did I imagine it might be a good idea to treat ->A as a wholly new branch of the transaction, or even as a wholly new global transaction, rather than trying to reconcile what resources on A are already active in the transaction or whatever else you have to do for JTS style interposition. Likewise if we have A->B->C and all three access a common resource, I don't think that we necessarily need to support resource sharing in the Remoting scenario (though if we can, that's great).
That said, I think that a proper implementation ought to be able to mix and match the approaches. A linear chain of app servers and their resources using the Remoting transaction approach could terminate in a JTS "cloud" of appservers, which in turn could enroll more linear chains of app servers and their resources, so long as the constraints we place aren't broken (for example if we don't support shared resources in the Remoting chains then we simply don't).
Mark Little wrote:
Here's what I consider to be a likely, real-world scenario:
Host A runs a thin client which uses the "solution 1" mechanism to control the transaction when it talks to Host B.
Host B runs a "front" tier which is isolated by firewall. This tier has one or more local transactional databases or caches, and a local TM. The services running on B also perform EJB invocations on Host C.
Host C is the "rear" tier separated from B by one or more layer of firewall, and maybe even a public network. B talks to C via remoting, using "solution 2" to propagate transactions to it, using the client/server style of invocation.
Host C participates in a peer-to-peer relationship with other services on Hosts D, E, and F in the same tier, using Remoting or IIOP but using JTS to coordinate the transaction at this level since C, D, E, and F all mutually execute operations on one another (and possibly each consume local resources) in a distributed object graph style of invocation.
Note you can substitute A and B with an EIS and everything should be exactly the same (except that recovery processes would be performed by the EIS rather than by B's TM).
Everything I understand about transaction processing (which is definitely at least as much as a "joe user") says that there's no reason this shouldn't "just work". And we should be able to utilize existing transaction recovery mechanisms as well.
In this scenario why wouldn't we use something like REST-TX or XTS when bridging the firewall? Then we'd be in the transaction bridging arena that Jonathan and team have been working on for a while.
Well, you could, of course (at least you could use XTS, but not REST-TX unless we designed our own EJB-over-REST protocol). Or you could use an IIOP proxy. And we could map our management protocol on to SOAP, or IIOP. But the point is that all of these approaches are a pain for users and all lack the advantages of a native protocol - namely performance, more flexible and efficient security, simpler client and server implementation (i.e. no SOAP bindings or monkeying with IDL). In other words, I can say "connect to this URL, authenticate thusly, give me this EJB, and start calling stuff on it" with a native client, which you cannot do without extra steps and a lot of configuration and complexity in SOAP or even CORBA. It's the API simplicity of RMI (simpler, really) with sane firewall-friendly connection semantics and much better performance, or at least that's the idea.
Could we drop a native transport in favor of SOAP+IIOP? Absolutely. But it would suck, IMO. I think a native transport is an essential tool, and it's something we've provided in the past, but it's not up to me to make that call in any case. Even if we dropped it though, I'd probably develop it in my free time anyway just because I have no inclination to use SOAP or IIOP for my personal projects, and it's something I believe in. Granted my free time projects have this way of winding up back in the middle of things these days.
-
23. Re: Remoting Transport Transaction Inflow Design Discussion
jason.greene Aug 17, 2011 10:27 PM (in response to dmlloyd)While we need to figure out the short term, I really think we should start formulating a plan for our long-term direction as part of this discussion. In particular, I think we should explore the possibility of making JTS capable of running on a native transport.
-
24. Re: Remoting Transport Transaction Inflow Design Discussion
marklittle Aug 18, 2011 7:11 AM (in response to dmlloyd)"A->B->C yes is possible, however this is really (A client -> B server -> (a bunch of existing stuff) -> B client -> C server); which is to say that if we verify that we follow the rules for what we implement, then it's up to JBTS to follow the rules for what it implements."
How do you expect to prevent the context from flowing from B server to B client? A per-thread interceptor doing the disassociation, for instance?
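(For concreteness, such a per-thread interceptor could amount to a plain JTA suspend/resume around the outbound invocation. A minimal sketch, assuming a hypothetical OutboundInvocation abstraction for the client-side chain:)

```java
// Sketch of per-thread disassociation: suspend the inflowed transaction
// before the outbound Remoting call so the context does not leak from
// "B server" to "B client", then re-associate it afterwards.
// The OutboundInvocation type is hypothetical; suspend/resume is plain JTA.
import javax.transaction.Transaction;
import javax.transaction.TransactionManager;

public class DisassociatingInterceptor {
    private final TransactionManager tm;

    public DisassociatingInterceptor(TransactionManager tm) {
        this.tm = tm;
    }

    public Object invoke(OutboundInvocation invocation) throws Exception {
        // Detach the current transaction from this thread (may be null).
        Transaction suspended = tm.suspend();
        try {
            return invocation.proceed();
        } finally {
            // Re-associate once the remote call returns.
            if (suspended != null) {
                tm.resume(suspended);
            }
        }
    }

    // Hypothetical stand-in for whatever invocation-chain abstraction applies.
    public interface OutboundInvocation {
        Object proceed() throws Exception;
    }
}
```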
"Well, all of this is predicated on the assumption that we still need some kind of remote control of transactions from server to server when using the Remoting-based transport in order to fulfill our obligations for existing functionality. If we can work it out or establish that this is not the case, then sure, we can revisit this discussion at a later time, assuming that such time would ever arrive. I would very much like the opportunity to be able to work on establishing requirements for such an abstraction. Such a discussion should come hand in hand with fixing the shortcomings of XATerminator in the JCA specification as well though, as these seem to me to be two faces of the same problem."
Whatever happens now, I think it's clear that we need to look at this for the future. And a realistic future at that, i.e., not "years and years from now".
"I don't think I said it wasn't possible, or at least I didn't intend to. I think supporting trees or linear chains of appservers is fine. I just don't think we necessarily have to support A->B->A kinds of scenarios or other non-tree (especially cyclic) directed graphs as this is not generally a good fit for Remoting in the first place; one would normally choose IIOP or JRMP for this kind of topology."
OK, but this then comes back to my first question above: how do we prevent these things? Leaving it up to the developer of the application/business object/EJB isn't going to work.
"That said, I think that a proper implementation ought to be able to mix and match the approaches. A linear chain of app servers and their resources using the Remoting transaction approach could terminate an a JTS "cloud" of appservers, which in turn could enroll more linear chains of app servers and their resources, so long as the contstraints we place aren't broken (for example if we don't support shared resources in the Remoting chains then we simply don't)."
Agreed, and in fact this sounds very much like the genesis of the bridging work that I keep mentioning Jonathan has been doing (has done). The idea there was that a transaction could start out over, say, SOAP, inflow to a container and then be sent out over another transport, etc. Graphs of arbitrary size and complexity could be supported, with extensibility to support arbitrary transports.
"Could we drop a native transport in favor of SOAP+IIOP? Absolutely. But it would suck, IMO. I think a native transport is an essential tool, and it's something we've provided in the past, but it's not up to me to make that call in any case. Even if we dropped it though, I'd probably develop it in my free time anyway just because I have no inclination to use SOAP or IIOP for my personal projects, and it's something I believe in. Granted my free time projects have this way of winding up back in the middle of things these days."
Yes, I understand the trade-offs. I mentioned it in order to determine if there was some intermediate step(s) that we could take to help address the issue.
-
25. Re: Remoting Transport Transaction Inflow Design Discussion
marklittle Aug 18, 2011 7:12 AM (in response to jason.greene)Definitely. It's not quite JTS over some other transport, but I know what you mean.
-
26. Re: Remoting Transport Transaction Inflow Design Discussion
jhalliday Aug 25, 2011 8:26 AM (in response to dmlloyd)From the point of view of the transactions project roadmap, it seems that -
ClientUserTransaction requires no additional implementation work. The same hooks that supported the earlier implementations of this can be used for the new one too.
Transaction context inflow support falls into two parts: whole transaction (gtrid) interposition and branch only (bqual) interposition.
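For reference, the gtrid and bqual referred to here are the two byte-array components of an XA Xid; a minimal implementation makes the split concrete. Whole transaction interposition gives each node a fresh gtrid, while branch only interposition keeps the inflowed gtrid and varies only the bqual:

```java
// An XA Xid is (formatId, gtrid, bqual). This is just the standard
// javax.transaction.xa.Xid contract made explicit, for reference.
import javax.transaction.xa.Xid;

public final class SimpleXid implements Xid {
    private final int formatId;
    private final byte[] gtrid; // global transaction id, max 64 bytes
    private final byte[] bqual; // branch qualifier, max 64 bytes

    public SimpleXid(int formatId, byte[] gtrid, byte[] bqual) {
        this.formatId = formatId;
        this.gtrid = gtrid.clone();
        this.bqual = bqual.clone();
    }

    public int getFormatId() { return formatId; }
    public byte[] getGlobalTransactionId() { return gtrid.clone(); }
    public byte[] getBranchQualifier() { return bqual.clone(); }
}
```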
For whole transaction interposition, a new subordinate transaction context is created on each node receiving an inflow. Synchronizations are handled purely locally. The JCA inflow API can be used, albeit with semantics which IMO are not spec compliant. This model would be relatively simple to implement on the transaction manager side, as the existing recovery architecture will mostly still apply. IMO it's of limited utility for users though, as resource managers will see independent transactions and not do any transaction branch coupling. That impacts both functionality and performance.
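As a rough sketch of the JCA inflow path described above (how the XATerminator is obtained is container-specific - in a resource adapter it comes from the BootstrapContext - and work is assumed to have already been performed under the imported Xid):

```java
// Driving completion of an inflowed transaction through the JCA
// XATerminator. This is the standard javax.resource.spi API; obtaining
// the terminator instance is assumed/elided.
import javax.resource.spi.XATerminator;
import javax.transaction.xa.XAException;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

public class InflowCompletion {
    // Two-phase completion of a transaction previously imported under 'xid'.
    public static void complete(XATerminator terminator, Xid xid) throws XAException {
        int vote = terminator.prepare(xid);
        if (vote == XAResource.XA_OK) {
            terminator.commit(xid, false); // false = two-phase, not one-phase
        }
        // XA_RDONLY means there is nothing left to commit; heuristic
        // outcomes surface as XAExceptions and may require forget(xid).
    }
}
```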
For branch only transaction interposition, subordinate nodes maintain the inflowed gtrid but create new branches within an allocated portion of the bqual state space. This requires information about the allocation of bqual space to be communicated, either by explicit parameter passing for the general case or by encoding in the Xid for the jboss->jboss inflow case. It affords the opportunity for branch coupling in resource managers used by more than one node in the same transaction, but leads to more complicated recovery needs.
Specifically, recovery can no longer be driven off consideration of the gtrid ownership alone, but must also consider bqual ownership. This naturally requires that the bqual value actually contain node ownership information, which will require a new encoding. On the other hand we probably need one anyhow to communicate delegation of the bqual state space on links where we're working with 3rd party implementations and thus constrained to the JCA api rather than one that could carry additional parameters.
For links where we do control both ends, we need additional methods to support afterCompletion as a separate phase. BeforeCompletion is already available as a separate step on subordinate transactions, although it may need to be even finer grained to allow for JTA 1.1 TSR sync interposition semantics to be transaction global rather than node local.
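Purely as an illustration of the "encoding in the Xid" option - the actual encoding, as noted, still needs to be designed - a subordinate could keep the inflowed gtrid and stamp node ownership plus a local sequence into a freshly allocated bqual, reusing the SimpleXid sketch above:

```java
// Illustrative branch allocation for branch-only interposition: same gtrid,
// new bqual carrying node ownership. The [node id][local sequence] layout
// is an assumption, not a designed encoding.
import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicLong;
import javax.transaction.xa.Xid;

public class BranchAllocator {
    private final int nodeId;
    private final AtomicLong counter = new AtomicLong();

    public BranchAllocator(int nodeId) {
        this.nodeId = nodeId;
    }

    public Xid allocateBranch(Xid inflowed) {
        // bqual = [node id][local sequence]; recovery can then attribute
        // branches to owning nodes by inspecting the bqual alone.
        byte[] bqual = ByteBuffer.allocate(12)
                .putInt(nodeId)
                .putLong(counter.incrementAndGet())
                .array();
        return new SimpleXid(inflowed.getFormatId(),
                inflowed.getGlobalTransactionId(), bqual);
    }
}
```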
In both models the communication is entirely top down - the coordinator does not exist as a network endpoint as in JTS, since persistent ids are not supported by the remoting transport. This constrains recovery to use the XA recovery scan model rather than the JTS replayCompletion one. One consequence of this is that parent nodes will need to maintain a list of all possible subordinates and have a recovery module plugin for them, but that's probably not an undue burden for most deployment scenarios. Another consequence is that it's probably going to be better to build it as distributed hooks into JBossJTA rather than a pluggable transport layer for JBossJTS. That should also offer better performance, as resource records will get inlined to the tx rather than be separate ostore entries.
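A minimal sketch of that scan-based model, with a hypothetical SubordinateConnector standing in for the per-subordinate recovery module plugin:

```java
// Top-down recovery scan: the parent holds a static list of possible
// subordinates (no JTS replayCompletion callbacks exist here) and asks
// each for its in-doubt Xids via the standard XA recovery scan.
import java.util.List;
import javax.transaction.xa.XAException;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

public class SubordinateRecoveryScan {
    // Hypothetical: connects to a configured subordinate node and exposes
    // it as an XAResource.
    public interface SubordinateConnector {
        XAResource connect() throws XAException;
    }

    public void scan(List<SubordinateConnector> subordinates) throws XAException {
        for (SubordinateConnector connector : subordinates) {
            XAResource resource = connector.connect();
            // One-shot scan: start and end the recovery scan in one call.
            Xid[] inDoubt = resource.recover(
                    XAResource.TMSTARTRSCAN | XAResource.TMENDRSCAN);
            for (Xid xid : inDoubt) {
                // Compare against the parent's log: commit, roll back, or
                // leave for a later pass as appropriate (elided here).
            }
        }
    }
}
```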
-
27. Re: Remoting Transport Transaction Inflow Design Discussion
dmlloyd Sep 15, 2011 12:10 AM (in response to jhalliday)Okay it looks to me like we have a plan here, but do we have a path forward? What do you need from the AS team, if anything?
-
28. Re: Remoting Transport Transaction Inflow Design Discussion
jason.greene Sep 19, 2011 11:10 AM (in response to dmlloyd)David Lloyd wrote:
Okay it looks to me like we have a plan here, but do we have a path forward? What do you need from the AS team, if anything?
Jonathan's lack of reply, combined with an internal email that states he will not be doing any improvements or changes for EAP6, leads me to believe that we should just take the XATerminator option and do this ourselves. We are going to proceed down this path until we hear that the TM project is willing to cooperate.
-
29. Re: Remoting Transport Transaction Inflow Design Discussion
jhalliday Sep 19, 2011 11:43 AM (in response to jason.greene)Sorry, running a bit behind what with having spent the last two work days in a meeting room with lousy connectivity :-(
The discussion above is on the basis of 'what would we ideally like to have in the future' per the related conf call. We're not even at the stage of 'we will deliver feature enhancement X by date Y' yet. What we eventually commit to implement will depend on what other requirements we have competing for attention in the same timeframe, and we don't have all that information yet.
For the AS7.1 release we've already delivered the final TS feature release, so if you want tx over remoting in that, you will indeed need to build it on whatever is already there, which basically means JCA tx inflow.