      • 75. Re: Remote txinflow: XID changes
        tomjenkinson

        Thanks for the clarification, and as for leaving this alone until after paternity leave, I am very much happy to! (breathes sigh of relief).


        Although the required TS code changes have been fairly minor so far, working through all the error conditions is quite an adventure and has led to the development of an extensive test harness to ensure TS supports access in this specific manner.

        • 76. Re: Remote txinflow: XID changes
          tomjenkinson

          Hi Mark,


          The main issue with delivering this functionality is the three additional persistence points: two of these we have covered by providing alternative XID implementations for proxies and subordinates, and the third is optional anyway.


          1. The optional persistence point: this one seems to remain, though it is still optional. When recovering, you need to know which servers to call recover on for unprepared transactions. You could argue that the transaction timeout is enough for this, in which case it becomes the transport implementor's prerogative whether to persist this information (perhaps this is only done if the transport detects that the transaction does not have a timeout).


          2. Using alternative XID implementations for proxy XA resources (which have the subordinate and parent node names in the bqual) and for subordinate transactions (which need the subordinate, parent and parent's parent node names) should remove the requirement to record XIDs at the caller server. The reason the subordinate needs three node names is that the proxy XA resource is enlisted at, say, node 2 with a bqual of (2,1); it then needs to ask the remote server (say node 3) for all subordinate transaction XIDs that have the server it is running in as a parent. It then needs to convert each recovered XID into one its own local server understands was part of a transaction, by dropping the recovered XID's subordinate node name and bumping the other two down a notch, i.e. converting a bqual of (3,2,1) to (2,1) (see the sketch after this list). Unfortunately, the bqual of the subordinate JTA transactions will then be over 64 bytes long, so this still needs investigation to see how the different object stores cope with it.


          3. The normal XID will need to be changed to carry the String node identifier in the bqual alongside a shortened EIS name; this gets rid of the requirement for dynamic subordinate node allocation. It is relatively painless but will need Jonathan's buy-in, as it will impact functionality (the EIS name) that he has provided.
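
          To make the node-name conversion in point 2 concrete, here is a minimal sketch; the class and method names are purely illustrative and not the actual TS API.

```java
import java.util.Arrays;

// Illustrative only: the node-name chain manipulation from point 2 above.
// Class and method names are hypothetical, not the real JBoss TS API.
public class NodeChainSketch {

    // A subordinate XID recovered from node 3 carries the chain (3, 2, 1):
    // subordinate, parent, parent's parent. To match it against the proxy
    // XA resource enlisted at node 2 with bqual (2, 1), drop the recovered
    // XID's own subordinate node name and bump the rest down a notch.
    static String[] toLocalChain(String[] recoveredChain) {
        return Arrays.copyOfRange(recoveredChain, 1, recoveredChain.length);
    }

    public static void main(String[] args) {
        String[] recovered = { "3", "2", "1" };  // as recovered from node 3
        System.out.println(Arrays.toString(toLocalChain(recovered))); // [2, 1]
    }
}
```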


          Of course, in reality I have parked this until after I get back from paternity leave.


          Tom

          • 77. Re: Remote txinflow: XID changes
            tomjenkinson

            Just to confirm: the work to compress the EIS name down to an int by storing a mapping file is actually done and in the branch. It is not strictly related to the XID changes for distributed JTA, but it is a functional requirement, so it made sense to get it into the next release.
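
            For anyone curious what such a mapping might look like, here is a minimal sketch of persisting a name-to-int mapping file; this is purely illustrative and not the implementation that went into the branch.

```java
import java.io.*;
import java.util.Properties;

// Illustrative only: one way to map a long EIS name to a small int key
// via a persisted mapping file. Names here are hypothetical.
public class EisNameMapSketch {
    private final Properties map = new Properties();
    private final File store;
    private int nextKey = 1;

    public EisNameMapSketch(File store) throws IOException {
        this.store = store;
        if (store.exists()) {
            try (InputStream in = new FileInputStream(store)) {
                map.load(in);
            }
            nextKey = map.size() + 1;
        }
    }

    // Returns the 4-byte int key for this EIS name, allocating and
    // persisting a new one on first sight.
    public synchronized int keyFor(String eisName) throws IOException {
        String existing = map.getProperty(eisName);
        if (existing != null) {
            return Integer.parseInt(existing);
        }
        int key = nextKey++;
        map.setProperty(eisName, Integer.toString(key));
        try (OutputStream out = new FileOutputStream(store)) {
            map.store(out, "EIS name -> int key");
        }
        return key;
    }
}
```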


            Therefore, when I get back from paternity leave, the following changes may be made:


            1. We can get rid of the requirement for the transport to maintain the subordinate node name as a dynamic integer, as there is now enough space in the bqual to store the node name as a string: assuming 64 (max bqual length) - 28 (Uid) - 4 (EIS key), the remaining 32 bytes are enough to hold a remoting name (see the sketch after this list). Of course, if we ever need to put something else in the bqual, that will no longer be possible.


            2. I can see a place where it would be relatively straightforward to add the parent's parent node identifier (jca.SubordinateAtomicAction). I will probably create a different class for this (distributedjta.SubordinateAtomicAction); this gets rid of one of the other persistence points of the transport.
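
            As a back-of-envelope check of the bqual budget from point 1, here is a small sketch; the field sizes are the ones quoted above, and the layout itself is illustrative.

```java
import java.nio.charset.StandardCharsets;

// Illustrative only: checks the bqual space arithmetic from point 1.
public class BqualBudgetSketch {
    static final int BQUAL_MAX = 64; // XA branch qualifier size limit
    static final int UID_LEN   = 28; // serialised Uid size quoted above
    static final int EIS_KEY   = 4;  // int key from the EIS name mapping

    public static void main(String[] args) {
        int nodeNameBudget = BQUAL_MAX - UID_LEN - EIS_KEY; // = 32 bytes
        System.out.println("Bytes left for the node name: " + nodeNameBudget);

        String nodeName = "some-remoting-name"; // hypothetical node name
        byte[] encoded = nodeName.getBytes(StandardCharsets.UTF_8);
        if (encoded.length > nodeNameBudget) {
            throw new IllegalArgumentException("node name too long for bqual");
        }
    }
}
```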


            Both of these changes again appear relatively trivial to make, but I won't destabilise the branch before I leave (hopefully I won't destabilise it on return either).


            I believe the transport will still need the initial persistence point: it needs to know (for outstanding transactions) which servers it has talked to, so that it can call recover on them in the case of failure. This is optional, and typically we can rely on the timeout, unless a timeout is not set...
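
            For illustration, a minimal sketch of that optional transport-side persistence point follows; the log format and names are hypothetical, not a proposal for the actual implementation.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;

// Illustrative only: record which servers a transaction has talked to so
// that recover can be driven on them after a crash.
public class ContactedServersSketch {
    private final Path log;

    public ContactedServersSketch(Path log) { this.log = log; }

    // Called before invoking a remote server on behalf of a transaction
    // (perhaps only when the transaction has no timeout to fall back on).
    public void recordContact(String txId, String serverName) throws IOException {
        Files.write(log, Collections.singletonList(txId + " " + serverName),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    // On recovery, returns the servers to call recover on, per transaction.
    public Map<String, Set<String>> contactsByTx() throws IOException {
        Map<String, Set<String>> result = new HashMap<>();
        if (Files.exists(log)) {
            for (String line : Files.readAllLines(log)) {
                String[] parts = line.split(" ", 2);
                result.computeIfAbsent(parts[0], k -> new HashSet<>()).add(parts[1]);
            }
        }
        return result;
    }
}
```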
