      • 15. Re: Journal Cleanup and Journal Compactor
        clebert.suconic

        This is what I have been doing for the journal compacting:


        - I first need to select which files will be compacted. This needs to be a sequential list, starting at the first file. But I can't compact a file with pending transactions, as I don't know if the user will later decide to roll back or not.


        - I have created another field on the header for an orderID. The JournalFile now has a fileID and an orderID. (fileID will always be equal to orderID, except on files created by compacting.)


        - I open a file with the same orderID as the first file on the journal.
        (create or open, getting it from the list of available files).

        - I open a transaction on that file, and I add a record stating that I am compacting files (with the list of fileIDs).

        - As I read the files, I add the valid records to the compacting output. When I'm done, I commit the transaction on the compacting output.

        - I keep opening files on demand (getting them from the available files list, or creating them on demand).


        - If the server crashes now, I will have the compacting record stating that there are files that need to be deleted (reload will handle that as well).

        - I now delete the old files (delete here means putting them back into the available files list).

        - As soon as all the files are deleted, I can delete the compacting record within the transaction.


        Using the transaction approach we already have in the journal will simplify things, as we won't need any temporary files.
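
        A minimal sketch of that ordering, assuming hypothetical Journal/JournalFile types and method names (none of this is the real journal API):

        import java.util.List;

        // Placeholders for illustration only; not the real journal classes.
        interface JournalFile { int getFileID(); int getOrderID(); }
        interface Record { boolean isValid(); }

        interface Journal
        {
           JournalFile openCompactOutput(int orderID);      // reuse an available file or create one
           long startCompactTX(JournalFile output, List<Integer> fileIDs); // writes the "compacting" record
           void appendToOutput(long tx, Record record);     // opens further output files on demand
           void commit(long tx);
           List<Record> readRecords(JournalFile file);
           void releaseFile(JournalFile file);              // back to the available-files list
           void deleteCompactingRecord(long tx);
        }

        class CompactSketch
        {
           void compact(Journal journal, List<JournalFile> files, List<Integer> fileIDs)
           {
              // The output gets the same orderID as the first file being compacted
              JournalFile output = journal.openCompactOutput(files.get(0).getOrderID());

              // Open a transaction and record the fileIDs being compacted, so a crash
              // before the commit leaves the old files untouched on reload
              long tx = journal.startCompactTX(output, fileIDs);

              // Copy only the valid records to the compacting output
              for (JournalFile file : files)
              {
                 for (Record record : journal.readRecords(file))
                 {
                    if (record.isValid())
                    {
                       journal.appendToOutput(tx, record);
                    }
                 }
              }

              // The commit makes the copied records live atomically
              journal.commit(tx);

              // Old files go back to the available-files list...
              for (JournalFile file : files)
              {
                 journal.releaseFile(file);
              }

              // ...and only then the compacting record itself is deleted
              journal.deleteCompactingRecord(tx);
           }
        }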

        • 16. Re: Journal Cleanup and Journal Compactor
          clebert.suconic

          Since I don't need the transaction summary any more (just the number of records associated with the transaction at the currentFile), I will be able to move pending transactions, and for that I will need to keep all the transaction IDs and everything else.

          To do that, I will need to use temporary files and rename them as you suggested.


          - I will open a journal file with a temporary name. (Open here means getting it from the cached files, or creating a new one.)

          - As I read the journal files I add the valid records to the temporary file. (I will of course open new files as they fill up.)

          - Add a small temporary marker file when I start renaming the files and deleting the old files, and delete that marker file when I'm done.





          The only issue I have now is allowing concurrent appends to the journal while I'm compacting the files.

          The problem is that when deleting or updating a record, I don't know where the AddRecord is going to be located, so I don't know where to add the negative value.

          So, what I'm thinking as a solution is:

          - During compacting, all Adds, updates and deletes are stored to the current-file, as we always do.

          - But instead of discounting PosFiles right away, I cache those IDs in a collection. When I'm done compacting I do a fast operation applying the reference counts accordingly.

          - In case the server is interrupted, during reload the delete or update will take the original location of the record.
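
          A rough sketch of the rename step described above (temporary files plus a small marker file); the ".tmp" suffix and "compact.cmp" marker name are assumptions, not the journal's real layout:

          import java.io.File;
          import java.io.IOException;
          import java.util.List;

          // Illustration only: the naming convention here is assumed, not real.
          class RenameSketch
          {
             void switchFiles(File journalDir, List<File> tmpFiles, List<File> oldFiles) throws IOException
             {
                // 1. Small marker file: if the server crashes after this point,
                //    reload knows the renaming was in progress and the .tmp files are the good ones
                File marker = new File(journalDir, "compact.cmp");
                if (!marker.createNewFile())
                {
                   throw new IOException("marker file already exists");
                }

                // 2. Delete the old files (the real journal would put them back on the free list)
                for (File old : oldFiles)
                {
                   if (!old.delete())
                   {
                      throw new IOException("could not delete " + old);
                   }
                }

                // 3. Rename the temporary files to their final names
                for (File tmp : tmpFiles)
                {
                   File finalName = new File(journalDir, tmp.getName().replace(".tmp", ""));
                   if (!tmp.renameTo(finalName))
                   {
                      throw new IOException("could not rename " + tmp);
                   }
                }

                // 4. Remove the marker only when everything is in place
                if (!marker.delete())
                {
                   throw new IOException("could not delete marker file");
                }
             }
          }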

          • 17. Re: Journal Cleanup and Journal Compactor
            timfox

             

            "clebert.suconic@jboss.com" wrote:


            The problem is that when deleting or updating a record, I don't know where the AddRecord is going to be located, so I don't know where to add the negative value.

            So, what I'm thinking as a solution is:

            - During compacting, all Adds, updates and deletes are stored to the current-file, as we always do.

            - But instead of discounting PosFiles right away, I cache those IDs in a collection. When I'm done compacting I do a fast operation applying the reference counts accordingly.

            - In case the server is interrupted, during reload the delete or update will take the original location of the record.


            Isn't it simpler than this?

            When compacting you don't touch the information in memory. So if an update or delete comes in you just update as normal.

            Then when the compacting is done you just do a quick switch over in memory.

            I don't see why you need to cache anything.

            • 18. Re: Journal Cleanup and Journal Compactor
              clebert.suconic

              I need to update the dataFiles' pos and neg counters with the deletes that happened during compacting. If I don't do that, those files will not be reclaimable until we restart the server.

              • 19. Re: Journal Cleanup and Journal Compactor
                timfox

                Right, but you don't need to do that until after you've done the compacting, and that can be updated quickly.

                I don't think you need to cache any deletes that come in during compacting.

                • 20. Re: Journal Cleanup and Journal Compactor
                  clebert.suconic


                  I don't think you need to cache any deletes that come in during compacting.



                  How do I know what deletes came in during compacting if I don't cache them?


                  Example:

                  File1: Record1
                  File2: Record10
                  File3: Record100
                  File4: CurrentFile

                  I start compacting:

                  As I'm compacting, Delete100 comes in.

                  I write Delete100 on File4, but the data structures think that File3 has Record100. So, a negCount on File3 will be added.

                  At the end, you will have

                  File1: Record 1, Record 10, Record100

                  File4: deleteRecord100

                  File1 will not be reclaimable until you reload the journal and recalculate the positives and negatives.

                  When I'm done compacting, I don't know what deletes happened during compacting, so I can't fix this issue.


                  • 21. Re: Journal Cleanup and Journal Compactor
                    timfox

                    As deletes come in you update the counts in memory as normal.

                    Compacting *does not* change those counts.

                    When compacting is finished you can just update them in memory in one operation.

                    • 22. Re: Journal Cleanup and Journal Compactor
                      clebert.suconic

                       

                      When compacting is finished you can just update them in memory in one operation.


                      Yes, but so far what I have seen is that I need to know what deletes happened during compacting to perform this operation.

                      • 23. Re: Journal Cleanup and Journal Compactor
                        clebert.suconic

                        Before jumping into my proposed solution, let me just make a quick note about what's stored on the journal.

                        // A list of dataFiles (used files)
                        private final Queue<JournalFile> dataFiles;

                        // A list of freeFiles
                        private final Queue<JournalFile> freeFiles = new ConcurrentLinkedQueue<JournalFile>();

                        // A list of free files that are already opened (for fast move-forward on the journal)
                        private final BlockingQueue<JournalFile> openedFiles;

                        // A list of Adds and updates for each recordID
                        // (this is being renamed to recordsMap, BTW)
                        private final ConcurrentMap<Long, PosFiles> posFilesMap;


                        Now the compacting would be:

                        exclusiveLockOnJournal (held for a very short time; this is required to take a valid snapshot before compacting starts)
                        {
                        - Disallow reclaiming while compacting is being done
                        - Set some flag such as compacting = true
                        - Take a snapshot of dataFiles, posFilesMap and pending transactions
                        }


                        - For each dataFile on the snapshot:
                        - Append the valid records (based on the snapshot) to a new temporary datafile; if the temporary datafile is full, open a new one
                        - As records are appended, calculate the new posFilesMap


                        - As soon as the compacting is done, I need to rename the temporary files (using the process you originally described, with a small marker file).

                        - I also need to update the posFilesMap.

                        I will take the list of updates and deletes that happened while compacting was running, and replay them on the new posFilesMap (in a fast operation).


                        This is because, at this point, I wouldn't otherwise have any information about those deletes and updates:

                        - When a delete happens, you only have a neg added to the DataFile, and I wouldn't know how to replay that information.

                        - For updates, I only have a list of which files took an update (inside PosFiles). You could have two updates on the same record, and each update could have been sent to a different file.

                        So I need to compute and keep that information myself, as I don't have it anywhere else.
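
                        A minimal sketch of caching those deletes and replaying them on the new map, assuming a simplified PosFiles with a single counter (the real structure tracks pos/neg counts per file):

                        import java.util.Queue;
                        import java.util.concurrent.ConcurrentHashMap;
                        import java.util.concurrent.ConcurrentLinkedQueue;
                        import java.util.concurrent.ConcurrentMap;

                        // Illustration only: PosFiles is collapsed to a single counter here.
                        class ReplaySketch
                        {
                           static class PosFiles
                           {
                              void addDelete() { /* decrement the count of the file owning the add */ }
                           }

                           private volatile boolean compacting = false;

                           // recordIDs deleted while compacting was running
                           // (updates would be cached and replayed the same way)
                           private final Queue<Long> deletesDuringCompact = new ConcurrentLinkedQueue<Long>();

                           private volatile ConcurrentMap<Long, PosFiles> recordsMap =
                                 new ConcurrentHashMap<Long, PosFiles>();

                           void onDelete(long recordID)
                           {
                              if (compacting)
                              {
                                 // The record's final location is unknown until compacting finishes,
                                 // so just remember the ID instead of touching the counters
                                 deletesDuringCompact.add(recordID);
                              }
                              else
                              {
                                 PosFiles pos = recordsMap.get(recordID);
                                 if (pos != null)
                                 {
                                    pos.addDelete();
                                 }
                              }
                           }

                           // Called once the new map has been built from the snapshot. In the real
                           // journal this switch would happen under the same short exclusive lock
                           // used to take the snapshot.
                           void onCompactDone(ConcurrentMap<Long, PosFiles> newRecordsMap)
                           {
                              Long id;
                              while ((id = deletesDuringCompact.poll()) != null)
                              {
                                 PosFiles pos = newRecordsMap.get(id);
                                 if (pos != null)
                                 {
                                    pos.addDelete();
                                 }
                              }
                              recordsMap = newRecordsMap;
                              compacting = false;
                           }
                        }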

                        • 24. Re: Journal Cleanup and Journal Compactor
                          clebert.suconic

                          A question:

                          Say you have this on the journal:


                          File1: AppendTransaction TX=1, ID=1, data

                          File2: AppendTransaction TX=1, ID=2, data

                          File3: Commit


                          FileN: Current



                          Say, now I'm compacting File1, 2 and 3.


                          I think I don't need to keep appends 1 and 2 as transactional records. I could just write:

                          Append ID=1, Append ID=2.




                          I don't see any reason to keep the transaction after compacting, since the record is already confirmed.


                          Can you think of any reason?

                          • 25. Re: Journal Cleanup and Journal Compactor
                            timfox

                             

                            "clebert.suconic@jboss.com" wrote:


                            I don't see any reason to keep the transaction after compacting, since the record is already confirmed.



                            +1.

                            Also, with any tx that has been rolled back you can remove all its records.

                            And with any prepared tx which has been committed you can remove the prepare record too.
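
                            A small sketch of those rules; the TxState/Outcome enums and method names below are illustrative only, not the journal's real types:

                            // Illustration of the rules above.
                            class TxCompactRulesSketch
                            {
                               enum TxState { COMMITTED, PREPARED_AND_COMMITTED, ROLLED_BACK, PENDING }

                               enum Outcome { WRITE_AS_PLAIN_ADD, KEEP_TRANSACTIONAL, DROP }

                               // What to do with an AppendTransaction record when its file is compacted
                               static Outcome transactionalAdd(TxState state)
                               {
                                  switch (state)
                                  {
                                     case COMMITTED:
                                     case PREPARED_AND_COMMITTED:
                                        return Outcome.WRITE_AS_PLAIN_ADD; // already confirmed: drop the TX wrapper
                                     case ROLLED_BACK:
                                        return Outcome.DROP;               // never became valid
                                     default:
                                        return Outcome.KEEP_TRANSACTIONAL; // still pending: keep the transaction
                                  }
                               }

                               // What to do with a Prepare record when its file is compacted
                               static Outcome prepareRecord(TxState state)
                               {
                                  switch (state)
                                  {
                                     case PREPARED_AND_COMMITTED:
                                     case ROLLED_BACK:
                                        return Outcome.DROP;               // prepare no longer needed
                                     default:
                                        return Outcome.KEEP_TRANSACTIONAL;
                                  }
                               }
                            }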

                            • 26. Re: Journal Cleanup and Journal Compactor
                              clebert.suconic

                              I just did the following test after the compactor was implemented (on my branch for now).


                              Added 1000 messages to destinationB.

                              Then started runListener and runSender on destinationA with 100K messages.


                              And that showed the linked-list effect because of the 1000 messages, as I predicted.

                              Compacting was executed about 3 times during that run. I double-checked whether anything was wrong, and everything was correct... it was just the linked-list effect.

                              Compacting will be executed every time the numberOfFiles > minCompactSize (as defined on http://www.jboss.org/index.html?module=bb&op=viewtopic&t=157889).

                              • 27. Re: Journal Cleanup and Journal Compactor
                                clebert.suconic

                                Full compacting won't eliminate the need for cleanup.

                                The previous post explained how to replicate the issue...

                                And here is a short example of what happens...



                                You have these files:
                                F1: SA1, SA2, SA3... SA100, A1
                                F2: A2, D1
                                F3: A3, D2
                                F4: A4, D3
                                ...
                                Fn: An, D(n-1), A(n+1)


                                * A = Append, D = Delete, U=Update, F = File
                                * SA = Surviving Add (Nothing will ever delete those records)




                                In the example above, you have 100 records on F1 that will never be deleted.


                                As soon as you compact this, you will have:

                                New-F1: SA1, SA2, SA3... SA100, A(n+1)

                                That is... the live records from Fn were compacted into New-F1 as well.



                                After compacting is done, we delete the record that came from Fn, and we add a new one:

                                New-F1: SA1, SA2, SA3... SA100, A(n+1)
                                New-F2: D(n+1), A(n+2)


                                At this point there is a link between New-F1 and New-F2.

                                Conclusion: once you have surviving records, compacting will always end up being executed from time to time, which will have consequences on performance.

                                Compacting should only be executed occasionally, not every 5 minutes. We need to clean up those links somehow when they happen.


                                In the tests I did, when I left 100 surviving records on the journal, compacting was called at least 3 times while sending/consuming 100K messages.

                                Tim said in the meeting today to let compacting happen every 30 minutes, but that wouldn't work, as nothing would be reclaimed for 30 minutes (you could eat up all your disk space in production because of this, as bad as that is).


                                A possible solution is to clean up New-F1 only (physically eliminating A(n+1) from New-F1 will eliminate the link between New-F1 and New-F2). I could use the compacting code for that, and that would work as long as we start from the beginning of the list.


                                I don't intend to get into any deep technical discussion about the cleanup implementation now, since I need to finish compacting first. For now I only want to point out that we really need the cleanup as well.

                                • 28. Re: Journal Cleanup and Journal Compactor
                                  clebert.suconic

                                  My brain can't stop working... even while sleeping :-)


                                  Since we don't have the transaction summary any more (only the number of records at the current file per transaction), cleanup is really easy (a piece of cake).


                                  We can just compact an individual file.

                                  - Open a new file (getting it from the file cache if available).
                                  - Add the live records (don't add already deleted records).
                                  - If there are delete records pointing to previous files, keep all those delete records.
                                  - If there are pending transactions, also keep them.
                                  - Do the same rename scheme as done on the compacting.

                                  At the end you have removed the linked-list effect for a given file.
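
                                  A rough sketch of that per-file filter, with an illustrative Record class standing in for the real record types:

                                  import java.util.ArrayList;
                                  import java.util.List;

                                  // Illustration only: these flags summarize what the real journal
                                  // derives from its record types and counters.
                                  class CleanupSketch
                                  {
                                     static class Record
                                     {
                                        boolean isDelete;              // a delete record
                                        boolean pointsToPreviousFile;  // delete referring to a record in an earlier file
                                        boolean alreadyDeleted;        // an add/update that has already been deleted
                                        boolean pendingTransaction;    // part of a transaction that hasn't completed yet
                                     }

                                     // Decide which records of a single file survive the cleanup copy
                                     static List<Record> cleanup(List<Record> fileRecords)
                                     {
                                        List<Record> kept = new ArrayList<Record>();
                                        for (Record record : fileRecords)
                                        {
                                           if (record.isDelete)
                                           {
                                              // Keep deletes aimed at earlier files, otherwise those
                                              // records would look live again on reload
                                              if (record.pointsToPreviousFile)
                                              {
                                                 kept.add(record);
                                              }
                                           }
                                           else if (record.pendingTransaction || !record.alreadyDeleted)
                                           {
                                              // Live records and pending-transaction records are copied over
                                              kept.add(record);
                                           }
                                        }
                                        return kept;
                                     }
                                  }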


                                  I will provide more details on this on Monday. I just wanted to register my thoughts somewhere for the moment.
