
    fail over and writing files

    atifaj

      Suppose I am writing my data to a file on one machine (server 1), then a fail-over occurs and the dispatcher moves the request to machine 2. What will happen to the data that I was writing to the file on server 1 (machine 1)?

      Also consider the same scenario in terms of logging different actions to a file. Will it produce two logs on two different machines (if the cluster spans two machines)?

        • 1. Re: fail over and writing files
          slaboure

          Well, that's not a clustering issue; the result will depend on what you do in *your* code. Or maybe I don't understand your point.

          Cheers,


          sacha

          • 2. Re: fail over and writing files
            atifaj

            Well, let me try to clarify my question.

            Let's say we have an application running on server A, and I am logging some activities on server A. Now, to address the fail-over issue, I set up a cluster and deploy the application on server B as well.

            Now my questions are:

            When I start the application, will the log be generated on both servers (A and B) or only on A?

            If the answer is only on A, then what happens during a fail-over of A? How can I ensure that the log is produced in a single file?

            I hope you understand my question.

            Thanks

            • 3. Re: fail over and writing files
              slaboure

              Well, it is really up to you to use the clustering framework so that it fits your requirements: read the documentation, especially the ReplicantManager and HAPartition parts. You will easily find a way to implement a "flip-flop" behaviour where you can be sure that only one node works at a given time.
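
              A minimal sketch of that flip-flop idea, assuming the JBoss 3.x HAPartition / DistributedReplicantManager API (the class name used here, the "logging-master" key, and the exact listener signature are assumptions and may differ between releases): each node publishes a replicant under a shared key, and only the node that the DRM reports as the master replica actually writes to the log file.

              import java.util.List;

              import org.jboss.ha.framework.interfaces.DistributedReplicantManager;
              import org.jboss.ha.framework.interfaces.HAPartition;

              // Tracks whether this node is the elected "master" for a key,
              // so only one node in the partition writes the log at any time.
              public class FlipFlopLogWriter
                      implements DistributedReplicantManager.ReplicantListener {

                  private static final String KEY = "logging-master"; // illustrative key name
                  private final DistributedReplicantManager drm;
                  private volatile boolean master;

                  public FlipFlopLogWriter(HAPartition partition) throws Exception {
                      drm = partition.getDistributedReplicantManager();
                      drm.registerListener(KEY, this);
                      // Publish this node as a replicant; the DRM designates one master per key.
                      drm.add(KEY, partition.getNodeName());
                      master = drm.isMasterReplica(KEY);
                  }

                  // Called by the DRM whenever the set of replicants changes
                  // (a node joins, leaves, or fails over).
                  public void replicantsChanged(String key, List newReplicants, int newViewId) {
                      master = drm.isMasterReplica(KEY);
                  }

                  public void log(String message) {
                      if (master) {
                          // append the message to the log file here;
                          // non-master nodes simply skip writing
                      }
                  }
              }

              When the node currently acting as master fails, the DRM notifies the surviving nodes of the topology change, one of them becomes the master replica, and logging continues there, so only a single node's log file is being written to at any given time.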