12 Replies Latest reply on Jun 30, 2008 1:00 PM by Edward Staub

    job executor explained in a nutshell

    Tom Baeyens Master

  I was writing an email to someone explaining how the job executor works, so I thought I'd leverage that effort in the docs and here in the forum.


      Here's how the job executor works in a nutshell:

      Jobs are records in the database. Jobs are commands and can be executed. Both timers and async messages are jobs. For async messages, the dueDate is simply set to now when they are inserted. The job executor must execute the jobs. This is done in 2 phases: 1) a job executor thread must acquire a job and 2) the thread that acquired the job must execute it.

      Acquiring a job and executing it are done in 2 separate transactions. A thread acquires a job by putting its name into the owner field of the job. Each thread has a unique name based on IP address and a sequence number. Hibernate's optimistic locking is enabled on Job objects, so if 2 threads try to acquire a job concurrently, one of them will get a StaleObjectStateException and roll back; only the first one will succeed. The thread that succeeds in acquiring a job is then responsible for executing it in a separate transaction.
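      In code, the acquisition race boils down to a conditional bump of the version column. Here is a rough in-memory sketch of the idea (made-up names, not jBPM's actual classes); in the database the same check is Hibernate's "update ... set version = ? where version = ?", and losing the race surfaces as a StaleObjectStateException and a rollback:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch only: models the optimistic acquisition as a
// compare-and-set on the version the thread read earlier.
class AcquirableJob {
    final AtomicInteger version = new AtomicInteger(1);
    volatile String lockOwner;

    // Returns true only for the one thread that wins the race.
    boolean tryAcquire(String threadName, int versionRead) {
        if (version.compareAndSet(versionRead, versionRead + 1)) {
            lockOwner = threadName; // e.g. "192.168.1.3:2"
            return true;
        }
        return false; // stale: another executor acquired it first
    }
}
```

      If two threads both read version 1 and call tryAcquire, exactly one returns true, just like exactly one of the competing transactions gets "1 row updated".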

      A thread could die in between acquisition and execution of a job. To clean up after those situations, there is 1 lock-monitor thread per job executor that checks the lock times. Jobs that have been locked for more than 30 minutes (by default) will be unlocked so that they can be executed by another thread.
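      The lock-monitor check itself is simple. A sketch, with illustrative class and field names rather than jBPM's real implementation:

```java
import java.time.Duration;
import java.time.Instant;

// Sketch of the lock-monitor rule: a job whose lock is older than the
// timeout (30 minutes by default) gets its owner cleared so another
// executor thread can acquire it again.
class LockMonitor {
    static final Duration LOCK_TIMEOUT = Duration.ofMinutes(30);

    // Returns true if a stale lock was released.
    static boolean unlockIfStale(Job job, Instant now) {
        if (job.lockOwner != null
                && Duration.between(job.lockTime, now).compareTo(LOCK_TIMEOUT) > 0) {
            job.lockOwner = null; // job becomes acquirable again
            return true;
        }
        return false;
    }

    static class Job {
        String lockOwner;
        Instant lockTime;
    }
}
```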

      The required isolation level should be set to REPEATABLE_READ for Hibernate's optimistic locking to work correctly. That isolation level will guarantee that

      update JBPM_JOB job
      set job.version = 2,
          job.lockOwner = '192.168.1.3:2'
      where
          job.version = 1


      will report "1 row updated" in exactly 1 of the competing transactions.

      Non-repeatable reads mean that the following anomaly can happen: a transaction re-reads data it has previously read and finds that the data has been modified by another transaction, one that committed since the transaction's previous read.

      Non-repeatable reads are a problem for optimistic locking and therefore, isolation level READ_COMMITTED is required if you configure more than 1 job executor thread.



        • 1. Re: job executor explained in a nutshell
          Pavel Kadlec Novice

          Hello,

          I use Oracle, and when I try to set the isolation level of the data source to TRANSACTION_REPEATABLE_READ I get an exception saying that READ_COMMITTED and SERIALIZABLE are the only allowed isolation levels.
          When I set the isolation level of the data source to TRANSACTION_SERIALIZABLE it works, but with a multithreaded JobExecutor I get a lot of exceptions from jBPM saying the running transactions cannot be serialized.

          Is READ_COMMITTED a sufficient isolation level for Oracle?

          Regards
          Pavel

          • 2. Re: job executor explained in a nutshell
            Pavel Kadlec Novice

            The documentation says that Oracle offers the read committed and serializable isolation levels, as well as a read-only mode that is not part of SQL92. Read committed is the default.

            http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10743/consist.htm

            Can I do anything other than change the database?

            Pavel

            • 3. Re: job executor explained in a nutshell
              Edward Staub Expert

              >> Is READ_COMMITTED a sufficient isolation level for Oracle?

              Yes. -Ed Staub

              • 4. Re: job executor explained in a nutshell
                Edward Staub Expert

                >> Can I do anything other than change the database?

                Change from which database?

                From Oracle? Why do you need to?

                From HSQLDB? If you need concurrency, you must move off HSQLDB - jBPM relies on the database for synchronization so that it can be clustered. I'm sure some other databases work too - you might search this forum, or start a new thread if you can't find anything relevant.

                -Ed Staub

                • 5. Re: job executor explained in a nutshell
                  Pavel Kadlec Novice

                  Hello,

                  I was thinking about the REPEATABLE_READ isolation level with optimistic locking, and it seems to me that it cannot work. The docs say that

                  Non-repeatable reads are a problem for optimistic locking, and therefore isolation level READ_COMMITTED is not enough because it allows non-repeatable reads to occur. So REPEATABLE_READ is required if you configure more than one job executor thread.


                  But I found the following at http://www.avaje.org/occ.html:

                  It is important to note that Optimistic Concurrency Checking only works in READ_COMMITED Transaction Isolation Level. Specifically, at higher Isolation levels such as SERIALIZABLE the UPDATE or DELETE will see the database as at the Transaction start time. In the time gap between Transaction start time and the UPDATE/DELETE statement there could be commited changes that will be lost (Lost Updates).


                  I think the second statement (from avaje.org) is right; it makes sense.

                  jBPM docs are not correct, am I right? Your docs confused me.

                  Pavel

                  • 6. Re: job executor explained in a nutshell
                    Pavel Kadlec Novice

                    Sorry Ed, the docs surely aren't yours. I have spent a lot of time trying to make the multithreaded job executor work, and I still have problems, so I am a little bit nervous.

                    Sorry,

                    Pavel

                    • 7. Re: job executor explained in a nutshell
                      Alejandro Guizar Master

                      I find this statement wildly odd-sounding:

                      In the time gap between Transaction start time and the UPDATE/DELETE statement there could be commited changes that will be lost (Lost Updates).

                      No updates will ever be lost at higher isolation levels. The database will ensure that no two transactions that update the same data item can both proceed. By "lost" they seem to mean unavailable to the isolated transaction. However, that is exactly what the job executor needs to operate properly.

                      Note that the REPEATABLE_READ suggestion is not a hard rule. Isolation levels are implemented differently in each database, and in some databases it may be possible to use a lower isolation level that works.

                      • 8. Re: job executor explained in a nutshell
                        Pavel Kadlec Novice

                        Hello Alex,


                        No updates will ever be lost at higher isolation levels. The database will ensure that no two transactions that update the same data item can both proceed.

                        Yes, I agree. But the db will block the second update until the transaction with the first update commits. There is no need to use optimistic locking with the repeatable read isolation level, because optimistic locking does not work with repeatable read.


                        If I understand it well, we have optimistic locking to prevent conflicting updates. We do not want to send two conflicting updates to the db, because the db (in my case Oracle) will block the second update. We want the application to see that another committed tx changed a row we have already read, and we want to roll back our tx immediately (without blocking).

                        If we set the transaction to the repeatable read isolation level, in my opinion optimistic locking does not work, because during the whole transaction we always read the same version of our row. The application cannot see that another committed tx updated the row (because we have repeatable read).

                        I think that optimistic locking needs non-repeatable reads to work, but the jBPM docs say the opposite.
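                        To illustrate what I mean: the decisive check is the versioned UPDATE itself, which compares against the latest committed row at write time and reports a row count. A tiny in-memory simulation (table and column names only illustrative, not jBPM's schema access code):

```java
import java.util.HashMap;
import java.util.Map;

// Simulates: UPDATE JBPM_JOB SET version = version + 1, lockOwner = ?
//            WHERE id = ? AND version = ?
// The conflict is detected by the row count (1 = success, 0 = stale),
// based on the latest committed state, not on the reader's snapshot.
class VersionedTable {
    private final Map<Long, Integer> versions = new HashMap<>();
    private final Map<Long, String> owners = new HashMap<>();

    void insert(long id) { versions.put(id, 1); }

    // Returns the number of rows updated.
    synchronized int updateIfVersion(long id, int expected, String owner) {
        Integer current = versions.get(id);
        if (current == null || current != expected) return 0;
        versions.put(id, current + 1);
        owners.put(id, owner);
        return 1;
    }
}
```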

                        Regards
                        Pavel

                        • 9. Re: job executor explained in a nutshell
                          Pavel Kadlec Novice

                          I think that optimistic locking needs the READ_COMMITTED isolation level for its transactions. Nothing more, nothing less.

                          Pavel

                          • 10. Re: job executor explained in a nutshell
                            Pavel Kadlec Novice

                            I must rewrite my last reply but one. I tried it once more to see how it behaves, and the blocking I wrote about is not true.

                            Hello Alex,


                            No updates will ever be lost at higher isolation levels. The database will ensure that no two transactions that update the same data item can both proceed.


                            Yes, I agree.

                            If we set the transaction to the repeatable read isolation level, in my opinion optimistic locking does not work, because during the whole transaction we always read the same version of our row. The application cannot see that another committed tx updated the row (because we have repeatable read).

                            I think that optimistic locking needs non-repeatable reads to work, but the jBPM docs say the opposite.

                            Regards
                            Pavel


                            • 11. Re: job executor explained in a nutshell
                              Jiri Pechanec Apprentice

                              Hi,

                              in my opinion READ_COMMITTED is the correct isolation level for optimistic locking.

                              Moreover, in the first post Tom contradicts himself:

                              The required isolation level should be set to REPEATABLE_READ for Hibernate's optimistic locking to work correctly. That isolation level will guarantee that


                              Non-repeatable reads are a problem for optimistic locking and therefore, isolation level READ_COMMITTED is required if you configure more than 1 job executor thread


                              What I think is the problem with the executors is that they report the StaleObjectStateException even though it is quite an expected result in this case.

                              Maybe for the job tables we should think about pessimistic locking, since conflicts can be quite frequent.
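                              As an in-memory analogy of what pessimistic acquisition would change (not jBPM code; against a real database this would be a select ... for update on the JBPM_JOB row): instead of racing with version checks and rolling back on a stale exception, a losing acquirer simply waits, or times out, on the row lock.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Sketch of pessimistic job acquisition: one permit per job row, so a
// second executor blocks (up to a timeout) instead of failing with an
// optimistic-lock conflict. Names are illustrative only.
class PessimisticJobRow {
    private final Semaphore rowLock = new Semaphore(1);
    private volatile String owner;

    // Try to take the row lock; losers wait rather than roll back.
    boolean acquire(String executorName, long waitMillis) {
        try {
            if (rowLock.tryAcquire(waitMillis, TimeUnit.MILLISECONDS)) {
                owner = executorName;
                return true;
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return false;
    }

    void release() {
        owner = null;
        rowLock.release();
    }

    String owner() { return owner; }
}
```

                              The trade-off is the usual one: no noisy exceptions under contention, but held row locks can block other work and risk deadlocks.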

                              • 12. Re: job executor explained in a nutshell
                                Edward Staub Expert

                                Agreed - READ_COMMITTED seems required.