1 Reply Latest reply on Feb 27, 2009 2:07 AM by Brad Davis

    Performance slowdown with large JBPM_LOG table

    Michael Holtzman Newbie

      Greetings. We are seeing a huge slowdown in jBPM throughput when the JBPM_LOG table gets very large.

      For example, during our benchmark testing we start with a "virgin" database. After three weeks of testing, we see a ~40% increase in workflow execution time. At this point, the JBPM_LOG table has about 6M (yes, million) records.

      If we clean the database tables, performance returns to its original level. I don't really understand why the size of JBPM_LOG has such a profound effect on performance.

      In production we need the log records to construct an audit trail, so disabling logging is not an option.

      Any suggestions on improving performance with large JBPM tables?

      Anyone running a high volume jBPM application care to comment on performance over time?

      BTW, this is jBPM 3.1.2 with selected fixes from later versions applied.


        • 1. Re: Performance slowdown with large JBPM_LOG table
          Brad Davis Novice

          The log table is basically an audit table. If you don't need auditing in your application, you can turn it off by removing the logging service from jbpm.cfg.xml.

          Comment out:

           <service name='logging' factory='org.jbpm.logging.db.DbLoggingServiceFactory' />
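
          In context, the disabled service would look roughly like this in jbpm.cfg.xml (a sketch; the surrounding elements and the other services are abbreviated and may differ in your configuration):

           <jbpm-configuration>
             <jbpm-context>
               <!-- persistence, message, scheduler, etc. stay configured -->
               <!--
               <service name='logging' factory='org.jbpm.logging.db.DbLoggingServiceFactory' />
               -->
             </jbpm-context>
           </jbpm-configuration>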

          Another option would be to write your own logging service that extends DbLoggingServiceFactory. There you could filter out unwanted logs. I have done this with success in the past.
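
          The filtering idea can be sketched like this. The interface and log classes below are simplified stand-ins for jBPM 3's org.jbpm.logging.LoggingService and the org.jbpm.logging.log.ProcessLog subclasses (check the real names and signatures against your jBPM version); the point is just to drop high-volume entries, such as variable-update logs, before they reach JBPM_LOG:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-ins for jBPM 3's logging API. A real implementation would
// implement org.jbpm.logging.LoggingService and inspect the actual
// org.jbpm.logging.log.ProcessLog subclasses instead.
interface LoggingService {
    void log(ProcessLog processLog);
}

class ProcessLog {}
class VariableLog extends ProcessLog {} // assumed high-volume, low audit value
class ActionLog extends ProcessLog {}   // assumed needed for the audit trail

// A logging service that forwards only the entries the audit trail needs.
class FilteringLoggingService implements LoggingService {
    private final LoggingService delegate;

    FilteringLoggingService(LoggingService delegate) {
        this.delegate = delegate;
    }

    public void log(ProcessLog processLog) {
        // Skip variable-update logs; persist everything else.
        if (processLog instanceof VariableLog) {
            return;
        }
        delegate.log(processLog);
    }
}

public class FilterDemo {
    public static void main(String[] args) {
        final List<ProcessLog> persisted = new ArrayList<ProcessLog>();
        // Stands in for the database-backed DbLoggingService.
        LoggingService db = new LoggingService() {
            public void log(ProcessLog processLog) {
                persisted.add(processLog);
            }
        };
        LoggingService filtered = new FilteringLoggingService(db);
        filtered.log(new VariableLog()); // dropped
        filtered.log(new ActionLog());   // persisted
        System.out.println("persisted=" + persisted.size()); // prints persisted=1
    }
}
```

          A factory wrapping the filtered service can then be plugged into jbpm.cfg.xml in place of DbLoggingServiceFactory, so the rest of the engine is unaffected.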