Most likely you can solve this by increasing the transaction timeout on the application server, but that usually means locking part of the database for quite some time, so rethinking the process definition design might be the better option. You could consider a more hierarchical definition, where each reusable subprocess gets hundreds of items to process and can then decide whether to handle them itself or distribute them to other processes. You can then take advantage of async work to run the processing in parallel.
Increasing the transaction timeout (and disabling JTA) did indeed help, but as you say, locking the database for 15 minutes is not a good idea.
I am not sure I understand your redesign idea correctly. By 'more hierarchical' do you mean something like:
- a topmost process that parses the file, gets the item list, organizes the items into batches (of, say, 100), and creates multiple intermediate processes
- an intermediate process that takes a batch of items and just fires the low-level subprocess for each of them
- a per-item low-level subprocess
And make the start nodes in the intermediate and low-level subprocesses async?
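The topmost process's batching step can be sketched like this. This is just a minimal illustration of the splitting logic, not jBPM API code; the class and method names (`BatchSplitter`, `partition`) and the batch size of 100 are assumptions for the example. In a real process you would feed each resulting batch to the intermediate process, e.g. via a reusable subprocess node or `ksession.startProcess(...)`.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSplitter {

    // Split a flat item list into batches of at most batchSize elements.
    // Each batch would then be handed to one intermediate process instance.
    public static <T> List<List<T>> partition(List<T> items, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += batchSize) {
            // Copy the sublist so each batch is independent of the source list.
            batches.add(new ArrayList<>(
                    items.subList(i, Math.min(i + batchSize, items.size()))));
        }
        return batches;
    }
}
```

For example, 250 parsed items with a batch size of 100 would produce three batches (100, 100, and 50 items), i.e. three intermediate process instances instead of one long-running transaction over all 250.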
yup, that's exactly what I had in mind.
Make the processes a bit more intelligent and let them decide what to do. For example, if a batch to be processed contains more than 100 items, split it across an additional set of processes, say 10 of them; but if there are only 20 items, simply process them directly.
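That split-or-process decision could look like the sketch below. The threshold of 100 and the fan-out of 10 come from the example above; the class and method names (`BatchDispatcher`, `childProcessCount`) are hypothetical, and in a real definition this would typically live behind a gateway condition in the process.

```java
public class BatchDispatcher {

    // Values taken from the discussion above; tune per deployment.
    static final int SPLIT_THRESHOLD = 100; // batches larger than this get distributed
    static final int FAN_OUT = 10;          // number of child processes to split into

    // Returns how many child process instances a batch of the given size needs:
    // 1 means "process it in this instance", more means "distribute the work".
    public static int childProcessCount(int batchSize) {
        return batchSize > SPLIT_THRESHOLD ? FAN_OUT : 1;
    }
}
```

So a batch of 20 items is handled in place, while a batch of 500 is fanned out to 10 child processes of 50 items each, which the async start nodes can then run in parallel.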