1. Re: Concurrent Processing using Fork in synchronous manner
estaub Jul 6, 2007 11:38 AM (in response to drashmi)
See http://jira.jboss.com/jira/browse/JBPM-983, and vote for it if relevant - it probably is.
Are you using the JobExecutor?
More specifics on your workflow would be helpful, if it's at all more complicated than you've stated.
It may help your particular case to mark a node on each fork with "asynchronous=exclusive". This helps in some simple cases.
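A minimal sketch of what that suggestion might look like in jPDL (assuming jBPM 3.2, where the attribute is spelled async="exclusive"; the node and handler names here are placeholders, not from the process above):

```xml
<!-- Hypothetical fragment: the first node of each fork branch is marked
     exclusive-async, so the JobExecutor runs the branches' jobs one at a
     time per process instance instead of in the caller's transaction. -->
<fork name="fork1">
  <transition name="tr1" to="branch1"></transition>
  <transition name="tr2" to="branch2"></transition>
</fork>
<node name="branch1" async="exclusive">
  <action class="com.example.Branch1ActionHandler"></action>
  <transition to="join1"></transition>
</node>
```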
-Ed Staub
2. Re: Concurrent Processing using Fork in synchronous manner
drashmi Jul 6, 2007 11:52 AM (in response to drashmi)
I have already seen that JIRA issue, but the JobExecutorThread only comes into the picture when you mark a node's execution as asynchronous. My case is not like that.
Consider the process definition below:

<?xml version="1.0" encoding="UTF-8"?>
<process-definition xmlns="" name="mainprocsimple1_JBPM">
  <join name="join1">
    <transition name="" to="end1"></transition>
  </join>
  <start-state name="start">
    <transition name="" to="fork1"></transition>
  </start-state>
  <fork name="fork1">
    <transition name="tr1" to="state1"></transition>
    <transition name="2" to="state2"></transition>
  </fork>
  <node name="mainnode_sleep120">
    <action class="org.jbpm.tutorial.action.Sleep_120_ActionHandler"></action>
    <transition name="" to="join1"></transition>
  </node>
  <node name="mainnode_sleep80">
    <action class="org.jbpm.tutorial.action.Sleep_80_ActionHandler"></action>
    <transition name="" to="join1"></transition>
  </node>
  <state name="state1">
    <transition name="tim" to="mainnode_sleep80"></transition>
  </state>
  <end-state name="end1"></end-state>
  <state name="state2">
    <transition name="" to="mainnode_sleep120"></transition>
  </state>
</process-definition>
In the example above, when I signal state1 and state2 simultaneously, execution flows to mainnode_sleep80 and mainnode_sleep120. The ActionHandler in mainnode_sleep80 simply sleeps for 80 seconds, while the one in mainnode_sleep120 sleeps for 120 seconds. The mainnode_sleep80 path of execution completes successfully, but when mainnode_sleep120 finishes its 120-second sleep, we get a StaleObjectStateException saying that the row was already updated or deleted in another transaction.
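For reference, the sleep handlers are presumably something like the sketch below. This is a hypothetical reconstruction: the ActionHandler and ExecutionContext stubs stand in for the real org.jbpm.graph.def.ActionHandler and org.jbpm.graph.exe.ExecutionContext so that the sketch compiles on its own.

```java
// Stand-in for org.jbpm.graph.exe.ExecutionContext (stub for illustration only).
class ExecutionContext { }

// Stand-in for the jBPM 3 org.jbpm.graph.def.ActionHandler contract:
// execute() is called in the signalling thread's transaction when the
// node's action fires.
interface ActionHandler {
    void execute(ExecutionContext executionContext) throws Exception;
}

// Hypothetical reconstruction of Sleep_120_ActionHandler: it blocks the
// signalling thread (and therefore its transaction) for 120 seconds.
class Sleep120ActionHandler implements ActionHandler {
    @Override
    public void execute(ExecutionContext executionContext) throws Exception {
        Thread.sleep(120 * 1000L);
    }
}
```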
The above scenario is only an example, not our real process. We are just trying to find out why we get this error and whether there is any way to avoid it.
To be very clear: I am not looking for asynchronous operation. I want normal synchronous operation.
3. Re: Concurrent Processing using Fork in synchronous manner
kukeltje Jul 6, 2007 12:22 PM (in response to drashmi)
If you sleep for 80 seconds inside an action, you hold a lock for that whole time... that is NOT what you want... believe me... it does not scale at all...
Examples often do 'wrong' things... The issue here is that there is no wait state at which the process is persisted, so everything happens in one 'transaction'... and both branches update the same token...
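To make the failure mode concrete, here is a self-contained sketch (hypothetical names, not jBPM or Hibernate code) of the optimistic-locking version check behind a StaleObjectStateException: both fork branches load the token row at the same version, and whichever transaction commits second fails the check because the row's version has already moved on.

```java
import java.util.concurrent.atomic.AtomicInteger;

class StaleStateDemo {
    // Simulated database row for the process token, with a
    // Hibernate-style version column.
    static final AtomicInteger dbVersion = new AtomicInteger(0);

    // Each "transaction" remembers the version it read; the commit succeeds
    // only if the row still holds that version (compare-and-set), mimicking
    // an UPDATE ... WHERE version = ? optimistic-lock check.
    static boolean commit(int versionRead) {
        return dbVersion.compareAndSet(versionRead, versionRead + 1);
    }

    public static void main(String[] args) {
        int branchA = dbVersion.get(); // sleep-80 branch loads the token
        int branchB = dbVersion.get(); // sleep-120 branch loads the same token

        System.out.println("branch A commit: " + commit(branchA)); // true: first update wins
        System.out.println("branch B commit: " + commit(branchB)); // false: row already updated
    }
}
```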
4. Re: Concurrent Processing using Fork in synchronous manner
estaub Jul 6, 2007 2:01 PM (in response to drashmi)
Ronald,
>> Examples often do 'wrong' things..
I think you're suggesting that this isn't a real use case. I think I see your point -- without any asynchronous behavior, the two branches are serialized and so might as well be modeled as a straight line.
But imagine an actionhandler that may enter a wait state (not signal, but wait for an external signal) or not, dependent on some condition. In this case, a fork might make sense, but in some conditions would fail as described.
In general, I think these kinds of problems should be fixed, regardless of whether it seems to be a real-world case. ["These kinds of problems" is vague, and vulnerable to reductio ad absurdum.] They reduce confidence in JBPM and give it a "bad taste". If it seems broken, it might as well be broken.
>> What is the issue here is that there is no wait state where the process is persisted, so everything happens in one 'transaction'... and both update the same token...
Agreed. A good fix is complex... this is another of those cases where the correct fix will depend on whether the deployment is single-threaded, multi-threaded/single-server, or multi-threaded/multi-server. I wrote a fix that queues up a job to do the parent-token work in Join... but it's only a good fit for one or two of those three deployment scenarios.
-Ed Staub
5. Re: Concurrent Processing using Fork in synchronous manner
kukeltje Jul 6, 2007 6:16 PM (in response to drashmi)
Good catch Ed, that was what I was trying to suggest.
You are right that there can be, and most likely will be, situations where this error still occurs, and as you know some interesting discussions are taking place about it. I'm trying to get this high on the list of 'issues', and you do a good job of that by urging people to vote.
Still, people should know that there are right and wrong ways to do things.
My Latin is not that good, but I think that with the Dutch, English, French, German and little Spanish that I do know, I tend to agree :-)