If the child processes are really trivial, and you don't need to be able to deploy new versions of them independently, then maybe just fork. Otherwise, probably subprocess.
Another thing to consider is what monitoring and/or intervention you will need on the child processes. If you need to examine and/or change variables at the child process level, subprocesses may be easier.
Forking is probably more efficient, but I don't know how much, or whether it's significant in your context.
There are probably other factors I'm not thinking of. You really need to look at your own requirements first.
Thanks Ed. Sounds like there are a few different ways to do it.
* child process does not need to be independent of the parent process... that is, we can write the parent & child together as a pair (or even as a single process definition).
* user wants to be able to monitor the parent process to have visibility into the overall progress (e.g. how many of the children have completed their work)
I'm not sure how to define the process so that it uses fork. Would we write a simple custom node that "forks" the N children? In that case there would be only a single leaving transition from the fork node... but that node would actually start up N tokens on that leaving transition.
Then would we also have a custom join node that increments a process variable as each child joins back in (that would give a client visibility into the progress)?
Does that make sense? Or is there a way to achieve something similar with the built-in nodes?
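Just to make the counting part concrete, here's a rough plain-Java sketch of what I have in mind (this is not the jBPM API; the class and method names are made up): a "fork" starts N children, and each child bumps a shared counter as it "joins", so a monitoring client can see how many have finished.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch (plain Java, not jBPM) of the fork/join counting idea:
// a "fork" starts N children, each child increments a shared counter as it
// completes, and the parent can report progress from that counter.
public class ForkJoinProgress {

    // Plays the role of the process variable the join node would update.
    private final AtomicInteger completed = new AtomicInteger(0);
    private final int childCount;

    public ForkJoinProgress(int childCount) {
        this.childCount = childCount;
    }

    // "Fork": start N children; each one bumps the counter on completion.
    public void runChildren() {
        Thread[] children = new Thread[childCount];
        for (int i = 0; i < childCount; i++) {
            children[i] = new Thread(() -> {
                // ... the child's real work would go here ...
                completed.incrementAndGet(); // the "join" side
            });
            children[i].start();
        }
        for (Thread t : children) {
            try {
                t.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    // What a monitoring client would read to see overall progress.
    public int completedCount() {
        return completed.get();
    }

    public static void main(String[] args) {
        ForkJoinProgress p = new ForkJoinProgress(4);
        p.runChildren();
        System.out.println(p.completedCount() + " of 4 children done");
    }
}
```

In the real process definition the counter would live in a process variable rather than an AtomicInteger, but the shape is the same.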
Warning: I have a lot more "book learning" than real experience here.
On the wiki you'll find an example called "ForEachForkActionHandler" or similar. Check it out.
Re the join: updating process variables to show status is trivial; you can just use a script if you want. If all you need is a running token count, that would be sufficient. But you probably want to look at what you're going to use or implement for the status UI, then work backward to your data requirements.
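For example, something roughly like this jPDL fragment (the node and variable names are made up, and the syntax is from memory, so check it against the jPDL reference before relying on it):

```xml
<!-- Hypothetical jPDL sketch: a script on the join's node-enter event
     increments a counter variable each time a child token arrives,
     so a monitoring client can read "childrenDone" for progress. -->
<join name="join-children">
  <event type="node-enter">
    <script>
      Integer done = (Integer) executionContext.getVariable("childrenDone");
      executionContext.setVariable("childrenDone", done == null ? 1 : done + 1);
    </script>
  </event>
  <transition to="done"/>
</join>
```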