Design time governance - short vs long running workflows and other thoughts.....
objectiser May 21, 2014 4:31 AM

In the future we are going to want to include some additional validation capabilities, such as impact analysis on services that use a changed API. This will be local within a 'stage', rather than concerned with moving artifacts between stages - but it could be a pre-condition to the artifact being considered for promotion to the next stage.
Is this the way we want to handle such validation? The advantage is that the workflow controls the order in which validations are performed, the type of validation performed, and the action taken when a validation fails. In the initial stage (e.g. dev), stopping the workflow would be reasonable: once the issue was fixed, saving the updated artifact would trigger another workflow instance.
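To make the "workflow controls the validation" option concrete, here is a minimal sketch of a stage workflow that runs its validation steps in a defined order and stops at the first failure. All type and method names (`ValidationWorkflow`, `addStep`, `run`) are illustrative assumptions, not a real governance API:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Predicate;

// Sketch: the workflow itself owns the validation steps, so it decides the
// ordering, the type of checks, and what happens on failure (here: stop at
// the first failing step). Hypothetical names throughout.
class ValidationWorkflow {

    // LinkedHashMap preserves insertion order, so the workflow author
    // controls which validation runs first.
    private final Map<String, Predicate<Map<String, String>>> steps = new LinkedHashMap<>();

    void addStep(String name, Predicate<Map<String, String>> check) {
        steps.put(name, check);
    }

    /** Runs steps in order; returns the name of the first failing step, or null if all pass. */
    String run(Map<String, String> artifact) {
        for (Map.Entry<String, Predicate<Map<String, String>>> step : steps.entrySet()) {
            if (!step.getValue().test(artifact)) {
                return step.getKey(); // stop the workflow at the first failure
            }
        }
        return null; // all validations passed; promotion can proceed
    }
}
```

In a real BPMN process each step would be a service task rather than a predicate, but the point is the same: the ordering and the failure behaviour live in the workflow definition.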
An alternative may be to have separate validation modules that are triggered by specific artifact change events, where those validations could be performed using BPMN workflows or rules. However, we then potentially lose the dependency between the rules - i.e. preventing promotion of the artifact unless it passes all validations. This could be resolved by treating the validation modules as a necessary step before optionally triggering the workflow (i.e. the workflow trigger would be configured to determine whether validation issues should prevent it from running). This also implies the validation results should be stored somewhere - the most logical target would be the artifact itself.
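A rough sketch of that alternative, assuming hypothetical types (`ValidationGate`, `Validator`, `Artifact`) rather than any real API: validation modules run on an artifact change event, their results are recorded as properties on the artifact, and the configured trigger decides whether outstanding issues block the workflow from starting:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: run all registered validation modules when an artifact changes,
// store the results on the artifact itself, and only allow the promotion
// workflow to start if no blocking issues were found. Illustrative names only.
class ValidationGate {

    interface Validator {
        List<String> validate(Artifact artifact); // returns issue descriptions
    }

    static class Artifact {
        final Map<String, String> properties = new HashMap<>();
    }

    private final List<Validator> validators = new ArrayList<>();
    private boolean blockOnFailure = true; // configurable per workflow trigger

    void register(Validator v) { validators.add(v); }

    /** Returns true if the promotion workflow should be started. */
    boolean onArtifactChanged(Artifact artifact) {
        List<String> issues = new ArrayList<>();
        for (Validator v : validators) {
            issues.addAll(v.validate(artifact));
        }
        // Store the results on the artifact so later stages (and users) can see them.
        artifact.properties.put("validation.issueCount", String.valueOf(issues.size()));
        artifact.properties.put("validation.issues", String.join("; ", issues));
        return issues.isEmpty() || !blockOnFailure;
    }
}
```

Note the `blockOnFailure` flag corresponds to the trigger configuration mentioned above: the same validation results could either block the workflow or merely be recorded for information.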
I think having the workflow control the validation steps is the most straightforward approach for now - but we should try integrating rules to perform the actual validation steps.
The other issue is the scope of the workflow. We currently provide an example of a single workflow covering the full lifecycle. This is good from the perspective of showing/documenting the overall process, but from a runtime perspective it means instances of this workflow could be very long running - and long-running instances cannot be updated to a new process definition once they have started.
Wondering whether it is better to have a workflow per stage - that way they are very localised and short running, and their only responsibility is to define what should happen to move an artifact from one stage to the next. From a workflow maintenance perspective, this makes it much easier to introduce changes without worrying about existing running process instances.
The disadvantage is that context from previous stages would potentially be lost - although it could always be stored as properties on the artifact.
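The "context as properties" idea can be sketched as follows, again with purely hypothetical names (`StageContext`, `recordStageResult`, the `governance.stage.` property prefix): each short per-stage workflow writes its outcome onto the artifact before completing, and the next stage's separate workflow instance reads it back.

```java
import java.util.Map;

// Sketch: per-stage workflows hand context to each other via namespaced
// properties on the artifact, so no single long-running process instance
// is needed. All names are illustrative assumptions.
class StageContext {

    static final String PREFIX = "governance.stage.";

    /** Called at the end of a stage's workflow to persist its outcome. */
    static void recordStageResult(Map<String, String> artifactProps,
                                  String stage, String outcome, String approver) {
        artifactProps.put(PREFIX + stage + ".outcome", outcome);
        artifactProps.put(PREFIX + stage + ".approver", approver);
    }

    /** Called at the start of the next stage's workflow to check the previous one. */
    static boolean previousStagePassed(Map<String, String> artifactProps, String previousStage) {
        return "approved".equals(artifactProps.get(PREFIX + previousStage + ".outcome"));
    }
}
```

With this shape, the dev-stage workflow instance can complete (and its definition can later be changed) without affecting the qa-stage workflow, which only depends on the properties stored on the artifact.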