-
1. Re: Parsing and validation
jesper.pedersen Dec 1, 2010 8:46 AM (in response to maeste)
Yes, this is a top priority.
We used to have a method that would validate the metadata model after the merge and throw an exception if the constraints defined by the XSD weren't met.
The datasource part is easier since there is only one place where the information can come from (XML), so the parser can be a lot more strict there.
As to updates from the management layer - the management facade must verify that an update is correct before sending the value to the instance. We can only do basic tests here, but that should be enough.
Implementation: If a builder pattern can help, go for it. The current requirements for the metadata model can't be changed though.
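As a rough illustration of the builder idea, here is a minimal sketch assuming a hypothetical DataSourceMetadata type and a simple checked ValidateException; the names are illustrative only, not the actual IronJacamar API:

// Sketch only: an immutable metadata class plus a builder that validates
// before the instance escapes, so invalid metadata can never be observed
public final class DataSourceMetadata
{
   private final String jndiName;
   private final String connectionUrl;

   private DataSourceMetadata(String jndiName, String connectionUrl)
   {
      this.jndiName = jndiName;
      this.connectionUrl = connectionUrl;
   }

   public String getJndiName() { return jndiName; }
   public String getConnectionUrl() { return connectionUrl; }

   public static class Builder
   {
      private String jndiName;
      private String connectionUrl;

      public Builder jndiName(String v) { this.jndiName = v; return this; }
      public Builder connectionUrl(String v) { this.connectionUrl = v; return this; }

      // All XSD-style constraints are enforced here, at the single exit point
      public DataSourceMetadata build() throws ValidateException
      {
         if (jndiName == null || jndiName.trim().length() == 0)
            throw new ValidateException("jndi-name is mandatory");
         if (connectionUrl == null)
            throw new ValidateException("connection-url is mandatory");
         return new DataSourceMetadata(jndiName, connectionUrl);
      }
   }
}

// Simple checked exception for validation failures (also part of the sketch)
class ValidateException extends Exception
{
   ValidateException(String msg) { super(msg); }
}

Usage would be something like new DataSourceMetadata.Builder().jndiName("java:/MyDS").connectionUrl("jdbc:h2:mem:test").build(), which either returns valid metadata or throws.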
-
2. Re: Parsing and validation
maeste Dec 2, 2010 11:53 AM (in response to jesper.pedersen)
Hi all,
I have just committed the implementation of validation for the datasources metadata. You can see the patch here:
https://github.com/maeste/IronJacamar/commit/173b9afbc33f2774e42908b221329dc767f306ca
There are 2 main points I'd like to discuss and get your feedback on:
- Validation has been implemented as a validate method AND it is fired on every metadata creation (i.e. as part of the constructor); a sketch follows this list. I think it's better to always have valid metadata, since we are working with immutable objects at the metadata level (at least in 99% of cases). Of course the validate method is public and can also be fired by the metadata's clients in case of forced modification of the metadata (forceXXX methods). Opinions?
- At the moment I have implemented just enough validation to respect the XSD specification (more or less just mandatory fields and cardinalities). But of course there are more things we could verify at the code level. Some examples: backgroundValidationMinutes makes no sense if backgroundValidation is false; min-pool-size must not be greater than max-pool-size; and so on. Or even more complex checks, like verifying that a provided className is at least a valid class name. Since this validation comes very early in the user experience (typically during deployment), I think fine-grained checks would help users a lot in avoiding hard-to-understand runtime exceptions during datasource/RA use. Of course, making these fine-grained checks would need some extra analysis and some extra documentation, but I think we would get great benefits.
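To make both points above concrete, here is a rough sketch (hypothetical field and class names, not the committed patch) of a validate() fired from the constructor that also covers the fine-grained, cross-field checks from the second bullet; ValidateException is the same kind of checked exception as in the earlier sketch:

// Sketch only: immutable pool metadata that validates itself on creation
public final class PoolMetadata
{
   private final Integer minPoolSize;
   private final Integer maxPoolSize;
   private final boolean backgroundValidation;
   private final Long backgroundValidationMinutes;

   public PoolMetadata(Integer minPoolSize, Integer maxPoolSize,
                       boolean backgroundValidation,
                       Long backgroundValidationMinutes) throws ValidateException
   {
      this.minPoolSize = minPoolSize;
      this.maxPoolSize = maxPoolSize;
      this.backgroundValidation = backgroundValidation;
      this.backgroundValidationMinutes = backgroundValidationMinutes;
      validate(); // fired on every creation: the object is valid, or it never exists
   }

   // Public, so clients can re-check after a forceXXX-style modification
   public void validate() throws ValidateException
   {
      // XSD-level check: mandatory field (assumed mandatory for this sketch)
      if (maxPoolSize == null)
         throw new ValidateException("max-pool-size is mandatory");

      // Fine-grained, cross-field checks the XSD cannot express
      if (minPoolSize != null && minPoolSize.intValue() > maxPoolSize.intValue())
         throw new ValidateException("min-pool-size must not be greater than max-pool-size");
      if (!backgroundValidation && backgroundValidationMinutes != null)
         throw new ValidateException("background-validation-minutes makes no sense " +
                                     "when background-validation is false");
   }
}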
Feedback is more than welcome; in the meantime I'll provide unit tests for the current validation.
bye
S.
-
3. Re: Parsing and validation
jesper.pedersen Dec 2, 2010 1:39 PM (in response to maeste)
Validation has been implemented as a validate method AND it is fired on every metadata creation (i.e. as part of the constructor). I think it's better to always have valid metadata, since we are working with immutable objects at the metadata level (at least in 99% of cases). Of course the validate method is public and can also be fired by the metadata's clients in case of forced modification of the metadata (forceXXX methods). Opinions?
The validator is also available as a standalone tool, and has Ant and Maven integration. Having the validator as part of our deployment chain ensures that people are actually using it before deploying to our container.
So all the triggered rules should be collected and summarized in the generated report.
Of course, making these fine-grained checks would need some extra analysis and some extra documentation, but I think we would get great benefits.
Yes, it would be nice to have a very fine-grained analysis of the deployments. However, I think it is of lower priority than some of our other tasks at the moment. If we have some sort of framework for these validation rules, it would be a good task for a new contributor.
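If such a framework materializes, it could be as small as this sketch (the Rule shape and all names are assumptions, not an existing IronJacamar interface), which would also match the "collect and summarize" behaviour mentioned above:

import java.util.ArrayList;
import java.util.List;

// Sketch only: each fine-grained check becomes a self-contained rule,
// and the deployer collects every failure into one report
interface Rule<T>
{
   // Returns human-readable failures; an empty list means the rule passed
   List<String> validate(T metadata);
}

class ReportingValidator<T>
{
   private final List<Rule<T>> rules = new ArrayList<Rule<T>>();

   public void addRule(Rule<T> rule)
   {
      rules.add(rule);
   }

   // Run all rules instead of failing fast, so the generated report
   // can summarize every triggered rule in a single pass
   public List<String> run(T metadata)
   {
      List<String> failures = new ArrayList<String>();
      for (Rule<T> rule : rules)
         failures.addAll(rule.validate(metadata));
      return failures;
   }
}

A new contributor could then add one Rule implementation per check without touching the deployment chain itself.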
-
4. Re: Parsing and validation
maeste Dec 3, 2010 5:28 AM (in response to jesper.pedersen)
Jesper Pedersen wrote:
Validation has been implemented as a validate method AND it is fired on every metadata creation (i.e. as part of the constructor). I think it's better to always have valid metadata, since we are working with immutable objects at the metadata level (at least in 99% of cases). Of course the validate method is public and can also be fired by the metadata's clients in case of forced modification of the metadata (forceXXX methods). Opinions?
The validator is also available as a standalone tool, and has Ant and Maven integration. Having the validator as part of our deployment chain ensures that people are actually using it before deploying to our container.
So all the triggered rules should be collected and summarized in the generated report.
Oki, but it isn't our validator. Here we are validating our metadata, which represents the XML or annotations. AFAIK our validator tool is about our javax.* implementations, which are generated from this metadata. IOW, what we have implemented here are minimal checks derived from the XSD constraints, something users could achieve standalone just by validating the XML against our XSDs. Of course we are not enabling XSD validation for performance reasons, and are doing this validation on the generated metadata instead.
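For reference, the standalone check users could already do themselves is plain JAXP schema validation, roughly like this (the file names are placeholders):

import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;

public class StandaloneXsdCheck
{
   public static void main(String[] args) throws Exception
   {
      // Validate a deployment descriptor directly against the schema;
      // these are the same constraints we now enforce on the generated metadata
      SchemaFactory factory =
         SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
      Schema schema = factory.newSchema(new File("datasources_1_0.xsd")); // placeholder
      schema.newValidator().validate(new StreamSource(new File("my-ds.xml"))); // placeholder
      System.out.println("my-ds.xml is valid against the schema");
   }
}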
Jesper Pedersen wrote:
Of course, making these fine-grained checks would need some extra analysis and some extra documentation, but I think we would get great benefits.
Yes, it would be nice to have a very fine-grained analysis of the deployments. However, I think it is of lower priority than some of our other tasks at the moment. If we have some sort of framework for these validation rules, it would be a good task for a new contributor.
Good point. I have created a JIRA for the community about that: https://jira.jboss.org/browse/JBJCA-476
Anyone interested?
bye
S.
-
5. Re: Parsing and validation
jesper.pedersen Dec 3, 2010 8:47 AM (in response to maeste)
Oki, but it isn't our validator.
Ah, ok. I guess we have to define more clearly where, and what type of, validation we are talking about, since we are starting to have a lot of layers that focus on different things.