No. The spec states that sub-directories are ignored.
A batch application may use the archive loader (see section 10.6) to load Job XML documents. The application does this by storing the Job XML documents under the META-INF/batch-jobs directory. For .jar files the batch-jobs directory goes under the standard META-INF directory. For .war files it goes under the WEB-INF/classes/META-INF directory. Note Job XML documents are valid only in the batch-jobs directory: sub-directories are ignored.
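To illustrate the rule above, a .jar deployment would look like this (the job file names are made up for the example):

```
my-app.jar
└── META-INF/
    └── batch-jobs/
        ├── job1.xml          <- loaded
        └── sub/
            └── job2.xml      <- ignored (sub-directory)
```

For a .war the same batch-jobs directory sits under WEB-INF/classes/META-INF instead.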
If you don't mind a JBeret-specific approach, you could implement org.jberet.spi.JobXmlResolver and resolve from the sub-directories. You could use this as an example: https://github.com/jberet/jsr352/blob/master/jberet-core/src/main/java/org/jberet/tools/MetaInfBatchJobsJobXmlResolver.java. It's a standard service discovered via the ServiceLoader mechanism. It should work standalone and on WildFly 9.0.2.Final and higher.
For WildFly, WFLY-7000 will need to be taken into consideration, though if they are just sub-directories it should work fine.
James R. Perkins
Sorry for the stupid question, but is the idea to implement my own version of this in my project under the package org.jberet.tools and put it on the classpath so my version is the one used?
No, that class is just an example of how to implement the interface org.jberet.spi.JobXmlResolver.
You will need to implement your own class implementing org.jberet.spi.JobXmlResolver. Your class will know how to find the job xml resource given a job name; in your case, probably by searching all the sub-directories.
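A minimal sketch of such a resolver, searching a fixed list of sub-directories under META-INF/batch-jobs. The JobXmlResolver interface is reproduced inline here so the example compiles on its own; in a real project it comes from jberet-core and you would only ship the implementation class (the real interface also has default methods you can leave alone). The sub-directory names and the class name are assumptions for illustration:

```java
import java.io.IOException;
import java.io.InputStream;

// Reproduced here for illustration only; normally provided by jberet-core.
interface JobXmlResolver {
    String DEFAULT_PATH = "META-INF/batch-jobs/";
    InputStream resolveJobXml(String jobXml, ClassLoader classLoader) throws IOException;
}

public class SubDirJobXmlResolver implements JobXmlResolver {

    // Sub-directories to search, hard-coded because the ServiceLoader
    // instantiates providers through a public no-argument constructor.
    // These names are examples; list the directories your application uses.
    private static final String[] SUB_DIRS = {"billing", "reports"};

    @Override
    public InputStream resolveJobXml(final String jobXml, final ClassLoader classLoader) throws IOException {
        // Try the standard location first.
        InputStream in = classLoader.getResourceAsStream(DEFAULT_PATH + jobXml);
        if (in != null) {
            return in;
        }
        // Then each configured sub-directory.
        for (final String dir : SUB_DIRS) {
            in = classLoader.getResourceAsStream(DEFAULT_PATH + dir + "/" + jobXml);
            if (in != null) {
                return in;
            }
        }
        // Not found here; the default resolvers may still locate it.
        return null;
    }
}
```

Scanning the classpath for arbitrary sub-directories is container-specific, which is why this sketch uses an explicit list instead.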
You also need to add a META-INF/services/... entry to your application package for your JobXmlResolver.
The file name of the services declaration file must be the fully qualified name of the interface, and its contents the fully qualified name of your implementation class.
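For example, assuming your implementation class is com.example.batch.SubDirJobXmlResolver (a hypothetical name), the services file would be META-INF/services/org.jberet.spi.JobXmlResolver with this content (lines starting with # are comments, which the ServiceLoader ignores):

```
# File: META-INF/services/org.jberet.spi.JobXmlResolver
com.example.batch.SubDirJobXmlResolver
```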
This will define /additional, preferred/ ways of finding job xml files. The default ways of finding job xml files will always remain in effect.