Have you tried jdbcBatchlet (org.jberet.support.io.JdbcBatchlet), the batch artifact from jberet-support? For example usage in a sample app, see intro-jberet/csv2db.xml at master · jberet/intro-jberet · GitHub
In the above job XML, the first of the two steps prepares the output table: it creates the table if it does not exist, and then deletes the table content. The jdbcBatchlet is configured to run the two SQL statements sequentially.
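A rough sketch of such a job XML (this is not the sample file itself; the table name, datasource values, and property names here are illustrative, so check the JdbcBatchlet javadoc in jberet-support for the exact configuration properties):

```xml
<job id="csv2db" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0">
    <!-- step 1: prepare the output table with jdbcBatchlet -->
    <step id="prepareTable" next="loadData">
        <batchlet ref="jdbcBatchlet">
            <properties>
                <!-- two SQL statements, run one after the other -->
                <property name="sqls"
                          value="create table if not exists t_my_table (id int, name varchar(100));
                                 delete from t_my_table"/>
                <!-- connection details (placeholders) -->
                <property name="url" value="jdbc:h2:mem:test"/>
            </properties>
        </batchlet>
    </step>
    <!-- step 2: the chunk step that loads the data -->
    <step id="loadData">
        <chunk>
            <reader ref="csvItemReader"/>
            <writer ref="jdbcItemWriter"/>
        </chunk>
    </step>
</job>
```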
Your task 1 looks like a one-time task, so it doesn't fit the chunk-based reader-processor-writer pattern.
Is jdbcBatchlet available since 1.3?
Yes, it's available since jberet-support 1.3.0.Beta5.
It should work with earlier versions of jberet-core as well.
It works with 1.3.0.Beta6 too, so I'm waiting for a final release :)
I have one additional question, about logging. In a standard environment, jdbcBatchlet logs each statement that will be executed, like:
JBERET060506: Adding sql statement to be executed: truncate table t_my_table
But the standard jdbcItemReader and jdbcItemWriter log nothing. How can I change that?
As far as jberet-support is concerned, 1.3.0.Beta6 is already pretty stable, and 1.3.0.Final will not be much different.
jdbcBatchlet is a one-time task, so logging the statements is cheap, whereas jdbcItemReader and jdbcItemWriter repeat their logic, so logging each iteration would end up being too verbose. It's not practical to log the data item(s) being read or written, since a single data item can be sizable, let alone a chunk of them. As for the other numbers, they are available from the step metrics.
You are right, but it would be nice to know that the reader-processor-writer trio has started. For now I always precede it with a batchlet in the flow; this batchlet only logs events like 'start loading from ...'.
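My job XML looks roughly like this (a sketch; `logBatchlet` is my own custom batchlet, and the step and table names are placeholders):

```xml
<!-- extra step whose only job is to log that loading is about to start -->
<step id="logStart" next="loadData">
    <batchlet ref="logBatchlet">
        <properties>
            <property name="message" value="start loading from t_source_table"/>
        </properties>
    </batchlet>
</step>
<!-- the actual chunk step; its reader/writer produce no log output of their own -->
<step id="loadData">
    <chunk>
        <reader ref="jdbcItemReader"/>
        <writer ref="jdbcItemWriter"/>
    </chunk>
</step>
```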