Spring Batch: execute dynamically generated steps in a tasklet

I have a spring batch job that does the following...
Step 1. Creates a list of objects that need to be processed
Step 2. Creates a list of steps depending on how many items are in the list of objects created in step 1.
Step 3. Tries to execute the steps from the list of steps created in step 2.
The execution of the x steps is done in executeDynamicStepsTasklet() below. While the code runs without any errors, it does not seem to be doing anything. Does what I have in that method look correct?
thanks
@Configuration
public class ExportMasterListCsvJobConfig {

    public static final String JOB_NAME = "exportMasterListCsv";

    @Autowired
    public JobBuilderFactory jobBuilderFactory;

    @Autowired
    public StepBuilderFactory stepBuilderFactory;

    @Value("${exportMasterListCsv.generateMasterListRows.chunkSize}")
    public int chunkSize;

    @Value("${exportMasterListCsv.generateMasterListRows.masterListSql}")
    public String masterListSql;

    @Autowired
    public DataSource onlineStagingDb;

    @Value("${out.dir}")
    public String outDir;

    @Value("${exportMasterListCsv.generatePromoStartDateEndDateGroupings.promoStartDateEndDateSql}")
    private String promoStartDateEndDateSql;

    private List<DivisionIdPromoCompStartDtEndDtGrouping> divisionIdPromoCompStartDtEndDtGrouping;

    private List<Step> dynamicSteps = Collections.synchronizedList(new ArrayList<Step>());

    @Bean
    public Job exportMasterListCsvJob(
            @Qualifier("createJobDatesStep") Step createJobDatesStep,
            @Qualifier("createDynamicStepsStep") Step createDynamicStepsStep,
            @Qualifier("executeDynamicStepsStep") Step executeDynamicStepsStep) {
        return jobBuilderFactory.get(JOB_NAME)
                .flow(createJobDatesStep)
                .next(createDynamicStepsStep)
                .next(executeDynamicStepsStep)
                .end().build();
    }

    @Bean
    public Step executeDynamicStepsStep(
            @Qualifier("executeDynamicStepsTasklet") Tasklet executeDynamicStepsTasklet) {
        return stepBuilderFactory
                .get("executeDynamicStepsStep")
                .tasklet(executeDynamicStepsTasklet)
                .build();
    }

    @Bean
    public Tasklet executeDynamicStepsTasklet() {
        return new Tasklet() {
            @Override
            public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
                FlowStep flowStep = new FlowStep(createParallelFlow());
                SimpleJobBuilder jobBuilder = jobBuilderFactory.get("myNewJob").start(flowStep);
                return RepeatStatus.FINISHED;
            }
        };
    }

    public Flow createParallelFlow() {
        SimpleAsyncTaskExecutor taskExecutor = new SimpleAsyncTaskExecutor();
        taskExecutor.setConcurrencyLimit(1);
        List<Flow> flows = dynamicSteps.stream()
                .map(step -> new FlowBuilder<Flow>("flow_" + step.getName()).start(step).build())
                .collect(Collectors.toList());
        return new FlowBuilder<SimpleFlow>("parallelStepsFlow")
                .split(taskExecutor)
                .add(flows.toArray(new Flow[flows.size()]))
                .build();
    }

    @Bean
    public Step createDynamicStepsStep(
            @Qualifier("createDynamicStepsTasklet") Tasklet createDynamicStepsTasklet) {
        return stepBuilderFactory
                .get("createDynamicStepsStep")
                .tasklet(createDynamicStepsTasklet)
                .build();
    }

    @Bean
    @JobScope
    public Tasklet createDynamicStepsTasklet() {
        return new Tasklet() {
            @Override
            public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
                for (DivisionIdPromoCompStartDtEndDtGrouping grp : divisionIdPromoCompStartDtEndDtGrouping) {
                    System.err.println("grp: " + grp);
                    String stepName = "stp_" + grp;
                    String fileName = grp + FlatFileConstants.EXTENSION_CSV;
                    Step dynamicStep =
                            stepBuilderFactory.get(stepName)
                                    .<MasterList, MasterList>chunk(10)
                                    .reader(queryStagingDbReader(
                                            grp.getDivisionId(),
                                            grp.getRpmPromoCompDetailStartDate(),
                                            grp.getRpmPromoCompDetailEndDate()))
                                    .writer(masterListFileWriter(fileName))
                                    .build();
                    dynamicSteps.add(dynamicStep);
                }
                System.err.println("createDynamicStepsTasklet dynamicSteps: " + dynamicSteps);
                return RepeatStatus.FINISHED;
            }
        };
    }

    public FlatFileItemWriter<MasterList> masterListFileWriter(String fileName) {
        FlatFileItemWriter<MasterList> writer = new FlatFileItemWriter<>();
        writer.setResource(new FileSystemResource(new File(outDir, fileName)));
        writer.setHeaderCallback(masterListFlatFileHeaderCallback());
        writer.setLineAggregator(masterListFormatterLineAggregator());
        return writer;
    }
So now I have a list of dynamic steps that need to be executed, and I believe they are in step scope. Can someone advise me on how to execute them?

This will not work. Your tasklet just creates a job with a FlowStep as its first step. Using the JobBuilderFactory only creates the job; it does not launch it. The method name "start" may be misleading, since it only defines the first step of the job.
You cannot change the structure of a job (its steps and sub-steps) once it has started. Therefore, it is not possible to configure a FlowStep in step 2 based on things that are calculated in step 1. (Of course you could do some hacking deeper inside the Spring Batch internals and directly modify the beans, but you don't want to do that.)
I suggest you use a kind of "SetupBean" with an appropriate @PostConstruct method, which is injected into the class that configures your job. This "SetupBean" is responsible for calculating the list of objects to be processed.
@Component
public class SetUpBean {

    private List<Object> myObjects;

    @PostConstruct
    public void afterPropertiesSet() {
        myObjects = ...;
    }

    public List<Object> getMyObjects() {
        return myObjects;
    }
}
@Configuration
public class JobConfiguration {

    @Autowired
    private JobBuilderFactory jobBuilderFactory;

    @Autowired
    private StepBuilderFactory stepBuilderFactory;

    @Autowired
    private SetUpBean setup;

    ...
}
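To make the suggestion concrete, here is a minimal sketch (not the poster's code) of how the job configuration could build one step per object from the SetUpBean at configuration time, before the job is ever launched. The job name, step names and the readerFor()/writerFor() factory methods are hypothetical placeholders.

@Bean
public Job processingJob() {
    // Build the steps while the ApplicationContext is being created,
    // so the job structure is fixed before it is launched.
    // Assumes setup.getMyObjects() returns a non-empty list.
    List<Step> steps = setup.getMyObjects().stream()
            .map(obj -> stepBuilderFactory.get("step_" + obj)
                    .<Object, Object>chunk(10)
                    .reader(readerFor(obj))   // hypothetical factory methods
                    .writer(writerFor(obj))
                    .build())
            .collect(Collectors.toList());

    SimpleJobBuilder builder = jobBuilderFactory.get("processingJob").start(steps.get(0));
    for (Step step : steps.subList(1, steps.size())) {
        builder.next(step);
    }
    return builder.build();
}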

Related

Saving file information in Spring batch MultiResourceItemReader

I have a directory containing text files. I want to process the files and write the data into a DB. I did that by using MultiResourceItemReader.
I have a scenario where, whenever a file comes in, the first step is to save the file info, like the filename and the record count, in a log table (a custom table).
Since I used MultiResourceItemReader, it loads all the files at once, and the code I wrote executes only once at server startup. I tried the getCurrentResource() method, but it returns null.
Please refer to the code below.
NetFileProcessController.java
@Slf4j
@RestController
@RequestMapping("/netProcess")
public class NetFileProcessController {

    @Autowired
    private JobLauncher jobLauncher;

    @Autowired
    @Qualifier("netFileParseJob")
    private Job job;

    @GetMapping(path = "/process")
    public @ResponseBody StatusResponse process() throws ServiceException {
        try {
            Map<String, JobParameter> parameters = new HashMap<>();
            parameters.put("date", new JobParameter(new Date()));
            jobLauncher.run(job, new JobParameters(parameters));
            return new StatusResponse(true);
        } catch (Exception e) {
            log.error("Exception", e);
            Throwable rootException = ExceptionUtils.getRootCause(e);
            String errMessage = rootException.getMessage();
            log.info("Root cause is instance of JobInstanceAlreadyCompleteException --> " + (rootException instanceof JobInstanceAlreadyCompleteException));
            if (rootException instanceof JobInstanceAlreadyCompleteException) {
                log.info(errMessage);
                return new StatusResponse(false, "This job has been completed already!");
            } else {
                throw new ServiceException(errMessage);
            }
        }
    }
}
BatchConfig.java
@Configuration
@EnableBatchProcessing
public class BatchConfig {

    private JobBuilderFactory jobBuilderFactory;

    @Autowired
    public void setJobBuilderFactory(JobBuilderFactory jobBuilderFactory) {
        this.jobBuilderFactory = jobBuilderFactory;
    }

    @Autowired
    StepBuilderFactory stepBuilderFactory;

    @Value("file:${input.files.location}${input.file.pattern}")
    private Resource[] netFileInputs;

    @Value("${net.file.column.names}")
    private String netFilecolumnNames;

    @Value("${net.file.column.lengths}")
    private String netFileColumnLengths;

    @Autowired
    NetFileInfoTasklet netFileInfoTasklet;

    @Autowired
    NetFlatFileProcessor netFlatFileProcessor;

    @Autowired
    NetFlatFileWriter netFlatFileWriter;

    @Bean
    public Job netFileParseJob() {
        return jobBuilderFactory.get("netFileParseJob")
                .incrementer(new RunIdIncrementer())
                .start(netFileStep())
                .build();
    }

    public Step netFileStep() {
        return stepBuilderFactory.get("netFileStep")
                .<NetDetailsDTO, NetDetailsDTO>chunk(1)
                .reader(new NetFlatFileReader(netFileInputs, netFilecolumnNames, netFileColumnLengths))
                .processor(netFlatFileProcessor)
                .writer(netFlatFileWriter)
                .build();
    }
}
NetFlatFileReader.java
@Slf4j
public class NetFlatFileReader extends MultiResourceItemReader<NetDetailsDTO> {

    public NetFlatFileReader(Resource[] netFileInputs, String netFilecolumnNames, String netFileColumnLengths) {
        setResources(netFileInputs);
        setDelegate(reader(netFilecolumnNames, netFileColumnLengths));
    }

    private FlatFileItemReader<NetDetailsDTO> reader(String netFilecolumnNames, String netFileColumnLengths) {
        FlatFileItemReader<NetDetailsDTO> flatFileItemReader = new FlatFileItemReader<>();
        FixedLengthTokenizer tokenizer = CommonUtil.fixedLengthTokenizer(netFilecolumnNames, netFileColumnLengths);
        FieldSetMapper<NetDetailsDTO> mapper = createMapper();
        DefaultLineMapper<NetDetailsDTO> lineMapper = new DefaultLineMapper<>();
        lineMapper.setLineTokenizer(tokenizer);
        lineMapper.setFieldSetMapper(mapper);
        flatFileItemReader.setLineMapper(lineMapper);
        return flatFileItemReader;
    }

    /*
     * Mapping column data to DTO
     */
    private FieldSetMapper<NetDetailsDTO> createMapper() {
        BeanWrapperFieldSetMapper<NetDetailsDTO> mapper = new BeanWrapperFieldSetMapper<>();
        try {
            mapper.setTargetType(NetDetailsDTO.class);
        } catch (Exception e) {
            log.error("Exception in mapping column data to dto ", e);
        }
        return mapper;
    }
}
I am stuck on this scenario; any help is appreciated.
I don't think MultiResourceItemReader is appropriate in your case. I would run a job per file for all the reasons of making one thing do one thing and do it well:
Your preparatory step will work by design
It would be easier to run multiple jobs in parallel and improve your file ingestion throughput
In case of failure, you would only restart the job for the failed file
EDIT: add an example
Resource[] netFileInputs = ... // same code that looks for file as currently in your reader
for (Resource netFileInput : netFileInputs) {
    Map<String, JobParameter> parameters = new HashMap<>();
    parameters.put("netFileInput", new JobParameter(netFileInput.getFilename()));
    jobLauncher.run(job, new JobParameters(parameters));
}
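If you go this route, the reader for the processing step can be a plain FlatFileItemReader that is late-bound to that parameter, instead of a MultiResourceItemReader. A minimal sketch, assuming the netFileInput job parameter from the loop above and the same line-mapper setup shown earlier (lineMapper() is a hypothetical helper wrapping that setup):

@Bean
@StepScope
public FlatFileItemReader<NetDetailsDTO> netFileReader(
        @Value("#{jobParameters['netFileInput']}") String fileName,
        @Value("${input.files.location}") String location) {
    // One file per job run, resolved from the job parameter at step scope.
    FlatFileItemReader<NetDetailsDTO> reader = new FlatFileItemReader<>();
    reader.setResource(new FileSystemResource(new File(location, fileName)));
    reader.setLineMapper(lineMapper());
    return reader;
}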

How to reset MultiResourceItemReader for each job run. Step scope not working

How can I initialize the MultiResourceItemReader for each job run? Currently, with this setup, it is still using the same instance for each job run.
I put @StepScope on it, but it still uses the same old list of files that has already been processed. I am not sure what else I have to add in this code.
I also tried @JobScope, but that did not work out either. There is something fundamental I am missing.
@Configuration
@EnableBatchProcessing
public class BatchConfiguration {

    @Autowired
    public JobBuilderFactory jobBuilderFactory;

    @Autowired
    public StepBuilderFactory stepBuilderFactory;

    @Value("file:ftp-inbound/*.csv")
    @Autowired
    private Resource[] inputResources;

    @Autowired
    private StepBuilderFactory steps;

    @Autowired
    private JobBuilderFactory jobs;

    @Autowired
    private ResourceLoader resourceLoader;

    @Bean
    public FlatFileItemReader<AccommodationRoomAvailability> itemReader() throws UnexpectedInputException, ParseException, IOException {
        FlatFileItemReader<AccommodationRoomAvailability> reader = new FlatFileItemReader<AccommodationRoomAvailability>();
        DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer();
        String[] tokens = {"Product ID", "Allotment", "Kamertype", "Zoeknaam", "Hotel", "Datum", "Beschikbaar", "Nachten"};
        tokenizer.setNames(tokens);
        tokenizer.setDelimiter(";");
        tokenizer.setStrict(true);
        reader.setLinesToSkip(1);
        DefaultLineMapper<AccommodationRoomAvailability> lineMapper = new DefaultLineMapper<AccommodationRoomAvailability>();
        lineMapper.setLineTokenizer(tokenizer);
        lineMapper.setFieldSetMapper(new RecordFieldSetMapper());
        reader.setLineMapper(lineMapper);
        return reader;
    }

    @Bean
    @Qualifier("multiResourceReader")
    @StepScope
    public MultiResourceItemReader<AccommodationRoomAvailability> multiResourceItemReader() throws Exception {
        MultiResourceItemReader<AccommodationRoomAvailability> resourceItemReader = new MultiResourceItemReader<AccommodationRoomAvailability>();
        resourceItemReader.setResources(inputResources);
        resourceItemReader.setDelegate(itemReader());
        resourceItemReader.setStrict(false);
        resourceItemReader.setSaveState(false);
        // resourceItemReader.read();
        return resourceItemReader;
    }

    @Bean
    public ItemProcessor<AccommodationRoomAvailability, String> itemProcessor() {
        return new AvailabilityProcessor();
    }

    @Bean
    public ItemWriter itemWriter() {
        return new ItemWriter() {
            @Override
            public void write(List list) throws Exception {
            }
        };
    }

    @Bean
    protected Step step1(@Qualifier("multiResourceReader") MultiResourceItemReader<AccommodationRoomAvailability> reader, ItemProcessor<AccommodationRoomAvailability, String> processor,
            ItemWriter writer) {
        return steps.get("step1")/*.listener(new StepListener())*/.<AccommodationRoomAvailability, String>chunk(30000).reader(reader)
                .processor(processor)
                .writer(writer)
                .build();
    }

    @Bean
    public Step step2() throws IOException {
        FileDeletingTasklet task = new FileDeletingTasklet();
        task.setResources(inputResources);
        return stepBuilderFactory.get("step2")
                .tasklet(task)
                .build();
    }

    @Bean(name = "job")
    public Job job(@Qualifier("step1") Step step1, Step step2) throws IOException {
        return jobs.get("job")
                .start(step1).on("*").to(step2).end()
                // .flow(step1).on("").to(step2()).end()
                .build();
    }
}
Once your application context is created, the resources injected with @Value("file:ftp-inbound/*.csv") will be the same during the whole lifetime of your app. That's why the reader always reads the same values.
You need to pass these resources as a parameter to your job and late-bind them in your reader with step scope. In your example it would be something like:
@Bean
@Qualifier("multiResourceReader")
@StepScope
public MultiResourceItemReader<AccommodationRoomAvailability> multiResourceItemReader(@Value("#{jobParameters['inputResources']}") Resource[] inputResources) throws Exception {
    MultiResourceItemReader<AccommodationRoomAvailability> resourceItemReader = new MultiResourceItemReader<AccommodationRoomAvailability>();
    resourceItemReader.setResources(inputResources);
    resourceItemReader.setDelegate(itemReader());
    resourceItemReader.setStrict(false);
    resourceItemReader.setSaveState(false);
    return resourceItemReader;
}
Then pass input resources as a parameter to your job:
JobParameters jobParameters = new JobParametersBuilder()
        .addString("inputResources", "file:ftp-inbound/*.csv")
        .toJobParameters();
currently with this setup its still using the same instance for each job run
That's because your resources are always the same when they are injected into a field of your configuration class. If you use the job parameters approach I mentioned in the previous example, you will get a different instance when you run the job with a different set of files.
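For completeness, a minimal launching sketch under that approach; the extra timestamp parameter is not from the original question and is only there to make each JobInstance unique, so repeated runs over the same pattern are not rejected as already complete:

JobParameters jobParameters = new JobParametersBuilder()
        .addString("inputResources", "file:ftp-inbound/*.csv")
        .addLong("launchTime", System.currentTimeMillis()) // keeps each JobInstance unique
        .toJobParameters();
JobExecution execution = jobLauncher.run(job, jobParameters);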

method annotation with @BeforeStep not getting called

My goal is to pass some values from a tasklet to another step, which consists of an ItemReader, processor and writer. Based on the reference documentation, I should use ExecutionContextPromotionListener. However, for some reason @BeforeStep is not being called. This is what I have.
Tasklet
@Component
public class RequestTasklet implements Tasklet {

    @Autowired
    private HistoryRepository historyRepository;

    @Override
    public RepeatStatus execute(StepContribution sc, ChunkContext cc) throws Exception {
        List<History> requests
                = historyRepository.findHistory();
        ExecutionContext stepContext = cc.getStepContext().getStepExecution().getJobExecution().getExecutionContext();
        stepContext.put("someKey", requests);
        return RepeatStatus.FINISHED;
    }
}
ItemReader
@Component
public class RequestReader implements ItemReader<History> {

    private List<History> requests;

    @Override
    public History read() throws UnexpectedInputException,
            ParseException,
            NonTransientResourceException {
        System.out.println("requests====>" + requests);
        if (CollectionUtils.isNotEmpty(requests)) {
            History request = requests.remove(0);
            return request;
        }
        return null;
    }

    @BeforeStep
    public void beforeStep(StepExecution stepExecution) {
        System.out.println("====================here======================");
        JobExecution jobExecution = stepExecution.getJobExecution();
        ExecutionContext jobContext = jobExecution.getExecutionContext();
        this.requests = (List<History>) jobContext.get("someKey");
    }
}
Configuration
#Bean(name = "update-job")
public Job updateUserAttributes(
JobBuilderFactory jbf,
StepBuilderFactory sbf,
ExecutionContextPromotionListener promoListener,
HistoryProcessor processor,
RequestReader reader,
HistoryWriter writer,
RequestTasklet loadRequestTasklet) {
Step preStep = sbf.get("load-request-tasklet")
.tasklet(loadRequestTasklet)
.listener(promoListener)
.build();
Step step = sbf.get("update-step")
.<History, History>chunk(2)
.reader(reader)
.processor(processor)
.writer(writer)
.taskExecutor(taskExecutor())
.build();
return jbf.get("update-job")
.incrementer(new RunIdIncrementer())
.start(preStep).next(step).
build();
}
#Bean
public ExecutionContextPromotionListener promoListener() {
ExecutionContextPromotionListener listener = new ExecutionContextPromotionListener();
listener.setKeys(new String[]{
"someKey",
});
return listener;
}
I also tried extending StepExecutionListenerSupport in the ItemReader but got the same result.
I googled around and looked at this question, which was answered as being caused by a Spring proxy, which is not my case: beforestep issue.
A little more information as I am testing: I set listener.setStrict(Boolean.TRUE); to check whether the keys are set, but I am getting
java.lang.IllegalArgumentException: The key [someKey] was not found in the Step's ExecutionContext.
at org.springframework.batch.core.listener.ExecutionContextPromotionListener.afterStep(ExecutionContextPromotionListener.java:61) ~[spring-batch-core-4.0.0.M1.jar:4.0.0.M1]
appreciate any help.
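One hedged observation rather than a definitive answer: the error above says the key was not found in the Step's ExecutionContext, while the tasklet shown earlier puts "someKey" into the JobExecution context; ExecutionContextPromotionListener promotes keys from the step context to the job context after the step completes. A minimal sketch of the tasklet writing to the step-level context instead, which is what the promotion listener expects:

@Override
public RepeatStatus execute(StepContribution sc, ChunkContext cc) throws Exception {
    List<History> requests = historyRepository.findHistory();
    // Put the data in the *step* ExecutionContext so the
    // ExecutionContextPromotionListener can promote it to the job context.
    ExecutionContext stepContext = cc.getStepContext()
            .getStepExecution()
            .getExecutionContext();
    stepContext.put("someKey", requests);
    return RepeatStatus.FINISHED;
}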

Spring boot batch partitioning JdbcCursorItemReader error

I have been unable to get this to work, even after following Victor Jabor's very comprehensive blog example. I have followed his configuration as he described and used all the latest dependencies. Like Victor, I am trying to read from one DB and write to another. I have this working without partitioning, but I need partitioning to improve performance, as I need to be able to read 5 to 10 million rows within 5 minutes.
The following seems to work:
1) ColumnRangePartitioner
2) TaskExecutorPartitionHandler builds the correct number of step tasks based on the grid size and spawns the correct number of threads
3) setPreparedStatementSetter with the values from the stepExecution set by the ColumnRangePartitioner
But when I run the application I get errors from JdbcCursorItemReader which are not consistent and which I don't understand. As a last resort I will have to debug JdbcCursorItemReader. I am hoping to get some help before that, and hopefully it will turn out to be a configuration issue.
ERROR:
Caused by: java.sql.SQLException: Exhausted Resultset
at oracle.jdbc.driver.OracleResultSetImpl.getInt(OracleResultSetImpl.java:901) ~[ojdbc6-11.2.0.2.0.jar:11.2.0.2.0]
at org.springframework.jdbc.support.JdbcUtils.getResultSetValue(JdbcUtils.java:160) ~[spring-jdbc-4.3.4.RELEASE.jar:4.3.4.RELEASE]
at org.springframework.jdbc.core.BeanPropertyRowMapper.getColumnValue(BeanPropertyRowMapper.java:370) ~[spring-jdbc-4.3.4.RELEASE.jar:4.3.4.RELEASE]
at org.springframework.jdbc.core.BeanPropertyRowMapper.mapRow(BeanPropertyRowMapper.java:291) ~[spring-jdbc-4.3.4.RELEASE.jar:4.3.4.RELEASE]
at org.springframework.batch.item.database.JdbcCursorItemReader.readCursor(JdbcCursorItemReader.java:139) ~[spring-batch-infrastructure-3.0.7.RELEASE.jar:3.0.7.RELEASE]
Configuration classes:
@Configuration
@EnableBatchProcessing
public class BatchConfiguration {

    @Bean
    public ItemProcessor<Archive, Archive> processor(@Value("${etl.region}") String region) {
        return new ArchiveProcessor(region);
    }

    @Bean
    public ItemWriter<Archive> writer(@Qualifier(value = "postgres") DataSource dataSource) {
        JdbcBatchItemWriter<Archive> writer = new JdbcBatchItemWriter<>();
        writer.setSql("insert into tdw_src.archive (id) " +
                "values (:id)");
        writer.setDataSource(dataSource);
        writer.setItemSqlParameterSourceProvider(new org.springframework.batch.item.database.
                BeanPropertyItemSqlParameterSourceProvider<>());
        return writer;
    }

    @Bean
    public Partitioner archivePartitioner(@Qualifier(value = "gmDataSource") DataSource dataSource,
                                          @Value("ROWNUM") String column,
                                          @Value("archive") String table,
                                          @Value("${gm.datasource.username}") String schema) {
        return new ColumnRangePartitioner(dataSource, column, schema + "." + table);
    }

    @Bean
    public Job archiveJob(JobBuilderFactory jobs, Step partitionerStep, JobExecutionListener listener) {
        return jobs.get("archiveJob")
                .preventRestart()
                .incrementer(new RunIdIncrementer())
                .listener(listener)
                .start(partitionerStep)
                .build();
    }

    @Bean
    public Step partitionerStep(StepBuilderFactory stepBuilderFactory,
                                Partitioner archivePartitioner,
                                Step step1,
                                @Value("${spring.batch.gridsize}") int gridSize) {
        return stepBuilderFactory.get("partitionerStep")
                .partitioner(step1)
                .partitioner("step1", archivePartitioner)
                .gridSize(gridSize)
                .taskExecutor(taskExecutor())
                .build();
    }

    @Bean(name = "step1")
    public Step step1(StepBuilderFactory stepBuilderFactory, ItemReader<Archive> customReader,
                      ItemWriter<Archive> writer, ItemProcessor<Archive, Archive> processor) {
        return stepBuilderFactory.get("step1")
                .listener(customReader)
                .<Archive, Archive>chunk(5)
                .reader(customReader)
                .processor(processor)
                .writer(writer)
                .build();
    }

    @Bean
    public TaskExecutor taskExecutor() {
        return new SimpleAsyncTaskExecutor();
    }

    @Bean
    public SimpleJobLauncher getJobLauncher(JobRepository jobRepository) {
        SimpleJobLauncher jobLauncher = new SimpleJobLauncher();
        jobLauncher.setJobRepository(jobRepository);
        return jobLauncher;
    }
Custom Reader:
public class CustomReader extends JdbcCursorItemReader<Archive> implements StepExecutionListener {

    private StepExecution stepExecution;

    @Autowired
    public CustomReader(@Qualifier(value = "gmDataSource") DataSource geomangerDataSource,
                        @Value("${gm.datasource.username}") String schema) throws Exception {
        super();
        this.setSql("SELECT TMP.* FROM (SELECT ROWNUM AS ID_PAGINATION, id FROM " + schema + ".archive) TMP " +
                "WHERE TMP.ID_PAGINATION >= ? AND TMP.ID_PAGINATION <= ?");
        this.setDataSource(geomangerDataSource);
        BeanPropertyRowMapper<Archive> rowMapper = new BeanPropertyRowMapper<>(Archive.class);
        this.setRowMapper(rowMapper);
        this.setFetchSize(5);
        this.setSaveState(false);
        this.setVerifyCursorPosition(false);
        // not sure if this is needed? this.afterPropertiesSet();
    }

    @Override
    public synchronized void beforeStep(StepExecution stepExecution) {
        this.stepExecution = stepExecution;
        this.setPreparedStatementSetter(getPreparedStatementSetter());
    }

    private PreparedStatementSetter getPreparedStatementSetter() {
        ListPreparedStatementSetter listPreparedStatementSetter = new ListPreparedStatementSetter();
        List<Integer> list = new ArrayList<>();
        list.add(stepExecution.getExecutionContext().getInt("minValue"));
        list.add(stepExecution.getExecutionContext().getInt("maxValue"));
        listPreparedStatementSetter.setParameters(list);
        LOGGER.debug("getPreparedStatementSetter list: " + list);
        return listPreparedStatementSetter;
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        return null;
    }
}
I've got this all working.
First, I needed to order the select statement in my CustomReader so the ROWNUM values remain the same for all threads, and lastly I had to scope the beans with @StepScope for each bean used in the step.
In reality I won't be using ROWNUM, since it requires an ordered result set, which hurts performance; I will use a PK column instead to get the best performance.
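A minimal sketch of those two fixes, under the assumption that the archive table has an id column that can be ordered on; the bean definition below is illustrative, not the poster's final code:

@Bean
@StepScope
public JdbcCursorItemReader<Archive> customReader(
        @Qualifier("gmDataSource") DataSource dataSource,
        @Value("#{stepExecutionContext['minValue']}") Integer minValue,
        @Value("#{stepExecutionContext['maxValue']}") Integer maxValue) {
    JdbcCursorItemReader<Archive> reader = new JdbcCursorItemReader<>();
    reader.setDataSource(dataSource);
    // Order the inner query so ROWNUM (ID_PAGINATION) is stable for every partition.
    reader.setSql("SELECT TMP.* FROM (SELECT ROWNUM AS ID_PAGINATION, id FROM " +
            "(SELECT id FROM archive ORDER BY id)) TMP " +
            "WHERE TMP.ID_PAGINATION >= ? AND TMP.ID_PAGINATION <= ?");
    // Bind the partition's min/max values from the step execution context.
    ListPreparedStatementSetter setter = new ListPreparedStatementSetter();
    setter.setParameters(Arrays.asList(minValue, maxValue));
    reader.setPreparedStatementSetter(setter);
    reader.setRowMapper(new BeanPropertyRowMapper<>(Archive.class));
    reader.setSaveState(false);
    return reader;
}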

spring batch with annotations

I'm currently trying to create a batch job with the Spring annotations, but the batch is never called. No error occurs; my batch just isn't called. It's a simple batch that retrieves values from the database and adds messages to a queue (RabbitMQ).
The main configuration class:
@Configuration
@EnableBatchProcessing
public class BatchInfrastructureConfiguration {

    @Bean
    public JobLauncher getJobLauncher() throws Exception {
        SimpleJobLauncher jobLauncher = new SimpleJobLauncher();
        jobLauncher.setJobRepository(getJobRepository());
        jobLauncher.afterPropertiesSet();
        return jobLauncher;
    }

    public JobRepository getJobRepository() throws Exception {
        MapJobRepositoryFactoryBean factory = new MapJobRepositoryFactoryBean();
        factory.setTransactionManager(new ResourcelessTransactionManager());
        factory.afterPropertiesSet();
        return (JobRepository) factory.getObject();
    }
}
The configuration class specific to my batch:
@Configuration
@Import(BatchInfrastructureConfiguration.class)
public class PurchaseStatusBatchConfiguration {

    @Inject
    private JobBuilderFactory jobBuilders;

    @Inject
    private StepBuilderFactory stepBuilders;

    @Bean
    public Job purchaseStatusJob() {
        return jobBuilders.get("purchaseStatusJob")
                .start(step())
                .build();
    }

    @Bean
    public Step step() {
        return stepBuilders.get("purchaseStatusStep")
                .tasklet(new PurchaseStatusBatch())
                .build();
    }
}
The batch class:
public class PurchaseStatusBatch implements Tasklet {

    @Inject
    private PurchaseRepository purchaseRepository;

    @Inject
    @Qualifier(ApplicationConst.BEAN_QUALIFIER_PURCHASE_QUEUE)
    private RabbitTemplate rabbitTemplate;

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
        PurchaseDto purchaseDto;
        PurchaseMessage purchaseMessage;
        List<Purchase> notVerifiedPurchase = purchaseRepository.findByVerified(false);
        for (Purchase purchase : notVerifiedPurchase) {
            purchaseDto = new PurchaseDto();
            purchaseDto.setOrderId(purchase.getOrderId());
            purchaseDto.setProductId(purchase.getProductId());
            purchaseDto.setPurchaseToken(purchase.getPurchaseToken());
            purchaseDto.setUserScrapbookKey(purchase.getUserScrapbookKey());
            purchaseMessage = new PurchaseMessage();
            purchaseMessage.setPurchaseDto(purchaseDto);
            rabbitTemplate.convertAndSend(purchaseMessage);
        }
        return null;
    }
}
The job runner (class calling the batch):
@Service
public class PurchaseStatusJobRunner {

    @Inject
    private JobLocator jobLocator;

    @Inject
    private JobLauncher jobLauncher;

    //@Scheduled(fixedDelay = 3000L)
    //@Scheduled(cron = "* * * * *") // every 1 minute
    @Scheduled(fixedDelay = 3000L)
    public void runJob() throws JobExecutionAlreadyRunningException, JobRestartException, JobInstanceAlreadyCompleteException, JobParametersInvalidException, NoSuchJobException {
        jobLauncher.run(jobLocator.getJob("purchaseStatusJob"), new JobParameters());
    }
}
Short answer: the problem is that your PurchaseStatusBatch instance is not a Spring bean, so all its attributes are null (they have not been injected).
Try this:
@Bean
public Step step() {
    return stepBuilders.get("purchaseStatusStep")
            .tasklet(purchaseStatusBatch())
            .build();
}

@Bean
public PurchaseStatusBatch purchaseStatusBatch() {
    return new PurchaseStatusBatch();
}
When you want feedback on your job execution, just use the JobExecution instance returned by the JobLauncher. You can get the ExitStatus, you can get all the exceptions caught by the JobLauncher, and more.
Another way to get feedback would be to provide a real database for your JobRepository and then check the execution statuses in it.
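For example, a minimal sketch of inspecting the returned JobExecution in the runner shown above (illustrative only):

JobExecution execution = jobLauncher.run(jobLocator.getJob("purchaseStatusJob"), new JobParameters());
System.out.println("Status: " + execution.getStatus());          // e.g. COMPLETED or FAILED
System.out.println("Exit status: " + execution.getExitStatus());
execution.getAllFailureExceptions().forEach(Throwable::printStackTrace);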
