I have been using FlatFileItemReader and processing files one by one. Now I am trying to use MultiResourceItemReader and supplying all the files at once; after filtering there are 3 CSV files (there can be at most 50).
While running the job, even though all the files are supplied, only 1 file is processed. Data is read from the CSV files and saved to the database, but on verifying the results only the data of 1 file has been saved. I couldn't find what I am doing wrong. My code for the MultiResourceItemReader is below:
@Bean(name = "multiItemReader")
@StepScope
public MultiResourceItemReader<CDSBrokerBOIDMappingEntity> multiResourceItemReader(
        @Value("#{jobParameters[filenameStartPattern]}") String filenameStartPattern,
        @Value("#{jobParameters[filenameEndPattern]}") String filenameEndPattern,
        @Value("#{jobParameters[localDirectory]}") String localDirectory) throws Exception {
    String[] localDirectories = localDirectory.split(",");
    List<Resource> inputResources = Collections.synchronizedList(new ArrayList<>());
    for (String localDirectory1 : localDirectories) {
        try (Stream<Path> walk = Files.walk(Paths.get(localDirectory1), 1)) {
            walk.filter(Files::isRegularFile) // is a file
                    .filter(p -> p.getFileName().toString().startsWith(filenameStartPattern)
                            && p.getFileName().toString().endsWith(filenameEndPattern))
                    .findAny().ifPresentOrElse(f -> {
                        log.info("CSV FILE => " + f.getFileName().toString());
                        inputResources.add(new FileSystemResource(f));
                    }, () -> log.info("No file found"));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    log.info("No. of files => " + inputResources.size());
    MultiResourceItemReader<CDSBrokerBOIDMappingEntity> resourceItemReader = new MultiResourceItemReader<>();
    resourceItemReader.setResources(inputResources.toArray(Resource[]::new));
    resourceItemReader.setDelegate(importReader());
    resourceItemReader.setStrict(true);
    return resourceItemReader;
}
And the FlatFileItemReader code is:
@Bean
public FlatFileItemReader<CDSBrokerBOIDMappingEntity> importReader() throws Exception {
    FlatFileItemReader<CDSBrokerBOIDMappingEntity> reader = new FlatFileItemReader<>();
    reader.setLinesToSkip(1);
    reader.setLineMapper(new DefaultLineMapper<CDSBrokerBOIDMappingEntity>() {{
        setLineTokenizer(new DelimitedLineTokenizer() {{
            setNames(new String[]{"BOID", "CLIENT_MEMBER_CODE", "BROKER_ID", "LAST_MODIFIED_DATE", "ISVALID"});
        }});
        setFieldSetMapper(new BeanWrapperFieldSetMapper<CDSBrokerBOIDMappingEntity>() {{
            setTargetType(CDSBrokerBOIDMappingEntity.class);
        }});
    }});
    reader.setStrict(true);
    reader.afterPropertiesSet();
    return reader;
}
The writer is:
@Bean
public ItemWriter<CDSBrokerBOIDMappingEntity> writer() {
    // log.info("Writer current thread. {}", Thread.currentThread().getName());
    RepositoryItemWriter<CDSBrokerBOIDMappingEntity> writer = new RepositoryItemWriter<>();
    writer.setRepository(cdsBrokerBOIDMappingRepository);
    // writer.setMethodName("save");
    try {
        writer.afterPropertiesSet();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return writer;
}
And the step and job:
@Bean
public Job importUserJob(MultiResourceItemReader<CDSBrokerBOIDMappingEntity> importReader, JobCompletionNotificationListener listener) {
    return jobBuilderFactory
            .get("importUserJob")
            .incrementer(new RunIdIncrementer())
            .listener(listener)
            .flow(step1(importReader))
            .end()
            .build();
}

@Bean
public Step step1(@Qualifier("multiItemReader") MultiResourceItemReader<CDSBrokerBOIDMappingEntity> importReader) {
    return stepBuilderFactory.get("step1").<CDSBrokerBOIDMappingEntity, CDSBrokerBOIDMappingEntity>chunk(200)
            .reader(importReader)
            .processor(processor())
            .writer(writer())
            .listener(stepListener())
            .taskExecutor(taskExecutor())
            .build();
}
I tried multiple times, but only 1 file is read. Is there anything wrong in the code, or is my approach wrong?
The MultiResourceItemReader works as expected with Spring Batch v4.3.3. Here is a quick example:
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.file.FlatFileItemReader;
import org.springframework.batch.item.file.MultiResourceItemReader;
import org.springframework.batch.item.file.builder.FlatFileItemReaderBuilder;
import org.springframework.batch.item.file.mapping.PassThroughLineMapper;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.FileSystemResource;
import org.springframework.core.io.Resource;
@Configuration
@EnableBatchProcessing
public class SO68125366 {

    @Bean
    public MultiResourceItemReader<String> itemReader() {
        FlatFileItemReader<String> itemReader = new FlatFileItemReaderBuilder<String>()
                .name("itemReader")
                .lineMapper(new PassThroughLineMapper())
                .build();
        MultiResourceItemReader<String> multiResourceItemReader = new MultiResourceItemReader<>();
        multiResourceItemReader.setDelegate(itemReader);
        Resource resource1 = new FileSystemResource("file1.txt");
        Resource resource2 = new FileSystemResource("file2.txt");
        multiResourceItemReader.setResources(new Resource[] {resource1, resource2});
        return multiResourceItemReader;
    }

    @Bean
    public ItemWriter<String> itemWriter() {
        return items -> items.forEach(System.out::println);
    }

    @Bean
    public Job job(JobBuilderFactory jobs, StepBuilderFactory steps) {
        return jobs.get("job")
                .start(steps.get("step")
                        .<String, String>chunk(5)
                        .reader(itemReader())
                        .writer(itemWriter())
                        .build())
                .build();
    }

    public static void main(String[] args) throws Exception {
        ApplicationContext context = new AnnotationConfigApplicationContext(SO68125366.class);
        JobLauncher jobLauncher = context.getBean(JobLauncher.class);
        Job job = context.getBean(Job.class);
        jobLauncher.run(job, new JobParameters());
    }
}
With two files file1.txt and file2.txt containing respectively hello and world, the sample prints:
hello
world
which means the MultiResourceItemReader read both resources, not only one as you mentioned. The complete sample can be found in this repo.
Edited to update my latest configuration: Is this on the right track for my use-case?
I have a flow that's supposed to go like this:
The FileRetrievingTasklet retrieves a remote file and places the
"type" of that file in the execution context.
If the file is of type "YEARLY", proceed to the yearlyStep().
If the file is of type "QUARTERLY", proceed to the quarterlyStep().
Finish.
This seems so simple, but what I have doesn't work. The job finishes with FAILED after the tasklet step.
Here's my job config:
@Bean
public Job fundsDistributionJob() {
    return jobBuilderFactory
            .get("fundsDistributionJob")
            .start(retrieveFileStep(stepBuilderFactory))
            .on("YEARLY").to(yearEndStep())
            .from(retrieveFileStep(stepBuilderFactory))
            .on("QUARTERLY").to(quarterlyStep())
            .end()
            .listener(new FileWorkerJobExecutionListener())
            .build();
}
And one of the steps:
@Bean
public Step quarterlyStep() {
    return stepBuilderFactory.get("quarterlyStep")
            .<Item, Item>chunk(10)
            .reader(quarterlyReader())
            .processor(processor())
            .writer(writer())
            .listener(new StepItemReadListener())
            .faultTolerant()
            .skipPolicy(new DistSkipPolicy())
            .build();
}
Can someone tell me what's missing?
The approach with a decider (before your edit) is the way to go. You just had an issue with your flow definition. Here is an example that works as you described:
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.job.flow.FlowExecutionStatus;
import org.springframework.batch.core.job.flow.JobExecutionDecider;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
@EnableBatchProcessing
public class MyJob {

    private final JobBuilderFactory jobs;
    private final StepBuilderFactory steps;

    public MyJob(JobBuilderFactory jobs, StepBuilderFactory steps) {
        this.jobs = jobs;
        this.steps = steps;
    }

    @Bean
    public Step retrieveFileStep() {
        return steps.get("retrieveFileStep")
                .tasklet((contribution, chunkContext) -> {
                    System.out.println("Downloading file..");
                    chunkContext.getStepContext().getStepExecution()
                            .getExecutionContext().put("type", Type.YEARLY);
                    return RepeatStatus.FINISHED;
                })
                .build();
    }

    @Bean
    public JobExecutionDecider fileMapperDecider() {
        return (jobExecution, stepExecution) -> {
            Type type = (Type) stepExecution.getExecutionContext().get("type");
            return new FlowExecutionStatus(type == Type.YEARLY ? "yearly" : "quarterly");
        };
    }

    @Bean
    public Step yearlyStep() {
        return steps.get("yearlyStep")
                .tasklet((contribution, chunkContext) -> {
                    System.out.println("running yearlyStep");
                    return RepeatStatus.FINISHED;
                })
                .build();
    }

    @Bean
    public Step quarterlyStep() {
        return steps.get("quarterlyStep")
                .tasklet((contribution, chunkContext) -> {
                    System.out.println("running quarterlyStep");
                    return RepeatStatus.FINISHED;
                })
                .build();
    }

    @Bean
    public Job job() {
        return jobs.get("job")
                .start(retrieveFileStep())
                .next(fileMapperDecider())
                .from(fileMapperDecider()).on("yearly").to(yearlyStep())
                .from(fileMapperDecider()).on("quarterly").to(quarterlyStep())
                .build()
                .build();
    }

    public static void main(String[] args) throws Exception {
        ApplicationContext context = new AnnotationConfigApplicationContext(MyJob.class);
        JobLauncher jobLauncher = context.getBean(JobLauncher.class);
        Job job = context.getBean(Job.class);
        jobLauncher.run(job, new JobParameters());
    }

    enum Type {
        YEARLY, QUARTERLY
    }
}
It prints:
Downloading file..
running yearlyStep
If you change the type attribute in the execution context to Type.QUARTERLY in retrieveFileStep, it prints:
Downloading file..
running quarterlyStep
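If you want to drive the branch from a job parameter instead of hardcoding the type, a minimal sketch (the "type" parameter name is an assumption, not part of the original question) could read it inside the tasklet:

    // Inside retrieveFileStep's tasklet: read the type from the job parameters
    // (the "type" parameter name is an assumption) and expose it to the decider.
    String type = (String) chunkContext.getStepContext().getJobParameters().get("type");
    chunkContext.getStepContext().getStepExecution()
            .getExecutionContext().put("type", Type.valueOf(type));

The decider itself stays unchanged, since it only looks at the step execution context.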
Perfect solution for dynamic step creation in Spring Batch.
Just that I am not able to get parameters into this, which would decide which steps need to be executed. How can I pass a steps array?
@Bean
public Job job() {
    Step[] stepsArray = // create your steps array or pass it as a parameter
    SimpleJobBuilder jobBuilder = jobBuilderFactory.get("mainCalculationJob")
            .incrementer(new RunIdIncrementer())
            .start(truncTableTaskletStep());
    for (Step step : stepsArray) {
        jobBuilder.next(step);
    }
    return jobBuilder.build();
}
Thanks. I am looking for how to pass this steps array as a parameter and receive it in the function above.
Here is an example of how to pass the steps array as a parameter:
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.job.builder.SimpleJobBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
@EnableBatchProcessing
public class MyJobConfiguration {

    private final JobBuilderFactory jobBuilderFactory;
    private final StepBuilderFactory stepBuilderFactory;

    public MyJobConfiguration(JobBuilderFactory jobBuilderFactory, StepBuilderFactory stepBuilderFactory) {
        this.jobBuilderFactory = jobBuilderFactory;
        this.stepBuilderFactory = stepBuilderFactory;
    }

    public Step initialStep() {
        return stepBuilderFactory.get("initialStep")
                .tasklet((contribution, chunkContext) -> {
                    System.out.println("initial step");
                    return RepeatStatus.FINISHED;
                })
                .build();
    }

    @Bean
    public Step[] dynamicSteps() {
        // load steps sequence from db and create steps here
        Step step1 = stepBuilderFactory.get("step1")
                .tasklet((contribution, chunkContext) -> {
                    System.out.println("hello");
                    return RepeatStatus.FINISHED;
                })
                .build();
        Step step2 = stepBuilderFactory.get("step2")
                .tasklet((contribution, chunkContext) -> {
                    System.out.println("world");
                    return RepeatStatus.FINISHED;
                })
                .build();
        return new Step[]{step1, step2};
    }

    @Bean
    public Job job(Step[] dynamicSteps) {
        SimpleJobBuilder jobBuilder = jobBuilderFactory.get("job")
                .start(initialStep());
        for (Step step : dynamicSteps) {
            jobBuilder.next(step);
        }
        return jobBuilder.build();
    }

    public static void main(String[] args) throws Exception {
        ApplicationContext context = new AnnotationConfigApplicationContext(MyJobConfiguration.class);
        JobLauncher jobLauncher = context.getBean(JobLauncher.class);
        Job job = context.getBean(Job.class);
        jobLauncher.run(job, new JobParameters());
    }
}
Nothing here is specific to Spring Batch; this is plain Spring dependency injection: passing an array of beans of type Step as a parameter to another bean definition method (of type Job).
I'm reading a CSV file using a MultiResourceItemReader and I've set the skip limit to 10. When the limit is exceeded I want to catch the SkipLimitExceededException and throw my own custom exception with a message like "Invalid csv". Where or how do I catch it?
try {
    log.info("Running job to insert batch fcm: {} into database.", id);
    jobLauncher
            .run(importJob, new JobParametersBuilder()
                    .addString("fullPathFileName", TMP_DIR)
                    .addString("batch_fcm_id", String.valueOf(id))
                    .addLong("time", System.currentTimeMillis())
                    .toJobParameters());
} catch (...) {...}
I cannot catch it here. Is it because I'm using a MultiResourceItemReader and the asynchronous processing doesn't allow me to catch it here?
My job is as follows:
@Bean(name = "fcmJob")
Job importJob(@Qualifier(MR_ITEM_READER) Reader reader,
              @Qualifier(JDBC_WRITER) JdbcBatchItemWriter jdbcBatchItemWriter,
              @Qualifier("fcmTaskExecutor") TaskExecutor taskExecutor) {
    Step writeToDatabase = stepBuilderFactory.get("file-database") // name of step
            .<FcmIdResource, FcmIdResource>chunk(csvChunkSize) // <input as, output as>
            .reader(reader)
            .faultTolerant()
            .skipLimit(10)
            .skip(UncategorizedSQLException.class)
            .noSkip(FileNotFoundException.class)
            .writer(jdbcBatchItemWriter)
            .taskExecutor(taskExecutor)
            .throttleLimit(20)
            .build();
    return jobBuilderFactory.get("jobBuilderFactory") // name of job builder factory
            .incrementer(new RunIdIncrementer())
            .start(writeToDatabase)
            .on("*")
            .to(deleteTemporaryFiles())
            .end()
            .build();
}
I have tried using ItemReadListener, SkipPolicy, and SkipListener, but they cannot throw an exception. Is there any other way?
The exception you are looking for is not thrown by the job, you can get it from the job execution using JobExecution#getAllFailureExceptions.
So in your example, instead of doing:
try {
    jobLauncher.run(job, new JobParameters());
} catch (Exception e) {
    //...
}
You should do:
JobExecution jobExecution = jobLauncher.run(job, new JobParameters());
List<Throwable> allFailureExceptions = jobExecution.getAllFailureExceptions();
In your case, SkipLimitExceededException will be one of allFailureExceptions.
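If you want to translate that into the custom "Invalid csv" exception from your question, a minimal sketch could inspect the failure exceptions after the run (InvalidCsvException is a hypothetical exception type, not from the original code):

    JobExecution jobExecution = jobLauncher.run(importJob, jobParameters);
    // Check whether the skip limit was exceeded during the run
    boolean skipLimitExceeded = jobExecution.getAllFailureExceptions().stream()
            .anyMatch(e -> e instanceof SkipLimitExceededException);
    if (skipLimitExceeded) {
        // InvalidCsvException is a hypothetical custom exception
        throw new InvalidCsvException("Invalid csv");
    }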
EDIT: Adding an example showing that SkipLimitExceededException is part of allFailureExceptions:
import java.util.Arrays;
import java.util.List;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.support.ListItemReader;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
@EnableBatchProcessing
public class MyJob {

    @Autowired
    private JobBuilderFactory jobs;

    @Autowired
    private StepBuilderFactory steps;

    @Bean
    public ItemReader<Integer> itemReader() {
        return new ListItemReader<>(Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10));
    }

    @Bean
    public ItemProcessor<Integer, Integer> itemProcessor() {
        return item -> {
            if (item % 3 == 0) {
                throw new IllegalArgumentException("no multiples of three here! " + item);
            }
            return item;
        };
    }

    @Bean
    public ItemWriter<Integer> itemWriter() {
        return items -> {
            for (Integer item : items) {
                System.out.println("item = " + item);
            }
        };
    }

    @Bean
    public Step step() {
        return steps.get("step")
                .<Integer, Integer>chunk(2)
                .reader(itemReader())
                .processor(itemProcessor())
                .writer(itemWriter())
                .faultTolerant()
                .skip(IllegalArgumentException.class)
                .skipLimit(2)
                .build();
    }

    @Bean
    public Job job() {
        return jobs.get("job")
                .start(step())
                .build();
    }

    public static void main(String[] args) throws Exception {
        ApplicationContext context = new AnnotationConfigApplicationContext(MyJob.class);
        JobLauncher jobLauncher = context.getBean(JobLauncher.class);
        Job job = context.getBean(Job.class);
        JobExecution jobExecution = jobLauncher.run(job, new JobParameters());
        List<Throwable> allFailureExceptions = jobExecution.getAllFailureExceptions();
        for (Throwable failureException : allFailureExceptions) {
            System.out.println("failureException = " + failureException);
        }
    }
}
This sample prints:
item = 1
item = 2
item = 4
item = 5
item = 7
item = 8
failureException = org.springframework.batch.core.step.skip.SkipLimitExceededException: Skip limit of '2' exceeded
I need to read from a table, then split the results by type and process them in parallel. Each parallel step will create a file; if all succeed, these files are moved together to an output folder. If any of these steps fails, none of the files go to the output folder and the whole job fails. Help with / a code example would be much appreciated for a batch noob.
You can partition the data by type using a partition step. Partitions will be processed in parallel, and each partition creates a file. Then you add a step after the partition step to clean up the files if any of the partitions fail. Here is a quick example you can try:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.core.partition.support.Partitioner;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.item.ExecutionContext;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.task.SimpleAsyncTaskExecutor;
@Configuration
@EnableBatchProcessing
public class PartitionJobSample {

    @Autowired
    private JobBuilderFactory jobs;

    @Autowired
    private StepBuilderFactory steps;

    @Bean
    public Step step1() {
        return steps.get("step1")
                .partitioner(workerStep().getName(), partitioner())
                .step(workerStep())
                .gridSize(3)
                .taskExecutor(taskExecutor())
                .build();
    }

    @Bean
    public SimpleAsyncTaskExecutor taskExecutor() {
        return new SimpleAsyncTaskExecutor();
    }

    @Bean
    public Partitioner partitioner() {
        return gridSize -> {
            Map<String, ExecutionContext> map = new HashMap<>(gridSize);
            for (int i = 0; i < gridSize; i++) {
                ExecutionContext executionContext = new ExecutionContext();
                executionContext.put("data", "data" + i);
                String key = "partition" + i;
                map.put(key, executionContext);
            }
            return map;
        };
    }

    @Bean
    public Step workerStep() {
        return steps.get("workerStep")
                .tasklet(getTasklet(null))
                .build();
    }

    @Bean
    @StepScope
    public Tasklet getTasklet(@Value("#{stepExecutionContext['data']}") String partitionData) {
        return (contribution, chunkContext) -> {
            System.out.println(Thread.currentThread().getName() + " processing partitionData = " + partitionData);
            Files.createFile(Paths.get(partitionData + ".txt"));
            return RepeatStatus.FINISHED;
        };
    }

    @Bean
    public Step moveFilesStep() {
        return steps.get("moveFilesStep")
                .tasklet((contribution, chunkContext) -> {
                    System.out.println("moveFilesStep");
                    // add code to move files where needed
                    return RepeatStatus.FINISHED;
                })
                .build();
    }

    @Bean
    public Step cleanupFilesStep() {
        return steps.get("cleanupFilesStep")
                .tasklet((contribution, chunkContext) -> {
                    System.out.println("cleaning up..");
                    deleteFiles();
                    return RepeatStatus.FINISHED;
                })
                .build();
    }

    @Bean
    public Job job() {
        return jobs.get("job")
                .flow(step1()).on("FAILED").to(cleanupFilesStep())
                .from(step1()).on("*").to(moveFilesStep())
                .from(moveFilesStep()).on("*").end()
                .from(cleanupFilesStep()).on("*").fail()
                .build()
                .build();
    }

    public static void main(String[] args) throws Exception {
        deleteFiles();
        ApplicationContext context = new AnnotationConfigApplicationContext(PartitionJobSample.class);
        JobLauncher jobLauncher = context.getBean(JobLauncher.class);
        Job job = context.getBean(Job.class);
        jobLauncher.run(job, new JobParameters());
    }

    private static void deleteFiles() throws IOException {
        for (int i = 0; i <= 2; i++) {
            Files.deleteIfExists(Paths.get("data" + i + ".txt"));
        }
    }
}
This example creates three dummy partitions ("data0", "data1" and "data2"). Each partition creates a file. If all partitions finish correctly, you will have three files "data0.txt", "data1.txt" and "data2.txt", which will be moved by the moveFilesStep.
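The moveFilesStep tasklet is left as a stub above. As a minimal sketch of what its body could do (the "output" directory name is an assumption, not part of the original example), the generated files could be moved with java.nio:

    // Inside the moveFilesStep tasklet: move the generated partition files
    // to an "output" directory (the directory name is an assumption).
    Path outputDir = Paths.get("output");
    Files.createDirectories(outputDir);
    for (int i = 0; i <= 2; i++) {
        Path source = Paths.get("data" + i + ".txt");
        Files.move(source, outputDir.resolve(source.getFileName()), StandardCopyOption.REPLACE_EXISTING);
    }
    return RepeatStatus.FINISHED;

StandardCopyOption comes from java.nio.file.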
Now let's make one of the partitions fail, for example the partition handling data2:
@Bean
@StepScope
public Tasklet getTasklet(@Value("#{stepExecutionContext['data']}") String partitionData) {
    return (contribution, chunkContext) -> {
        if (partitionData.equals("data2")) {
            throw new Exception("Boom!");
        }
        System.out.println(Thread.currentThread().getName() + " processing partitionData = " + partitionData);
        Files.createFile(Paths.get(partitionData + ".txt"));
        return RepeatStatus.FINISHED;
    };
}
In this case, the cleanupFilesStep will be triggered and will delete all files.
Hope this helps.
I have a simple 3-step flow:
public Job myJob() {
    Step extract = extractorStep();
    Step process = filesProcessStep();
    Step cleanup = cleanupStep();
    return jobBuilderFactory.get("my-job")
            .flow(echo("Starting job"))
            .next(extract)
            .next(process)
            .next(cleanup)
            .next(echo("Ending job"))
            .end()
            .build();
}
Now I want to add error processing using the ExitStatus from StepExecutionListener.afterStep(). Any error should forward the flow to the cleanup step. So I changed the code to the below:
public Job myJob() {
    Step extract = extractorStep();
    Step process = filesProcessStep();
    Step cleanup = cleanupStep();
    return jobBuilderFactory.get("my-job")
            .start(echo("Starting batch job"))
            .next(extract).on(ExitStatus.FAILED.getExitCode()).to(cleanup)
            .from(extract).on("*").to(process)
            .next(process).on(ExitStatus.FAILED.getExitCode()).to(cleanup)
            .from(process).on("*").to(cleanup)
            .next(echo("End batch job"))
            .end()
            .build();
}
Now I have an infinite loop to the cleanup step.
I would like some help to correct this flow.
In your example, the flow is undefined starting from cleanup. You should specify that from cleanup the flow must continue to echo, using .from(cleanup).next(stop). Here is an example that works as you described:
import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
@EnableBatchProcessing
public class MyJob {

    @Autowired
    private JobBuilderFactory jobs;

    @Autowired
    private StepBuilderFactory steps;

    @Bean
    public Step extractorStep() {
        return steps.get("extractorStep")
                .tasklet((contribution, chunkContext) -> {
                    System.out.println("extractorStep");
                    return RepeatStatus.FINISHED;
                })
                .build();
    }

    @Bean
    public Step filesProcessStep() {
        return steps.get("filesProcessStep")
                .tasklet((contribution, chunkContext) -> {
                    System.out.println("filesProcessStep");
                    return RepeatStatus.FINISHED;
                })
                .build();
    }

    @Bean
    public Step cleanupStep() {
        return steps.get("cleanupStep")
                .tasklet((contribution, chunkContext) -> {
                    System.out.println("cleanupStep");
                    return RepeatStatus.FINISHED;
                })
                .build();
    }

    public Step echo(String message) {
        return steps.get("echo-" + message)
                .tasklet((contribution, chunkContext) -> {
                    System.out.println(message);
                    return RepeatStatus.FINISHED;
                })
                .build();
    }

    @Bean
    public Job job() {
        Step start = echo("Starting batch job");
        Step extract = extractorStep();
        Step process = filesProcessStep();
        Step cleanup = cleanupStep();
        Step stop = echo("End batch job");
        return jobs.get("job")
                .start(start).on("*").to(extract)
                .from(extract).on(ExitStatus.FAILED.getExitCode()).to(cleanup)
                .from(extract).on("*").to(process)
                .from(process).on("*").to(cleanup)
                .from(cleanup).next(stop)
                .build()
                .build();
    }

    public static void main(String[] args) throws Exception {
        ApplicationContext context = new AnnotationConfigApplicationContext(MyJob.class);
        JobLauncher jobLauncher = context.getBean(JobLauncher.class);
        Job job = context.getBean(Job.class);
        jobLauncher.run(job, new JobParameters());
    }
}
It prints:
Starting batch job
extractorStep
filesProcessStep
cleanupStep
End batch job
If the extractorStep fails, for example:
@Bean
public Step extractorStep() {
    return steps.get("extractorStep")
            .tasklet((contribution, chunkContext) -> {
                System.out.println("extractorStep");
                chunkContext.getStepContext().getStepExecution().setExitStatus(ExitStatus.FAILED);
                return RepeatStatus.FINISHED;
            })
            .build();
}
the flow will skip filesProcessStep and go to cleanup:
Starting batch job
extractorStep
cleanupStep
End batch job
Hope this helps.