Is there a way to catch SkipLimitExceededException in Spring Batch?

I'm reading a CSV file using a MultiResourceItemReader and I've set the skip limit to 10. When the limit is exceeded, I want to catch the SkipLimitExceededException and throw my own customized exception with a message like "Invalid csv". Where or how do I catch it?
try {
    log.info("Running job to insert batch fcm: {} into database.", id);
    jobLauncher
            .run(importJob, new JobParametersBuilder()
                    .addString("fullPathFileName", TMP_DIR)
                    .addString("batch_fcm_id", String.valueOf(id))
                    .addLong("time", System.currentTimeMillis())
                    .toJobParameters());
}
catch (...) { ... }
I cannot catch it here. Is it because I'm using a MultiResourceItemReader and the asynchronous processing doesn't allow me to catch it here?
My job is as follows:
#Bean(name = "fcmJob")
Job importJob(#Qualifier(MR_ITEM_READER) Reader reader,
#Qualifier(JDBC_WRITER) JdbcBatchItemWriter jdbcBatchItemWriter,
#Qualifier("fcmTaskExecutor") TaskExecutor taskExecutor) {
Step writeToDatabase = stepBuilderFactory.get("file-database")//name of step
.<FcmIdResource, FcmIdResource>chunk(csvChunkSize) // <input as, output as>
.reader(reader)
.faultTolerant()
.skipLimit(10)
.skip(UncategorizedSQLException.class)
.noSkip(FileNotFoundException.class)
.writer(jdbcBatchItemWriter)
.taskExecutor(taskExecutor)
.throttleLimit(20)
.build();
return jobBuilderFactory.get("jobBuilderFactory") //Name of job builder factory
.incrementer(new RunIdIncrementer())
.start(writeToDatabase)
.on("*")
.to(deleteTemporaryFiles())
.end()
.build();
}
I have tried using an ItemReadListener, a SkipPolicy and a SkipListener, but they cannot throw an exception. Is there any other way?

The exception you are looking for is not thrown by the job; you can get it from the job execution using JobExecution#getAllFailureExceptions.
So in your example, instead of doing:
try {
    jobLauncher.run(job, new JobParameters());
} catch (Exception e) {
    //...
}
You should do:
JobExecution jobExecution = jobLauncher.run(job, new JobParameters());
List<Throwable> allFailureExceptions = jobExecution.getAllFailureExceptions();
In your case, SkipLimitExceededException will be one of allFailureExceptions.
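If the goal is to surface a custom "Invalid csv" error to the caller, you can inspect the failure exceptions after launching the job and rethrow. A minimal sketch, assuming a hypothetical InvalidCsvException of your own:

JobExecution jobExecution = jobLauncher.run(importJob, jobParameters);
// When the skip limit is exceeded, the step (and hence the job) ends as FAILED,
// and SkipLimitExceededException is recorded among the failure exceptions.
boolean skipLimitExceeded = jobExecution.getAllFailureExceptions().stream()
        .anyMatch(e -> e instanceof SkipLimitExceededException);
if (skipLimitExceeded) {
    throw new InvalidCsvException("Invalid csv"); // hypothetical custom exception
}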
EDIT: Adding an example showing that SkipLimitExceededException is part of allFailureExceptions:
import java.util.Arrays;
import java.util.List;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.support.ListItemReader;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
@EnableBatchProcessing
public class MyJob {

    @Autowired
    private JobBuilderFactory jobs;

    @Autowired
    private StepBuilderFactory steps;

    @Bean
    public ItemReader<Integer> itemReader() {
        return new ListItemReader<>(Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10));
    }

    @Bean
    public ItemProcessor<Integer, Integer> itemProcessor() {
        return item -> {
            if (item % 3 == 0) {
                throw new IllegalArgumentException("no multiples of three here! " + item);
            }
            return item;
        };
    }

    @Bean
    public ItemWriter<Integer> itemWriter() {
        return items -> {
            for (Integer item : items) {
                System.out.println("item = " + item);
            }
        };
    }

    @Bean
    public Step step() {
        return steps.get("step")
                .<Integer, Integer>chunk(2)
                .reader(itemReader())
                .processor(itemProcessor())
                .writer(itemWriter())
                .faultTolerant()
                .skip(IllegalArgumentException.class)
                .skipLimit(2)
                .build();
    }

    @Bean
    public Job job() {
        return jobs.get("job")
                .start(step())
                .build();
    }

    public static void main(String[] args) throws Exception {
        ApplicationContext context = new AnnotationConfigApplicationContext(MyJob.class);
        JobLauncher jobLauncher = context.getBean(JobLauncher.class);
        Job job = context.getBean(Job.class);
        JobExecution jobExecution = jobLauncher.run(job, new JobParameters());
        List<Throwable> allFailureExceptions = jobExecution.getAllFailureExceptions();
        for (Throwable failureException : allFailureExceptions) {
            System.out.println("failureException = " + failureException);
        }
    }
}
This sample prints:
item = 1
item = 2
item = 4
item = 5
item = 7
item = 8
failureException = org.springframework.batch.core.step.skip.SkipLimitExceededException: Skip limit of '2' exceeded

Related

MultiResourceItemReader not working as expected

I have been using a FlatFileItemReader and processing files one by one, but now I am trying to use a MultiResourceItemReader and supply all the files at once; after filtering there are 3 CSV files (at most 50).
When I run the job, even though all the files are supplied, only one file is processed: data is read from the CSV files and saved to the database, but when I verify the results only the data of one file has been saved. I couldn't find what I am doing wrong. My code for
MultiResourceItemReader is below:
#Bean(name = "multiItemReader")
#StepScope
public MultiResourceItemReader<CDSBrokerBOIDMappingEntity> multiResourceItemReader(#Value("#{jobParameters[filenameStartPattern]}") String filenameStartPattern
, #Value("#{jobParameters[filenameEndPattern]}") String filenameEndPattern, #Value("#{jobParameters[localDirectory]}") String localDirectory) throws Exception {
String[] localDirectories = localDirectory.split(",");
List<Resource> inputResources = Collections.synchronizedList(new ArrayList<>());
for (String localDirectory1 : localDirectories){
try (Stream<Path> walk = Files.walk(Paths.get(localDirectory1), 1)) {
walk.filter(Files::isRegularFile) // is a file
.filter(p -> p.getFileName().toString().startsWith(filenameStartPattern) && p.getFileName().toString().endsWith(filenameEndPattern))
.findAny().ifPresentOrElse(f -> {
log.info("CSV FILE => " + f.getFileName().toString());
inputResources.add(new FileSystemResource(f));
},
() -> {
log.info("No file found");
});
} catch (IOException e) {
e.printStackTrace();
}
}
log.info("No. of files => "+inputResources.size());
MultiResourceItemReader<CDSBrokerBOIDMappingEntity> resourceItemReader = new MultiResourceItemReader<CDSBrokerBOIDMappingEntity>();
resourceItemReader.setResources(inputResources.toArray(Resource[]::new));
resourceItemReader.setDelegate(importReader());
resourceItemReader.setStrict(true);
return resourceItemReader;
}
And the FlatFileItemReader code is:
@Bean
public FlatFileItemReader<CDSBrokerBOIDMappingEntity> importReader() throws Exception {
    FlatFileItemReader<CDSBrokerBOIDMappingEntity> reader = new FlatFileItemReader<>();
    reader.setLinesToSkip(1);
    reader.setLineMapper(new DefaultLineMapper<CDSBrokerBOIDMappingEntity>() {{
        setLineTokenizer(new DelimitedLineTokenizer() {{
            setNames(new String[]{"BOID", "CLIENT_MEMBER_CODE", "BROKER_ID", "LAST_MODIFIED_DATE", "ISVALID"});
        }});
        setFieldSetMapper(new BeanWrapperFieldSetMapper<CDSBrokerBOIDMappingEntity>() {{
            setTargetType(CDSBrokerBOIDMappingEntity.class);
        }});
    }});
    reader.setStrict(true);
    reader.afterPropertiesSet();
    return reader;
}
The writer is:
@Bean
public ItemWriter<CDSBrokerBOIDMappingEntity> writer() {
    // log.info("Writer current thread. {}", Thread.currentThread().getName());
    RepositoryItemWriter<CDSBrokerBOIDMappingEntity> writer = new RepositoryItemWriter<>();
    writer.setRepository(cdsBrokerBOIDMappingRepository);
    // writer.setMethodName("save");
    try {
        writer.afterPropertiesSet();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return writer;
}
And the step and job:
@Bean
public Job importUserJob(MultiResourceItemReader<CDSBrokerBOIDMappingEntity> importReader, JobCompletionNotificationListener listener) {
    return jobBuilderFactory
            .get("importUserJob")
            .incrementer(new RunIdIncrementer())
            .listener(listener)
            .flow(step1(importReader))
            .end()
            .build();
}

@Bean
public Step step1(@Qualifier("multiItemReader") MultiResourceItemReader<CDSBrokerBOIDMappingEntity> importReader) {
    return stepBuilderFactory.get("step1").<CDSBrokerBOIDMappingEntity, CDSBrokerBOIDMappingEntity>chunk(200)
            .reader(importReader)
            .processor(processor())
            .writer(writer())
            .listener(stepListener())
            .taskExecutor(taskExecutor())
            .build();
}
I tried multiple times, but only one file is read.
Is there anything wrong in the code, or is my approach wrong?
The MultiResourceItemReader is working as expected with Spring Batch v4.3.3; here is a quick example:
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.file.FlatFileItemReader;
import org.springframework.batch.item.file.MultiResourceItemReader;
import org.springframework.batch.item.file.builder.FlatFileItemReaderBuilder;
import org.springframework.batch.item.file.mapping.PassThroughLineMapper;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.FileSystemResource;
import org.springframework.core.io.Resource;
@Configuration
@EnableBatchProcessing
public class SO68125366 {

    @Bean
    public MultiResourceItemReader<String> itemReader() {
        FlatFileItemReader<String> itemReader = new FlatFileItemReaderBuilder<String>()
                .name("itemReader")
                .lineMapper(new PassThroughLineMapper())
                .build();
        MultiResourceItemReader<String> multiResourceItemReader = new MultiResourceItemReader<>();
        multiResourceItemReader.setDelegate(itemReader);
        Resource resource1 = new FileSystemResource("file1.txt");
        Resource resource2 = new FileSystemResource("file2.txt");
        multiResourceItemReader.setResources(new Resource[] {resource1, resource2});
        return multiResourceItemReader;
    }

    @Bean
    public ItemWriter<String> itemWriter() {
        return items -> items.forEach(System.out::println);
    }

    @Bean
    public Job job(JobBuilderFactory jobs, StepBuilderFactory steps) {
        return jobs.get("job")
                .start(steps.get("step")
                        .<String, String>chunk(5)
                        .reader(itemReader())
                        .writer(itemWriter())
                        .build())
                .build();
    }

    public static void main(String[] args) throws Exception {
        ApplicationContext context = new AnnotationConfigApplicationContext(SO68125366.class);
        JobLauncher jobLauncher = context.getBean(JobLauncher.class);
        Job job = context.getBean(Job.class);
        jobLauncher.run(job, new JobParameters());
    }
}
With two files file1.txt and file2.txt containing respectively hello and world, the sample prints:
hello
world
which means the MultiResourceItemReader reads both resources, not only one as you mentioned. The complete sample can be found in this repo.
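As an aside, note that Stream.findAny() in the question's reader configuration returns at most one element, so each directory contributes at most one file to inputResources; iterating over the filtered stream with forEach(...) instead would collect every match. Alternatively, Spring's resource pattern support can expand a wildcard into all matching files. A minimal sketch, assuming a hypothetical /tmp/input directory:

// Sketch only: the directory and file pattern are assumptions, not from the question.
PathMatchingResourcePatternResolver resolver = new PathMatchingResourcePatternResolver();
Resource[] inputResources = resolver.getResources("file:/tmp/input/*.csv");
resourceItemReader.setResources(inputResources);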

@BeforeStep not being called in AsyncProcessor

I have been using a synchronous ItemProcessor and ItemWriter, but now I have moved to the asynchronous versions, as in the code below:
@Bean
public Job importFraudCodeJob(Step computeFormFileToDB) {
    return jobBuilderFactory.get("Import-Entities-Risk-Codes")
            .incrementer(new RunIdIncrementer())
            .listener(notificationExecutionListener)
            .start(computeFormFileToDB)
            .build();
}

@Bean
public Step computeFormFileToDB(ItemReader<EntityRiskCodesDto> entityRiskCodeFileReader) {
    return stepBuilderFactory.get("ImportFraudCodesStep")
            .<EntityFraudCodesDto, Future<EntityFraudCodes>>chunk(chunkSize)
            .reader(entityRiskCodeFileReader)
            .processor(asyncProcessor())
            .writer(asyncWriter())
            .faultTolerant()
            .skipPolicy(customSkipPolicy)
            .listener(customStepListener)
            .listener(chunkCounterListener())
            .taskExecutor(taskExecutor())
            .throttleLimit(6)
            .build();
}
In my ItemProcessor<I, O>, I use @BeforeStep to get a value I've stored in the step's ExecutionContext:
@BeforeStep
public void getKey(StepExecution stepExecution) {
    log.info("Fetching batchNumber");
    ExecutionContext context = stepExecution.getExecutionContext();
    this.sequenceNumber = (Integer) context.get("sequenceNumber");
}
And here is the declaration of my AsyncItemProcessor:
@Bean
public AsyncItemProcessor<EntityRiskCodesDto, EntityRiskCodes> asyncProcessor() {
    var asyncItemProcessor = new AsyncItemProcessor<EntityRiskCodesDto, EntityRiskCodes>();
    asyncItemProcessor.setDelegate(riskCodeItemProcessor());
    asyncItemProcessor.setTaskExecutor(taskExecutor());
    return asyncItemProcessor;
}
The problem is that the method above is not being called.
How can I get values from the StepExecution and pass them into an AsyncItemProcessor or AsyncItemWriter?
The reason is that your item processor is a delegate of an AsyncItemProcessor, so it is not automatically registered as a listener; this must be done manually. Here is an excerpt from the Intercepting Step Execution section of the docs:
If the listener is nested inside another component, it needs to be explicitly
registered (as described previously under "Registering ItemStream with a Step").
So in your use case, you need to register the delegate riskCodeItemProcessor() as a listener in your step, and then the method annotated with @BeforeStep will be called. Here is a quick example:
import java.util.Arrays;
import java.util.concurrent.Future;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.annotation.BeforeStep;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.integration.async.AsyncItemProcessor;
import org.springframework.batch.integration.async.AsyncItemWriter;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.support.ListItemReader;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.task.SimpleAsyncTaskExecutor;
@Configuration
@EnableBatchProcessing
public class MyJobConfig {

    @Bean
    public ItemReader<Integer> itemReader() {
        return new ListItemReader<>(Arrays.asList(0, 1, 2, 3, 4, 5, 6, 7, 8, 9));
    }

    @Bean
    public ItemProcessor<Integer, Integer> itemProcessor() {
        return new MyItemProcessor();
    }

    @Bean
    public AsyncItemProcessor<Integer, Integer> asyncItemProcessor() {
        AsyncItemProcessor<Integer, Integer> asyncItemProcessor = new AsyncItemProcessor<>();
        asyncItemProcessor.setDelegate(itemProcessor());
        asyncItemProcessor.setTaskExecutor(new SimpleAsyncTaskExecutor());
        return asyncItemProcessor;
    }

    @Bean
    public ItemWriter<Integer> itemWriter() {
        return items -> {
            for (Integer item : items) {
                System.out.println(Thread.currentThread().getName() + ": item = " + item);
            }
        };
    }

    @Bean
    public AsyncItemWriter<Integer> asyncItemWriter() {
        AsyncItemWriter<Integer> asyncItemWriter = new AsyncItemWriter<>();
        asyncItemWriter.setDelegate(itemWriter());
        return asyncItemWriter;
    }

    @Bean
    public Job job(JobBuilderFactory jobs, StepBuilderFactory steps) {
        return jobs.get("myJob")
                .start(steps.get("myStep")
                        .<Integer, Future<Integer>>chunk(5)
                        .reader(itemReader())
                        .processor(asyncItemProcessor())
                        .writer(asyncItemWriter())
                        .listener(itemProcessor()) // register the delegate as a listener
                        .build())
                .build();
    }

    static class MyItemProcessor implements ItemProcessor<Integer, Integer> {

        private StepExecution stepExecution;

        @Override
        public Integer process(Integer item) throws Exception {
            String threadName = Thread.currentThread().getName();
            System.out.println(threadName + ": processing item " + item
                    + " as part of step " + stepExecution.getStepName());
            return item + 1;
        }

        @BeforeStep
        public void saveStepExecution(StepExecution stepExecution) {
            this.stepExecution = stepExecution;
        }
    }

    public static void main(String[] args) throws Exception {
        ApplicationContext context = new AnnotationConfigApplicationContext(MyJobConfig.class);
        JobLauncher jobLauncher = context.getBean(JobLauncher.class);
        Job job = context.getBean(Job.class);
        jobLauncher.run(job, new JobParameters());
    }
}
This prints:
SimpleAsyncTaskExecutor-1: processing item 0 as part of step myStep
SimpleAsyncTaskExecutor-2: processing item 1 as part of step myStep
SimpleAsyncTaskExecutor-3: processing item 2 as part of step myStep
SimpleAsyncTaskExecutor-4: processing item 3 as part of step myStep
SimpleAsyncTaskExecutor-5: processing item 4 as part of step myStep
main: item = 1
main: item = 2
main: item = 3
main: item = 4
main: item = 5
SimpleAsyncTaskExecutor-6: processing item 5 as part of step myStep
SimpleAsyncTaskExecutor-7: processing item 6 as part of step myStep
SimpleAsyncTaskExecutor-8: processing item 7 as part of step myStep
SimpleAsyncTaskExecutor-9: processing item 8 as part of step myStep
SimpleAsyncTaskExecutor-10: processing item 9 as part of step myStep
main: item = 6
main: item = 7
main: item = 8
main: item = 9
main: item = 10
That said, it is not recommended to rely on the execution context in a multi-threaded setup as this context is shared between threads.
To inject the StepExecution into an ItemProcessor in Spring Batch 4.3.0 and later, you can do the following:
@Component
@StepScope
public class CustomItemProcessor implements ItemProcessor<Client, Client> {

    CustomService customService;
    StepExecution stepExecution;

    public CustomItemProcessor(CustomService customService,
                               @Value("#{stepExecution}") StepExecution stepExecution) {
        this.customService = customService;
        this.stepExecution = stepExecution;
    }

    @Override
    public Client process(Client client) {
        // Business
    }
}
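Because the processor is step-scoped, Spring injects a proxy and only resolves #{stepExecution} when the step actually runs. A minimal wiring sketch, assuming the reader and writer beans exist elsewhere:

@Bean
public Step myStep(StepBuilderFactory steps, CustomItemProcessor processor) {
    return steps.get("myStep")
            .<Client, Client>chunk(10)
            .reader(reader())      // assumed to be defined elsewhere
            .processor(processor)  // the step-scoped processor above
            .writer(writer())      // assumed to be defined elsewhere
            .build();
}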

Spring Batch: Can't quite work out Conditional Flow

Edited to update my latest configuration: Is this on the right track for my use-case?
I have a flow that's supposed to go like this:
1. The FileRetrievingTasklet retrieves a remote file and places the "type" of that file in the execution context.
2. If the file is of type "YEARLY", proceed to the yearlyStep().
3. If the file is of type "QUARTERLY", proceed to the quarterlyStep().
4. Finish.
This seems so simple, but what I have doesn't work. The job finishes with FAILED after the tasklet step.
Here's my job config:
@Bean
public Job fundsDistributionJob() {
    return jobBuilderFactory
            .get("fundsDistributionJob")
            .start(retrieveFileStep(stepBuilderFactory))
            .on("YEARLY").to(yearEndStep())
            .from(retrieveFileStep(stepBuilderFactory))
            .on("QUARTERLY").to(quarterlyStep())
            .end()
            .listener(new FileWorkerJobExecutionListener())
            .build();
}
And one of the steps:
@Bean
public Step quarterlyStep() {
    return stepBuilderFactory.get("quarterlyStep")
            .<Item, Item>chunk(10)
            .reader(quarterlyReader())
            .processor(processor())
            .writer(writer())
            .listener(new StepItemReadListener())
            .faultTolerant()
            .skipPolicy(new DistSkipPolicy())
            .build();
}
Can someone tell me what's missing?
The approach with a decider (before your edit) is the way to go. You just had an issue with your flow definition. Here is an example that works as you described:
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.job.flow.FlowExecutionStatus;
import org.springframework.batch.core.job.flow.JobExecutionDecider;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
@EnableBatchProcessing
public class MyJob {

    private final JobBuilderFactory jobs;
    private final StepBuilderFactory steps;

    public MyJob(JobBuilderFactory jobs, StepBuilderFactory steps) {
        this.jobs = jobs;
        this.steps = steps;
    }

    @Bean
    public Step retrieveFileStep() {
        return steps.get("retrieveFileStep")
                .tasklet((contribution, chunkContext) -> {
                    System.out.println("Downloading file..");
                    chunkContext.getStepContext().getStepExecution()
                            .getExecutionContext().put("type", Type.YEARLY);
                    return RepeatStatus.FINISHED;
                })
                .build();
    }

    @Bean
    public JobExecutionDecider fileMapperDecider() {
        return (jobExecution, stepExecution) -> {
            Type type = (Type) stepExecution.getExecutionContext().get("type");
            return new FlowExecutionStatus(type == Type.YEARLY ? "yearly" : "quarterly");
        };
    }

    @Bean
    public Step yearlyStep() {
        return steps.get("yearlyStep")
                .tasklet((contribution, chunkContext) -> {
                    System.out.println("running yearlyStep");
                    return RepeatStatus.FINISHED;
                })
                .build();
    }

    @Bean
    public Step quarterlyStep() {
        return steps.get("quarterlyStep")
                .tasklet((contribution, chunkContext) -> {
                    System.out.println("running quarterlyStep");
                    return RepeatStatus.FINISHED;
                })
                .build();
    }

    @Bean
    public Job job() {
        return jobs.get("job")
                .start(retrieveFileStep())
                .next(fileMapperDecider())
                .from(fileMapperDecider()).on("yearly").to(yearlyStep())
                .from(fileMapperDecider()).on("quarterly").to(quarterlyStep())
                .build()
                .build();
    }

    public static void main(String[] args) throws Exception {
        ApplicationContext context = new AnnotationConfigApplicationContext(MyJob.class);
        JobLauncher jobLauncher = context.getBean(JobLauncher.class);
        Job job = context.getBean(Job.class);
        jobLauncher.run(job, new JobParameters());
    }

    enum Type {
        YEARLY, QUARTERLY
    }
}
It prints:
Downloading file..
running yearlyStep
If you change the type attribute in the execution context to Type.QUARTERLY in retrieveFileStep, it prints:
Downloading file..
running quarterlyStep
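As an aside, the original .on("YEARLY") / .on("QUARTERLY") transitions from the question can also be made to work without a decider, by having the tasklet expose the file type as the step's own exit status. A sketch, not from the original answer:

@Bean
public Step retrieveFileStep() {
    return steps.get("retrieveFileStep")
            .tasklet((contribution, chunkContext) -> {
                // ... download the file and determine its type ...
                // The contribution's exit status is merged into the step's exit
                // status, so the job flow can match it with .on("YEARLY").
                contribution.setExitStatus(new ExitStatus("YEARLY"));
                return RepeatStatus.FINISHED;
            })
            .build();
}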

@OnSkipInWrite is Not Called in SkipListener

I am reading a CSV file and inserting the data into a database using Spring Batch (read, process and write). I am using jpaRepository.save in my ItemWriter class to save the data into the database, and I am trying to catch the skipped items and their messages in the @OnSkipInWrite method, but this method is not called even when items are skipped. In the BATCH_STEP_EXECUTION table:
read_count = 18, write_count = 10, write_skip_count = 0, roll_back_count = 8.
Why is write_skip_count 0? I just want to know which item was skipped and what the exception message was. My step:
@Bean
public Step step() throws IOException {
    return stepBuilderFactory.get("step").<Entity, Entity>chunk(1)
            .reader(multiResourceItemReader())
            .processor(processor())
            .writer(writer())
            .faultTolerant()
            .skip(Exception.class)
            .skipLimit(100)
            .listener(new StepExecutionListener())
            .build();
}
This is my Listener class.
public class StepExecutionListener {

    private static final Logger LOG = Logger.getLogger(StepExecutionListener.class);

    @OnSkipInRead
    public void onSkipInRead(Throwable t) {
        LOG.error("On Skip in Read Error : " + t.getMessage());
    }

    @OnSkipInWrite
    public void onSkipInWrite(Entity item, Throwable t) {
        LOG.error("Skipped in write due to : " + t.getMessage());
    }

    @OnSkipInProcess
    public void onSkipInProcess(Entity item, Throwable t) {
        LOG.error("Skipped in process due to: " + t.getMessage());
    }

    @OnWriteError
    public void onWriteError(Exception exception, List<? extends Entity> items) {
        LOG.error("Error on write on " + items + " : " + exception.getMessage());
    }
}
Why are @OnSkipInWrite and @OnWriteError not called? Any help would be much appreciated. Thanks in advance.
I can't see from what you shared why the skip listener is not called, but here is a self-contained example using your listener:
import java.util.Arrays;
import java.util.List;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.annotation.OnSkipInProcess;
import org.springframework.batch.core.annotation.OnSkipInRead;
import org.springframework.batch.core.annotation.OnSkipInWrite;
import org.springframework.batch.core.annotation.OnWriteError;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.support.ListItemReader;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
@EnableBatchProcessing
public class MyJob {

    @Autowired
    private JobBuilderFactory jobs;

    @Autowired
    private StepBuilderFactory steps;

    @Bean
    public ItemReader<Integer> itemReader() {
        return new ListItemReader<>(Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10));
    }

    @Bean
    public ItemWriter<Integer> itemWriter() {
        return items -> {
            for (Integer item : items) {
                if (item.equals(3)) {
                    throw new Exception("No 3 here!");
                }
                System.out.println("item = " + item);
            }
        };
    }

    @Bean
    public Step step() {
        return steps.get("step")
                .<Integer, Integer>chunk(5)
                .reader(itemReader())
                .writer(itemWriter())
                .faultTolerant()
                .skip(Exception.class)
                .skipLimit(10)
                .listener(new StepExecutionListener())
                .build();
    }

    @Bean
    public Job job() {
        return jobs.get("job")
                .start(step())
                .build();
    }

    public class StepExecutionListener {

        @OnSkipInRead
        public void onSkipInRead(Throwable t) {
            System.err.println("On Skip in Read Error : " + t.getMessage());
        }

        @OnSkipInWrite
        public void onSkipInWrite(Integer item, Throwable t) {
            System.err.println("Skipped in write due to : " + t.getMessage());
        }

        @OnSkipInProcess
        public void onSkipInProcess(Integer item, Throwable t) {
            System.err.println("Skipped in process due to: " + t.getMessage());
        }

        @OnWriteError
        public void onWriteError(Exception exception, List<? extends Integer> items) {
            System.err.println("Error on write on " + items + " : " + exception.getMessage());
        }
    }

    public static void main(String[] args) throws Exception {
        ApplicationContext context = new AnnotationConfigApplicationContext(MyJob.class);
        JobLauncher jobLauncher = context.getBean(JobLauncher.class);
        Job job = context.getBean(Job.class);
        JobExecution jobExecution = jobLauncher.run(job, new JobParameters());
        StepExecution stepExecution = jobExecution.getStepExecutions().iterator().next();
        System.out.println("WriteSkipCount = " + stepExecution.getWriteSkipCount());
    }
}
This example prints the following (note how, after the chunk fails, the framework re-processes the chunk item by item to identify and skip only the faulty item):
item = 1
item = 2
Error on write on [1, 2, 3, 4, 5] : No 3 here!
item = 1
item = 2
Error on write on [3] : No 3 here!
item = 4
Skipped in write due to : No 3 here!
item = 5
item = 6
item = 7
item = 8
item = 9
item = 10
WriteSkipCount = 1
Which means the skip listener is called when an item is skipped on write and the writeSkipCount is correct.
Hope this helps.
You can implement the SkipListener interface instead of using the @OnWriteError annotation.
Try this in your batch configuration:
@Bean
@StepScope
public StepExecutionListener stepExecutionListener() {
    return new StepExecutionListener();
}
...
.skipLimit(1)
.listener(stepExecutionListener())
.build();
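For reference, here is a minimal sketch of the interface-based alternative, reusing the Entity type from the question (EntitySkipListener is a hypothetical name, and the System.err calls are placeholders for real logging):

import org.springframework.batch.core.SkipListener;

public class EntitySkipListener implements SkipListener<Entity, Entity> {

    @Override
    public void onSkipInRead(Throwable t) {
        System.err.println("Skipped in read due to: " + t.getMessage());
    }

    @Override
    public void onSkipInProcess(Entity item, Throwable t) {
        System.err.println("Skipped in process due to: " + t.getMessage());
    }

    @Override
    public void onSkipInWrite(Entity item, Throwable t) {
        System.err.println("Skipped in write due to: " + t.getMessage());
    }
}

It can then be registered on the step with .listener(new EntitySkipListener()).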

Spring batch example for single job to read from a table then split the results by type and process in parallel

Each parallel step will create a file; if all succeed, these files are moved together to an output folder. If any of these steps fails, none of the files go to the output folder and the whole job fails. Help with a code example would be much appreciated for a batch noob.
read from a table then split the results by type and process in parallel
You can partition the data by type using a partitioned step. The partitions are processed in parallel, and each partition creates a file. You then add a step after the partition step to move the files, or to clean them up if any of the partitions fail. Here is a quick example you can try:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.core.partition.support.Partitioner;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.item.ExecutionContext;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.task.SimpleAsyncTaskExecutor;
@Configuration
@EnableBatchProcessing
public class PartitionJobSample {

    @Autowired
    private JobBuilderFactory jobs;

    @Autowired
    private StepBuilderFactory steps;

    @Bean
    public Step step1() {
        return steps.get("step1")
                .partitioner(workerStep().getName(), partitioner())
                .step(workerStep())
                .gridSize(3)
                .taskExecutor(taskExecutor())
                .build();
    }

    @Bean
    public SimpleAsyncTaskExecutor taskExecutor() {
        return new SimpleAsyncTaskExecutor();
    }

    @Bean
    public Partitioner partitioner() {
        return gridSize -> {
            Map<String, ExecutionContext> map = new HashMap<>(gridSize);
            for (int i = 0; i < gridSize; i++) {
                ExecutionContext executionContext = new ExecutionContext();
                executionContext.put("data", "data" + i);
                String key = "partition" + i;
                map.put(key, executionContext);
            }
            return map;
        };
    }

    @Bean
    public Step workerStep() {
        return steps.get("workerStep")
                .tasklet(getTasklet(null))
                .build();
    }

    @Bean
    @StepScope
    public Tasklet getTasklet(@Value("#{stepExecutionContext['data']}") String partitionData) {
        return (contribution, chunkContext) -> {
            System.out.println(Thread.currentThread().getName() + " processing partitionData = " + partitionData);
            Files.createFile(Paths.get(partitionData + ".txt"));
            return RepeatStatus.FINISHED;
        };
    }

    @Bean
    public Step moveFilesStep() {
        return steps.get("moveFilesStep")
                .tasklet((contribution, chunkContext) -> {
                    System.out.println("moveFilesStep");
                    // add code to move files where needed
                    return RepeatStatus.FINISHED;
                })
                .build();
    }

    @Bean
    public Step cleanupFilesStep() {
        return steps.get("cleanupFilesStep")
                .tasklet((contribution, chunkContext) -> {
                    System.out.println("cleaning up..");
                    deleteFiles();
                    return RepeatStatus.FINISHED;
                })
                .build();
    }

    @Bean
    public Job job() {
        return jobs.get("job")
                .flow(step1()).on("FAILED").to(cleanupFilesStep())
                .from(step1()).on("*").to(moveFilesStep())
                .from(moveFilesStep()).on("*").end()
                .from(cleanupFilesStep()).on("*").fail()
                .build()
                .build();
    }

    public static void main(String[] args) throws Exception {
        deleteFiles();
        ApplicationContext context = new AnnotationConfigApplicationContext(PartitionJobSample.class);
        JobLauncher jobLauncher = context.getBean(JobLauncher.class);
        Job job = context.getBean(Job.class);
        jobLauncher.run(job, new JobParameters());
    }

    private static void deleteFiles() throws IOException {
        for (int i = 0; i <= 2; i++) {
            Files.deleteIfExists(Paths.get("data" + i + ".txt"));
        }
    }
}
This example creates three dummy partitions ("data0", "data1" and "data2"). Each partition creates a file. If all partitions finish correctly, you will have three files, "data0.txt", "data1.txt" and "data2.txt", which will be moved in the moveFilesStep.
Now let's make one of the partitions fail, for example the third partition ("data2"):
@Bean
@StepScope
public Tasklet getTasklet(@Value("#{stepExecutionContext['data']}") String partitionData) {
    return (contribution, chunkContext) -> {
        if (partitionData.equals("data2")) {
            throw new Exception("Boom!");
        }
        System.out.println(Thread.currentThread().getName() + " processing partitionData = " + partitionData);
        Files.createFile(Paths.get(partitionData + ".txt"));
        return RepeatStatus.FINISHED;
    };
}
In this case, the cleanupFilesStep will be triggered and will delete all files.
Hope this helps.
