spring batch job with partitions : setting clientInfo in an Oracle session not working for all partitions - oracle

I have a Spring Batch job using partitions, and the reader is a JdbcCursorItemReader. In this reader I need an authorization to correctly read encrypted data, so when I declare my reader I call the method just below.
The problem is that some partitions read a null value for the field that needs to be decrypted. The only possible reason is that the authorization is not set (I checked in the database and the data are not null), so why does it work for some partitions and not for all?
private void authorize() {
//Authorize
JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
jdbcTemplate.update(setClientInfo, authorization);
}
and this is how I declare my reader:
@Bean
@StepScope
public JdbcCursorItemReader<MyEntity> reader(@Value("#{stepExecutionContext['modulo']}") Integer modulo)
throws IOException {
ClassPathResource resource = new ClassPathResource(SQL_FILE);
BufferedReader reader = new BufferedReader(new InputStreamReader(resource.getInputStream()));
String query = FileCopyUtils.copyToString(reader);
query = query.replace(MODULO_LABEL, String.valueOf(modulo));
query = query.replace(GRID_SIZE_LABEL, String.valueOf(gridSize));
authorize();
JdbcCursorItemReader<MyEntity> cursorItemReader = new JdbcCursorItemReader<>();
cursorItemReader.setSql(query);
final int partitionSize = maxNumberCards / gridSize;
cursorItemReader.setMaxItemCount(partitionSize);
cursorItemReader.setDataSource(dataSource);
cursorItemReader.setRowMapper(myRowMapper);
return cursorItemReader;
}
and my job configuration:
@Configuration
@EnableBatchProcessing
@RefreshScope
public class MyFunctionJobConfiguration {
@Autowired
public JobBuilderFactory jobBuilderFactory;
@Autowired
public StepBuilderFactory stepBuilderFactory;
@Autowired
JdbcCursorItemReader<MyEntity> reader;
@Value("${max-number-card-to-process}")
private Integer MAX_NUMBER_CARD;
@Value("${chunck-size:10}")
private int chunckSize;
@Value("${grid-size:1}")
private int gridSize;
private final static String JOB_DISABLED = "job is disabled, check the configuration file !";
@Value("${job.enabled}")
private boolean batchIsEnabled;
private static final Logger LOGGER = LoggerFactory.getLogger("FUNCTIONAL_LOGGER");
@Bean
@StepScope
@RefreshScope
public MyEntityWriter writer() {
return new MyEntityWriter();
}
@Bean
@StepScope
@RefreshScope
public MyFunctionProcessor processor() throws IOException {
return new MyFunctionProcessor();
}
@Bean
public MyPrationner partitioner() {
return new MyPrationner();
}
@Bean
public Step masterStep() throws SQLException, IOException, ClassNotFoundException {
return stepBuilderFactory.get("masterStep")
.partitioner("MyFunctionStep", partitioner())
.step(myFunctionStep())
.gridSize(gridSize)
.taskExecutor(myFunctionTaskExecutor())
.build();
}
@Bean
public TaskExecutor myFunctionTaskExecutor() {
ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
taskExecutor.setThreadNamePrefix("MyFunctionTaskExecutor_");
int corePoolSize = gridSize + 2;
int maxPoolSize = corePoolSize * 2;
taskExecutor.setMaxPoolSize(maxPoolSize);
taskExecutor.setAllowCoreThreadTimeOut(true);
taskExecutor.setCorePoolSize(corePoolSize);
taskExecutor.setQueueCapacity(Integer.MAX_VALUE);
return taskExecutor;
}
@Bean
public Step myFunctionStep() throws IOException, ClassNotFoundException, SQLException {
return stepBuilderFactory.get("MyFunctionStep")
.<MyEntity, MyEntity>chunk(chunckSize)
.reader(reader)
.faultTolerant()
.skipLimit(MAX_NUMBER_CARD)
.skip(InvalidCardNumberException.class)
.skip(TokenManagementException.class)
.processor(processor())
.listener(new MyEntityProcessListener())
.writer(writer())
.listener(new MyEntityWriteListener())
.build();
}
@Bean
public Job myFunctionJob(@Qualifier("myFunctionStep") Step myFunctionStep)
throws SQLException, IOException, ClassNotFoundException {
if (!batchIsEnabled) {
LOGGER.error(JOB_DISABLED);
System.exit(0);
}
return jobBuilderFactory.get("MyFunctionJob")
.listener(new MyFunctionJobListener())
.incrementer(new RunIdIncrementer())
.flow(masterStep())
.end()
.build();
}
}
I am trying to run a Spring Batch job with partitions to read data from an Oracle database (the SQL uses a decryption function); this requires setting an authorization for every connection session.
The problem is that when the batch runs, some partitions do not decrypt the data and return null, and the only reason for that is that the authorization is not set.

The JdbcCursorItemReader does not use a JdbcTemplate. It directly creates connections to the database from the data source object passed to it. So you should not expect calling authorize, which operates on a separate JdbcTemplate instance and therefore on a potentially different connection, to impact the behaviour of the JdbcCursorItemReader. You said it works for some partitions, and that's really surprising.
If you want to take control of how the connection to the database is configured and override the default settings (for example by adding some authorization attributes), you need to extend JdbcCursorItemReader and override the protected void openCursor(Connection con) method, something like:
class MyCustomJdbcCursorItemReader extends JdbcCursorItemReader<MyEntity> {
@Override
protected void openCursor(Connection con) {
super.openCursor(con);
// con.setClientInfo(...); // set client info as needed here
}
}
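Building on that idea, here is a minimal sketch of how the authorization from the question could be applied on the reader's own connection. The setClientInfo SQL and authorization values are the ones from the question; the class name and the choice to run the statement before opening the cursor are assumptions, not something the original answer prescribes:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

import org.springframework.batch.item.database.JdbcCursorItemReader;

public class AuthorizingJdbcCursorItemReader<T> extends JdbcCursorItemReader<T> {

    private final String setClientInfoSql; // the statement used in authorize()
    private final String authorization;    // the authorization value used in authorize()

    public AuthorizingJdbcCursorItemReader(String setClientInfoSql, String authorization) {
        this.setClientInfoSql = setClientInfoSql;
        this.authorization = authorization;
    }

    @Override
    protected void openCursor(Connection con) {
        // Run the authorization statement on the same connection the cursor will use,
        // before the cursor query is executed, so every partition's session is authorized.
        try (PreparedStatement ps = con.prepareStatement(setClientInfoSql)) {
            ps.setString(1, authorization);
            ps.executeUpdate();
        } catch (SQLException e) {
            throw new IllegalStateException("Could not authorize the reader connection", e);
        }
        super.openCursor(con);
    }
}

The step-scoped reader bean from the question could then instantiate this class instead of JdbcCursorItemReader, and the separate authorize() call on a different JdbcTemplate would no longer be needed.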

Related

Spring batch exception handling sent as ResponseEntity

I'm new to Spring Boot and I'm training on a small project with Spring Batch to get experience. Here is my context: I have 2 CSV files, one holding employees, the other containing all managers of the company. I have to read the files, then add each record to the database. To make it simple, I just need to call an endpoint from my controller, upload my CSV file (MultipartFile), and then the job starts. I was actually able to do that; my problem is the following.
I have to manage multiple kinds of validation (I'm using JSR 380 validation for my entities and I also have to check business rules). A business rule can be the following: an employee is supervised by a manager of his department (the employee can't be supervised by a manager who is not in the same department, otherwise an exception should be thrown). So for mistaken records, with some invalid or illogical input, I have to skip them (not save them in the database) but store them in a Map or List that should be sent back as a ResponseEntity to the client. Hence the client would know which rows need to be fixed. I suppose I have to take a look at listeners, but I really can't manage to store the exceptions in a map or list and then send it as a ResponseEntity. Below is an example of what I want to achieve.
Screenshots of my CSV files (omitted here).
EmployeeBatchConfig.java
@Configuration
@EnableBatchProcessing
@AllArgsConstructor
public class EmployeeBatchConfig {
private JobBuilderFactory jobBuilderFactory;
private StepBuilderFactory stepBuilderFactory;
private EmployeeRepository employeeRepository;
private EmployeeItemWriter employeeItemWriter;
@Bean
@StepScope
public FlatFileItemReader<EmployeeDto> itemReader(@Value("#{jobParameters['fullPathFileName']}") final String pathFile) {
FlatFileItemReader<EmployeeDto> flatFileItemReader = new FlatFileItemReader<>();
flatFileItemReader.setResource(new FileSystemResource(new File(pathFile)));
flatFileItemReader.setName("CSV-Reader");
flatFileItemReader.setLinesToSkip(1);
flatFileItemReader.setLineMapper(lineMapper());
return flatFileItemReader;
}
private LineMapper<EmployeeDto> lineMapper() {
DefaultLineMapper<EmployeeDto> lineMapper = new DefaultLineMapper<>();
DelimitedLineTokenizer lineTokenizer = new DelimitedLineTokenizer();
lineTokenizer.setDelimiter(",");
lineTokenizer.setStrict(false);
lineTokenizer.setNames("Username", "lastName", "firstName", "departement", "supervisor");
BeanWrapperFieldSetMapper<EmployeeDto> fieldSetMapper = new BeanWrapperFieldSetMapper<>();
fieldSetMapper.setTargetType(EmployeeDto.class);
lineMapper.setLineTokenizer(lineTokenizer);
lineMapper.setFieldSetMapper(fieldSetMapper);
return lineMapper;
}
@Bean
public EmployeeProcessor processor() {
return new EmployeeProcessor(); /* Create a bean processor to skip invalid rows */
}
@Bean
public RepositoryItemWriter<Employee> writer() {
RepositoryItemWriter<Employee> writer = new RepositoryItemWriter<>();
writer.setRepository(employeeRepository);
writer.setMethodName("save");
return writer;
}
@Bean
public Step step1(FlatFileItemReader<EmployeeDto> itemReader) {
return stepBuilderFactory.get("slaveStep").<EmployeeDto, Employee>chunk(5)
.reader(itemReader)
.processor(processor())
.writer(employeeItemWriter)
.faultTolerant()
.listener(skipListener())
.skip(SkipException.class)
.skipLimit(10)
.skipPolicy(skipPolicy())
.build();
}
@Bean
@Qualifier("executeJobEmployee")
public Job runJob(FlatFileItemReader<EmployeeDto> itemReader) {
return jobBuilderFactory
.get("importEmployee")
.flow(step1(itemReader))
.end()
.build();
}
@Bean
public SkipPolicy skipPolicy(){
return new ExceptionSkipPolicy();
}
@Bean
public SkipListener<EmployeeDto, Employee> skipListener(){
return new StepSkipListener();
}
/*@Bean
public ExecutionContext executionContext(){
return new ExecutionContext();
}*/
}
EmployeeProcessor.java
public class EmployeeProcessor implements ItemProcessor<EmployeeDto, Employee> {
@Autowired
private SupervisorService managerService;
@Override
public Employee process(@Valid EmployeeDto item) throws Exception {
ManagerDto manager = managerService.findSupervisorById(item.getSupervisor());
//retrieve the manager of the employee and compare departement
if(!(manager.getDepartement().equals(item.getDepartement()))) {
throw new SkipException("Manager Invalid", item);
//return null;
}
return ObjectMapperUtils.map(item, Employee.class);
}
}
MySkipPolicy.java
public class MySkipPolicy implements SkipPolicy {
@Override
public boolean shouldSkip(Throwable throwable, int i) throws SkipLimitExceededException {
return true;
}
}
StepSkipListenerPolicy.java
public class StepSkipListener implements SkipListener<EmployeeDto, Employee> {
@Override // item reader
public void onSkipInRead(Throwable throwable) {
System.out.println("In OnSkipReader");
}
@Override // item writer
public void onSkipInWrite(Employee item, Throwable throwable) {
System.out.println("Nooooooooo ");
}
//@SneakyThrows
@Override // item processor
public void onSkipInProcess(@Valid EmployeeDto employee, Throwable throwable) {
System.out.println("Process... ");
/* I guess this is where I should work, but how do I deal with the exception that occurs? How do I know which exception I would get? */
}
}
SkipException.java
public class SkipException extends Exception {
private Map<String, EmployeeDto> errors = new HashMap<>();
public SkipException(String errorMessage, EmployeeDto employee) {
super(errorMessage);
this.errors.put(errorMessage, employee);
}
public Map<String, EmployeeDto> getErrors() {
return this.errors;
}
}
JobController.java
@RestController
@RequestMapping("/upload")
public class JobController {
@Autowired
private JobLauncher jobLauncher;
@Autowired
@Qualifier("executeJobEmployee")
private Job job;
private final String EMPLOYEE_FOLDER = "C:/Users/Project/Employee/";
@PostMapping("/employee")
public ResponseEntity<Object> importEmployee(@RequestParam("file") MultipartFile multipartFile)
throws JobInterruptedException, SkipException, IllegalStateException, IOException, FlatFileParseException {
try {
String fileName = multipartFile.getOriginalFilename();
File fileToImport = new File(EMPLOYEE_FOLDER + fileName);
multipartFile.transferTo(fileToImport);
JobParameters jobParameters = new JobParametersBuilder()
.addString("fullPathFileName", EMPLOYEE_FOLDER + fileName)
.addLong("startAt", System.currentTimeMillis())
.toJobParameters();
JobExecution jobExecution = this.jobLauncher.run(job, jobParameters);
ExecutionContext executionContext = jobExecution.getExecutionContext();
System.out.println("My Skiped items : " + executionContext.toString());
} catch (ConstraintViolationException | FlatFileParseException | JobRestartException
| JobInstanceAlreadyCompleteException | JobParametersInvalidException
| JobExecutionAlreadyRunningException e) {
e.printStackTrace();
return new ResponseEntity<>(e.getMessage(), HttpStatus.BAD_REQUEST);
}
return new ResponseEntity<>("Employee inserted succesfully", HttpStatus.OK);
}
}
That requirement forces your implementation to wait for the job to finish before returning the web response, which is not the typical way of launching batch jobs from web requests. Typically, since batch jobs can run for several minutes/hours, they are launched in the background and a job ID is returned back to the client for later status check.
In Spring Batch, the SkipListener is the extension point that allows you to add custom code when a skippable exception happens when reading, processing or writing an item. I would add the business validation in an item processor and throw an exception with the skipped item and the reason for that skip (both encapsulated in the exception class that should be declared as skippable).
Skipped items are usually stored somewhere for later analysis (like a table or a file or the job execution context). In your case, you need to send them back in the web response, so you can read them from the store of your choice before returning them attached in the web response. In pseudo code in your controller, this should be something like the following:
- run the job and wait for its termination (the skip listener would write skipped items in the storage of your choice)
- get skipped items from storage
- return web response
For example, if you choose to store skipped items in the job execution context, you can do something like this in your controller:
JobExecution jobExecution = jobLauncher.run(job, jobParameters);
ExecutionContext executionContext = jobExecution.getExecutionContext();
// get skipped items from the execution context
// return the web response
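For example, the listener side of that idea could look like the following sketch. The class name, the "skippedItems" key and the use of the step execution context are assumptions, not part of the original answer; an ExecutionContextPromotionListener configured on the step can then promote that key to the job execution context so the controller can read it from jobExecution.getExecutionContext():

import java.util.ArrayList;
import java.util.List;

import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.SkipListener;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.StepExecutionListener;

public class SkippedItemCollector implements SkipListener<EmployeeDto, Employee>, StepExecutionListener {

    // collected while the step runs, then exposed through the execution context
    private final List<String> skippedItems = new ArrayList<>();

    @Override
    public void onSkipInRead(Throwable t) {
        skippedItems.add("unreadable line: " + t.getMessage());
    }

    @Override
    public void onSkipInProcess(EmployeeDto item, Throwable t) {
        skippedItems.add(item + ": " + t.getMessage());
    }

    @Override
    public void onSkipInWrite(Employee item, Throwable t) {
        skippedItems.add(item + ": " + t.getMessage());
    }

    @Override
    public void beforeStep(StepExecution stepExecution) {
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        // store the skipped items under a well-known key for later retrieval
        stepExecution.getExecutionContext().put("skippedItems", skippedItems);
        return stepExecution.getExitStatus();
    }
}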

How to use spring transaction support with Spring Batch

I am trying to use Spring Batch to read a file from a .dat file and persist the data into a database. My requirement says to either insert all of the data or insert none of it into the table, i.e. atomicity. However, using Spring Batch I'm not able to achieve this: it reads the data in chunks and inserts it as long as the records are fine. If at some point a record is inappropriate and some DB exception is thrown, I want a complete rollback, which is not happening. Let's say we get the error at the 2051st record: my code saves 2050 records, but I want a complete rollback, and if all data is good then all N records should be persisted. Thanks in advance for any help or relevant approach that may solve my issue.
NOTE: I have already used Spring's @Transactional annotation on the caller method but it's not working, and I'm reading data in a chunk size of 10 items.
MyConfiguration.java
@Configuration
public class MyConfiguration
{
@Autowired
JobBuilderFactory jobBuilderFactory;
@Autowired
StepBuilderFactory stepBuilderFactory;
@Autowired
@Qualifier("MyCompletionListener")
JobCompletionNotificationListener jobCompletionNotificationListener;
@StepScope
@Bean(name="MyReader")
public FlatFileItemReader<InputMapperDTO> reader(@Value("#{jobParameters['fileName']}") String fileName) throws IOException
{
FlatFileItemReader<InputMapperDTO> newBean = new FlatFileItemReader<>();
newBean.setName("MyReader");
newBean.setResource(new InputStreamResource(FileUtils.openInputStream(new File(fileName))));
newBean.setLineMapper(lineMapper());
newBean.setLinesToSkip(1);
return newBean;
}
#Bean(name="MyLineMapper")
public DefaultLineMapper<InputMapperDTO> lineMapper()
{
DefaultLineMapper<InputMapperDTO> lineMapper = new DefaultLineMapper<>();
lineMapper.setLineTokenizer(lineTokenizer());
Reader reader = new Reader();
lineMapper.setFieldSetMapper(reader);
return lineMapper;
}
#Bean(name="MyTokenizer")
public DelimitedLineTokenizer lineTokenizer()
{
DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer();
tokenizer.setDelimiter("|");
tokenizer.setNames("InvestmentAccountUniqueIdentifier", "BaseCurrencyUniqueIdentifier",
"OperatingCurrencyUniqueIdentifier", "PricingHierarchyUniqueIdentifier", "InvestmentAccountNumber",
"DummyAccountIndicator", "InvestmentAdvisorCompanyNumberLegacy","HighNetWorthAccountTypeCode");
tokenizer.setIncludedFields(0, 5, 7, 13, 29, 40, 49,75);
return tokenizer;
}
#Bean(name="MyBatchProcessor")
public ItemProcessor<InputMapperDTO, FinalDTO> processor()
{
return new Processor();
}
#Bean(name="MyWriter")
public ItemWriter<FinalDTO> writer()
{
return new Writer();
}
#Bean(name="MyStep")
public Step step1() throws IOException
{
return stepBuilderFactory.get("MyStep")
.<InputMapperDTO, FinalDTO>chunk(10)
.reader(this.reader(null))
.processor(this.processor())
.writer(this.writer())
.build();
}
#Bean(name=MyJob")
public Job importUserJob(#Autowired #Qualifier("MyStep") Step step1)
{
return jobBuilderFactory
.get("MyJob"+new Date())
.incrementer(new RunIdIncrementer())
.listener(jobCompletionNotificationListener)
.flow(step1)
.end()
.build();
}
}
Writer.java
public class Writer implements ItemWriter<FinalDTO>
{
@Autowired
SomeRepository someRepository;
@Override
public void write(List<? extends FinalDTO> listOfObjects) throws Exception
{
someRepository.saveAll(listOfObjects);
}
}
JobCompletionNotificationListener.java
public class JobCompletionNotificationListener extends JobExecutionListenerSupport
{
@Override
public void afterJob(JobExecution jobExecution)
{
if(jobExecution.getStatus() == BatchStatus.COMPLETED)
{
System.err.println("****************************************");
System.err.println("***** Batch Job Completed ******");
System.err.println("****************************************");
}
else
{
System.err.println("****************************************");
System.err.println("***** Batch Job Failed ******");
System.err.println("****************************************");
}
}
}
MyCallerMethod
@Transactional
public String processFile(String datFile) throws JobExecutionAlreadyRunningException, JobRestartException,
JobInstanceAlreadyCompleteException, JobParametersInvalidException
{
long st = System.currentTimeMillis();
JobParametersBuilder builder = new JobParametersBuilder();
builder.addString("fileName",datFile);
builder.addDate("date", new Date());
jobLauncher.run(job, builder.toJobParameters());
System.err.println("****************************************");
System.err.println("***** Total time consumed = "+(System.currentTimeMillis()-st)+" ******");
System.err.println("****************************************");
return response;
}
The operation I was trying to achieve is not provided by Spring Batch out of the box. For my requirement, I implemented a custom delete that cleans up the database upon failure in any step.
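The answer does not show that cleanup code, but a minimal sketch of the idea, assuming SomeRepository is the Spring Data repository used in the Writer above, could be a job listener that removes what earlier chunks wrote when the job fails (the deleteAll() call is an assumption; a delete scoped to the current run would normally be preferable):

import org.springframework.batch.core.BatchStatus;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.listener.JobExecutionListenerSupport;
import org.springframework.beans.factory.annotation.Autowired;

public class CleanupOnFailureListener extends JobExecutionListenerSupport {

    @Autowired
    private SomeRepository someRepository;

    @Override
    public void afterJob(JobExecution jobExecution) {
        if (jobExecution.getStatus() == BatchStatus.FAILED) {
            // undo the partially committed chunks so the run is effectively all-or-nothing
            someRepository.deleteAll(); // assumption: replace with a delete keyed on this run
        }
    }
}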

How to reset MultiResourceItemReader for each job run. Step scope not working

How can I initialize the MultiResourceItemReader for each job run? Currently, with this setup, it is still using the same instance for each job run.
I put @StepScope on it, but it is still using the same old list of files which it has already processed. I am not sure what else I have to add in this code.
I also tried @JobScope, but it did not work out either; there is something fundamental I am missing.
@Configuration
@EnableBatchProcessing
public class BatchConfiguration {
@Autowired
public JobBuilderFactory jobBuilderFactory;
@Autowired
public StepBuilderFactory stepBuilderFactory;
@Value("file:ftp-inbound/*.csv")
@Autowired
private Resource[] inputResources;
@Autowired
private StepBuilderFactory steps;
@Autowired
private JobBuilderFactory jobs;
@Autowired
private ResourceLoader resourceLoader;
@Bean
public FlatFileItemReader<AccommodationRoomAvailability> itemReader() throws UnexpectedInputException, ParseException, IOException {
FlatFileItemReader<AccommodationRoomAvailability> reader = new FlatFileItemReader<AccommodationRoomAvailability>();
DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer();
String[] tokens = {"Product ID", "Allotment", "Kamertype", "Zoeknaam", "Hotel", "Datum", "Beschikbaar", "Nachten"};
tokenizer.setNames(tokens);
tokenizer.setDelimiter(";");
tokenizer.setStrict(true);
reader.setLinesToSkip(1);
DefaultLineMapper<AccommodationRoomAvailability> lineMapper = new DefaultLineMapper<AccommodationRoomAvailability>();
lineMapper.setLineTokenizer(tokenizer);
lineMapper.setFieldSetMapper(new RecordFieldSetMapper());
reader.setLineMapper(lineMapper);
return reader;
}
@Bean
@Qualifier("multiResourceReader")
@StepScope
public MultiResourceItemReader<AccommodationRoomAvailability> multiResourceItemReader() throws Exception {
MultiResourceItemReader<AccommodationRoomAvailability> resourceItemReader = new MultiResourceItemReader<AccommodationRoomAvailability>();
resourceItemReader.setResources(inputResources);
resourceItemReader.setDelegate(itemReader());
resourceItemReader.setStrict(false);
resourceItemReader.setSaveState(false);
// resourceItemReader.read();
return resourceItemReader;
}
@Bean
public ItemProcessor<AccommodationRoomAvailability, String> itemProcessor() {
return new AvailabilityProcessor();
}
@Bean
public ItemWriter itemWriter() {
return new ItemWriter() {
@Override
public void write(List list) throws Exception {
}
};
}
@Bean
protected Step step1(#Qualifier("multiResourceReader") MultiResourceItemReader<AccommodationRoomAvailability> reader, ItemProcessor<AccommodationRoomAvailability, String> processor,
ItemWriter writer) {
return steps.get("step1")/*.listener(new StepListener())*/.<AccommodationRoomAvailability, String>chunk(30000).reader(reader)
.processor(processor)
.writer(writer)
.build();
}
@Bean
public Step step2() throws IOException {
FileDeletingTasklet task = new FileDeletingTasklet();
task.setResources(inputResources);
return stepBuilderFactory.get("step2")
.tasklet(task)
.build();
}
#Bean(name = "job")
public Job job(#Qualifier("step1") Step step1, Step step2) throws IOException {
return jobs.get("job")
.start(step1).on("*").to(step2).end()
// .flow(step1).on("").to(step2()).end()
.build();
}
}
Once your application context is created, the injected resources @Value("file:ftp-inbound/*.csv") will be the same during the whole lifetime of your app. That's why the reader will always read the same values.
You need to pass these resources as a parameter to your job and late-bind them in your reader with Step scope. In your example it would be something like:
@Bean
@Qualifier("multiResourceReader")
@StepScope
public MultiResourceItemReader<AccommodationRoomAvailability> multiResourceItemReader(@Value("#{jobParameters['inputResources']}") Resource[] inputResources) throws Exception {
MultiResourceItemReader<AccommodationRoomAvailability> resourceItemReader = new MultiResourceItemReader<AccommodationRoomAvailability>();
resourceItemReader.setResources(inputResources);
resourceItemReader.setDelegate(itemReader());
resourceItemReader.setStrict(false);
resourceItemReader.setSaveState(false);
return resourceItemReader;
}
Then pass input resources as a parameter to your job:
JobParameters jobParameters = new JobParametersBuilder()
.addString("inputResources", "file:ftp-inbound/*.csv")
.toJobParameters();
currently with this setup its still using the same instance for each job run
That's because your resources are always the same when they are injected in a field of your configuration class. If you use the job parameters approach I mentioned in the previous example, you will have a different instance if you run the job with a different set of files.

Spring boot batch partitioning JdbcCursorItemReader error

I have been unable to get this to work even after following Victor Jabor's very comprehensive blog example. I have followed his configuration as he described it and used all the latest dependencies. Like Victor, I am trying to read from one db and write to another. I have this working without partitioning, but I need partitioning to improve performance, as I need to be able to read 5 to 10 million rows within 5 minutes.
The following seems to work:
1) ColumnRangePartitioner
2) TaskExecutorPartitionHandler builds the correct number of step tasks based on the gridsize and spawns the correct number of threads
3) setPreparedStatementSetter from the stepExecution set by the ColumnRangePartitioner.
But when I run the application I get errors from JdbcCursorItemReader which are not consistent and which I don't understand. As a last resort I will have to debug the JdbcCursorItemReader. I am hoping to get some help before that, and hopefully it will turn out to be a configuration issue.
ERROR:
Caused by: java.sql.SQLException: Exhausted Resultset
at oracle.jdbc.driver.OracleResultSetImpl.getInt(OracleResultSetImpl.java:901) ~[ojdbc6-11.2.0.2.0.jar:11.2.0.2.0]
at org.springframework.jdbc.support.JdbcUtils.getResultSetValue(JdbcUtils.java:160) ~[spring-jdbc-4.3.4.RELEASE.jar:4.3.4.RELEASE]
at org.springframework.jdbc.core.BeanPropertyRowMapper.getColumnValue(BeanPropertyRowMapper.java:370) ~[spring-jdbc-4.3.4.RELEASE.jar:4.3.4.RELEASE]
at org.springframework.jdbc.core.BeanPropertyRowMapper.mapRow(BeanPropertyRowMapper.java:291) ~[spring-jdbc-4.3.4.RELEASE.jar:4.3.4.RELEASE]
at org.springframework.batch.item.database.JdbcCursorItemReader.readCursor(JdbcCursorItemReader.java:139) ~[spring-batch-infrastructure-3.0.7.RELEASE.jar:3.0.7.RELEASE]
Configuration classes:
@Configuration
@EnableBatchProcessing
public class BatchConfiguration {
@Bean
public ItemProcessor<Archive, Archive> processor(@Value("${etl.region}") String region) {
return new ArchiveProcessor(region);
}
@Bean
public ItemWriter<Archive> writer(@Qualifier(value = "postgres") DataSource dataSource) {
JdbcBatchItemWriter<Archive> writer = new JdbcBatchItemWriter<>();
writer.setSql("insert into tdw_src.archive (id) " +
"values (:id)");
writer.setDataSource(dataSource);
writer.setItemSqlParameterSourceProvider(new org.springframework.batch.item.database.BeanPropertyItemSqlParameterSourceProvider<>());
return writer;
}
@Bean
public Partitioner archivePartitioner(@Qualifier(value = "gmDataSource") DataSource dataSource,
@Value("ROWNUM") String column,
@Value("archive") String table,
@Value("${gm.datasource.username}") String schema) {
return new ColumnRangePartitioner(dataSource, column, schema + "." + table);
}
@Bean
public Job archiveJob(JobBuilderFactory jobs, Step partitionerStep, JobExecutionListener listener) {
return jobs.get("archiveJob")
.preventRestart()
.incrementer(new RunIdIncrementer())
.listener(listener)
.start(partitionerStep)
.build();
}
@Bean
public Step partitionerStep(StepBuilderFactory stepBuilderFactory,
Partitioner archivePartitioner,
Step step1,
@Value("${spring.batch.gridsize}") int gridSize) {
return stepBuilderFactory.get("partitionerStep")
.partitioner(step1)
.partitioner("step1", archivePartitioner)
.gridSize(gridSize)
.taskExecutor(taskExecutor())
.build();
}
#Bean(name = "step1")
public Step step1(StepBuilderFactory stepBuilderFactory, ItemReader<Archive> customReader,
ItemWriter<Archive> writer, ItemProcessor<Archive, Archive> processor) {
return stepBuilderFactory.get("step1")
.listener(customReader)
.<Archive, Archive>chunk(5)
.reader(customReader)
.processor(processor)
.writer(writer)
.build();
}
@Bean
public TaskExecutor taskExecutor(){
return new SimpleAsyncTaskExecutor();
}
@Bean
public SimpleJobLauncher getJobLauncher(JobRepository jobRepository) {
SimpleJobLauncher jobLauncher = new SimpleJobLauncher();
jobLauncher.setJobRepository(jobRepository);
return jobLauncher;
}
}
Custom Reader:-
public class CustomReader extends JdbcCursorItemReader<Archive> implements StepExecutionListener {
private StepExecution stepExecution;
@Autowired
public CustomReader(@Qualifier(value = "gmDataSource") DataSource geomangerDataSource,
@Value("${gm.datasource.username}") String schema) throws Exception {
super();
this.setSql("SELECT TMP.* FROM (SELECT ROWNUM AS ID_PAGINATION, id FROM " + schema + ".archive) TMP " +
"WHERE TMP.ID_PAGINATION >= ? AND TMP.ID_PAGINATION <= ?");
this.setDataSource(geomangerDataSource);
BeanPropertyRowMapper<Archive> rowMapper = new BeanPropertyRowMapper<>(Archive.class);
this.setRowMapper(rowMapper);
this.setFetchSize(5);
this.setSaveState(false);
this.setVerifyCursorPosition(false);
// not sure if this is needed? this.afterPropertiesSet();
}
@Override
public synchronized void beforeStep(StepExecution stepExecution) {
this.stepExecution = stepExecution;
this.setPreparedStatementSetter(getPreparedStatementSetter());
}
private PreparedStatementSetter getPreparedStatementSetter() {
ListPreparedStatementSetter listPreparedStatementSetter = new ListPreparedStatementSetter();
List<Integer> list = new ArrayList<>();
list.add(stepExecution.getExecutionContext().getInt("minValue"));
list.add(stepExecution.getExecutionContext().getInt("maxValue"));
listPreparedStatementSetter.setParameters(list);
LOGGER.debug("getPreparedStatementSetter list: " + list);
return listPreparedStatementSetter;
}
@Override
public ExitStatus afterStep(StepExecution stepExecution) {
return null;
}
}
I've got this all working.
First I needed to order my select statement in my CustomReader so the ROWNUM remains the same for all threads, and lastly I had to scope the beans by using @StepScope for each bean used in the step.
In reality I won't be using ROWNUM, since it requires ordering, which hurts performance; I will therefore use a PK column instead to get the best performance.
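Applied to the CustomReader above, the ordered query could look roughly like this sketch of the setSql(...) call in its constructor (ordering by the id column is an assumption):

// Order the rows before ROWNUM is assigned, so every partition/thread
// sees the same, deterministic numbering of the archive rows.
this.setSql(
    "SELECT TMP.* FROM (" +
    "  SELECT ROWNUM AS ID_PAGINATION, id FROM (" +
    "    SELECT id FROM " + schema + ".archive ORDER BY id" +
    "  )" +
    ") TMP " +
    "WHERE TMP.ID_PAGINATION >= ? AND TMP.ID_PAGINATION <= ?");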

Spring batch with Spring Boot terminates before children process with AsyncItemProcessor

I'm using Spring Batch with an AsyncItemProcessor and things are behaving unexpectedly. Let me show the code first.
I followed a simple example as shown in the Spring Batch project:
@EnableBatchProcessing
@SpringBootApplication
@Import({HttpClientConfigurer.class, BatchJobConfigurer.class})
public class PerfilEletricoApp {
public static void main(String[] args) throws Exception {// NOSONAR
System.exit(SpringApplication.exit(SpringApplication.run(PerfilEletricoApp.class, args)));
//SpringApplication.run(PerfilEletricoApp.class, args);
}
}
-- EDIT
If I just make the main thread sleep for a few seconds to give slf4j time to flush the logs, everything works as expected.
@EnableBatchProcessing
@SpringBootApplication
@Import({HttpClientConfigurer.class, BatchJobConfigurer.class})
public class PerfilEletricoApp {
public static void main(String[] args) throws Exception {// NOSONAR
//System.exit(SpringApplication.exit(SpringApplication.run(PerfilEletricoApp.class, args)));
ConfigurableApplicationContext context = SpringApplication.run(PerfilEletricoApp.class, args);
Thread.sleep(1000 * 5);
System.exit(SpringApplication.exit(context));
}
}
-- END OF EDIT
I'm reading a text file with a single field and then using an AsyncItemProcessor to get multithreaded processing, which consists of an HTTP GET on a URL to fetch some data. I'm also using a NoOpWriter to do nothing in the write part. I'm saving the results of the GET in the processor part of the job (using log.trace / log.warn).
@Configuration
public class HttpClientConfigurer {
// [... property and configs omitted]
@Bean
public CloseableHttpClient createHttpClient() {
// ... creates and returns a poolable http client etc
}
}
As for the Job:
@Configuration
public class BatchJobConfigurer {
@Autowired
private JobBuilderFactory jobs;
@Autowired
private StepBuilderFactory steps;
@Value("${async.tps:10}")
private Integer tps;
@Value("${com.bemobi.perfilelerico.sourcedir:/AppServer/perfil-eletrico/source-dir/}")
private String sourceDir;
@Bean
public ItemReader<String> reader() {
MultiResourceItemReader<String> reader = new MultiResourceItemReader<>();
reader.setResources( new Resource[] { new FileSystemResource(sourceDir)});
reader.setDelegate((ResourceAwareItemReaderItemStream<? extends String>) flatItemReader());
return reader;
}
@Bean
public ItemReader<String> flatItemReader() {
FlatFileItemReader<String> itemReader = new FlatFileItemReader<>();
itemReader.setLineMapper(new DefaultLineMapper<String>() {{
setLineTokenizer(new DelimitedLineTokenizer() {{
setNames(new String[] { "sample-field-001"});
}});
setFieldSetMapper(new SimpleStringFieldSetMapper<>());
}});
return itemReader;
}
@Bean
public ItemProcessor asyncItemProcessor(){
AsyncItemProcessor<String, OiPaggoResponse> asyncItemProcessor = new AsyncItemProcessor<>();
asyncItemProcessor.setDelegate(processor());
asyncItemProcessor.setTaskExecutor(getAsyncExecutor());
return asyncItemProcessor;
}
@Bean
public ItemProcessor<String,OiPaggoResponse> processor(){
return new PerfilEletricoItemProcessor();
}
/**
* Using a NoOpItemWriter<T> so we satisfy spring batch flow but don't use writer for anything else.
* @return a NoOpItemWriter<OiPaggoResponse>
*/
@Bean
public ItemWriter<OiPaggoResponse> writer() {
return new NoOpItemWriter<>();
}
@Bean
protected Step step1() throws Exception {
/*
Problem starts here: if I use processor(), everything ends nicely, but if I insist on asyncItemProcessor(), the job ends and the logs from the processor are not written to disk.
*/
return this.steps.get("step1").<String, OiPaggoResponse> chunk(10)
.reader(reader())
.processor(asyncItemProcessor())
.build();
}
@Bean
public Job job() throws Exception {
return this.jobs.get("consulta-perfil-eletrico").start(step1()).build();
}
#Bean(name = "asyncExecutor")
public TaskExecutor getAsyncExecutor()
{
ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
executor.setCorePoolSize(tps);
executor.setMaxPoolSize(tps);
executor.setQueueCapacity(tps * 1000);
executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
executor.setThreadNamePrefix("AsyncExecutor-");
return executor;
}
}
-- UPDATED WITH AsyncItemWriter (Working version)
/*Wrapped Writer*/
@Bean
public ItemWriter asyncItemWriter(){
AsyncItemWriter<OiPaggoResponse> asyncItemWriter = new AsyncItemWriter<>();
asyncItemWriter.setDelegate(writer());
return asyncItemWriter;
}
/*AsyncItemWriter defined on the steps*/
@Bean
protected Step step1() throws Exception {
return this.steps.get("step1").<String, OiPaggoResponse> chunk(10)
.reader(reader())
.processor(asyncItemProcessor())
.writer(asyncItemWriter())
.build();
}
--
Any thoughts on why the AsyncItemProcessor doesn't wait for all the children to complete before sending an OK-Completed signal to the context?
The issue is that the AsyncItemProcessor is creating Futures that no one is waiting for. Wrap your NoOpItemWriter in the AsyncItemWriter so that someone is waiting for the Futures. That will cause the job to complete as expected.
