My current Spring Boot application runs a scheduled job configured with Spring Batch, where I have a FlatFileItemReader for reading the CSV rows and a simple ItemWriter.
FlatFileItemReader<MyCsvRowDto>
ItemWriter<MyCsvRowDto>
Based on the chunk setup, the CSV rows are read one by one, and the writer receives all the data in a list.
I need to extend this logic so that I read the rows from the CSV and, additionally, a few things from repositories.
ItemReader<MyData>
ItemWriter<MyData>
where MyData contains the rows from CSV and additional things from repositories:
public class MyData {
private MyDatabaseData dbData;
private List<MyCsvRowData> csvData;
}
I am wondering whether it is still possible to do this with FlatFileItemReader somehow, or whether I need to write a custom ItemReader that reads the data from the repositories and then reads the CSV rows separately with supercsv?
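A possible approach, sketched below: keep the existing FlatFileItemReader as a delegate inside a custom ItemReader that drains it and attaches the repository data, emitting a single aggregated MyData item. MyDatabaseRepository, loadAdditionalData(), and the setters on MyData are hypothetical names, and I am assuming the CSV row type matches the MyCsvRowData used in MyData:

import java.util.ArrayList;
import java.util.List;

import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.file.FlatFileItemReader;

public class MyDataItemReader implements ItemReader<MyData> {

    private final FlatFileItemReader<MyCsvRowData> csvReader;
    private final MyDatabaseRepository repository; // hypothetical repository
    private boolean done = false; // single aggregated item, so read once

    public MyDataItemReader(FlatFileItemReader<MyCsvRowData> csvReader,
                            MyDatabaseRepository repository) {
        this.csvReader = csvReader;
        this.repository = repository;
    }

    @Override
    public MyData read() throws Exception {
        if (done) {
            return null; // signals end of input to the step
        }
        // Drain the delegate so all CSV rows end up in one MyData item.
        List<MyCsvRowData> rows = new ArrayList<>();
        MyCsvRowData row;
        while ((row = csvReader.read()) != null) {
            rows.add(row);
        }
        MyData data = new MyData();
        data.setDbData(repository.loadAdditionalData()); // hypothetical query
        data.setCsvData(rows);
        done = true;
        return data;
    }
}

Note that the delegate still needs its stream lifecycle: either register the FlatFileItemReader as a stream on the step, or have the wrapper implement ItemStreamReader and forward open(), update(), and close() to it.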
Related
Need a solution to write a composite writer with two JdbcBatchItemWriters and also different data sets
You can find an example in the spring-batch-samples repository. That sample shows how to use a composite item writer with two flat file item writers, but you can adapt it to use two JDBC batch item writers.
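For illustration, here is a minimal sketch of that adaptation, with a placeholder item type MyItem and hypothetical table and column names:

import java.util.Arrays;

import javax.sql.DataSource;

import org.springframework.batch.item.database.JdbcBatchItemWriter;
import org.springframework.batch.item.database.builder.JdbcBatchItemWriterBuilder;
import org.springframework.batch.item.support.CompositeItemWriter;

public class CompositeWriterConfig {

    // Both delegates receive every chunk, in the order they are listed.
    // If the writers are not declared as Spring beans, call
    // afterPropertiesSet() on each delegate before using the composite.
    public CompositeItemWriter<MyItem> compositeWriter(DataSource dataSource) {
        JdbcBatchItemWriter<MyItem> firstWriter = new JdbcBatchItemWriterBuilder<MyItem>()
                .dataSource(dataSource)
                .sql("INSERT INTO first_table (id, name) VALUES (:id, :name)") // hypothetical table
                .beanMapped()
                .build();

        JdbcBatchItemWriter<MyItem> secondWriter = new JdbcBatchItemWriterBuilder<MyItem>()
                .dataSource(dataSource)
                .sql("INSERT INTO second_table (id, name) VALUES (:id, :name)") // hypothetical table
                .beanMapped()
                .build();

        CompositeItemWriter<MyItem> composite = new CompositeItemWriter<>();
        composite.setDelegates(Arrays.asList(firstWriter, secondWriter));
        return composite;
    }
}

If the "different data sets" part means routing different items to different writers rather than sending every item to both, ClassifierCompositeItemWriter is the variant to look at.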
I am working on a Spring Batch application where I read from a stored procedure in the database and write the result to an XML file.
My writer is an org.springframework.batch.item.xml.StaxEventItemWriter.
I am trying to detect duplicates using the approach from this question: Spring Batch how to filter duplicated items before send it to ItemWriter.
However, in my situation I don't want to skip a duplicate but rather override the existing record already written to the XML by my ItemWriter.
How can I achieve this?
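One constraint worth noting: StaxEventItemWriter writes forward-only, so a record that has already been emitted to the file cannot be rewritten in place. A common workaround is to resolve the duplicates before the XML step, for example with a first step whose writer collects records keyed by a business id, so that a later duplicate replaces the earlier one. A rough sketch under those assumptions (MyRecord and getId() are hypothetical, and the whole data set is assumed to fit in memory):

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

import org.springframework.batch.item.ItemWriter;

public class DeduplicatingCollector implements ItemWriter<MyRecord> {

    // Insertion-ordered map: put() on an existing key replaces the value,
    // which gives exactly the "override" behaviour wanted here.
    private final Map<String, MyRecord> latestById = new LinkedHashMap<>();

    @Override
    public void write(List<? extends MyRecord> items) {
        for (MyRecord item : items) {
            latestById.put(item.getId(), item); // last occurrence wins
        }
    }

    public List<MyRecord> snapshot() {
        return new ArrayList<>(latestById.values());
    }
}

A second step can then feed snapshot() through a ListItemReader into the existing StaxEventItemWriter, so the file only ever contains the final version of each record.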
I know the BeanWrapperFieldSetMapper class depends on a POJO.
But here is the thing: if I want to take advantage of Spring Batch features but do not want to create separate jobs (I do not want to write POJOs and separate readers, writers, or mappers), how can I do this?
My requirement is to read a *.csv file that has headers, so I should be able to supply the header names in a Map or String[] and create my SQL statement based on them, instead of writing a RowMapper.
This will help me upload various files to different tables.
Is it possible to change BeanWrapperFieldSetMapper to make it suitable for mapping the values from a Map or String[]?
Also, even if I do not have headers in the *.csv file, I can construct the update statement and load the data using the chunk and delimiter settings and other advantages of Spring Batch.
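Rather than bending BeanWrapperFieldSetMapper, it may be simpler to write a small generic FieldSetMapper that returns a Map, which is essentially what you describe. A minimal sketch, assuming the header names have been set on the tokenizer (for example via DelimitedLineTokenizer.setNames(...), possibly taken from the file's first line):

import java.util.LinkedHashMap;
import java.util.Map;

import org.springframework.batch.item.file.mapping.FieldSetMapper;
import org.springframework.batch.item.file.transform.FieldSet;

public class MapFieldSetMapper implements FieldSetMapper<Map<String, String>> {

    @Override
    public Map<String, String> mapFieldSet(FieldSet fieldSet) {
        // Relies on the column names configured on the tokenizer;
        // no POJO and no per-file mapper needed.
        Map<String, String> row = new LinkedHashMap<>();
        String[] names = fieldSet.getNames();
        for (int i = 0; i < names.length; i++) {
            row.put(names[i], fieldSet.readString(i));
        }
        return row;
    }
}

On the writer side you can then build the INSERT or UPDATE statement from row.keySet() and execute it with something like JdbcTemplate.batchUpdate(), which is what lets one job load different files into different tables.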
My batch job reads data from one table, processes it, and writes it to another table.
I have a MyBatisPagingItemReader, a custom processor, and a custom writer.
Currently the custom writer INSERTS the data converted by the processor, doing a BATCH INSERT into the other table.
Now the reader will read some rows that have to be updated in the other table. In this case my writer should also be capable of batch-updating those records.
What is the best way to implement it?
Here is where I am stuck.
My writer:
public void write(final List<? extends MyData> unmodifiableItems) throws Exception {
    // unmodifiableItems will be the list of rows to be inserted
}
How will I access the list of records which need to be UPDATED?
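One option, sketched below with several assumptions: let the processor decide whether each item is new or existing (a hypothetical isUpdate() flag on MyData), then have the writer split the chunk and issue both statement types from one MyBatis batch session. The statement ids myMapper.insertRow and myMapper.updateRow are placeholders, and opening a session manually like this sidesteps Spring-managed transactions, so treat it as an illustration only:

import java.util.ArrayList;
import java.util.List;

import org.apache.ibatis.session.ExecutorType;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;
import org.springframework.batch.item.ItemWriter;

public class InsertOrUpdateWriter implements ItemWriter<MyData> {

    private final SqlSessionFactory sqlSessionFactory;

    public InsertOrUpdateWriter(SqlSessionFactory sqlSessionFactory) {
        this.sqlSessionFactory = sqlSessionFactory;
    }

    @Override
    public void write(List<? extends MyData> items) {
        List<MyData> inserts = new ArrayList<>();
        List<MyData> updates = new ArrayList<>();
        for (MyData item : items) {
            if (item.isUpdate()) {   // hypothetical flag set in the processor
                updates.add(item);
            } else {
                inserts.add(item);
            }
        }
        // One BATCH-mode session flushes all statements together.
        try (SqlSession session = sqlSessionFactory.openSession(ExecutorType.BATCH)) {
            for (MyData row : inserts) {
                session.insert("myMapper.insertRow", row); // hypothetical statement ids
            }
            for (MyData row : updates) {
                session.update("myMapper.updateRow", row);
            }
            session.commit();
        }
    }
}

Alternatively, a ClassifierCompositeItemWriter with two MyBatisBatchItemWriter delegates (one configured with the insert statement, one with the update) keeps the routing declarative and stays inside Spring's transaction management.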