I am trying to run a Spring Batch job infinitely. The main motive is to never let Spring Batch sit idle.
I am using the code below to run the job infinitely:
private JobExecution execution = null;
@Scheduled(cron = "0 */2 * * * ?")
public void perform() throws JobExecutionAlreadyRunningException, JobRestartException, JobInstanceAlreadyCompleteException, JobParametersInvalidException {
System.out.println("=== STATUS STARTED ====");
if (execution != null && execution.isRunning()) {
System.out.println("Job is running. Please wait.");
return;
}
JobParameters jobParameters = new JobParametersBuilder().addString("JobId", String.valueOf(System.currentTimeMillis())).addDate("date", new Date()).addLong("time", System.currentTimeMillis()).toJobParameters();
execution = jobLauncher.run(job, jobParameters);
if (!execution.getStatus().isRunning()) {
perform();
}
System.out.println("STATUS :: " + execution.getStatus());
}
First we check whether the job is running or not. If it is not running, we call the same method again. Now the job runs infinitely.
My question is: is this approach good or bad? Is there any other solution?
I have another query: if no data is available, I want to break the infinite loop.
How can I break that loop?
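For reference, one way to detect the no-data case and stop re-running is to check the read count of the finished execution (a minimal sketch, assuming a single-step job; getStepExecutions() and getReadCount() are standard Spring Batch APIs):
execution = jobLauncher.run(job, jobParameters);

// Sum the items read across all step executions of this run.
long itemsRead = execution.getStepExecutions().stream()
        .mapToLong(StepExecution::getReadCount)
        .sum();

if (itemsRead == 0) {
    // No data was available: stop re-running and let the next cron tick retry.
    System.out.println("No data found, breaking the loop.");
    return;
}
if (!execution.getStatus().isRunning()) {
    perform(); // re-run only when there was data
}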
FYI, below is the batch configuration code:
@Configuration
public class JobConfiguration {
@Autowired
private JobBuilderFactory jobBuilderFactory;
@Autowired
private StepBuilderFactory stepBuilderFactory;
@Autowired
private DataSource dataSource;
private Resource outputResource = new FileSystemResource("path\\output.csv");
private Resource inputResource = new FileSystemResource("path\\input.csv");
@Bean
public ColumnRangePartitioner partitioner() {
ColumnRangePartitioner columnRangePartitioner = new ColumnRangePartitioner();
columnRangePartitioner.setColumn("id");
columnRangePartitioner.setDataSource(dataSource);
columnRangePartitioner.setTable("customer");
return columnRangePartitioner;
}
@Bean
@StepScope
public FlatFileItemReader<Customer> pagingItemReader(@Value("#{stepExecutionContext['minValue']}") Long minValue, @Value("#{stepExecutionContext['maxValue']}") Long maxValue) {
System.out.println("reading " + minValue + " to " + maxValue);
// Create reader instance
FlatFileItemReader<Customer> reader = new FlatFileItemReader<>();
// Set input file location
reader.setResource(inputResource);
// Set number of lines to skip. Use it if the file has header rows.
reader.setLinesToSkip(1);
// Configure how each line will be parsed and mapped to different values
reader.setLineMapper(new DefaultLineMapper<Customer>() {
{
// 3 columns in each row
setLineTokenizer(new DelimitedLineTokenizer() {
{
setNames(new String[] { "id", "firstName", "lastName" });
}
});
// Map values to the Customer class
setFieldSetMapper(new BeanWrapperFieldSetMapper<Customer>() {
{
setTargetType(Customer.class);
}
});
}
});
return reader;
}
@Bean
@StepScope
public FlatFileItemWriter<Customer> customerItemWriter() {
// Create writer instance
FlatFileItemWriter<Customer> writer = new FlatFileItemWriter<>();
// Set output file location
writer.setResource(outputResource);
// All job repetitions should "append" to same output file
writer.setAppendAllowed(true);
// Name field values sequence based on object properties
writer.setLineAggregator(new DelimitedLineAggregator<Customer>() {
{
setDelimiter(",");
setFieldExtractor(new BeanWrapperFieldExtractor<Customer>() {
{
setNames(new String[] { "id", "firstName", "lastName" });
}
});
}
});
return writer;
}
// Master
@Bean
public Step step1() {
return stepBuilderFactory.get("step1").partitioner(slaveStep().getName(), partitioner()).step(slaveStep()).gridSize(12).taskExecutor(new SimpleAsyncTaskExecutor()).build();
}
// slave step
@Bean
public Step slaveStep() {
return stepBuilderFactory.get("slaveStep").<Customer, Customer>chunk(1000).reader(pagingItemReader(null, null)).writer(customerItemWriter()).build();
}
@Bean
public Job job() {
return jobBuilderFactory.get("job").start(step1()).build();
}
}
Another way of running the steps continuously:
<job id="notificationBatchJobProcess"
xmlns="http://www.springframework.org/schema/batch"
job-repository="jobRepository">
<step id="startLogStep" next="execute">
<tasklet ref="ref1" />
</step>
<step id="execute">
<batch:tasklet ref="ref2" />
<batch:next on="COMPLETED" to="endLogStep" />
</step>
<step id="endLogStep">
<batch:tasklet ref="ref3" />
<batch:next on="COMPLETED" to="startLogStep" />
</step>
</job>
What I am trying to achieve with the above code: once the endLogStep task is completed, startLogStep is called again.
This process continues infinitely until an exception occurs.
Is this the correct way to run those jobs?
Related
The implemented function sends an LMS to the user at the alarm time.
It sends a total of 4 alarms per day (9:00, 13:00, 19:00, 21:00).
A log entry was recorded regardless of success.
Nothing was recorded in the log, but when I looked at the batch data in the DB, I found unintended COMPLETED executions.
Issue:
The batch executed successfully at 9:00 and 13:00 on the 18th.
But at 13:37, which is not even a scheduled time, it executed (and FAILED).
Subsequently, it executed at 13:38, 13:40, 13:42, and 13:44 (all COMPLETED).
Q1. Why was it executed when it wasn't even the batch execution time?
Q2. I save a log entry whenever the batch executes and sends an SMS. The log was printed normally at 9:00 and 13:00.
But no log was saved for the non-scheduled runs (13:37, 38, 40, 42, 44).
I checked the Spring Boot service and the Tomcat service running together on one server; CPU and memory usage are normal.
Batch problem environment:
Spring Boot (2.2.6.RELEASE)
Spring Boot - embedded Tomcat
===== Start Scheduler =====
@Component
public class DosageAlarmScheduler {
public static final int MORNING_HOUR = 9;
public static final int LUNCH_HOUR = 13;
public static final int DINNER_HOUR = 19;
public static final int BEFORE_SLEEP_HOUR = 21;
@Scheduled(cron = "0 0 */1 * * *") // every hour
public void executeDosageAlarmJob() {
LocalDateTime nowDateTime = LocalDateTime.now();
try {
if(isExecuteTime(nowDateTime)) {
log.info("[Send LMS], {}", nowDateTime);
EatFixCd eatFixCd = currentEatFixCd(nowDateTime);
jobLauncher.run(
alarmJob,
new JobParametersBuilder()
.addString("currentDate", nowDateTime.toString())
.addString("eatFixCodeValue", eatFixCd.getCodeValue())
.toJobParameters()
);
} else {
log.info("[Not Send LMS], {}", nowDateTime);
}
} catch (JobExecutionAlreadyRunningException e) {
log.error("[JobExecutionAlreadyRunningException]", e);
} catch (JobRestartException e) {
log.error("[JobRestartException]", e);
} catch (JobInstanceAlreadyCompleteException e) {
log.error("[JobInstanceAlreadyCompleteException]", e);
} catch (JobParametersInvalidException e) {
log.error("[JobParametersInvalidException]", e);
} catch(Exception e) {
log.error("[Exception]", e);
}
}
/* Start private method */
private boolean isExecuteTime(LocalDateTime nowDateTime) {
return nowDateTime.getHour() == MORNING_HOUR
|| nowDateTime.getHour() == LUNCH_HOUR
|| nowDateTime.getHour() == DINNER_HOUR
|| nowDateTime.getHour() == BEFORE_SLEEP_HOUR;
}
private EatFixCd currentEatFixCd(LocalDateTime nowDateTime) {
switch(nowDateTime.getHour()) {
case MORNING_HOUR:
return EatFixCd.MORNING;
case LUNCH_HOUR:
return EatFixCd.LUNCH;
case DINNER_HOUR:
return EatFixCd.DINNER;
case BEFORE_SLEEP_HOUR:
return EatFixCd.BEFORE_SLEEP;
default:
throw new RuntimeException("Not Dosage Time");
}
}
/* End private method */
}
===== End Scheduler =====
===== Start Job =====
@Configuration
public class DosageAlarmConfiguration {
private final int chunkSize = 20;
private final JobBuilderFactory jobBuilderFactory;
private final StepBuilderFactory stepBuilderFactory;
private final EntityManagerFactory entityManagerFactory;
@Bean
public Job dosageAlarmJob() {
log.info("[dosageAlarmJob execute]");
return jobBuilderFactory.get("dosageAlarmJob")
.start(dosageAlarmStep(null, null)).build();
}
@Bean
@JobScope
public Step dosageAlarmStep(
@Value("#{jobParameters[currentDate]}") String currentDate,
@Value("#{jobParameters[eatFixCodeValue]}") String eatFixCodeValue
) {
log.info("[dosageAlarm Step excute]");
return stepBuilderFactory.get("dosageAlarmStep")
.<Object[], DosageReceiverInfoDto>chunk(chunkSize)
.reader(dosageAlarmReader(currentDate, eatFixCodeValue))
.processor(dosageAlarmProcessor(currentDate, eatFixCodeValue))
.writer(dosageAlarmWriter(currentDate, eatFixCodeValue))
.build();
}
@Bean
@StepScope
public JpaPagingItemReader<Object[]> dosageAlarmReader(
@Value("#{jobParameters[currentDate]}") String currentDate,
@Value("#{jobParameters[eatFixCodeValue]}") String eatFixCodeValue
) {
log.info("[dosageAlarm Reader execute : {}, {}]", currentDate, eatFixCodeValue);
if(currentDate == null) {
return null;
} else {
JpaPagingItemReader<Object[]> jpaPagingItemReader = new JpaPagingItemReader<>();
jpaPagingItemReader.setName("dosageAlarmReader");
jpaPagingItemReader.setEntityManagerFactory(entityManagerFactory);
jpaPagingItemReader.setPageSize(chunkSize);
jpaPagingItemReader.setQueryString("select das from DosageAlarm das where :currentDate between das.startDate and das.endDate ");
HashMap<String, Object> parameterValues = new HashMap<>();
parameterValues.put("currentDate", LocalDateTime.parse(currentDate).toLocalDate());
jpaPagingItemReader.setParameterValues(parameterValues);
return jpaPagingItemReader;
}
}
@Bean
@StepScope
public ItemProcessor<Object[], DosageReceiverInfoDto> dosageAlarmProcessor(
@Value("#{jobParameters[currentDate]}") String currentDate,
@Value("#{jobParameters[eatFixCodeValue]}") String eatFixCodeValue
) {
log.info("[dosageAlarm Processor execute : {}, {}]", currentDate, eatFixCodeValue);
...
convert to DosageReceiverInfoDto
...
}
@Bean
@StepScope
public ItemWriter<DosageReceiverInfoDto> dosageAlarmWriter(
@Value("#{jobParameters[currentDate]}") String currentDate,
@Value("#{jobParameters[eatFixCodeValue]}") String eatFixCodeValue
) {
log.info("[dosageAlarm Writer execute : {}, {}]", currentDate, eatFixCodeValue);
...
make List
...
if(reqMessageDtoList != null) {
sendMessages(reqMessageDtoList);
} else {
log.info("[reqMessageDtoList not Exist]");
}
}
public SmsExternalSendResDto sendMessages(List<reqMessagesDto> reqMessageDtoList) {
log.info("[receiveList] smsTypeCd : {}, contentTypeCd : {}, messages : {}", smsTypeCd.LMS, contentTypeCd.COMM, reqMessageDtoList);
...
send Messages
}
}
===== End Job =====
Thank you.
I want to fix my problem, and I hope this question helps other people as well.
I have a fixed-length input file that I read using Spring Batch.
I have already implemented Job, Step, Processor, etc.
Here is the sample code:
@Configuration
public class BatchConfig {
private JobBuilderFactory jobBuilderFactory;
private StepBuilderFactory stepBuilderFactory;
@Value("${inputFile}")
private Resource resource;
@Autowired
public BatchConfig(JobBuilderFactory jobBuilderFactory, StepBuilderFactory stepBuilderFactory) {
this.jobBuilderFactory = jobBuilderFactory;
this.stepBuilderFactory = stepBuilderFactory;
}
@Bean
public Job job() {
return this.jobBuilderFactory.get("JOB-Load")
.start(fileReadingStep())
.build();
}
@Bean
public Step fileReadingStep() {
return stepBuilderFactory.get("File-Read-Step1")
.<Employee,EmpOutput>chunk(1000)
.reader(itemReader())
.processor(new CustomFileProcesser())
.writer(new CustomFileWriter())
.faultTolerant()
.skipPolicy(skipPolicy())
.build();
}
@Bean
public FlatFileItemReader<Employee> itemReader() {
FlatFileItemReader<Employee> flatFileItemReader = new FlatFileItemReader<Employee>();
flatFileItemReader.setResource(resource);
flatFileItemReader.setName("File-Reader");
flatFileItemReader.setLineMapper(LineMapper());
return flatFileItemReader;
}
@Bean
public LineMapper<Employee> LineMapper() {
DefaultLineMapper<Employee> defaultLineMapper = new DefaultLineMapper<Employee>();
FixedLengthTokenizer fixedLengthTokenizer = new FixedLengthTokenizer();
fixedLengthTokenizer.setNames(new String[] { "employeeId", "employeeName", "employeeSalary" });
fixedLengthTokenizer.setColumns(new Range[] { new Range(1, 9), new Range(10, 20), new Range(20, 30)});
fixedLengthTokenizer.setStrict(false);
defaultLineMapper.setLineTokenizer(fixedLengthTokenizer);
defaultLineMapper.setFieldSetMapper(new CustomFieldSetMapper());
return defaultLineMapper;
}
@Bean
public JobSkipPolicy skipPolicy() {
return new JobSkipPolicy();
}
}
For the processing I have added some sample code of what I need, but when I add a BufferedReader here, the job takes much more time.
@Component
public class CustomFileProcesser implements ItemProcessor<Employee, EmpOutput> {
@Override
public EmpOutput process(Employee item) throws Exception {
EmpOutput emp = new EmpOutput();
emp.setEmployeeSalary(checkSal(item.getEmployeeSalary()));
return emp;
}
public String checkSal(String sal) {
// need to read the another file
// required to do some kind of validation
// after that final result need to return
File f1 = new File("C:\\Users\\John\\New\\salary.txt");
// Try-with-resources so the reader is always closed.
try (BufferedReader br = new BufferedReader(new FileReader(f1))) {
String s = br.readLine();
while (s != null) {
String value = s.substring(5, 7);
if (value.equals(sal))
sal = value;
else
sal = "5000";
s = br.readLine();
}
} catch (Exception e) {
e.printStackTrace();
}
return sal;
}
// Other fields need to be checked by reading different files.
// These new files contain more than 30k records each.
// All are fixed-length files.
// I need to get each field by giving its index.
}
While processing one or more fields, I need to check against another file by reading it (a file I read from the file system or cloud storage).
While processing the data for 5 fields, I need to read 5 different files; I check the field details inside those files and then generate the result, which is processed further.
You can cache the content of the file in memory and do your check against the cache instead of re-reading the entire file from disk for each item.
You can find an example here: Spring Batch With Annotation and Caching.
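For example, here is a minimal sketch of that approach, loading the lookup file once per step through a StepExecutionListener (the salary.txt path and substring positions are taken from the question; the rest is illustrative):
@Component
public class CustomFileProcesser implements ItemProcessor<Employee, EmpOutput>, StepExecutionListener {

    // In-memory cache of the lookup values, loaded once per step.
    private final Set<String> salaryCache = new HashSet<>();

    @Override
    public void beforeStep(StepExecution stepExecution) {
        // Read the lookup file a single time before any item is processed.
        try (BufferedReader br = new BufferedReader(new FileReader("C:\\Users\\John\\New\\salary.txt"))) {
            String line;
            while ((line = br.readLine()) != null) {
                salaryCache.add(line.substring(5, 7));
            }
        } catch (IOException e) {
            throw new IllegalStateException("Unable to load salary lookup file", e);
        }
    }

    @Override
    public EmpOutput process(Employee item) throws Exception {
        EmpOutput emp = new EmpOutput();
        // Check against the cache instead of re-reading the file for each item.
        String sal = item.getEmployeeSalary();
        emp.setEmployeeSalary(salaryCache.contains(sal) ? sal : "5000");
        return emp;
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        salaryCache.clear(); // free the memory once the step is done
        return stepExecution.getExitStatus();
    }
}
Since the processor implements StepExecutionListener, Spring Batch should register it as a step listener automatically when it is set on the step builder, so the cache is loaded before the first chunk.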
Basically I have a Spring Batch job that queries a database and implements Partitioner to get the partitions, assigning them to a ThreadPoolTaskExecutor in a slave step.
The reader reads from the database. The writer loads the data into a CSV file in Azure Blob Storage.
The partitioner and reader work fine. The writer writes to one file, then closes it, and the other partitions cannot finish because the stream is closed. I get the following error:
Reading: market1
Reading: market2
Reading: market3
Reading: market4
Reading: market5
Writter: /upload-demo/market3_2021-06-01.csv
Writter: /upload-demo/market5_2021-06-01.csv
Writter: /upload-demo/market4_63_2021-06-01.csv
Writter: /upload-demo/market2_2021-06-01.csv
Writter: /upload-demo/market1_11_2021-06-01.csv
2021-06-02 08:24:42.304 ERROR 20356 --- [ taskExecutor-3] c.a.storage.common.StorageOutputStream : Stream is already closed.
2021-06-02 08:24:42.307 WARN 20356 --- [ taskExecutor-3] o.s.b.f.support.DisposableBeanAdapter : Destroy method 'close' on bean with name 'scopedTarget.writer2' threw an exception: java.lang.RuntimeException: Stream is already closed.
Reading: market6
Writter: /upload-demo/market6_2021-06-01.csv
Here is my Batch Configuration:
@EnableBatchProcessing
@Configuration
public class BatchConfig extends DefaultBatchConfigurer {
String connectionString = "azureConnectionString";
String containerName = "upload-demo";
String endpoint = "azureHttpsEndpoint";
String accountName ="azureAccountName";
String accountKey = "accountKey";
StorageSharedKeyCredential credential = new StorageSharedKeyCredential(accountName, accountKey);
BlobServiceClient client = new BlobServiceClientBuilder().connectionString(connectionString).endpoint(endpoint).buildClient();
@Autowired
private StepBuilderFactory steps;
@Autowired
private JobBuilderFactory jobs;
@Autowired
@Qualifier("verticaDb")
private DataSource verticaDataSource;
@Autowired
private PlatformTransactionManager transactionManager;
@Autowired
private ConsoleItemWriter consoleItemWriter;
@Autowired
private ItemWriter itemWriter;
@Bean
public Job job() throws Exception {
return jobs.get("job1")
.start(masterStep(null, null))
.incrementer(new RunIdIncrementer())
.build();
}
@Bean
public ThreadPoolTaskExecutor taskExecutor() {
ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
taskExecutor.setCorePoolSize(5);
taskExecutor.setMaxPoolSize(10);
taskExecutor.initialize();
return taskExecutor;
}
@Bean
@JobScope
public Step masterStep(@Value("#{jobParameters['startDate']}") String startDate,
@Value("#{jobParameters['endDate']}") String endDate) throws Exception {
return steps.get("masterStep")
.partitioner(slaveStep().getName(), new RangePartitioner(verticaDataSource, startDate, endDate))
.step(slaveStep())
.gridSize(5)
.taskExecutor(taskExecutor())
.build();
}
@Bean
public Step slaveStep() throws Exception {
return steps.get("slaveStep")
.<MarketData, MarketData>chunk(100)
.reader(pagingItemReader(null, null, null))
.faultTolerant()
.skip(NullPointerException.class)
.skipPolicy(new AlwaysSkipItemSkipPolicy())
.writer(writer2(null, null, null)) //consoleItemWriter
.build();
}
@Bean
@StepScope
public JdbcPagingItemReader pagingItemReader(
@Value("#{stepExecutionContext['MarketName']}") String marketName,
@Value("#{jobParameters['startDate']}") String startDate,
@Value("#{jobParameters['endDate']}") String endDate
) throws Exception {
System.out.println("Reading: " + marketName);
SqlPagingQueryProviderFactoryBean provider = new SqlPagingQueryProviderFactoryBean();
Map<String, Order> sortKey = new HashMap<>();
sortKey.put("xbin", Order.ASCENDING);
sortKey.put("ybin", Order.ASCENDING);
provider.setDataSource(this.verticaDataSource);
provider.setDatabaseType("POSTGRES");
provider.setSelectClause("SELECT MARKET AS market, EPSG AS epsg, XBIN AS xbin, YBIN AS ybin, " +
"LATITUDE AS latitude, LONGITUDE AS longitude, " +
"SUM(TOTALUPLINKVOLUME) AS totalDownlinkVol, SUM(TOTALDOWNLINKVOLUME) AS totalUplinkVol");
provider.setFromClause("FROM views.geo_analytics");
provider.setWhereClause(
"WHERE market='" + marketName + "'" +
" AND STARTTIME >= '" + startDate + "'" +
" AND STARTTIME < '" + endDate + "'" +
" AND TOTALUPLINKVOLUME IS NOT NULL" +
" AND TOTALUPLINKVOLUME > 0" +
" AND TOTALDOWNLINKVOLUME IS NOT NULL" +
" AND TOTALDOWNLINKVOLUME > 0" +
" AND EPSG IS NOT NULL" +
" AND LATITUDE IS NOT NULL" +
" AND LONGITUDE IS NOT NULL" +
" AND XBIN IS NOT NULL" +
" AND YBIN IS NOT NULL"
);
provider.setGroupClause("GROUP BY XBIN, YBIN, MARKET, EPSG, LATITUDE, LONGITUDE");
provider.setSortKeys(sortKey);
JdbcPagingItemReader reader = new JdbcPagingItemReader();
reader.setDataSource(this.verticaDataSource);
reader.setQueryProvider(provider.getObject());
reader.setFetchSize(1000);
reader.setRowMapper(new BeanPropertyRowMapper() {
{
setMappedClass((MarketData.class));
}
});
return reader;
}
@Bean
@StepScope
public FlatFileItemWriter<MarketData> writer2(@Value("#{jobParameters['yearMonth']}") String yearMonth,
@Value("#{stepExecutionContext['marketName']}") String marketName,
@Value("#{jobParameters['startDate']}") String startDate) throws URISyntaxException, InvalidKeyException, StorageException, IOException {
AZBlobWriter<MarketData> writer = new AZBlobWriter<>();
String fullPath =marketName + "_" + startDate + ".csv";
String resourceString = "azure-blob://upload-demo/" + fullPath;
CloudStorageAccount storageAccount = CloudStorageAccount.parse(connectionString);
CloudBlobClient blobClient = storageAccount.createCloudBlobClient();
CloudBlobContainer container2 = blobClient.getContainerReference(containerName);
container2.createIfNotExists();
AzureStorageResourcePatternResolver storageResourcePatternResolver = new AzureStorageResourcePatternResolver(client);
Resource resource = storageResourcePatternResolver.getResource(resourceString);
System.out.println("Writter: " + resource.getURI().getPath().toString());
writer.setResource(resource);
writer.setStorage(container2);
writer.setLineAggregator(new DelimitedLineAggregator<MarketData>() {
{
setDelimiter(",");
setFieldExtractor(new BeanWrapperFieldExtractor<MarketData>() {
{
setNames(new String[] {
"market",
"epsg",
"xbin",
"ybin",
"latitude",
"longitude",
"totalDownlinkVol",
"totalUplinkVol"
});
}
});
}
});
return writer;
}
}
Previously I ran into other issues, such as setting up the Resource for the FlatFileItemWriter to Azure Blob: Spring Batch / Azure Storage account blob resource [container"foo", blob='bar'] cannot be resolved to absolute file path.
As suggested by @Mahmoud Ben Hassine, I made an implementation of the FlatFileItemWriter for Azure Blob.
The implementation I used as a base (for GCP) is from this post: how to configure FlatFileItemWriter to output the file to a ByteArrayRecource?
Here is the Azure Blob implementation:
public class AZBlobWriter<T> extends FlatFileItemWriter<T> {
private CloudBlobContainer storage;
private Resource resource;
private static final String DEFAULT_LINE_SEPARATOR = System.getProperty("line.separator");
private OutputStream os;
private String lineSeparator = DEFAULT_LINE_SEPARATOR;
@Override
public void write(List<? extends T> items) throws Exception {
StringBuilder lines = new StringBuilder();
for (T item : items) {
lines.append(item).append(lineSeparator);
}
byte[] bytes = lines.toString().getBytes();
try {
os.write(bytes);
}
catch (IOException e) {
throw new WriteFailedException("Could not write data. The file may be corrupt.", e);
}
os.flush();
}
@Override
public void open(ExecutionContext executionContext) {
try {
os = ((WritableResource)resource).getOutputStream();
String bucket = resource.getURI().getHost();
String filePath = resource.getURI().getPath().substring(1);
CloudBlockBlob blob = storage.getBlockBlobReference(filePath);
} catch (IOException e) {
e.printStackTrace();
} catch (StorageException e) {
e.printStackTrace();
} catch (URISyntaxException e) {
e.printStackTrace();
}
}
@Override
public void update(ExecutionContext executionContext) {
}
@Override
public void close() {
super.close();
try {
os.close();
} catch (IOException e) {
e.printStackTrace();
}
}
public void setStorage(CloudBlobContainer storage) {
this.storage = storage;
}
@Override
public void setResource(Resource resource) {
this.resource = resource;
}
}
Any help is greatly appreciated. My apologies for the "dirty code", as I am still testing/developing it.
Thanks, Markus.
You did not share the entire stack trace to see when this error happens exactly, but it seems that the close method is called more than once. I think this is not due to a concurrency issue, as I see you are using one writer per thread in a partitioned step. So I would make this method "re-entrant" by checking whether the output stream is already closed before closing it (there is no isClosed method on an output stream, so you can use a custom boolean around that).
That said, I would first confirm that the close method is called twice and, if so, investigate why that is and fix the root cause.
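For example, a minimal sketch of such a guard in AZBlobWriter (the closed flag is new, not an existing field):
private boolean closed = false; // tracks whether the stream has already been closed

@Override
public void close() {
    super.close();
    if (closed) {
        return; // re-entrant: a second close() call becomes a no-op
    }
    try {
        os.close();
        closed = true;
    } catch (IOException e) {
        e.printStackTrace();
    }
}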
I have a header that looks like this:
// Writer
@Bean(name = "cms200Writer")
@StepScope
public FlatFileItemWriter<Cms200Item> cmsWriter(@Value("#{jobExecutionContext}") Map<Object, Object> ec, //
@Qualifier("cms200LineAggregator") FormatterLineAggregator<Cms200Item> lineAgg) throws IOException {
@SuppressWarnings("unchecked")
String fileName = ((Map<String, MccFtpFile>) ec.get(AbstractSetupTasklet.BATCH_FTP_FILES)).get("cms").getLocalFile();
//Ensure the file can exist.
PrintWriter fos = getIoHarness().getFileOutputStream(fileName);
fos.close();
FlatFileItemWriter<Cms200Item> writer = new FlatFileItemWriter<>();
writer.setResource(new FileSystemResource(fileName));
writer.setLineAggregator(lineAgg);
Calendar cal = Calendar.getInstance();
Date date = cal.getTime();
DateFormat dateFormat = new SimpleDateFormat("HH:mm:ss");
String formattedDate=dateFormat.format(date);
writer.setHeaderCallback(new FlatFileHeaderCallback() {
public void writeHeader(Writer writer) throws IOException {
writer.write(" Test Company. " + formattedDate);
writer.write("\n CMS200 CUSTOMER SHIPMENT MANIFEST AUTHORIZATION BY CUSTOMER NAME Page 1");
writer.write("\n\n");
writer.write(" CUSTOMER NAME CITY ST CONTROL MNFST ID AUTH CODE I03 CLS EDI EXPRESS POV MOST CURRENT DEACTIVE");
writer.write("\n");
writer.write(" NBR TRL 214 WORK ACCESS DATE ");
}
});
return writer;
}
I want to print this header every time 53 records are processed. I can't figure out how to implement that logic in my Spring Batch job. I have the writeCount added to my execution context, but I am not sure how to access it here, or whether that's the correct approach.
The writer I posted is in my BatchConfiguration.java file
EDIT:
Below is my file step, with the chunk size added:
@Bean(name = "cms200FileStep")
public Step createFileStep(StepBuilderFactory stepFactory, //
@Qualifier("cms200Reader") ItemReader<Cms200Item> reader, //
Cms200Processor processor, //
@Qualifier("cms200Writer") ItemWriter<Cms200Item> writer) {
return stepFactory.get("cms200FileStep") //
.<Cms200Item, Cms200Item>chunk(100000) //
.reader(reader) //
.processor(processor) //
.writer(writer).chunk(53) //
.allowStartIfComplete(true)//
.build();//
}
Edit: Added job config
// Job
@Bean(name = "mccCMSCLRPTjob")
public Job mccCmsclrptjob(JobBuilderFactory jobFactory, //
@Qualifier("cms200SetupStep") Step setupStep, //
@Qualifier("cms200FileStep") Step fileStep, //
@Qualifier("putFtpFilesStep") Step putFtpStep, //
@Qualifier("cms200TeardownStep") Step teardownStep, //
@Autowired SingleInstanceListener listener,
@Autowired ChunkSizeListener chunkListener) { //
return jobFactory.get("mccCMSCLRPTjob") //
.incrementer(new RunIdIncrementer()) //
.listener(listener) //
.start(setupStep) //
.next(fileStep) //
.next(putFtpStep) //
.next(teardownStep) //
.build();
}
Edit: adding the listener
@Bean(name = "cms200FileStep")
public Step createFileStep(StepBuilderFactory stepFactory, //
@Qualifier("cms200Reader") ItemReader<Cms200Item> reader, //
Cms200Processor processor, //
@Qualifier("cms200Writer") ItemWriter<Cms200Item> writer,
@Autowired ChunkSizeListener listener) {
return stepFactory.get("cms200FileStep") //
.<Cms200Item, Cms200Item>chunk(100000) //
.reader(reader) //
.processor(processor) //
.writer(writer).chunk(53) //
.allowStartIfComplete(true)//
.listener(listener) //
.build();//
}
EDIT: After a lot of back and forth this is where I'm at
// Utility Methods
@Bean(name = "cms200FileStep")
public Step createFileStep(StepBuilderFactory stepFactory, Map<Object, Object> ec, //
@Qualifier("cms200Reader") ItemReader<Cms200Item> reader, //
Cms200Processor processor, //
@Qualifier("cms200Writer") ItemWriter<Cms200Item> writer) throws IOException {
@SuppressWarnings("unchecked")
String fileName = ((Map<String, MccFtpFile>) ec.get(AbstractSetupTasklet.BATCH_FTP_FILES)).get("cms").getLocalFile();
return stepFactory.get("cms200FileStep") //
.<Cms200Item, Cms200Item>chunk(100000) //
.reader(reader) //
.processor(processor) //
.writer(writer).chunk(53) //
.allowStartIfComplete(true)//
// .listener((ChunkListener) listener) //
.listener((ChunkListener) new ChunkSizeListener(new File(fileName))) //
.build();//
}
The FlatFileHeaderCallback is called only once before the chunk-oriented step, aka before all chunks.
I want to print this header every time 53 records are processed
What you can do is set the chunk-size to 53 and use a ChunkListener or ItemWriteListener to write the required data.
EDIT: Add an example
class MyChunkListener extends StepListenerSupport {
private FileWriter fileWriter;
public MyChunkListener(File file) throws IOException {
this.fileWriter = new FileWriter(file, true);
}
@Override
public void beforeChunk(ChunkContext context) {
try {
fileWriter.write("your custom header");
fileWriter.flush();
} catch (IOException e) {
System.err.println("Unable to write header to file");
}
}
@Override
public ExitStatus afterStep(StepExecution stepExecution) {
try {
fileWriter.close();
} catch (IOException e) {
System.err.println("Unable to close writer");
}
return super.afterStep(stepExecution);
}
}
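You would then set the chunk size to 53 and register the listener on the step; here is a sketch based on the step definition from your question (the single chunk(53) call replaces the two conflicting chunk(...) calls, and the output file argument is hypothetical):
@Bean(name = "cms200FileStep")
public Step createFileStep(StepBuilderFactory stepFactory,
        @Qualifier("cms200Reader") ItemReader<Cms200Item> reader,
        Cms200Processor processor,
        @Qualifier("cms200Writer") ItemWriter<Cms200Item> writer) throws IOException {
    return stepFactory.get("cms200FileStep")
            // one chunk = 53 items, so beforeChunk() runs every 53 records
            .<Cms200Item, Cms200Item>chunk(53)
            .reader(reader)
            .processor(processor)
            .writer(writer)
            .allowStartIfComplete(true)
            // cast needed because the listener implements several listener interfaces
            .listener((ChunkListener) new MyChunkListener(new File("/path/to/output.txt"))) // hypothetical path
            .build();
}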
I have a Spring Integration application that normally polls daily for a file via SFTP using a cron trigger. But if it doesn't find the file it expects, it should poll every x minutes via a periodic trigger, up to y attempts. To do this I use the following component:
@Component
public class RetryCompoundTriggerAdvice extends AbstractMessageSourceAdvice {
private final static Logger logger = LoggerFactory.getLogger(RetryCompoundTriggerAdvice.class);
private final CompoundTrigger compoundTrigger;
private final Trigger override;
private final ApplicationProperties applicationProperties;
private final Mail mail;
private int attempts = 0;
public RetryCompoundTriggerAdvice(CompoundTrigger compoundTrigger,
@Qualifier("secondaryTrigger") Trigger override,
ApplicationProperties applicationProperties,
Mail mail) {
this.compoundTrigger = compoundTrigger;
this.override = override;
this.applicationProperties = applicationProperties;
this.mail = mail;
}
@Override
public boolean beforeReceive(MessageSource<?> source) {
return true;
}
@Override
public Message<?> afterReceive(Message<?> result, MessageSource<?> source) {
final int maxOverrideAttempts = applicationProperties.getMaxFileRetry();
attempts++;
if (result == null && attempts < maxOverrideAttempts) {
logger.info("Unable to find load file after " + attempts + " attempt(s). Will reattempt");
this.compoundTrigger.setOverride(this.override);
} else if (result == null && attempts >= maxOverrideAttempts) {
mail.sendAdminsEmail("Missing File");
attempts = 0;
this.compoundTrigger.setOverride(null);
}
else {
attempts = 0;
this.compoundTrigger.setOverride(null);
logger.info("Found load file");
}
return result;
}
public void setOverrideTrigger() {
this.compoundTrigger.setOverride(this.override);
}
public CompoundTrigger getCompoundTrigger() {
return compoundTrigger;
}
}
If a file doesn't exist, this works great. That is, the override (i.e. the periodic trigger) takes effect and polls every x minutes, up to y attempts.
However, if a file does exist but it's not the expected file (e.g. the data is for the wrong date), another class (that reads the file) calls the setOverrideTrigger method of the RetryCompoundTriggerAdvice class. But afterReceive is not subsequently called every x minutes. Why would this be?
Here's more of the application code:
SftpInboundFileSynchronizer:
@Bean
public SftpInboundFileSynchronizer sftpInboundFileSynchronizer() {
SftpInboundFileSynchronizer fileSynchronizer = new SftpInboundFileSynchronizer(sftpSessionFactory());
fileSynchronizer.setDeleteRemoteFiles(false);
fileSynchronizer.setRemoteDirectory(applicationProperties.getSftpDirectory());
CompositeFileListFilter<ChannelSftp.LsEntry> compositeFileListFilter = new CompositeFileListFilter<ChannelSftp.LsEntry>();
compositeFileListFilter.addFilter(new SftpPersistentAcceptOnceFileListFilter(store, "sftp"));
compositeFileListFilter.addFilter(new SftpSimplePatternFileListFilter(applicationProperties.getLoadFileNamePattern()));
fileSynchronizer.setFilter(compositeFileListFilter);
fileSynchronizer.setPreserveTimestamp(true);
return fileSynchronizer;
}
Session factory is:
@Bean
public SessionFactory<LsEntry> sftpSessionFactory() {
DefaultSftpSessionFactory sftpSessionFactory = new DefaultSftpSessionFactory();
sftpSessionFactory.setHost(applicationProperties.getSftpHost());
sftpSessionFactory.setPort(applicationProperties.getSftpPort());
sftpSessionFactory.setUser(applicationProperties.getSftpUser());
sftpSessionFactory.setPassword(applicationProperties.getSftpPassword());
sftpSessionFactory.setAllowUnknownKeys(true);
return new CachingSessionFactory<LsEntry>(sftpSessionFactory);
}
The SftpInboundFileSynchronizingMessageSource is set to poll using the compound trigger.
@Bean
@InboundChannelAdapter(autoStartup="true", channel = "sftpChannel", poller = @Poller("pollerMetadata"))
public SftpInboundFileSynchronizingMessageSource sftpMessageSource() {
SftpInboundFileSynchronizingMessageSource source =
new SftpInboundFileSynchronizingMessageSource(sftpInboundFileSynchronizer());
source.setLocalDirectory(applicationProperties.getScheduledLoadDirectory());
source.setAutoCreateLocalDirectory(true);
CompositeFileListFilter<File> compositeFileFilter = new CompositeFileListFilter<File>();
compositeFileFilter.addFilter(new LastModifiedFileListFilter());
compositeFileFilter.addFilter(new FileSystemPersistentAcceptOnceFileListFilter(store, "dailyfilesystem"));
source.setLocalFilter(compositeFileFilter);
source.setCountsEnabled(true);
return source;
}
@Bean
public PollerMetadata pollerMetadata(RetryCompoundTriggerAdvice retryCompoundTriggerAdvice) {
PollerMetadata pollerMetadata = new PollerMetadata();
List<Advice> adviceChain = new ArrayList<Advice>();
adviceChain.add(retryCompoundTriggerAdvice);
pollerMetadata.setAdviceChain(adviceChain);
pollerMetadata.setTrigger(compoundTrigger());
pollerMetadata.setMaxMessagesPerPoll(1);
return pollerMetadata;
}
@Bean
public CompoundTrigger compoundTrigger() {
CompoundTrigger compoundTrigger = new CompoundTrigger(primaryTrigger());
return compoundTrigger;
}
@Bean
public CronTrigger primaryTrigger() {
return new CronTrigger(applicationProperties.getSchedule());
}
@Bean
public PeriodicTrigger secondaryTrigger() {
return new PeriodicTrigger(applicationProperties.getRetryInterval());
}
Update
Here's the message handler:
@Bean
@ServiceActivator(inputChannel = "sftpChannel")
public MessageHandler dailyHandler(SimpleJobLauncher jobLauncher, Job job, Mail mail) {
JobRunner jobRunner = new JobRunner(jobLauncher, job, store, mail);
jobRunner.setDaily("true");
jobRunner.setOverwrite("false");
return jobRunner;
}
JobRunner kicks off a Spring Batch job. After processing the job, my application checks whether the file had the data it expected for the day. If not, it sets the override trigger.
That's the way triggers work - you only get an opportunity to change the trigger when the trigger fires.
Since you reset to the cron trigger, the next opportunity for change is when that trigger fires (if the poller thread is released by the downstream flow before changing the trigger).
Are you handing off the file to another thread (queue channel or executor)? If not, I would expect any changes to the trigger to be applied, because nextExecutionTime() will not be called until the downstream flow returns.
If there's a thread handoff, you have no opportunity to change the trigger.
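For example, a minimal sketch that keeps the whole flow on the poller thread, so the override is applied before nextExecutionTime() is computed (the DirectChannel bean and the setOnBadFile callback are assumptions, not part of your code):
@Bean
public MessageChannel sftpChannel() {
    // DirectChannel: the handler runs on the poller's own thread,
    // so there is no handoff and trigger changes apply to the next poll.
    return new DirectChannel();
}

@Bean
@ServiceActivator(inputChannel = "sftpChannel")
public MessageHandler dailyHandler(SimpleJobLauncher jobLauncher, Job job, Mail mail,
        RetryCompoundTriggerAdvice advice) {
    JobRunner jobRunner = new JobRunner(jobLauncher, job, store, mail);
    jobRunner.setDaily("true");
    jobRunner.setOverwrite("false");
    // Hypothetical callback: invoked synchronously when the file content is
    // wrong, before control returns to the poller.
    jobRunner.setOnBadFile(advice::setOverrideTrigger);
    return jobRunner;
}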