In the piece of code below, when stepA fails only stepC and stepD should execute, and when it succeeds only stepB and stepC should execute. What actually happens is that all of the steps get executed! I want to split a Spring Batch job depending on whether a step passes or not. I know there are other ways of doing this (using a JobExecutionDecider, setting some job parameter, etc.), but I wanted to know what I was doing wrong here.
@Configuration
@EnableBatchProcessing
public class JobConfig {

    @Autowired
    private JobBuilderFactory jobBuilderFactory;

    @Autowired
    private StepBuilderFactory stepBuilderFactory;

    @Bean
    public PlatformTransactionManager transactionManager() {
        return new ResourcelessTransactionManager();
    }

    @Bean
    public JobRepository jobRepository() {
        try {
            return new MapJobRepositoryFactoryBean(transactionManager())
                    .getJobRepository();
        } catch (Exception e) {
            return null;
        }
    }

    @Bean
    public JobLauncher jobLauncher() {
        final SimpleJobLauncher launcher = new SimpleJobLauncher();
        launcher.setJobRepository(jobRepository());
        return launcher;
    }

    @Bean
    public Job job() {
        return jobBuilderFactory.get("job")
                .flow(stepA()).on("FAILED").to(stepC()).next(stepD())
                .from(stepA()).on("*").to(stepB()).next(stepC())
                .end().build();
    }

    @Bean
    public Step stepA() {
        return stepBuilderFactory.get("stepA")
                .tasklet(new RandomFailTasket("stepA")).build();
    }

    @Bean
    public Step stepB() {
        return stepBuilderFactory.get("stepB")
                .tasklet(new PrintTextTasklet("stepB")).build();
    }

    @Bean
    public Step stepC() {
        return stepBuilderFactory.get("stepC")
                .tasklet(new PrintTextTasklet("stepC")).build();
    }

    @Bean
    public Step stepD() {
        return stepBuilderFactory.get("stepD")
                .tasklet(new PrintTextTasklet("stepD")).build();
    }

    @SuppressWarnings("resource")
    public static void main(String[] args) {
        // create the Spring application context
        final ApplicationContext appContext = new AnnotationConfigApplicationContext(
                JobConfig.class);
        // get the job config bean (i.e. this bean)
        final JobConfig jobConfig = appContext.getBean(JobConfig.class);
        // get the job launcher
        JobLauncher launcher = jobConfig.jobLauncher();
        try {
            // launch the job
            JobExecution execution = launcher.run(jobConfig.job(), new JobParameters());
            System.out.println(execution.getJobInstance().toString());
        } catch (JobExecutionAlreadyRunningException | JobRestartException
                | JobInstanceAlreadyCompleteException | JobParametersInvalidException e) {
            e.printStackTrace();
        }
    }
}
StepA is a dummy step whose tasklet fails randomly, i.e. it throws an exception about half the time:
public class RandomFailTasket extends PrintTextTasklet {

    public RandomFailTasket(String text) {
        super(text);
    }

    @Override
    public RepeatStatus execute(StepContribution arg0, ChunkContext arg1)
            throws Exception {
        // fail roughly half of the time
        if (Math.random() < 0.5) {
            throw new Exception("fail");
        }
        return RepeatStatus.FINISHED;
    }
}
StepB, StepC, StepD are also dummy tasklets:
public class PrintTextTasklet implements Tasklet {
private final String text;
public PrintTextTasklet(String text){
this.text = text;
}
public RepeatStatus execute(StepContribution arg0, ChunkContext arg1)
throws Exception {
System.out.println(text);
return RepeatStatus.FINISHED;
}
}
Need to have a look at the XML structure that you are using.
Try using a step listener: in the afterStep method you can check the step's status, and then you can implement your logic to decide whether to call the next step or not.
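For example, a minimal sketch of that approach (the listener class name is hypothetical; it assumes the stepA/stepB/stepC/stepD beans above): the listener maps a failed step to a custom exit status that the flow definition can branch on.

public class FailureMappingStepListener implements StepExecutionListener {

    @Override
    public void beforeStep(StepExecution stepExecution) {
        // nothing to prepare before the step
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        // translate any failure into a custom exit status for the flow to match
        if (stepExecution.getStatus() == BatchStatus.FAILED) {
            return new ExitStatus("STEP_FAILED");
        }
        return stepExecution.getExitStatus();
    }
}

The listener would be attached with .listener(new FailureMappingStepListener()) on the step builder, and the job flow could then branch with .on("STEP_FAILED").to(stepC()).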
@Configuration
public class BookStorePoliciesJobConfiguration {

    private static final String CRON_EXPRESSION_FOR_JOB = "0 0/10 * 1/1 * ? *"; // change
    private static final String JOB_NAME = "bookStorePolicyJob";
    private static final String JOB_STEP_NAME = JOB_NAME + "Step";

    @Autowired
    private DataSource nonXAcrmDataSource;

    @Autowired
    private JtaTransactionManager transactionManager;

    @Bean
    public CanerScheduledJobFactoryBean bookStorePoliciesScheduledJob() {
        MafScheduledJobFactoryBean bean = new MafScheduledJobFactoryBean();
        bean.setBatchJobName(JOB_NAME);
        bean.setCronExp(CRON_EXPRESSION_FOR_JOB);
        bean.setNonXAcrmDataSource(this.nonXAcrmDataSource);
        return bean;
    }

    @Bean
    public Job bookStorePolicyJob(@Autowired BookStorePoliciesJobTasklet tasklet,
            @Autowired JobRepository batchJobRepository) {
        SimpleJob job = new SimpleJob(JOB_NAME);
        job.setSteps(Collections.singletonList(bookStorePolicyJobStep(tasklet, batchJobRepository)));
        job.setJobRepository(batchJobRepository);
        return job;
    }

    public Step bookStorePolicyJobStep(BookStoreJobTasklet tasklet, JobRepository batchJobRepository) {
        TaskletStep step = new TaskletStep(JOB_STEP_NAME);
        step.setTasklet(tasklet);
        step.setJobRepository(batchJobRepository);
        transactionManager.setAllowCustomIsolationLevels(true);
        step.setTransactionManager(transactionManager);
        return step;
    }
}
This is the tasklet:
@Component
public class BookStoreJobTasklet extends CanerTasklet {

    @Override
    public RepeatStatus doExecute(StepContribution contribution, ChunkContext chunkContext) {
        //
        //
        sendFileBySftp(fileName, file); // if this throws, execution should stop here and exit, but it continues
        updateParameters();             // should run only if the upload was successful!
        return RepeatStatus.FINISHED;
    }
And the method:
private void sendFileBySftp(String fileName, File file) {
    //
    //
    try {
        //
    } catch (JSchException | SftpException | IOException e) {
        logger.error("error in sendFileFTP {}", e.getMessage()); // execution comes here and then continues; it should exit?
    }
    logger.info("Session connection closed....");
}
What should I do to stop the batch?
I looked here:
Make a spring-batch job exit with non-zero code if an exception is thrown
Should I return a RepeatStatus from each method so I can check whether it failed?
Or maybe a boolean value that each catch block sets to false?
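For illustration, one way to get that behavior (a minimal sketch of the approach from the linked question, not necessarily the only option) is to stop swallowing the exception and let it propagate out of the tasklet, so the step and with it the job end as FAILED:

private void sendFileBySftp(String fileName, File file) {
    try {
        // ... SFTP transfer ...
    } catch (JSchException | SftpException | IOException e) {
        logger.error("error in sendFileFTP {}", e.getMessage());
        // rethrow so doExecute() fails here and updateParameters() is never reached,
        // instead of logging and silently continuing
        throw new IllegalStateException("SFTP transfer failed: " + fileName, e);
    }
    logger.info("Session connection closed....");
}

With the exception propagating, the batch stops with a failed status, so no boolean flag per catch block is needed.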
I have created a sample Spring Batch application which tries to read records from a DB and, in the writer, displays those records. However, I can see that only even-numbered (alternate) records are printed.
It's not a problem with the database, as the behavior is consistent across both the H2 and Oracle databases.
There are 100 records in total in my DB.
With JdbcCursorItemReader, only 50 records are read, and only alternate ones, as can be seen in the logs.
With JdbcPagingItemReader, only 5 records are read, again only alternate ones, as can be seen in the logs.
My code configuration is given below. Why is the reader skipping the odd-numbered records?
@Bean
public ItemWriter<Safety> safetyWriter() {
    return items -> {
        for (Safety item : items) {
            log.info(item.toString());
        }
    };
}

@Bean
public JdbcCursorItemReader<Safety> cursorItemReader() throws Exception {
    JdbcCursorItemReader<Safety> reader = new JdbcCursorItemReader<>();
    reader.setSql("select * from safety ");
    reader.setDataSource(dataSource);
    reader.setRowMapper(new SafetyRowMapper());
    reader.setVerifyCursorPosition(false);
    reader.afterPropertiesSet();
    return reader;
}

@Bean
JdbcPagingItemReader<Safety> safetyPagingItemReader() throws Exception {
    JdbcPagingItemReader<Safety> reader = new JdbcPagingItemReader<>();
    reader.setDataSource(dataSource);
    reader.setFetchSize(10);
    reader.setRowMapper(new SafetyRowMapper());

    H2PagingQueryProvider queryProvider = new H2PagingQueryProvider();
    queryProvider.setSelectClause("*");
    queryProvider.setFromClause("safety");

    Map<String, Order> sortKeys = new HashMap<>(1);
    sortKeys.put("id", Order.ASCENDING);
    queryProvider.setSortKeys(sortKeys);

    reader.setQueryProvider(queryProvider);
    return reader;
}

@Bean
public Step importSafetyDetails() throws Exception {
    return stepBuilderFactory.get("importSafetyDetails")
            .<Safety, Safety>chunk(chunkSize)
            //.reader(cursorItemReader())
            .reader(safetyPagingItemReader())
            .writer(safetyWriter())
            .listener(new StepListener())
            .listener(new ChunkListener())
            .build();
}

@Bean
public Job job() throws Exception {
    return jobBuilderFactory.get("job")
            .start(importSafetyDetails())
            .build();
}
The domain classes look like below:
@NoArgsConstructor
@AllArgsConstructor
@Data
public class Safety {
    private int id;
}
public class SafetyRowMapper implements RowMapper<Safety> {

    @Override
    public Safety mapRow(ResultSet resultSet, int i) throws SQLException {
        // note: the reader has already positioned the cursor on the current row,
        // so this extra resultSet.next() call advances it one more row each time
        if (resultSet.next()) {
            Safety safety = new Safety();
            safety.setId(resultSet.getInt("id"));
            return safety;
        }
        return null;
    }
}
@SpringBootApplication
@EnableBatchProcessing
public class SpringBatchSamplesApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringBatchSamplesApplication.class, args);
    }
}
application.yml configuration is as below:
spring:
  application:
    name: spring-batch-samples
  main:
    allow-bean-definition-overriding: true
  datasource:
    url: jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
    username: sa
    password:
    driver-class-name: org.h2.Driver
    hikari:
      connection-timeout: 20000
      maximum-pool-size: 10
  h2:
    console:
      enabled: true
  batch:
    initialize-schema: never

server:
  port: 9090
The SQL scripts are as below:
CREATE TABLE safety (
id int NOT NULL,
CONSTRAINT PK_ID PRIMARY KEY (id)
);
INSERT INTO safety (id) VALUES (1);
...100 records are inserted
The listener classes are as below:
@Slf4j
public class StepListener {

    @AfterStep
    public ExitStatus afterStep(StepExecution stepExecution) {
        log.info("In step {} ,Exit Status: {} ,Read Records: {} ,Committed Records: {} ,Skipped Read Records: {} ,Skipped Write Records: {}",
                stepExecution.getStepName(),
                stepExecution.getExitStatus().getExitCode(),
                stepExecution.getReadCount(),
                stepExecution.getCommitCount(),
                stepExecution.getReadSkipCount(),
                stepExecution.getWriteSkipCount());
        return stepExecution.getExitStatus();
    }
}

@Slf4j
public class ChunkListener {

    @BeforeChunk
    public void beforeChunk(ChunkContext context) {
        log.info("<< Before the chunk");
    }

    @AfterChunk
    public void afterChunk(ChunkContext context) {
        log.info("<< After the chunk");
    }
}
I tried to reproduce your problem, but I couldn't. Maybe it would be great if you could share more code.
Meanwhile, I created a simple job to read 100 records from the "safety" table and print them to the console, and it is working fine.
@SpringBootApplication
@EnableBatchProcessing
public class ReaderWriterProblem implements CommandLineRunner {

    @Autowired
    DataSource dataSource;
    @Autowired
    StepBuilderFactory stepBuilderFactory;
    @Autowired
    JobBuilderFactory jobBuilderFactory;
    @Autowired
    private JobLauncher jobLauncher;
    @Autowired
    private ApplicationContext context;

    public static void main(String[] args) {
        String[] arguments = new String[]{LocalDateTime.now().toString()};
        SpringApplication.run(ReaderWriterProblem.class, arguments);
    }

    @Bean
    public ItemWriter<Safety> safetyWriter() {
        return new ItemWriter<Safety>() {
            @Override
            public void write(List<? extends Safety> items) throws Exception {
                for (Safety item : items) {
                    //log.info(item.toString());
                    System.out.println(item);
                }
            }
        };
    }
//    @Bean
//    public JdbcCursorItemReader<Safety> cursorItemReader() throws Exception {
//        JdbcCursorItemReader<Safety> reader = new JdbcCursorItemReader<>();
//
//        reader.setSql("select * from safety ");
//        reader.setDataSource(dataSource);
//        reader.setRowMapper(new SafetyRowMapper());
//        reader.setVerifyCursorPosition(false);
//        reader.afterPropertiesSet();
//
//        return reader;
//    }
    @Bean
    JdbcPagingItemReader<Safety> safetyPagingItemReader() throws Exception {
        JdbcPagingItemReader<Safety> reader = new JdbcPagingItemReader<>();
        reader.setDataSource(dataSource);
        reader.setFetchSize(10);
        reader.setRowMapper(new SafetyRowMapper());

        PostgresPagingQueryProvider queryProvider = new PostgresPagingQueryProvider();
        queryProvider.setSelectClause("*");
        queryProvider.setFromClause("safety");

        Map<String, Order> sortKeys = new HashMap<>(1);
        sortKeys.put("id", Order.ASCENDING);
        queryProvider.setSortKeys(sortKeys);

        reader.setQueryProvider(queryProvider);
        return reader;
    }

    @Bean
    public Step importSafetyDetails() throws Exception {
        return stepBuilderFactory.get("importSafetyDetails")
                .<Safety, Safety>chunk(5)
                //.reader(cursorItemReader())
                .reader(safetyPagingItemReader())
                .writer(safetyWriter())
                .listener(new MyStepListener())
                .listener(new MyChunkListener())
                .build();
    }

    @Bean
    public Job job() throws Exception {
        return jobBuilderFactory.get("job")
                .listener(new JobListener())
                .start(importSafetyDetails())
                .build();
    }

    @Override
    public void run(String... args) throws Exception {
        JobParametersBuilder jobParametersBuilder = new JobParametersBuilder();
        jobParametersBuilder.addString("date", LocalDateTime.now().toString());
        try {
            Job job = (Job) context.getBean("job");
            jobLauncher.run(job, jobParametersBuilder.toJobParameters());
        } catch (JobExecutionAlreadyRunningException | JobRestartException | JobInstanceAlreadyCompleteException | JobParametersInvalidException e) {
            e.printStackTrace();
        }
    }

    public static class JobListener implements JobExecutionListener {

        @Override
        public void beforeJob(JobExecution jobExecution) {
            System.out.println("Before job");
        }

        @Override
        public void afterJob(JobExecution jobExecution) {
            System.out.println("After job");
        }
    }
    private static class SafetyRowMapper implements RowMapper<Safety> {

        @Override
        public Safety mapRow(ResultSet resultSet, int i) throws SQLException {
            // maps the current row only; the reader takes care of advancing the cursor
            Safety safety = new Safety();
            safety.setId(resultSet.getLong("ID"));
            return safety;
        }
    }
    public static class MyStepListener implements StepExecutionListener {

        @Override
        public void beforeStep(StepExecution stepExecution) {
            System.out.println("Before Step");
        }

        @Override
        public ExitStatus afterStep(StepExecution stepExecution) {
            System.out.println("After Step");
            return ExitStatus.COMPLETED;
        }
    }

    private static class MyChunkListener implements ChunkListener {

        @Override
        public void beforeChunk(ChunkContext context) {
            System.out.println("Before Chunk");
        }

        @Override
        public void afterChunk(ChunkContext context) {
            System.out.println("After Chunk");
        }

        @Override
        public void afterChunkError(ChunkContext context) {
        }
    }
}
Hope this helps
So I'm toying around with Spring Batch for the first time and trying to understand how to do things other than processing a CSV file.
Attempting to read every music file in a directory, for example, I have the following code, but I'm not sure how to handle the delegate part.
@Configuration
@EnableBatchProcessing
public class BatchConfiguration {

    @Autowired
    public JobBuilderFactory jobBuilderFactory;

    @Autowired
    public StepBuilderFactory stepBuilderFactory;

    @Bean
    public MusicItemProcessor processor() {
        return new MusicItemProcessor();
    }

    @Bean
    public Job readFiles() {
        return jobBuilderFactory.get("readFiles").incrementer(new RunIdIncrementer())
                .flow(step1()).end().build();
    }

    @Bean
    public Step step1() {
        return stepBuilderFactory.get("step1").<String, String>chunk(10)
                .reader(reader())
                .processor(processor()).build();
    }

    @Bean
    public ItemReader<String> reader() {
        Resource[] resources = null;
        ResourcePatternResolver patternResolver = new PathMatchingResourcePatternResolver();
        try {
            resources = patternResolver.getResources("file:/music/*.flac");
        } catch (IOException e) {
            e.printStackTrace();
        }
        MultiResourceItemReader<String> reader = new MultiResourceItemReader<>();
        reader.setResources(resources);
        reader.setDelegate(new FlatFileItemReader<>()); // ??
        return reader;
    }
}
At the moment I can see that resources holds a list of music files, but looking at the stack trace I get back, it looks to me like new FlatFileItemReader<>() is trying to read the actual content of the files (I'll want to do that at some point, just not right now).
At the moment I just want the information about the file (absolute path, size, filename, etc.), not what's inside.
Have I gone completely wrong with this? Or do I just need to configure something a little differently?
Any examples of code that does more than process CSV lines would also be awesome.
After scouring the internet I've managed to pull together something that I think works... Some feedback would be welcome.
@Configuration
@EnableBatchProcessing
public class BatchConfiguration {

    @Autowired
    public JobBuilderFactory jobBuilderFactory;

    @Autowired
    public StepBuilderFactory stepBuilderFactory;

    @Bean
    public VideoItemProcessor processor() {
        return new VideoItemProcessor();
    }

    @Bean
    public Job readFiles() {
        return jobBuilderFactory.get("readFiles")
                .start(step())
                .build();
    }

    @Bean
    public Step step() {
        try {
            return stepBuilderFactory.get("step").<File, Video>chunk(500)
                    .reader(directoryItemReader())
                    .processor(processor())
                    .build();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return null;
    }

    @Bean
    public DirectoryItemReader directoryItemReader() throws IOException {
        return new DirectoryItemReader("file:/media/media/Music/**/*.flac");
    }
}
The part that had me stuck was creating a custom reader for files. If anyone else comes across this, this is how I've done it. I'm sure there are better ways, but this works for me.
public class DirectoryItemReader implements ItemReader<File>, InitializingBean {

    private final String directoryPath;
    private final List<File> foundFiles = Collections.synchronizedList(new ArrayList<>());

    public DirectoryItemReader(final String directoryPath) {
        this.directoryPath = directoryPath;
    }

    @Override
    public File read() {
        // hand out the next file, or null to signal the end of the input;
        // check-and-remove happens inside the lock so concurrent readers are safe
        synchronized (foundFiles) {
            if (!foundFiles.isEmpty()) {
                return foundFiles.remove(0);
            }
        }
        return null;
    }

    @Override
    public void afterPropertiesSet() throws Exception {
        for (final Resource file : getFiles()) {
            this.foundFiles.add(file.getFile());
        }
    }

    private Resource[] getFiles() throws IOException {
        ResourcePatternResolver patternResolver = new PathMatchingResourcePatternResolver();
        return patternResolver.getResources(directoryPath);
    }
}
The only thing you'd need to do is implement your own processor. I've used videos in this example, so I have a video processor:
@Slf4j
public class VideoItemProcessor implements ItemProcessor<File, Video> {

    @Override
    public Video process(final File item) throws Exception {
        Video video = Video.builder()
                .filename(item.getAbsoluteFile().getName())
                .absolutePath(item.getAbsolutePath())
                .fileSize(item.length()) // File#getTotalSpace() would be the partition size, not the file's
                .build();
        log.info("Created {}", video);
        return video;
    }
}
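For reference, the Video class is not shown above; presumably it is a simple Lombok-built value object along these lines (an assumption, not the author's actual class):

@Data
@Builder
public class Video {
    private String filename;     // file name without the path
    private String absolutePath; // full path to the file
    private long fileSize;       // size in bytes
}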
My goal is to pass some value from a tasklet to another step, which consists of an item reader, processor, and writer. Based on the reference documentation, I should use ExecutionContextPromotionListener. However, for some reason @BeforeStep is not being called. This is what I have.
Tasklet
@Component
public class RequestTasklet implements Tasklet {

    @Autowired
    private HistoryRepository historyRepository;

    @Override
    public RepeatStatus execute(StepContribution sc, ChunkContext cc) throws Exception {
        List<History> requests
                = historyRepository.findHistory();
        // note: despite the variable name, this is the *job* execution context
        ExecutionContext stepContext = cc.getStepContext().getStepExecution().getJobExecution().getExecutionContext();
        stepContext.put("someKey", requests);
        return RepeatStatus.FINISHED;
    }
}
ItemReader
@Component
public class RequestReader implements ItemReader<History> {

    private List<History> requests;

    @Override
    public History read() throws UnexpectedInputException,
            ParseException,
            NonTransientResourceException {
        System.out.println("requests====>" + requests);
        if (CollectionUtils.isNotEmpty(requests)) {
            History request = requests.remove(0);
            return request;
        }
        return null;
    }

    @BeforeStep
    public void beforeStep(StepExecution stepExecution) {
        System.out.println("====================here======================");
        JobExecution jobExecution = stepExecution.getJobExecution();
        ExecutionContext jobContext = jobExecution.getExecutionContext();
        this.requests = (List<History>) jobContext.get("someKey");
    }
}
Configuration
#Bean(name = "update-job")
public Job updateUserAttributes(
JobBuilderFactory jbf,
StepBuilderFactory sbf,
ExecutionContextPromotionListener promoListener,
HistoryProcessor processor,
RequestReader reader,
HistoryWriter writer,
RequestTasklet loadRequestTasklet) {
Step preStep = sbf.get("load-request-tasklet")
.tasklet(loadRequestTasklet)
.listener(promoListener)
.build();
Step step = sbf.get("update-step")
.<History, History>chunk(2)
.reader(reader)
.processor(processor)
.writer(writer)
.taskExecutor(taskExecutor())
.build();
return jbf.get("update-job")
.incrementer(new RunIdIncrementer())
.start(preStep).next(step).
build();
}
#Bean
public ExecutionContextPromotionListener promoListener() {
ExecutionContextPromotionListener listener = new ExecutionContextPromotionListener();
listener.setKeys(new String[]{
"someKey",
});
return listener;
}
I also tried extending StepExecutionListenerSupport in the ItemReader but got the same result.
I googled around and looked at this question (beforestep issue), which was answered as being caused by a Spring proxy; that is not my case.
A little more information as I am testing: I set listener.setStrict(Boolean.TRUE); to see whether the keys are set, but I am getting
java.lang.IllegalArgumentException: The key [someKey] was not found in the Step's ExecutionContext.
at org.springframework.batch.core.listener.ExecutionContextPromotionListener.afterStep(ExecutionContextPromotionListener.java:61) ~[spring-batch-core-4.0.0.M1.jar:4.0.0.M1]
I appreciate any help.
I would like to get, in the Writer, data that I've set in the Reader of my step. I know about the ExecutionContexts (step and job) and about ExecutionContextPromotionListener via http://docs.spring.io/spring-batch/trunk/reference/html/patterns.html#passingDataToFutureSteps
The problem is that in the Writer I'm retrieving a null value for 'npag'.
The line in the ItemWriter:
LOG.info("INSIDE WRITE, NPAG: " + nPag);
I've been trying some workarounds without luck, and looking at answers to other similar questions... Any help? Thanks!
Here's my code:
READER
@Component
public class LCItemReader implements ItemReader<String> {

    private StepExecution stepExecution;
    private int nPag = 1;

    @Override
    public String read() throws CustomItemReaderException {
        ExecutionContext stepContext = this.stepExecution.getExecutionContext();
        stepContext.put("npag", nPag);
        nPag++;
        return "content";
    }

    @BeforeStep
    public void saveStepExecution(StepExecution stepExecution) {
        this.stepExecution = stepExecution;
    }
}
WRITER
@Component
@StepScope
public class LCItemWriter implements ItemWriter<String> {

    private String nPag;

    @Override
    public void write(List<? extends String> continguts) throws Exception {
        try {
            LOG.info("INSIDE WRITE, NPAG: " + nPag);
        } catch (Throwable ex) {
            LOG.error("Error: " + ex.getMessage());
        }
    }

    @BeforeStep
    public void retrieveInterstepData(StepExecution stepExecution) {
        JobExecution jobExecution = stepExecution.getJobExecution();
        ExecutionContext jobContext = jobExecution.getExecutionContext();
        this.nPag = jobContext.get("npag").toString();
    }
}
JOB/STEP BATCH CONFIG
@Bean
public Job lCJob() {
    return jobs.get("lCJob")
            .listener(jobListener)
            .start(lCStep())
            .build();
}

@Bean
public Step lCStep() {
    return steps.get("lCStep")
            .<String, String>chunk(1)
            .reader(lCItemReader)
            .processor(lCProcessor)
            .writer(lCItemWriter)
            .listener(promotionListener())
            .build();
}
LISTENER
@Bean
public ExecutionContextPromotionListener promotionListener() {
    ExecutionContextPromotionListener executionContextPromotionListener = new ExecutionContextPromotionListener();
    executionContextPromotionListener.setKeys(new String[]{"npag"});
    return executionContextPromotionListener;
}
The ExecutionContextPromotionListener specifically states that it works at the end of a step, so the promotion would happen only after the writer executes. The promotion you are counting on therefore does not occur when you think it does.
If I were you, I would set the value in the step context and get it from the step context if you only need it within a single step; otherwise I would set it in the job context.
The other aspect is @BeforeStep. That marks a method to be executed before the step context exists, so the way you are setting the nPag value in the reader only happens after the step has started executing. A sketch of the single-step variant follows below.
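For illustration, a minimal sketch of that single-step variant (assuming the same reader as above, which puts "npag" into the step's ExecutionContext; LOG is the writer's logger): keep a reference to the step context in @BeforeStep and read the value at write time instead of caching it up front.

@Component
@StepScope
public class LCItemWriter implements ItemWriter<String> {

    private ExecutionContext stepContext;

    @Override
    public void write(List<? extends String> continguts) throws Exception {
        // read at write time, after the reader has already stored the value
        LOG.info("INSIDE WRITE, NPAG: " + stepContext.get("npag"));
    }

    @BeforeStep
    public void saveStepExecution(StepExecution stepExecution) {
        this.stepContext = stepExecution.getExecutionContext();
    }
}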
You are trying to read the value of nPag even before it is set in the reader, ending up with the default value, which is null. You need to read the value of nPag from the execution context directly at the time of logging. You can keep a reference to the jobContext. Try this:
@Component
@StepScope
public class LCItemWriter implements ItemWriter<String> {

    private String nPag;
    private ExecutionContext jobContext;

    @Override
    public void write(List<? extends String> continguts) throws Exception {
        try {
            this.nPag = jobContext.get("npag").toString();
            LOG.info("INSIDE WRITE, NPAG: " + nPag);
        } catch (Throwable ex) {
            LOG.error("Error: " + ex.getMessage());
        }
    }

    @BeforeStep
    public void retrieveInterstepData(StepExecution stepExecution) {
        JobExecution jobExecution = stepExecution.getJobExecution();
        jobContext = jobExecution.getExecutionContext();
    }
}
In your Reader and Writer you need to implement the ItemStream interface and use an ExecutionContext as a member variable. Here I have given an example with a Processor instead of a Writer, but the same is applicable to a Writer as well. It's working fine for me, and I am able to pass values from the reader to the processor:
I set the value into the context in the reader and get the value back in the processor.
public class EmployeeItemReader implements ItemReader<Employee>, ItemStream {

    ExecutionContext context;

    @Override
    public Employee read() throws Exception, UnexpectedInputException, ParseException, NonTransientResourceException {
        // store a value in the execution context for later components to read
        context.put("ajay", "i am going well");
        Employee emp = new Employee();
        emp.setEmpId(1);
        emp.setFirstName("ajay");
        emp.setLastName("goswami");
        return emp;
    }

    @Override
    public void close() throws ItemStreamException {
        // no resources to release
    }

    @Override
    public void open(ExecutionContext arg0) throws ItemStreamException {
        context = arg0;
    }

    @Override
    public void update(ExecutionContext arg0) throws ItemStreamException {
        context = arg0;
    }
}
My processor
public class CustomItemProcessor implements ItemProcessor<Employee, ActiveEmployee>, ItemStream {

    ExecutionContext context;

    @Override
    public ActiveEmployee process(Employee emp) throws Exception {
        // see this line: reads the value that the reader put into the context
        System.out.println(context.get("ajay"));
        ActiveEmployee actEmp = new ActiveEmployee();
        actEmp.setEmpId(emp.getEmpId());
        actEmp.setFirstName(emp.getFirstName());
        actEmp.setLastName(emp.getLastName());
        actEmp.setAdditionalInfo("Employee is processed");
        return actEmp;
    }

    @Override
    public void close() throws ItemStreamException {
        // no resources to release
    }

    @Override
    public void open(ExecutionContext arg0) throws ItemStreamException {
        // the context reference is picked up in update()
    }

    @Override
    public void update(ExecutionContext arg0) throws ItemStreamException {
        context = arg0;
    }
}
Hope this helps.
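One caveat worth checking in your version (this wiring is an assumption, with hypothetical bean and step names, not part of the answer above): open() and update() are only invoked on components that are registered as streams on the step. If the processor's ExecutionContext stays null, the components can be registered explicitly via .stream(...) on the step builder:

@Bean
public Step employeeStep(StepBuilderFactory steps,
                         EmployeeItemReader reader,
                         CustomItemProcessor processor,
                         ItemWriter<ActiveEmployee> writer) {
    return steps.get("employeeStep")
            .<Employee, ActiveEmployee>chunk(10)
            .reader(reader)
            .processor(processor)
            .writer(writer)
            // make sure open()/update() run so the ExecutionContext gets injected
            .stream(reader)
            .stream(processor)
            .build();
}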