I am trying to use Spring Integration with Spring Batch. I am transforming my incoming Message into a JobLaunchRequest:
public class FileMessageToJobRequest {

    private static final Logger logger = LoggerFactory.getLogger(FileMessageToJobRequest.class);

    private Job job;
    private String fileParameterName;

    public void setFileParameterName(String fileParameterName) {
        this.fileParameterName = fileParameterName;
    }

    public void setJob(Job job) {
        this.job = job;
    }

    public JobLaunchRequest toRequest(Message<File> message) throws IOException {
        JobParametersBuilder jobParametersBuilder = new JobParametersBuilder();
        String fileName = message.getPayload().getName();
        String currentDateTime = LocalDateTime.now().toString();
        logger.info("currentDateTime " + currentDateTime);
        // the configured parameter name is used as the key for the file name
        jobParametersBuilder.addString(this.fileParameterName, fileName);
        jobParametersBuilder.addString("currentDateTime", currentDateTime);
        return new JobLaunchRequest(job, jobParametersBuilder.toJobParameters());
    }
}
Here is my integration flow:
return IntegrationFlows
        .from(Files.inboundAdapter(new File("landing"))
                        .preventDuplicates()
                        .patternFilter("*.txt"),
                e -> e.poller(Pollers.fixedDelay(2500)
                        .maxMessagesPerPoll(15)
                        .taskExecutor(getFileProcessExecutor())))
        .transform(fileMessageToJobRequest)
        .handle(jobLaunchingGw(jobLauncher))
        .get();
@Bean
public MessageHandler jobLaunchingGw(JobLauncher jobLauncher) {
    return new JobLaunchingGateway(jobLauncher);
}
For some reason, when two files produce the same currentDateTime, one of the two is not getting processed. The files are very likely to arrive at the same time. I am using a ThreadPoolTaskExecutor with core pool size, max pool size and queue capacity all set to 15. Whenever two files log the same value for
String currentDateTime = LocalDateTime.now().toString();
one of the files is not processed. I am assuming this is some kind of race condition. I even tried synchronizing on the JobLaunchRequest. Is this a race condition? Why would a message drop?
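If the two launches end up with identical JobParameters, Spring Batch could reject the second launch as a duplicate JobInstance rather than run it. As a purely illustrative sketch (an assumption, not the original code), adding a parameter that is unique per launch would rule that out:

public JobLaunchRequest toRequest(Message<File> message) {
    JobParametersBuilder builder = new JobParametersBuilder();
    builder.addString(this.fileParameterName, message.getPayload().getName());
    // Hypothetical extra parameter: a strictly increasing value guarantees a
    // distinct JobInstance even when two files arrive in the same millisecond.
    builder.addLong("launch.id", System.nanoTime());
    return new JobLaunchRequest(this.job, builder.toJobParameters());
}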
I am using Spring Integration 4.2.6 and (almost) the latest versions of everything else in Spring.
My reference for this config is this question: Java DSL config.
We have a Spring Java application using RabbitMQ, and here is the scenario:
There is a consumer receiving messages from a queue and sending them to another one. We are using SimpleRabbitListenerContainerFactory as the container factory, but when sending the messages to the other queue inside a parallelStream we get an IllegalStateException: "Cannot determine target ConnectionFactory for lookup key".
When we remove the "parallelStream" it works flawlessly.
public void sendMessage(final StagingMessage stagingMessage, final Long timestamp, final String country) {
    final List<TransformedMessage> messages = processMessageList(stagingMessage);
    messages.parallelStream().forEach(message -> {
        final TransformedMessage transformedMessage = buildMessage(timestamp, ApiConstants.POST_METHOD, country);
        myMessageSender.sendQueue(country, transformedMessage);
    });
}
Connection factory configuration, where the lookup keys are set:
@Configuration
@EnableRabbit
public class RabbitBaseConfig {

    @Autowired
    private QueueProperties queueProperties;

    @Bean
    @Primary
    public ConnectionFactory connectionFactory(final ConnectionFactory connectionFactoryA, final ConnectionFactory connectionFactoryB) {
        final SimpleRoutingConnectionFactory simpleRoutingConnectionFactory = new SimpleRoutingConnectionFactory();
        final Map<Object, ConnectionFactory> map = new HashMap<>();
        for (final String queue : queueProperties.getAQueueMap().values()) {
            map.put("[" + queue + "]", connectionFactoryA);
        }
        for (final String queue : queueProperties.getBQueueMap().values()) {
            map.put("[" + queue + "]", connectionFactoryB);
        }
        simpleRoutingConnectionFactory.setTargetConnectionFactories(map);
        return simpleRoutingConnectionFactory;
    }

    @Bean
    public Jackson2JsonMessageConverter jackson2JsonMessageConverter() {
        return new Jackson2JsonMessageConverter();
    }
}
Welcome to Stack Overflow!
You should always show the pertinent code and configuration beans when asking questions like this.
I assume you are using the RoutingConnectionFactory.
It uses a ThreadLocal to store the lookup key so the send has to happen on the same thread that set the key.
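For illustration, the usual pattern for setting that key (the names routingConnectionFactory, rabbitTemplate and the queue are placeholders, not from the posted code) binds it per thread around the send:

// The lookup key is bound to the current thread; a send issued from a
// parallelStream worker thread never sees a key bound on the listener thread.
SimpleResourceHolder.bind(routingConnectionFactory, "[someQueue]");
try {
    rabbitTemplate.convertAndSend("someQueue", payload);
} finally {
    SimpleResourceHolder.unbind(routingConnectionFactory);
}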
You generally should never go asynchronous in a listener anyway; you risk message loss. To increase concurrency, use the concurrency properties on the container.
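For example, a rough sketch of raising concurrency on the container factory instead of going parallel inside the listener (bean names and numbers are assumptions):

@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // Let the container run several consumers; each listener invocation stays on one thread.
    factory.setConcurrentConsumers(5);
    factory.setMaxConcurrentConsumers(10);
    return factory;
}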
EDIT
One technique would be to convey the lookup key in a message header:
@Bean
public RabbitTemplate template(ConnectionFactory rcf) {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(rcf);
    Expression expression = new SpelExpressionParser().parseExpression("messageProperties.headers['cfSelector']");
    rabbitTemplate.setSendConnectionFactorySelectorExpression(expression);
    return rabbitTemplate;
}
@RabbitListener(queues = "foo")
public void listen1(String in) {
    IntStream.range(0, 10)
            .parallel()
            .mapToObj(i -> in + i)
            .forEach(val -> {
                this.template.convertAndSend("bar", val.toUpperCase(), msg -> {
                    msg.getMessageProperties().setHeader("cfSelector", "[bar]");
                    return msg;
                });
            });
}
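This works from parallel worker threads because the selector expression is evaluated against the outgoing message itself, so the lookup key travels with the message instead of relying on a ThreadLocal bound to the sending thread.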
Has anyone come across this? Basically, I aim to process each file only once, even if the application is bounced.
@Bean
public S3InboundFileSynchronizer s3InboundFileSynchronizer() throws Exception {
    S3InboundFileSynchronizer synchronizer = new S3InboundFileSynchronizer(amazonS3());
    synchronizer.setDeleteRemoteFiles(false);
    synchronizer.setPreserveTimestamp(true);
    synchronizer.setRemoteDirectory(sourceBucket + "/dir/");
    synchronizer.setFilter(new S3PersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), "simpleMetadataStore"));
    return synchronizer;
}
private AmazonS3 amazonS3() throws Exception {
return clientFactory.getClient(AmazonS3.class);
}
@Bean
@InboundChannelAdapter(value = "s3FilesChannel", poller = @Poller(fixedDelay = "5000"))
public S3InboundFileSynchronizingMessageSource s3InboundFileSynchronizingMessageSource() throws Exception {
    S3InboundFileSynchronizingMessageSource messageSource =
            new S3InboundFileSynchronizingMessageSource(s3InboundFileSynchronizer());
    messageSource.setAutoCreateLocalDirectory(true);
    messageSource.setLocalDirectory(new File("c:/temp/"));
    messageSource.setLocalFilter(new FileSystemPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), "fsSimpleMetadataStore"));
    return messageSource;
}
Your problem is that you use an in-memory SimpleMetadataStore, so after an application restart you lose all the information stored there.
Consider using a persistent MetadataStore implementation instead; for AWS there is a DynamoDbMetadataStore: https://github.com/spring-projects/spring-integration-aws#metadata-store-for-amazon-dynamodb
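For instance, a minimal sketch (not from the original configuration) that swaps the in-memory store for the file-backed PropertiesPersistingMetadataStore from spring-integration-core; the DynamoDbMetadataStore from the project above would be wired in the same way:

@Bean
public ConcurrentMetadataStore metadataStore() {
    // File-backed store: keys of already-processed files survive a restart.
    PropertiesPersistingMetadataStore store = new PropertiesPersistingMetadataStore();
    store.setBaseDirectory("c:/temp/metadata"); // assumed location
    return store;
}

@Bean
public S3InboundFileSynchronizer s3InboundFileSynchronizer() throws Exception {
    S3InboundFileSynchronizer synchronizer = new S3InboundFileSynchronizer(amazonS3());
    synchronizer.setDeleteRemoteFiles(false);
    synchronizer.setPreserveTimestamp(true);
    synchronizer.setRemoteDirectory(sourceBucket + "/dir/");
    // Same filter as before, but backed by the persistent store.
    synchronizer.setFilter(new S3PersistentAcceptOnceFileListFilter(metadataStore(), "simpleMetadataStore"));
    return synchronizer;
}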
When trying to implement a unit test in a Spring Boot application, I can't retrieve a ConsumerRecord, even though a custom serializer using my own POJO is working. I checked it with the kafka-console-consumer, where a new message is generated and appears on the console each and every time I run the test.
What do I have to do to get the record instead of a null?
@RunWith(SpringRunner.class)
@SpringBootTest
@DisplayName("Testing GlobalMessageTest")
@DirtiesContext
public class NumberPlateSenderTest {

    private static Logger log = LogManager.getLogger(NumberPlateSenderTest.class);

    @Autowired
    KafkaeskAdapterApplication kafkaeskAdapterApplication;

    @Autowired
    private NumberPlateSender numberPlateSender;

    private KafkaMessageListenerContainer<String, NumberPlate> container;
    private BlockingQueue<ConsumerRecord<String, NumberPlate>> records;

    private static final String SENDER_TOPIC = "numberplate_test_topic";

    @ClassRule
    public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, SENDER_TOPIC);
    @Before
    public void setUp() throws Exception {
        // set up the Kafka consumer properties
        Map<String, Object> consumerProperties = KafkaTestUtils.consumerProps("sender", "false", embeddedKafka);
        consumerProperties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        consumerProperties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, NumberPlateDeserializer.class);
        // create a Kafka consumer factory
        DefaultKafkaConsumerFactory<String, NumberPlate> consumerFactory =
                new DefaultKafkaConsumerFactory<>(consumerProperties);
        // set the topic that needs to be consumed
        ContainerProperties containerProperties = new ContainerProperties(SENDER_TOPIC);
        // create a Kafka MessageListenerContainer
        container = new KafkaMessageListenerContainer<>(consumerFactory, containerProperties);
        // create a thread-safe queue to store the received messages
        records = new LinkedBlockingQueue<>();
        // set up a Kafka message listener
        container.setupMessageListener((MessageListener<String, NumberPlate>) record -> {
            log.info("Message Listener received message='{}'", record.toString());
            records.add(record);
        });
        // start the container and underlying message listener
        container.start();
        // wait until the container has the required number of assigned partitions
        ContainerTestUtils.waitForAssignment(container, embeddedKafka.getPartitionsPerTopic());
    }
#DisplayName("Should send a Message to a Producer and retrieve it")
#Test
public void TestProducer() throws InterruptedException {
//Test instance of Numberplate to send
NumberPlate localNumberplate = new NumberPlate();
byte[] bytes = "0x33".getBytes();
localNumberplate.setImageBlob(bytes);
localNumberplate.setNumberString("ABC123");
log.info(localNumberplate.toString());
//Send it
numberPlateSender.sendNumberPlateMessage(localNumberplate);
//Retrieve it
ConsumerRecord<String, NumberPlate> received = records.poll(3, TimeUnit.SECONDS);
log.info("Received the following content of ConsumerRecord: {}", received);
if (received == null) {
assert false;
} else {
NumberPlate retrNumberplate = received.value();
Assert.assertEquals(retrNumberplate, localNumberplate);
}
}
#After
public void tearDown() {
// stop the container
container.stop();
}
}
The complete code can be seen at my github repository.
I read a load of different SO questions and searched the web, but can't figure out what is wrong with my code. Other users posted similar problems, but to no avail.
The Kafka version which runs on my Craptop is kafka_2.11-1.0.1.
The Spring Kafka client is version 2.1.5.RELEASE.
Your problem is that you start the consumer against the embedded Kafka, but send data to the real one. I don't know what your goal is, but I made it work against the embedded Kafka like this:
@BeforeClass
public static void setup() {
    System.setProperty("kafka.bootstrapAddress", embeddedKafka.getBrokersAsString());
}
I override your kafka.bootstrapAddress configuration property for the producer with the broker address provided by the embedded Kafka.
In this case I fail with:
java.lang.AssertionError: expected: dev.semo.kafkaeskadapter.models.NumberPlate<NumberPlate{numberString='ABC123', imageBlob=[48, 120, 51, 51]}> but was: dev.semo.kafkaeskadapter.models.NumberPlate<NumberPlate{numberString='ABC123', imageBlob=[48, 120, 51, 51]}>
Expected :dev.semo.kafkaeskadapter.models.NumberPlate<NumberPlate{numberString='ABC123', imageBlob=[48, 120, 51, 51]}>
Actual :dev.semo.kafkaeskadapter.models.NumberPlate<NumberPlate{numberString='ABC123', imageBlob=[48, 120, 51, 51]}>
But that's just because you use this assertion:
Assert.assertEquals(retrNumberplate, localNumberplate);
Meanwhile your NumberPlate doesn't provide a proper equals() implementation. Something like this:
@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (o == null || getClass() != o.getClass()) return false;
    NumberPlate that = (NumberPlate) o;
    return Objects.equals(numberString, that.numberString) &&
            Arrays.equals(imageBlob, that.imageBlob);
}

@Override
public int hashCode() {
    int result = Objects.hash(numberString);
    result = 31 * result + Arrays.hashCode(imageBlob);
    return result;
}
Thank you for providing the whole project to play with and reproduce! Otherwise, with the "question-answer-question-answer" game, we would spend too much time here :-).
I am creating a Kafka Spring producer under Spring Boot which will send data to Kafka and then write to a database; I want all that work to be in one transaction. I am new to Kafka and no expert on Spring, and am having some difficulty. Any pointers much appreciated.
So far my code writes to Kafka successfully in a loop. I have not yet set up the DB, but have proceeded to set up global transactions by adding a transactionIdPrefix to the producerFactory in the configuration:
producerFactory.setTransactionIdPrefix("MY_SERVER");
and added @Transactional to the method that does the Kafka send. Eventually I plan to do my DB work in that same method.
Problem: the code runs great the first time. But if I stop the program, even cleanly, I find that the code hangs the second time I run it, as soon as it enters the @Transactional method. If I comment out the @Transactional, it enters the method but hangs on the KafkaTemplate send().
The problem seems to be the transaction ID. If I change the prefix and rerun, the program runs fine again the first time, but hangs when I run it again, until a new prefix is chosen. Since the transaction ID counter starts at zero after a restart, the same transaction ID will be reused upon restart whenever the prefix does not change.
It seems to me that the original transaction ID is still open on the server and was never committed. (I can read the data off the topic using the console-consumer, but that will read uncommitted records.) But if that is the case, how do I get Spring to commit the transaction? I am thinking my configuration must be wrong. Or is the issue possibly that transaction IDs can never be reused? (In which case, how does one solve that?)
Here is my relevant code. Config is:
@SpringBootApplication
public class MYApplication {

    @Autowired
    private static ChangeSweeper changeSweeper;

    @Value("${kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        DefaultKafkaProducerFactory<String, String> producerFactory = new DefaultKafkaProducerFactory<>(configProps);
        producerFactory.setTransactionIdPrefix("MY_SERVER");
        return producerFactory;
    }

    @Bean
    public KafkaTransactionManager<String, String> KafkaTransactionManager() {
        return new KafkaTransactionManager<String, String>(producerFactory());
    }

    @Bean(name = "kafkaProducerTemplate")
    public KafkaTemplate<String, String> kafkaProducerTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
And the method that does the transaction is:
@Transactional
public void send(final List<Record> records) {
    logger.debug("sending {} records; batchSize={}; topic={}", records.size(), batchSize, kafkaTopic);
    // Divide the record set into batches of size batchSize and send each batch with a Kafka transaction:
    for (int batchStartIndex = 0; batchStartIndex < records.size(); batchStartIndex += batchSize) {
        int batchEndIndex = Math.min(records.size() - 1, batchStartIndex + batchSize - 1);
        List<Record> nextBatch = records.subList(batchStartIndex, batchEndIndex);
        logger.debug("## batch is from " + batchStartIndex + " to " + batchEndIndex);
        for (Record record : nextBatch) {
            kafkaProducerTemplate.send(kafkaTopic, record.getKey().toString(), record.getData().toString());
            logger.debug("Sending> " + record);
        }
        // I will put the DB writes here
    }
}
This works fine for me no matter how many times I run it (but I have to run 3 broker instances on my local machine because transactions require that by default)...
@SpringBootApplication
@EnableTransactionManagement
public class So47817034Application {

    public static void main(String[] args) {
        SpringApplication.run(So47817034Application.class, args).close();
    }

    private final CountDownLatch latch = new CountDownLatch(2);

    @Bean
    public ApplicationRunner runner(Foo foo) {
        return args -> {
            foo.send("foo");
            foo.send("bar");
            this.latch.await(10, TimeUnit.SECONDS);
        };
    }

    @Bean
    public KafkaTransactionManager<Object, Object> KafkaTransactionManager(KafkaProperties properties) {
        return new KafkaTransactionManager<Object, Object>(kafkaProducerFactory(properties));
    }

    @Bean
    public ProducerFactory<Object, Object> kafkaProducerFactory(KafkaProperties properties) {
        DefaultKafkaProducerFactory<Object, Object> factory =
                new DefaultKafkaProducerFactory<Object, Object>(properties.buildProducerProperties());
        factory.setTransactionIdPrefix("foo-");
        return factory;
    }

    @KafkaListener(id = "foo", topics = "so47817034")
    public void listen(String in) {
        System.out.println(in);
        this.latch.countDown();
    }

    @Component
    public static class Foo {

        @Autowired
        private KafkaTemplate<Object, Object> template;

        @Transactional
        public void send(String go) {
            this.template.send("so47817034", go);
        }
    }
}
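As far as I know, the three-broker requirement mentioned above comes from the transaction state log: its defaults are transaction.state.log.replication.factor=3 and transaction.state.log.min.isr=2, so a single-broker development setup needs those broker properties lowered to 1.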
Background
I am working on designing a file reading layer that can read delimited files and load them into a List. I have decided to use Spring Batch because it provides a lot of scalability options which I can leverage for different sets of files depending on their size.
The requirement
I want to design a generic Job API that can be used to read any delimited file.
There should be a single Job structure that should be used for parsing every delimited file. For example, if the system needs to read 5 files, there will be 5 jobs (one for each file). The only way the 5 jobs will be different from each other is that they will use a different FieldSetMapper, column name, directory path and additional scaling parameters such as commit-interval and throttle-limit.
The user of this API should not need to configure a Spring Batch job, step, chunking, partitioning, etc. on his own when a new file type is introduced in the system.
All that the user needs to do is to provide the FieldSetMapper to be used by the job, along with the commit-interval, throttle-limit and the directory where each type of file will be placed.
There will be one predefined directory per file. Each directory can contain multiple files of the same type and format. A MultiResourcePartitioner will be used to look inside a directory. The number of partitions = number of files in the directory.
My requirement is to build a Spring Batch infrastructure that gives me a unique job I can launch once I have the bits and pieces that will make up the job.
My solution:
I created an abstract configuration class that will be extended by concrete configuration classes (There will be 1 concrete class per file to be read).
@Configuration
@EnableBatchProcessing
public abstract class AbstractFileLoader<T> {

    private static final String FILE_PATTERN = "*.dat";

    @Autowired
    JobBuilderFactory jobs;

    @Autowired
    ResourcePatternResolver resourcePatternResolver;

    public final Job createJob(Step s1, JobExecutionListener listener) {
        return jobs.get(this.getClass().getSimpleName())
                .incrementer(new RunIdIncrementer()).listener(listener)
                .start(s1).build();
    }

    public abstract Job loaderJob(Step s1, JobExecutionListener listener);

    public abstract FieldSetMapper<T> getFieldSetMapper();

    public abstract String getFilesPath();

    public abstract String[] getColumnNames();

    public abstract int getChunkSize();

    public abstract int getThrottleLimit();

    @Bean
    @StepScope
    @Value("#{stepExecutionContext['fileName']}")
    public FlatFileItemReader<T> reader(String file) {
        FlatFileItemReader<T> reader = new FlatFileItemReader<T>();
        String path = file.substring(file.indexOf(":") + 1, file.length());
        FileSystemResource resource = new FileSystemResource(path);
        reader.setResource(resource);
        DefaultLineMapper<T> lineMapper = new DefaultLineMapper<T>();
        lineMapper.setFieldSetMapper(getFieldSetMapper());
        DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer(",");
        tokenizer.setNames(getColumnNames());
        lineMapper.setLineTokenizer(tokenizer);
        reader.setLineMapper(lineMapper);
        reader.setLinesToSkip(1);
        return reader;
    }

    @Bean
    public ItemProcessor<T, T> processor() {
        // TODO add transformations here
        return null;
    }

    @Bean
    @JobScope
    public ListItemWriter<T> writer() {
        ListItemWriter<T> writer = new ListItemWriter<T>();
        return writer;
    }

    @Bean
    @JobScope
    public Step readStep(StepBuilderFactory stepBuilderFactory,
            ItemReader<T> reader, ItemWriter<T> writer,
            ItemProcessor<T, T> processor, TaskExecutor taskExecutor) {
        final Step readerStep = stepBuilderFactory
                .get(this.getClass().getSimpleName() + " ReadStep:slave")
                .<T, T> chunk(getChunkSize()).reader(reader)
                .processor(processor).writer(writer).taskExecutor(taskExecutor)
                .throttleLimit(getThrottleLimit()).build();

        final Step partitionedStep = stepBuilderFactory
                .get(this.getClass().getSimpleName() + " ReadStep:master")
                .partitioner(readerStep)
                .partitioner(
                        this.getClass().getSimpleName() + " ReadStep:slave",
                        partitioner()).taskExecutor(taskExecutor).build();

        return partitionedStep;
    }

    /*
     * @Bean public TaskExecutor taskExecutor() { return new
     * SimpleAsyncTaskExecutor(); }
     */

    @Bean
    @JobScope
    public Partitioner partitioner() {
        MultiResourcePartitioner partitioner = new MultiResourcePartitioner();
        Resource[] resources;
        try {
            resources = resourcePatternResolver.getResources("file:"
                    + getFilesPath() + FILE_PATTERN);
        } catch (IOException e) {
            throw new RuntimeException(
                    "I/O problems when resolving the input file pattern.", e);
        }
        partitioner.setResources(resources);
        return partitioner;
    }

    @Bean
    @JobScope
    public JobExecutionListener listener(ListItemWriter<T> writer) {
        return new JobCompletionNotificationListener<T>(writer);
    }

    /*
     * Use this if you want the writer to have job scope (JIRA BATCH-2269). Also
     * change the return type of writer to ListItemWriter for this to work.
     */
    @Bean
    public TaskExecutor taskExecutor() {
        return new SimpleAsyncTaskExecutor() {
            @Override
            protected void doExecute(final Runnable task) {
                // gets the jobExecution of the configuration thread
                final JobExecution jobExecution = JobSynchronizationManager
                        .getContext().getJobExecution();
                super.doExecute(new Runnable() {
                    public void run() {
                        JobSynchronizationManager.register(jobExecution);
                        try {
                            task.run();
                        } finally {
                            JobSynchronizationManager.close();
                        }
                    }
                });
            }
        };
    }
}
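The custom TaskExecutor at the end is what makes the @JobScope beans resolvable from the partitioned step's worker threads: it captures the JobExecution active on the launching thread and re-registers it with JobSynchronizationManager on each worker thread before the task runs, closing the context afterwards.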
Let's say I have to read Invoice data for the sake of discussion. I can therefore extend the above class to create an InvoiceLoader:
@Configuration
public class InvoiceLoader extends AbstractFileLoader<Invoice> {

    private class InvoiceFieldSetMapper implements FieldSetMapper<Invoice> {

        public Invoice mapFieldSet(FieldSet f) {
            Invoice invoice = new Invoice();
            invoice.setNo(f.readString("INVOICE_NO"));
            return invoice;
        }
    }

    @Override
    public FieldSetMapper<Invoice> getFieldSetMapper() {
        return new InvoiceFieldSetMapper();
    }

    @Override
    public String getFilesPath() {
        return "I:/CK/invoices/partitions/";
    }

    @Override
    public String[] getColumnNames() {
        return new String[] { "INVOICE_NO", "DATE" };
    }

    @Override
    @Bean(name = "invoiceJob")
    public Job loaderJob(Step s1, JobExecutionListener listener) {
        return createJob(s1, listener);
    }

    @Override
    public int getChunkSize() {
        return 25254;
    }

    @Override
    public int getThrottleLimit() {
        return 8;
    }
}
Let's say I have one more class called InventoryLoader that extends AbstractFileLoader, for example:
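A minimal sketch of what that InventoryLoader could look like, mirroring InvoiceLoader (the Inventory fields, column names, path and sizes are assumptions for illustration):

@Configuration
public class InventoryLoader extends AbstractFileLoader<Inventory> {

    @Override
    public FieldSetMapper<Inventory> getFieldSetMapper() {
        return fieldSet -> {
            Inventory inventory = new Inventory();
            inventory.setSku(fieldSet.readString("SKU"));         // assumed Inventory properties
            inventory.setQuantity(fieldSet.readInt("QUANTITY"));
            return inventory;
        };
    }

    @Override
    public String getFilesPath() {
        return "I:/CK/inventory/partitions/"; // assumed directory
    }

    @Override
    public String[] getColumnNames() {
        return new String[] { "SKU", "QUANTITY" };
    }

    @Override
    @Bean(name = "inventoryJob")
    public Job loaderJob(Step s1, JobExecutionListener listener) {
        return createJob(s1, listener);
    }

    @Override
    public int getChunkSize() {
        return 10000;
    }

    @Override
    public int getThrottleLimit() {
        return 8;
    }
}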
On application startup, I can load these two annotation configurations as follows:
AbstractApplicationContext context1 = new AnnotationConfigApplicationContext(InvoiceLoader.class, InventoryLoader.class);
Somewhere else in my application, two different threads can launch the jobs as follows:
Thread 1 :
JobLauncher jobLauncher1 = context1.getBean(JobLauncher.class);
Job job1 = context1.getBean("invoiceJob", Job.class);
JobExecution jobExecution = jobLauncher1.run(job1, jobParams1);
Thread 2 :
JobLauncher jobLauncher1 = context1.getBean(JobLauncher.class);
Job job1 = context1.getBean("inventoryJob", Job.class);
JobExecution jobExecution = jobLauncher1.run(job1, jobParams1);
The advantage of this approach is that every time there is a new file to be read, all that the developer/user has to do is subclass AbstractFileLoader and implement the required abstract methods, without the need to get into the details of how to assemble the job.
The questions:
1. I am new to Spring Batch, so I may have overlooked some of the not-so-obvious issues with this approach, such as shared internal objects in Spring Batch that may cause two jobs running together to fail, or obvious issues such as scoping of the beans.
2. Is there a better way to achieve my objective?
3. The fileName attribute of the @Value("#{stepExecutionContext['fileName']}") is always being assigned the value I:/CK/invoices/partitions/, which is the value returned by the getFilesPath method in InvoiceLoader, even though the getFilesPath method in InventoryLoader returns a different value.
One option is passing them as job parameters. For instance:
@Bean
Job job() {
    jobs.get("myJob").start(step1(null)).build()
}

@Bean
@JobScope
Step step1(@Value('#{jobParameters["commitInterval"]}') commitInterval) {
    steps.get('step1')
            .chunk((int) commitInterval)
            .reader(new IterableItemReader(iterable: [1, 2, 3, 4], name: 'foo'))
            .writer(writer(null))
            .build()
}

@Bean
@JobScope
ItemWriter writer(@Value('#{jobParameters["writerClass"]}') writerClass) {
    applicationContext.classLoader.loadClass(writerClass).newInstance()
}
With MyWriter:
class MyWriter implements ItemWriter<Integer> {

    @Override
    void write(List<? extends Integer> items) throws Exception {
        println "Write $items"
    }
}
Then executed with:
def jobExecution = launcher.run(ctx.getBean(Job), new JobParameters([
commitInterval: new JobParameter(3),
writerClass: new JobParameter('MyWriter'), ]))
Output is:
INFO: Executing step: [step1]
Write [1, 2, 3]
Write [4]
Feb 24, 2016 2:30:22 PM org.springframework.batch.core.launch.support.SimpleJobLauncher$1 run
INFO: Job: [SimpleJob: [name=myJob]] completed with the following parameters: [{commitInterval=3, writerClass=MyWriter}] and the following status: [COMPLETED]
Status is: COMPLETED, job execution id 0
#1 step1 COMPLETED
Full example here.