In my batch job, I have a single step that reads from the database, processes each record, and writes the same record back to the same table (i.e. updates the record with the processed values, or with the error reason if processing failed).
I am using AsyncItemProcessor for multi-threaded processing. When an error occurs in the ItemProcessor.process() method, I throw an exception and the batch job ends with FAILED status. This failed status is a requirement.
Because it is an AsyncItemProcessor, I am unable to use ItemProcessListener.onProcessError().
How do I write the errorMessage to the item table when there is an error?
This is a known limitation of the AsyncItemProcessor, which is mentioned in its Javadoc:
While not an exhaustive list, things like StepExecution.filterCount will not
reflect the number of filtered items and
ItemProcessListener.onProcessError(Object, Exception) will not be called.
There is an open issue to update the reference documentation as well.
How do I write the errorMessage to the item table when there is an error?
The AsyncItemProcessor submits a FutureTask to the task executor, and the only way to know if an exception happened in the task is by unwrapping the future (the exception will actually be wrapped in a java.util.concurrent.ExecutionException when FutureTask.get() is called). Since the future is unwrapped in the AsyncItemWriter, you can use an ItemWriteListener and react to processing errors. You can find a complete example here.
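To make that concrete, here is a minimal sketch of such a listener, assuming Spring Batch 4.x listener signatures; the MyItem type, the JdbcTemplate and the SQL are illustrative placeholders, not part of the original answer:

import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

import org.springframework.batch.core.ItemWriteListener;
import org.springframework.jdbc.core.JdbcTemplate;

// Reacts to processing errors that surface when the AsyncItemWriter unwraps the futures.
// MyItem and the SQL below are illustrative, not taken from the original question.
public class ProcessingErrorWriteListener implements ItemWriteListener<Future<MyItem>> {

    private final JdbcTemplate jdbcTemplate;

    public ProcessingErrorWriteListener(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Override
    public void beforeWrite(List<? extends Future<MyItem>> items) {
    }

    @Override
    public void afterWrite(List<? extends Future<MyItem>> items) {
    }

    @Override
    public void onWriteError(Exception exception, List<? extends Future<MyItem>> items) {
        // Depending on the Spring Batch version, the processor's exception may still be
        // wrapped in an ExecutionException, so unwrap it before reading the message.
        Throwable cause = exception instanceof ExecutionException ? exception.getCause() : exception;
        // Illustrative update: persist the error reason on the item table.
        jdbcTemplate.update("UPDATE item SET error_message = ? WHERE status = 'PROCESSING'",
                cause.getMessage());
    }
}

The listener would then be registered on the step (for example with .listener(...) in the step builder) next to the AsyncItemWriter.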
Related
I need to record the failure reason in metrics for each failed HTTP call when using the Vert.x WebClient. This compiles:
.onFailure()
.retry()
.withBackOff(Duration.ofMillis(INITIAL_RETRY_DELAY_MS))
.until(retryTimeExpired(wrapper))
I'm recording metrics in the retryTimeExpired method, but at runtime I get this:
Caused by: java.lang.IllegalArgumentException: Invalid retry configuration, `when` cannot be used with a back-off configuration
at io.smallrye.mutiny.groups.UniRetry.when(UniRetry.java:156)
at io.smallrye.mutiny.groups.UniRetry.until(UniRetry.java:137)
I could of course add a sleep, but this is reactive code. It would be possible to block for a short time, but I would hate to block the thread. Any ideas how to do this without sleeping?
You could try using multiple sequential onFailure groups. As long as the first doesn't handle the exception (recoverWithItem, recoverWithNull, recoverWithUni) or throw its own, the next one should observe the same failure.
service.apiMethod(key)
        .invoke(record -> logger.info("Succeeded to read record."))
        .onFailure()
        .invoke(exception -> logger.warn("Failed to read record."))
        .onFailure()
        .retry().withBackOff(delay).indefinitely();
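If the goal is to record the failure reason as a metric for every attempt, a hedged adaptation of the same idea (the meterRegistry counter name and MAX_RETRIES are illustrative, not from the question) would be:

service.apiMethod(key)
        .onFailure()
        // record the failure reason once per failed attempt (illustrative Micrometer call)
        .invoke(exception -> meterRegistry
                .counter("webclient.call.failures", "reason", exception.getClass().getSimpleName())
                .increment())
        .onFailure()
        .retry()
        .withBackOff(Duration.ofMillis(INITIAL_RETRY_DELAY_MS))
        .atMost(MAX_RETRIES);

Each retry resubscribes the upstream pipeline, so the first onFailure().invoke(...) runs for every failed attempt, while the back-off lives in the second onFailure() group and no until/when predicate is needed.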
I'm using Spring Batch for my app. In one of the batch jobs, I need to process multiple data items. Each item requires several database updates, and I need one transaction per item: if an exception is thrown while processing one item, the database updates for that item are rolled back, and processing continues with the next item.
I've put all the database updates in one method in the service layer. In my Spring Batch tasklet, I call that method for each item, like this:
for (RequestViewForBatch request : requestList) {
    orderService.processEachRequest(request);
}
In the service class, the method looks like this:
@Transactional(propagation = Propagation.NESTED, timeout = 100, rollbackFor = Exception.class)
public void processEachRequest(RequestViewForBatch request) {
    // update the database
}
When the tasklet executes, it gives me this error message:
org.springframework.transaction.NestedTransactionNotSupportedException: Transaction manager does not allow nested transactions by default - specify 'nestedTransactionAllowed' property with value 'true'
but I don't know how to solve this error.
Any suggestion would be appreciated. Thanks in advance.
The tasklet step will be executed in a transaction driven by Spring Batch. You need to remove the @Transactional on your processEachRequest method.
You would need a fault-tolerant chunk-oriented step configured with a skip policy. In this case, only faulty items will be skipped. Please refer to the Configuring Skip Logic section of the documentation. You can find an example here.
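A minimal sketch of such a step, assuming a pre-5.0 StepBuilderFactory style configuration; the step name and the reader/writer beans are placeholders, and orderService.processEachRequest comes from the question above:

@Bean
public Step processRequestsStep(StepBuilderFactory stepBuilderFactory,
                                ItemReader<RequestViewForBatch> reader,
                                ItemWriter<RequestViewForBatch> writer) {
    return stepBuilderFactory.get("processRequestsStep")
            // chunk size 1 => one transaction per item, rolled back independently on failure
            .<RequestViewForBatch, RequestViewForBatch>chunk(1)
            .reader(reader)
            .processor(request -> { orderService.processEachRequest(request); return request; })
            .writer(writer)
            .faultTolerant()
            // skip faulty items and keep processing the rest instead of failing the step
            .skip(Exception.class)
            .skipLimit(Integer.MAX_VALUE)
            .build();
}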
I want to skip all exceptions (using AlwaysSkipPolicy) and then handle all the skipped exceptions in a StepListener.
I want to create a summary message at the end of the step with the read/written item counts and, when any exceptions occurred, how many items were skipped and with which exceptions.
When I use the always-skip policy together with a step listener, I get 0 exceptions in "failureExceptions". When I turn off the skip policy, I do get the exception there, but then the job stops as soon as an exception occurs.
SkipListener is what you are looking for. It allows you to intercept skipped items during all phases of a chunk-oriented step (i.e. read, process and write). This listener gives you access to the skipped item and the exception that caused it to be skipped, so you should be able to implement the reporting you need with this listener.
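A minimal sketch of such a listener, collecting the exceptions so a summary can be assembled at the end of the step; the class name is illustrative:

import java.util.ArrayList;
import java.util.List;

import org.springframework.batch.core.SkipListener;

// Collects every skipped item's exception so a summary can be built after the step.
public class SkipSummaryListener implements SkipListener<Object, Object> {

    private final List<Throwable> skipExceptions = new ArrayList<>();

    @Override
    public void onSkipInRead(Throwable t) {
        skipExceptions.add(t);
    }

    @Override
    public void onSkipInProcess(Object item, Throwable t) {
        skipExceptions.add(t);
    }

    @Override
    public void onSkipInWrite(Object item, Throwable t) {
        skipExceptions.add(t);
    }

    public List<Throwable> getSkipExceptions() {
        return skipExceptions;
    }
}

Register it on the fault-tolerant step with .listener(...) and pair it with a StepExecutionListener whose afterStep reads stepExecution.getReadCount(), getWriteCount() and getSkipCount() to build the summary message.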
In my application, I am using the Session_OnEnd event in the global.asa file to log details of logged-out users to one of my tables. For that, I create an instance of one of my VB components from the Session_OnEnd event and insert into the table from there.
In certain scenarios (multiple users' sessions ending at the same time), we get an unhandled exception while creating the object above, since my first request is still being processed.
Can anybody suggest a good way to overcome this issue?
Dim objClean
Set objClean= Server.CreateObject("Clean.clsClean")
Call objClean.Cleanup(Session, Application)
Set objClean= Nothing
The line Set objClean = Server.CreateObject("Clean.clsClean") is the one throwing the exception in my case.
I am trying to see the execution status of a job I ran, but at some random points I get the following error:
2015-10-14T14:41:24-0400 1.2.0.RELEASE ERROR qtp195949131-28 rest.RestControllerAdvice - Caught exception while handling a request
org.springframework.http.converter.HttpMessageNotWritableException: Could not write content: java.lang.Integer cannot be cast to java.lang.String (through reference chain: org.springframework.xd.rest.domain.JobExecutionInfoResource["jobExecution"]->org.springframework.batch.core.JobExecution["executionContext"]->org.springframework.batch.item.ExecutionContext["values"]->java.util.concurrent.EntrySetView[0]->java.util.concurrent.MapEntry["value"]->java.util.ArrayList[0]); nested exception is com.fasterxml.jackson.databind.JsonMappingException: java.lang.Integer cannot be cast to java.lang.String (through reference chain: org.springframework.xd.rest.domain.JobExecutionInfoResource["jobExecution"]->org.springframework.batch.core.JobExecution["executionContext"]->org.springframework.batch.item.ExecutionContext["values"]->java.util.concurrent.EntrySetView[0]->java.util.concurrent.MapEntry["value"]->java.util.ArrayList[0])
Now, I say "random", but the truth is I don't even know which step causes this exception since those are the only logs I have. The jobs run successfully with seemingly no errors, but this really worries me. I've been looking online for days for this, but I don't see anything that can either help me debug this, or even gives an inkling of what might cause this. Any help?
Thanks, N.S.
Okay, so I figured out that the problem was that we were serializing a list of maps in the execution context, and one of those maps contained an Integer value where a String was expected. This seems to cause the deserialization of the context to crash.
Solution? Don't store that list within the execution context (instead we wrote the whole list object to a file for transferring between various steps).
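A hedged sketch of that workaround, using Jackson to persist the list and passing only the file path through the job execution context; the ReportLine type and the "report.file" key are illustrative, not from the original post:

import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.batch.core.StepExecution;

public class ReportFileExchange {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Producing step: write the list to a file and store only its path (a String) in the context.
    // ReportLine is an illustrative item type.
    static void storeReport(StepExecution stepExecution, List<ReportLine> reportLines) throws Exception {
        Path tempFile = Files.createTempFile("job-report", ".json");
        MAPPER.writeValue(tempFile.toFile(), reportLines);
        stepExecution.getJobExecution().getExecutionContext()
                .putString("report.file", tempFile.toString());
    }

    // Consuming step: read the path back and deserialize the list from the file.
    static List<ReportLine> loadReport(StepExecution stepExecution) throws Exception {
        String path = stepExecution.getJobExecution().getExecutionContext().getString("report.file");
        return MAPPER.readValue(new File(path), new TypeReference<List<ReportLine>>() {});
    }
}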