Difference between StatefulRetryOperationsInterceptor and RetryOperationsInterceptor - spring-retry

I went through the javadoc of both StatefulRetryOperationsInterceptor and RetryOperationsInterceptor but still didn't understand the difference between them. Can anyone explain the difference and when to use each? Below is the explanation given in the javadoc:
RetryOperationsInterceptor
A MethodInterceptor that can be used to automatically retry calls to a method on a service if it fails. The injected RetryOperations is used to control the number of retries. By default it will retry a fixed number of times, according to the defaults in RetryTemplate. Hint about transaction boundaries. If you want to retry a failed transaction you need to make sure that the transaction boundary is inside the retry, otherwise the successful attempt will roll back with the whole transaction. If the method being intercepted is also transactional, then use the ordering hints in the advice declarations to ensure that this one is before the transaction interceptor in the advice chain.
StatefulRetryOperationsInterceptor
A MethodInterceptor that can be used to automatically retry calls to a method on a service if it fails. The argument to the service method is treated as an item to be remembered in case the call fails. So the retry operation is stateful, and the item that failed is tracked by its unique key (via MethodArgumentsKeyGenerator) until the retry is exhausted, at which point the MethodInvocationRecoverer is called. The main use case for this is where the service is transactional, via a transaction interceptor on the interceptor chain. In this case the retry (and recovery on exhausted) always happens in a new transaction. The injected RetryOperations is used to control the number of retries. By default it will retry a fixed number of times, according to the defaults in RetryTemplate.
Both descriptions look the same to me. Any help is appreciated.

The RetryOperationsInterceptor does everything within the context of the internal RetryTemplate. The execute method does not exit until retries are exhausted. No other work can occur on the calling thread until retries are complete.
If the advised method also has, for example, a transaction interceptor after the retry interceptor, the proxy is cloned on each doWithRetry call, allowing a new transaction to be started each time.
With the StatefulRetryOperationsInterceptor, the exception is re-thrown to the caller and the caller is responsible for calling again until retries are exhausted, after which either the recoverer is called or an ExhaustedRetryException is thrown (by default).
Typical use of the stateful interceptor is with a message-driven application (e.g. JMS, RabbitMQ). Given that these systems can redeliver the failed message, and the redelivery can be later (based on other settings), other work can be processed on the calling thread before the failed request is retried.
IMPORTANT: The stateful interceptor has support for a rollbackClassifier, which indicates to the RetryTemplate that a particular request should be retried within the same transaction (i.e. the execute method does not throw the exception; it retries without starting a new transaction).
This only works when the retry interceptor is AFTER the transaction interceptor. Do not use this feature when the transaction interceptor is after the retry interceptor. In that case, the calling code must decide whether or not to retry the call, based on the exception type.
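To make the contrast concrete, here is a minimal sketch of how each interceptor is typically built with Spring Retry's RetryInterceptorBuilder; the configuration class, bean names, key generator and recoverer bodies are illustrative assumptions, not part of the original question:

import org.aopalliance.intercept.MethodInterceptor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.retry.interceptor.RetryInterceptorBuilder;

@Configuration
public class RetryConfig {

    // Stateless: the interceptor loops internally; the calling thread blocks
    // until the call succeeds or the attempts are exhausted.
    @Bean
    public MethodInterceptor statelessRetryInterceptor() {
        return RetryInterceptorBuilder.stateless()
                .maxAttempts(3)
                .backOffOptions(1000, 2.0, 10000) // initial, multiplier, max (ms)
                .build();
    }

    // Stateful: each failure is rethrown to the caller (e.g. a JMS/AMQP listener
    // container that redelivers the message); the key generator identifies the
    // "same" failed item across redeliveries, and the recoverer runs once the
    // attempts are exhausted.
    @Bean
    public MethodInterceptor statefulRetryInterceptor() {
        return RetryInterceptorBuilder.stateful()
                .keyGenerator(args -> args[0])    // e.g. a message id
                .maxAttempts(3)
                .recoverer((args, cause) -> null) // e.g. route to a dead-letter destination
                .build();
    }
}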

Related

Is it possible to skip any missed @Schedule events instead of catching them up?

Using WebSphere 9, I have a scheduled service, e.g.:
@Schedule(minute="30", hour="6-20", dayOfWeek="Mon-Fri",
          dayOfMonth="*", month="*", year="*", info="TimerName", persistent=true)
public void scheduledTimeout(final Timer t)
{
    // do something
}
It's persistent so that it will only trigger on one of the nodes in the cluster.
If for some reason the timer runs long, or otherwise doesn't run, I don't want WebSphere to try again - I just want it to wait until the next trigger.
Is this possible?
I don't see any relevant settings for this in WAS v9; the EJB spec says it is the responsibility of the bean provider to handle any out-of-sequence or additional events, so you would have to implement that logic in your bean using the timer parameter (see the sketch below).
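A minimal sketch of such an in-bean guard, assuming this specific minute="30" schedule; the tolerance constant and bean name are illustrative:

import java.util.Calendar;
import javax.ejb.Schedule;
import javax.ejb.Singleton;
import javax.ejb.Timer;

@Singleton
public class ReportTimerBean {

    // Hypothetical tolerance: a firing more than 5 minutes past the scheduled
    // minute is treated as a missed expiration being caught up and is skipped.
    private static final int LATE_TOLERANCE_MINUTES = 5;

    @Schedule(minute = "30", hour = "6-20", dayOfWeek = "Mon-Fri",
              info = "TimerName", persistent = true)
    public void scheduledTimeout(final Timer t) {
        int minute = Calendar.getInstance().get(Calendar.MINUTE);
        if (minute < 30 || minute > 30 + LATE_TOLERANCE_MINUTES) {
            return; // late/out-of-sequence expiration: skip and wait for the next trigger
        }
        // do the real work here
    }
}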
However, you could consider the WebSphere/Open Liberty server, which adds additional configuration (see details here: https://github.com/OpenLiberty/open-liberty/issues/10563) and allows you, for example, to specify what to do with such events:
The new missedPersistentTimerAction element will have the following 2 options:
ALL: The timeout method is invoked immediately for all missed expirations. When multiple expirations have been missed for the same timer, each invocation will occur synchronously until all missed expirations have been processed, then the timer will resume with the next future expiration. ALL is the current behavior and will be the default when failover is not enabled.
ONCE: The timeout method is invoked once immediately. All other missed expirations are skipped and the timer will resume with the next future expiration. ONCE will be the default behavior when failover is enabled. This is the minimal level of support required by the specification. When the timer runs on server start, calling getNextTimeout() will return the next timeout in the future, accounting for all the expirations that will be skipped, not the next timeout based on the missed expiration (i.e. not a time in the past).
Note: this does not apply to single-action timers. Single-action timers will always run once on server start and then be removed.

How to run blocking code on another thread and make the HTTP request return immediately

We started a new project with Quarkus and Mutiny and created a bunch of endpoints with Quarkus @Funq; everything has been working fine so far. Now we want to process something very time-consuming in one of the endpoints. What we expect is that once the user clicks a button to send the HTTP request from the frontend and it hits this specific endpoint, we return 202 Accepted immediately, leave the time-consuming operation processing on another thread in the backend, and then send a notification email to the user once it completes.
I understand this can be done with @Async or CompletableFuture, but now we want to do it with Mutiny. Based on my reading of the Mutiny documentation here https://smallrye.io/smallrye-mutiny/guides/imperative-to-reactive, runSubscriptionOn will avoid blocking the caller thread by running the time-consuming method on another thread, and my testing showed the time-consuming code did get executed on a different thread. However, the HTTP request does not return immediately; it is still pending until the time-consuming method finishes executing (as I observe in the browser's developer tools). Did I misunderstand how runSubscriptionOn works? How do I implement this feature with Mutiny?
My @Funq endpoint looks like this:
@Inject
MyService myService;

@Funq("api/report")
public Uni<String> sendReport(MyRequest request) {
    ExecutorService executor = Executors.newFixedThreadPool(10, r -> new Thread(r, "CUSTOM_THREAD"));
    return Uni.createFrom()
            .item(() -> myService.timeConsumingMethod(request))
            .runSubscriptionOn(executor);
}
Edit: I found the solution using Uni, based on @Ladicek's answer. After digging deeper into Quarkus and Uni I have a follow-up question:
Currently most of our blocking methods do not return Uni at the service level; instead we create a Uni from what they return (i.e. an object or a list) and return that Uni at the controller level in their endpoints, like this:
return Uni.createFrom().item(() -> myService.myIOBlockingMethod(request)).
As @Ladicek explained, I do not have to use .runSubscriptionOn explicitly, as the IO-blocking method will automatically run on a worker thread (since my service-level method does not return Uni). Is there any downside to this? My understanding is that this will lead to longer response times because it has to jump between the I/O thread and a worker thread, am I correct?
What is the best practice here? Should I always return Uni from those blocking methods at the service level so that they can run on the I/O threads as well? If so, I guess I will always need to call .runSubscriptionOn to run them on a different worker thread so that the I/O thread is not blocked, correct?
By returning a Uni, you're basically saying that the response is complete when the Uni completes. What you want is to run the action on a thread pool and return a complete response (Uni or not, that doesn't matter).
By the way, you're creating an extra thread pool in the method, for each request, and don't shut it down. That's wrong. You want to create one thread pool for all requests (e.g. in a @PostConstruct method) and ideally also shut it down when the application ends (in a @PreDestroy method).
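A minimal sketch of that shape, reusing the names from the question (MyService, MyRequest, timeConsumingMethod); the function class name, the "accepted" return value, and the javax.* imports (jakarta.* on newer Quarkus) are assumptions:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import io.quarkus.funqy.Funq;
import io.smallrye.mutiny.Uni;

@ApplicationScoped
public class ReportFunction {

    @Inject
    MyService myService;

    private ExecutorService executor;

    @PostConstruct
    void init() {
        // One shared pool for all requests, created once.
        executor = Executors.newFixedThreadPool(10, r -> new Thread(r, "CUSTOM_THREAD"));
    }

    @PreDestroy
    void shutdown() {
        executor.shutdown();
    }

    @Funq("api/report")
    public String sendReport(MyRequest request) {
        // Kick off the pipeline on the worker pool; subscribing is what triggers
        // the work, and the HTTP response does not wait for it to finish.
        Uni.createFrom()
           .item(() -> myService.timeConsumingMethod(request))
           .runSubscriptionOn(executor)
           .subscribe().with(
               result -> { /* e.g. send the notification email */ },
               failure -> failure.printStackTrace());
        return "accepted"; // the response completes immediately
    }
}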

Strategy for passing the same payload between messages when optional outbound gateways fail

I have a workflow whose message payload (MasterObj) is being enriched several times. During the 2nd enrichment an UnknownHostException was thrown by an outbound gateway. My error channel on the enricher is called, but the message the error channel receives is an exception, and the failed message inside that exception is no longer my MasterObj (the original payload) but the object produced by the request-payload-expression on the enricher.
The enricher calls an outbound-gateway and business-wise this is optional. I just want to continue my workflow with the payload that I've been enriching. The docs say that the error-channel on the enricher can be used to provide an alternate object (to what the enricher's request-channel would return) but even when I return an object from the enricher's error-channel, it still takes me to the workflow's overall error channel.
How do I trap errors from the enricher's outbound gateways and continue processing my workflow with the same payload I've been working on?
Is trying to maintain a single payload object for the entire workflow the right strategy? I need to be able to access it whenever I need.
I was thinking of using a bean scoped to the session where I store the payload but that seems to defeat the purpose of SI, no?
Thanks.
Well, if you worry about your MasterObj in the error-channel flow, don't use that request-payload-expression and let the original payload go to the enricher's sub-flow.
In that flow you can always use a simple <transformer expression="">.
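As an annotation-based equivalent of that transformer, here is a hedged sketch; the channel names are illustrative, and it simply pulls the original payload back out of the MessagingException delivered to the error channel:

import org.springframework.integration.annotation.Transformer;
import org.springframework.messaging.MessagingException;
import org.springframework.stereotype.Component;

@Component
public class EnricherErrorFlow {

    // The enricher's error-channel receives a message whose payload is a
    // MessagingException carrying the failed message; returning that message's
    // payload puts the original MasterObj back on the output channel so the
    // main flow can continue without the optional enrichment.
    @Transformer(inputChannel = "enricherErrorChannel", outputChannel = "afterEnricherChannel")
    public Object recoverOriginalPayload(MessagingException exception) {
        return exception.getFailedMessage().getPayload();
    }
}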
On the other hand, you're right: it isn't a good strategy to carry a single object through the whole flow. You pass messages via channels, and it isn't good to be tied to one payload at every step. The Spring Integration purpose is to be able to switch between different MessageChannel types at any time with little effort for their producers and consumers. You can also switch to a distributed mode where consumers and producers are on different machines.
If you still need to enrich the same object several times, consider writing some custom Java code. You can use a @MessagingGateway for that and still keep the Spring Integration benefits.
And right, a session-scoped bean is not a good fit for an integration flow, because you can simply switch to a different channel type and lose the ThreadLocal context.
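A hedged sketch of what such a @MessagingGateway could look like; the gateway interface, method names, and request channels are illustrative, and only MasterObj comes from the question:

import org.springframework.integration.annotation.Gateway;
import org.springframework.integration.annotation.MessagingGateway;

// Plain Java code keeps hold of the MasterObj and calls individual enrichment
// flows one by one, instead of threading a single payload through one long flow.
@MessagingGateway
public interface EnrichmentGateway {

    @Gateway(requestChannel = "enrichAddressChannel")
    MasterObj enrichAddress(MasterObj masterObj);

    @Gateway(requestChannel = "enrichAccountChannel")
    MasterObj enrichAccount(MasterObj masterObj);
}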

Use AspectJ to retry deadlocked transactions in Spring Boot

I created a deadlock situation by having 2 functions lock 2 rows in a MySQL table in opposite orders. I have @Transactional on both functions. I also have an AspectJ aspect on both functions that retries the failed one at most 2 more times.
After the deadlock, one thread succeeds and one fails. The failed one retries. So far so good. However, there are 2 problems after this point.
When the failed function is retried a second time, it reads the 2 rows again. However, the value for the first one is old and the second one is new.
At the end, the transaction fails because it was already marked for rollback. So the @Transactional proxy is around the retry proxy. Is there a way to reverse the order? I tried to have the retry proxy implement Ordered and set the order to Ordered.HIGHEST_PRECEDENCE and Ordered.LOWEST_PRECEDENCE, but neither worked.
I tried Spring Retry and it worked like a charm. Still looking into how it does its magic, though.
I basically did this:
add a dependency on org.springframework.retry:spring-retry
add @EnableRetry to the application
add @Retryable(maxAttempts = 3, backoff = @Backoff(delay = 2000)) above the @Transactional annotated functions (see the sketch below).
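A minimal sketch of that setup; the application and service class names, the method signature, and the comments about the deadlock scenario are illustrative:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.EnableRetry;
import org.springframework.retry.annotation.Retryable;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@EnableRetry
@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

@Service
class TransferService {

    // In the reported setup the retry advice wrapped the transaction advice,
    // so each attempt ran in a fresh transaction rather than one already
    // marked rollback-only.
    @Retryable(maxAttempts = 3, backoff = @Backoff(delay = 2000))
    @Transactional
    public void updateRowsInOppositeOrder(long firstId, long secondId) {
        // lock/update the two rows; a deadlock exception triggers a retry
    }
}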
This one has the more direct answer:
Intercepting @Transactional After Optimistic Lock for Asynchronous Calls in Restful App
Just added @Order(1) below the @Aspect annotation.
It turned out the key was to use 1 as the order. I tried the Ordered interface again, just like in the problem description, this time using 1, and it worked just the same, since the @Order annotation is just a shortcut for the interface.
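For the custom-aspect route, a hedged sketch of what that ordering can look like; the pointcut, the exception type retried on, and the attempt count are illustrative:

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.core.annotation.Order;
import org.springframework.dao.DeadlockLoserDataAccessException;
import org.springframework.stereotype.Component;

@Aspect
@Order(1) // lower value than the transaction advice, so the retry wraps the transaction
@Component
public class DeadlockRetryAspect {

    private static final int MAX_ATTEMPTS = 3;

    @Around("@annotation(org.springframework.transaction.annotation.Transactional)")
    public Object retryOnDeadlock(ProceedingJoinPoint pjp) throws Throwable {
        DeadlockLoserDataAccessException last = null;
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                return pjp.proceed(); // each attempt starts a new transaction
            } catch (DeadlockLoserDataAccessException e) {
                last = e;
            }
        }
        throw last;
    }
}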

Spring jdbcTemplate Rollback for multiple database operations

I have a program that uses a handler, a business object and a DAO for program execution. Control starts from the handler, goes to the business object and finally to the DAO for database operations.
For example, my program does 3 operations: insertEmployee(), updateEmployee() and deleteEmployee(), each method being called one after the other from the handler. Once insertEmployee() is called, control goes back to the handler, then it calls updateEmployee(), control comes back to the handler again, and then it calls deleteEmployee().
Problem statement: my first two DAO methods are successful, control is back in the handler, and the next method it requests from the DAO is deleteEmployee(). Meanwhile it hits some kind of exception in deleteEmployee(). It should then be able to roll back the earlier insertEmployee() and updateEmployee() operations as well, not only deleteEmployee(). It should behave as if this program never ran on the system.
Can anyone point me to how to achieve this with Spring JdbcTemplate transaction management?
You should read up on transaction propagation, in particular PROPAGATION_REQUIRED.
More info:
http://docs.spring.io/spring/docs/current/spring-framework-reference/html/transaction.html#tx-propagation
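In practice that usually means demarcating one transaction around all three DAO calls, e.g. at the business-object level. A sketch assuming annotation-driven transaction management; the class names, the Employee placeholder, and the EmployeeDao interface shown here are illustrative, while the three method names come from the question:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class EmployeeBusinessObject {

    private final EmployeeDao employeeDao;

    public EmployeeBusinessObject(EmployeeDao employeeDao) {
        this.employeeDao = employeeDao;
    }

    // One transaction spans all three DAO calls (propagation REQUIRED is the
    // default): if deleteEmployee() throws a runtime exception, the work done
    // by insertEmployee() and updateEmployee() is rolled back too.
    @Transactional
    public void processEmployee(Employee employee) {
        employeeDao.insertEmployee(employee);
        employeeDao.updateEmployee(employee);
        employeeDao.deleteEmployee(employee);
    }
}

interface EmployeeDao {
    void insertEmployee(Employee employee);
    void updateEmployee(Employee employee);
    void deleteEmployee(Employee employee);
}

class Employee {
}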
