EventListener and retryable - spring

I would like to invoke some code after my application starts. Is there any way to handle this event:
Started SomeApp in 14.905 seconds (JVM running for 16.268)
I want to check whether another application is up. I've tried to use @Retryable, but it's executed before the application has started; an exception is thrown and the application exits.
@EventListener
fun handleContextRefresh(event: ContextRefreshedEvent) {
    retryableInvokeConnection()
}

@Retryable(
    value = [RetryableException::class, ConnectionException::class],
    maxAttempts = 100000,
    backoff = Backoff(delay = 5)
)
private fun retryableInvokeConnection() {
}

@Recover
private fun retryableInvokeConnectionExceptionHandler(ex: ConnectionException) {
}
Maybe I should use @PostConstruct and a while loop instead?

You can't call a @Retryable method from within the same bean; that bypasses the proxy with the retry interceptor. Move the method to another bean and inject it.
The event is a better way than using @PostConstruct.
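Not part of the original answer, but a minimal sketch (in Java; ConnectionChecker and StartupListener are made-up names, the exception types are the question's own) of what "move the method to another bean and inject it" could look like:

import org.springframework.context.event.ContextRefreshedEvent;
import org.springframework.context.event.EventListener;
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Recover;
import org.springframework.retry.annotation.Retryable;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

@Service
public class ConnectionChecker {

    // Retry settings mirror the question.
    @Retryable(
            value = {RetryableException.class, ConnectionException.class},
            maxAttempts = 100000,
            backoff = @Backoff(delay = 5))
    public void invokeConnection() {
        // call the other application here
    }

    @Recover
    public void recover(ConnectionException ex) {
        // all attempts failed
    }
}

@Component
class StartupListener {

    private final ConnectionChecker connectionChecker;

    StartupListener(ConnectionChecker connectionChecker) {
        this.connectionChecker = connectionChecker;
    }

    // Runs once the context is ready, so the call goes through the retry proxy.
    @EventListener
    public void onContextRefreshed(ContextRefreshedEvent event) {
        connectionChecker.invokeConnection();
    }
}

Because StartupListener receives an injected ConnectionChecker, the call to invokeConnection() goes through the Spring proxy and the retry interceptor is applied.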

Related

Spring event multicaster strange behavior with transaction event listener

I am using application events in my service and decided to go for a multicaster, since I can set an error handler on it and get the stack trace in the console (normally runtime exceptions thrown from listeners are not caught and are silently suppressed). So I defined my multicaster config as follows:
@Configuration
class ApplicationEventMulticasterConfig {

    companion object {
        private val log = LoggerFactory.getLogger(ApplicationEventMulticasterConfig::class.java)
    }

    @Bean(name = ["applicationEventMulticaster"])
    fun simpleApplicationEventMulticaster(multicasterExecutor: TaskExecutor): ApplicationEventMulticaster {
        val eventMulticaster = SimpleApplicationEventMulticaster()
        eventMulticaster.setTaskExecutor(multicasterExecutor)
        eventMulticaster.setErrorHandler { throwable ->
            log.error(throwable.stackTraceToString())
        }
        return eventMulticaster
    }

    @Bean(name = ["multicasterExecutor"])
    fun taskExecutor(): TaskExecutor {
        val executor = ThreadPoolTaskExecutor()
        executor.corePoolSize = 4
        executor.maxPoolSize = 40
        executor.initialize()
        return executor
    }
}
Listener case 1:
@TransactionalEventListener
fun onEvent(event: Events.Created) { ... }
Listener case 2:
@TransactionalEventListener(fallbackExecution = true)
fun onEvent(event: Events.Created) { ... }
I publish with multicaster.multicastEvent(Events.Created()). This simply does not work as expected: in case 1 the listener is not triggered at all (whether the transaction commits or rolls back), and in case 2 the listener is triggered EACH time (on commit and on failure).
If I delete the whole ApplicationEventMulticasterConfig, everything works fine, but then I have no error handler set. Do you have any idea what could be wrong? It might be something about the way I set up those beans.
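No answer is included in this excerpt, so the following is only a guess, not the thread's solution: giving SimpleApplicationEventMulticaster a TaskExecutor makes listener dispatch asynchronous, so the @TransactionalEventListener adapter runs on a pool thread where no transaction is bound, which would explain why case 1 never fires and case 2 always takes the fallback path. A minimal sketch (in Java; everything except the bean name is hypothetical) that keeps the error handler but stays synchronous:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.event.ApplicationEventMulticaster;
import org.springframework.context.event.SimpleApplicationEventMulticaster;

@Configuration
class SynchronousMulticasterConfig {

    private static final Logger log = LoggerFactory.getLogger(SynchronousMulticasterConfig.class);

    // Same bean name Spring looks up, but no TaskExecutor: listeners run on the
    // publishing thread, so the transaction is still visible to
    // @TransactionalEventListener, and failures still reach the error handler.
    @Bean(name = "applicationEventMulticaster")
    ApplicationEventMulticaster applicationEventMulticaster() {
        SimpleApplicationEventMulticaster multicaster = new SimpleApplicationEventMulticaster();
        multicaster.setErrorHandler(t -> log.error("Event listener failed", t));
        return multicaster;
    }
}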

Spring Retry with Transactional Annotation

Is the code below the correct way to use Spring Retry together with @Transactional?
Or do I need to take care of anything extra? I am using the latest Spring Boot version.
Is the retry attempted after the failed transaction has been closed?
@Repository
public class MyRepository {

    @Retryable(value = CustomRetryAbleException.class, maxAttempts = 2, backoff = @Backoff(delay = 30000))
    @Transactional
    public Employee updateAndGetEmployee(String date) throws CustomRetryAbleException {
        try {
            jdbcTemplate.execute(....); // call stored proc
        } catch (CustomRetryAbleException c) {
            throw new CustomRetryAbleException("Retry this Exception");
        }
    }
}
'This is the way.'
Do not forget to put the @EnableRetry annotation on either your config class (annotated with @Configuration) or your application class (annotated with @SpringBootApplication).
Read this for more information.
You can just log something and intentionally make it fail to see if it gets logged again after the delay.
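For illustration only (not part of the original answer; the class name is made up), enabling retry on the Boot application class would look like this:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.retry.annotation.EnableRetry;

// @EnableRetry registers the retry interceptor, so @Retryable methods on beans
// such as MyRepository above are actually retried.
@SpringBootApplication
@EnableRetry
public class MyApplication {

    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }
}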

How to throw exception from Spring AOP declarative retry methods?

I'm implementing some retry handling in my methods using Spring Retry.
I have a Data Access Layer (DAL) and a Service Layer in my application.
My Service layer calls the DAL to make a remote connection to retrieve information. If the DAL fails, it will retry. However, if all the retries fail, I would like to rethrow an exception.
In my current project I have something very similar to this:
@Configuration
@EnableRetry
public class Application {

    @Bean
    public Service service() {
        return new Service();
    }
}

@Service
class Service {

    @Autowired
    DataAccessLayer dal;

    public void doSomethingWithFoo() {
        Foo foo = dal.getFoo();
        // do something with Foo
    }
}

@Service
class DataAccessLayer {

    @Retryable(RemoteAccessException.class)
    public Foo getFoo() {
        // call remote HTTP service to get Foo
    }

    @Recover
    public Foo recover(RemoteAccessException e) {
        // log the error?
        // how to rethrow such that DataAccessLayer.getFoo() shows it throws an exception as well?
    }
}
My Application has a Service, and the Service calls DataAccessLayer.getFoo(). If getFoo() fails a number of times, the DAL will handle the retries. If it fails even after that, I'd like my Service layer to do something about it. However, I'm not sure how to let that be known. I'm using IntelliJ, and when I write throw e; in the @Recover recover method I don't get any warning that DataAccessLayer.getFoo() throws any exceptions. I'm not sure if it will. I'd like the IDE to warn me that when the retries fail a new exception will be thrown, so the Service layer knows to expect it. Otherwise, when it calls dal.getFoo() it doesn't know to handle any errors. How is this typically handled? Should I drop the AOP declarative style and go imperative instead?
You can change getFoo() (and recover()) to add throws <some checked exception> and wrap the RemoteAccessException in it (in recover()).
That will force the service layer to catch that exception.
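A sketch of that suggestion, reusing the question's classes (DataAccessFailedException and callRemoteService() are hypothetical placeholders, not Spring API):

// Checked exception that surfaces the failure to callers of the DAL.
public class DataAccessFailedException extends Exception {
    public DataAccessFailedException(String message, Throwable cause) {
        super(message, cause);
    }
}

@Service
class DataAccessLayer {

    // Declaring the checked exception makes the compiler (and the IDE) force
    // callers such as Service.doSomethingWithFoo() to handle it.
    @Retryable(RemoteAccessException.class)
    public Foo getFoo() throws DataAccessFailedException {
        return callRemoteService(); // the remote HTTP call from the question goes here
    }

    @Recover
    public Foo recover(RemoteAccessException e) throws DataAccessFailedException {
        // All retries are exhausted at this point; wrap and rethrow.
        throw new DataAccessFailedException("getFoo() failed after all retries", e);
    }
}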

How to initialize/enable Bean after another process finishes?

The idea is that I would like to first let a @Scheduled method retrieve some data, and only when that process has finished enable/initialize my @KafkaListener. Currently the Kafka listener starts up immediately, without waiting for the scheduler to be done.
I've tried to use @Conditional with a custom Condition, but this is only evaluated on context creation (i.e. at startup). @ConditionalOnBean didn't work either, because my Scheduler bean is already created before it finishes the process.
This is what my setup looks like.
Kafka Listener:
@Service
class KafkaMessageHandler(private val someRepository: SomeRepository) {

    @KafkaListener(topics = ["myTopic"])
    fun listen(messages: List<ConsumerRecord<*, *>>) {
        // filter messages based on data in someRepository
        // Do fancy stuff
    }
}
Scheduler:
@Component
class Scheduler(private val someRepository: SomeRepository) {

    @Scheduled(fixedDelayString = "\${schedule.delay}")
    fun updateData() {
        // Fetch data from API
        // update someRepository with this data
    }
}
Is there any nice Spring way of waiting for the scheduler to finish before initializing the KafkaMessageHandler?
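There is no answer in this excerpt; one common pattern (offered here only as a sketch in Java, not the thread's solution) is to declare the listener with autoStartup = "false" and start its container explicitly once the scheduled job has finished, using KafkaListenerEndpointRegistry:

import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.stereotype.Component;

// Hypothetical starter bean: the scheduler calls enable() after its first successful run.
@Component
public class KafkaListenerStarter {

    private final KafkaListenerEndpointRegistry registry;

    public KafkaListenerStarter(KafkaListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    public void enable() {
        // "myHandler" must match the id on the listener, e.g.
        // @KafkaListener(id = "myHandler", topics = "myTopic", autoStartup = "false")
        registry.getListenerContainer("myHandler").start();
    }
}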

What is the usage of the method start() in AbstractApplicationContext?

I'm a Spring user, and I have started to read the source code of Spring.
While reading AbstractApplicationContext, I found the method start(), and I noticed that it is not called when the ApplicationContext is initialized.
My questions:
1) What is this method for? Going by the meaning of the word "start", I would expect it to be called before the ApplicationContext can work, but it isn't.
2) How can I listen for the event that signals the ApplicationContext has started working? After reading the code, I found that this method publishes a ContextStartedEvent, but if I just initialize the context, the context works without publishing that event, so I can't use it to track the start of the ApplicationContext.
The start method is part of the Lifecycle interface, which is called as part of the application startup process.
If you want to be notified when the context is starting you should declare a bean that implements the Lifecycle interface.
package org.example;

import org.springframework.context.Lifecycle;

public class MyLifecycle implements Lifecycle {

    private boolean started = false;

    @Override
    public boolean isRunning() {
        return started;
    }

    @Override
    public void start() {
        System.err.println("MyLifecycle starting");
        started = true;
    }

    @Override
    public void stop() {
        System.err.println("MyLifecycle stopping");
        started = false;
    }
}
Then
<bean class="org.example.MyLifecycle"/>
This is all handled, by default, by the DefaultLifecycleProcessor, unless there is a bean in the context named lifecycleProcessor that implements the LifecycleProcessor interface.
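To address part 2 of the question (this is not part of the original answer): a plain Lifecycle bean like the one above is started when start() is called on the context itself, and that same call publishes the ContextStartedEvent. A small sketch, assuming the XML configuration above lives in a file named applicationContext.xml:

import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class StartDemo {

    public static void main(String[] args) {
        // refresh() happens inside the constructor; start() is a separate, explicit step.
        ConfigurableApplicationContext ctx =
                new ClassPathXmlApplicationContext("applicationContext.xml");

        // Invokes MyLifecycle.start() via the DefaultLifecycleProcessor
        // and then publishes a ContextStartedEvent.
        ctx.start();

        ctx.close();
    }
}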
