Spring Retry with Transactional Annotation - spring-boot

Is the code below the correct way to use Spring Retry with Transactional?
Or do I need to take care of anything extra? I am using the latest Spring Boot version.
Is the retry attempted after the failed transaction is closed?
@Repository
public class MyRepository {

    @Retryable(value = CustomRetryAbleException.class, maxAttempts = 2, backoff = @Backoff(delay = 30000))
    @Transactional
    public Employee updateAndGetEmployee(String date) throws CustomRetryAbleException {
        try {
            jdbcTemplate.execute( .... ); // Call stored proc
        } catch (CustomRetryAbleException c) {
            throw new CustomRetryAbleException("Retry this Exception");
        }
    }
}

'This is the way.'
Do not forget to put the @EnableRetry annotation on either your config class (annotated with @Configuration) or your application class (annotated with @SpringBootApplication).
Read this for more information.
You can just log something and intentionally make it fail to see if it gets logged again after the delay.
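For example, a minimal sketch of enabling retry on the application class (the class name is illustrative, not from the question):
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.retry.annotation.EnableRetry;

// Minimal sketch: @EnableRetry on the Spring Boot application class so that
// @Retryable annotations (like the one on updateAndGetEmployee) are processed.
@SpringBootApplication
@EnableRetry
public class MyRetryApplication {
    public static void main(String[] args) {
        SpringApplication.run(MyRetryApplication.class, args);
    }
}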

Related

Spring event multicaster strange behavior with transaction event listener

I am using application events in my service and decided to go for a multicaster, since I can set up an error handler and get the stack trace in the console (normally runtime exceptions are not caught and are silently suppressed). So I defined my multicaster config as follows:
@Configuration
class ApplicationEventMulticasterConfig {

    companion object {
        private val log = LoggerFactory.getLogger(ApplicationEventMulticasterConfig::class.java)
    }

    @Bean(name = ["applicationEventMulticaster"])
    fun simpleApplicationEventMulticaster(multicasterExecutor: TaskExecutor): ApplicationEventMulticaster {
        val eventMulticaster = SimpleApplicationEventMulticaster()
        eventMulticaster.setTaskExecutor(multicasterExecutor)
        eventMulticaster.setErrorHandler { throwable ->
            log.error(throwable.stackTraceToString())
        }
        return eventMulticaster
    }

    @Bean(name = ["multicasterExecutor"])
    fun taskExecutor(): TaskExecutor {
        val executor = ThreadPoolTaskExecutor()
        executor.corePoolSize = 4
        executor.maxPoolSize = 40
        executor.initialize()
        return executor
    }
}
Listener Case 1:
@TransactionalEventListener
fun onEvent(event: Events.Created) {
Listener Case 2:
@TransactionalEventListener(fallbackExecution = true)
fun onEvent(event: Events.Created) {
I publish with multicaster.multicastEvent(Events.Created()). This simply does not work as expected: in case 1 the listener is not started at all (whether the transaction commits or rolls back), and in case 2 the listener is triggered EVERY time (on commit as well as on failure).
If I delete the whole ApplicationEventMulticasterConfig, everything works fine, but then I do not have the error handler set. Do you have any idea what could be wrong? It might be something with the way I set up those beans.
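For comparison, the baseline usage that @TransactionalEventListener is documented for is publishing the event through the ApplicationEventPublisher from inside an active transaction. A minimal Java sketch of that baseline (class and event names are illustrative, not from the question, and this is not claimed to be the fix here):
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.transaction.event.TransactionPhase;
import org.springframework.transaction.event.TransactionalEventListener;

@Service
public class CreatedPublisher {

    private final ApplicationEventPublisher publisher;

    public CreatedPublisher(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    // The event is published while the surrounding transaction is still active,
    // so transaction-phase listeners can register their synchronizations.
    @Transactional
    public void create() {
        // ... persist something ...
        publisher.publishEvent(new CreatedEvent());
    }

    @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
    public void onCreated(CreatedEvent event) {
        // runs only after the publishing transaction commits
    }

    public static class CreatedEvent {
    }
}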

Messages are not committed (lost) when using @TransactionalEventListener to send a message in a JPA transaction

Background of the code:
In order to replicate a production scenario, I have created a dummy app that basically saves something in the DB in a transaction and, in the same method, publishes an event; the event listener then sends a message to RabbitMQ.
Classes and usages
The transaction starts from this method:
@Override
@Transactional
public EmpDTO createEmployeeInTrans(EmpDTO empDto) {
    return createEmployee(empDto);
}
This method saves the record in the DB and also publishes the event:
@Override
public EmpDTO createEmployee(EmpDTO empDTO) {
    EmpEntity empEntity = new EmpEntity();
    BeanUtils.copyProperties(empDTO, empEntity);
    System.out.println("<< In Transaction : " + TransactionSynchronizationManager.getCurrentTransactionName() + " >> Saving data for employee " + empDTO.getEmpCode());
    // Record data into the database
    empEntity = empRepository.save(empEntity);
    // Sending the event; this will send the message.
    eventPublisher.publishEvent(new ActivityEvent(empDTO));
    return createResponse(empDTO, empEntity);
}
This is the ActivityEvent class:
import org.springframework.context.ApplicationEvent;
import com.kuldeep.rabbitMQProducer.dto.EmpDTO;
public class ActivityEvent extends ApplicationEvent {
public ActivityEvent(EmpDTO source) {
super(source);
}
}
And this is the TransactionalEventListener for the above event:
//@Transactional(propagation = Propagation.REQUIRES_NEW)
@TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
public void onActivitySave(ActivityEvent activityEvent) {
    System.out.println("Activity got event ... Sending message .. ");
    kRabbitTemplate.convertAndSend(exchange, routingkey, activityEvent.getSource());
}
The kRabbitTemplate is a bean configured like this:
@Bean
public RabbitTemplate kRabbitTemplate(ConnectionFactory connectionFactory) {
    final RabbitTemplate kRabbitTemplate = new RabbitTemplate(connectionFactory);
    kRabbitTemplate.setChannelTransacted(true);
    kRabbitTemplate.setMessageConverter(kJsonMessageConverter());
    return kRabbitTemplate;
}
Problem Definition
When I save a record and send a message to RabbitMQ using the above code flow, my messages are not delivered to the server, meaning they are lost.
What I understand about transactions in AMQP is:
If the template is transacted but convertAndSend is not called from a Spring/JPA transaction, then messages are committed within the template's convertAndSend method.
// this is a snippet from org.springframework.amqp.rabbit.core.RabbitTemplate.doSend()
if (isChannelLocallyTransacted(channel)) {
// Transacted channel created by this template -> commit.
RabbitUtils.commitIfNecessary(channel);
}
But if the template is transacted and convertAndSend is called from a Spring/JPA transaction, then isChannelLocallyTransacted in the doSend method evaluates to false and the commit is done by the method which initiated the Spring/JPA transaction.
What I found after investigating the reason for message loss in my code above:
A Spring transaction was active when I called the convertAndSend method, so the message was supposed to be committed within the Spring transaction.
For that, RabbitTemplate binds the resources and registers the synchronizations before sending the message, in bindResourceToTransaction of org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils:
public static RabbitResourceHolder bindResourceToTransaction(RabbitResourceHolder resourceHolder,
ConnectionFactory connectionFactory, boolean synched) {
if (TransactionSynchronizationManager.hasResource(connectionFactory)
|| !TransactionSynchronizationManager.isActualTransactionActive() || !synched) {
return (RabbitResourceHolder) TransactionSynchronizationManager.getResource(connectionFactory); // NOSONAR never null
}
TransactionSynchronizationManager.bindResource(connectionFactory, resourceHolder);
resourceHolder.setSynchronizedWithTransaction(true);
if (TransactionSynchronizationManager.isSynchronizationActive()) {
TransactionSynchronizationManager.registerSynchronization(new RabbitResourceSynchronization(resourceHolder,
connectionFactory));
}
return resourceHolder;
}
In my code, after the resource is bound, it is not able to registerSynchronization because TransactionSynchronizationManager.isSynchronizationActive() == false. And since it fails to registerSynchronization, the Spring commit did not happen for the RabbitMQ message, as AbstractPlatformTransactionManager.triggerAfterCompletion calls RabbitMQ's commit for each synchronization.
Problems I faced because of the above issue:
The message was not committed in the Spring transaction, so the message was lost.
As the resource was added in bindResourceToTransaction, this resource remained bound and did not allow a resource to be added for any other message sent in the same thread.
Possible root cause of TransactionSynchronizationManager.isSynchronizationActive() == false:
I found that the method which starts the transaction removed the synchronization in triggerAfterCompletion of the org.springframework.transaction.support.AbstractPlatformTransactionManager class, because status.isNewSynchronization() evaluated to true after the DB operation (this usually does not happen if I call convertAndSend without ApplicationEvent).
private void triggerAfterCompletion(DefaultTransactionStatus status, int completionStatus) {
if (status.isNewSynchronization()) {
List<TransactionSynchronization> synchronizations = TransactionSynchronizationManager.getSynchronizations();
TransactionSynchronizationManager.clearSynchronization();
if (!status.hasTransaction() || status.isNewTransaction()) {
if (status.isDebug()) {
logger.trace("Triggering afterCompletion synchronization");
}
// No transaction or new transaction for the current scope ->
// invoke the afterCompletion callbacks immediately
invokeAfterCompletion(synchronizations, completionStatus);
}
else if (!synchronizations.isEmpty()) {
// Existing transaction that we participate in, controlled outside
// of the scope of this Spring transaction manager -> try to register
// an afterCompletion callback with the existing (JTA) transaction.
registerAfterCompletionWithExistingTransaction(status.getTransaction(), synchronizations);
}
}
}
What I did to overcome this issue
I simply added @Transactional(propagation = Propagation.REQUIRES_NEW) along with @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT) on the onActivitySave method (as sketched below), and it worked, as a new transaction was started.
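For clarity, this is roughly what that workaround looks like applied to the listener shown earlier (a sketch based on the question's own code, with the previously commented-out annotation enabled; not a verified general fix):
@Transactional(propagation = Propagation.REQUIRES_NEW)
@TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
public void onActivitySave(ActivityEvent activityEvent) {
    // A new transaction is started here, so the channel-transacted RabbitTemplate
    // has an active transaction with its own synchronizations to commit against.
    kRabbitTemplate.convertAndSend(exchange, routingkey, activityEvent.getSource());
}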
What I need to know
Why does status.isNewSynchronization() evaluate to true in the triggerAfterCompletion method when using ApplicationEvent?
If the transaction was supposed to terminate in the parent method, why did I get TransactionSynchronizationManager.isActualTransactionActive() == true in the listener class?
If an actual transaction is active, was it supposed to remove the synchronization?
In bindResourceToTransaction, does Spring AMQP assume an active transaction without synchronization? If the answer is yes, why not initialize the synchronization if it is not active?
If I am propagating a new transaction then I am losing the parent transaction; is there any better way to do it?
Please help me with this; it is a hot production issue, and I am not very sure about the fix I have made.
This is a bug; the RabbitMQ transaction code pre-dated the @TransactionalEventListener code by many years.
The problem is that, with this configuration, we are in a quasi-transactional state: while there is indeed a transaction in process, the synchronizations are already cleared because the transaction has already committed.
Using @TransactionalEventListener(phase = TransactionPhase.BEFORE_COMMIT) works.
I see you already raised an issue:
https://github.com/spring-projects/spring-amqp/issues/1309
In future, it's best to ask questions here, or raise an issue if you feel there is a bug. Don't do both.

How to increase transaction timeout in Quarkus?

I have some configurations in my application.properties file:
...
quarkus.datasource.url=jdbc:postgresql://...:5432/....
quarkus.datasource.driver=org.postgresql.Driver
quarkus.datasource.username=user
quarkus.datasource.password=password
quarkus.hibernate-orm.database.generation=update
...
I have a scheduler with a @Transactional method that takes a long time to finish executing:
@ApplicationScoped
class MyScheduler {
    ...

    @Transactional
    @Scheduled(every = "7200s")
    open fun process() {
        // ... my slow process goes here ...
        entityManager.persist(myObject)
    }
}
And then the transactional method gets a timeout error like this:
2019-06-24 20:11:59,874 WARN [com.arj.ats.arjuna] (Transaction Reaper) ARJUNA012117: TransactionReaper::check timeout for TX 0:ffff0a000020:d58d:5cdad26e:81 in state RUN
2019-06-24 20:12:47,198 WARN [com.arj.ats.arjuna] (DefaultQuartzScheduler_Worker-3) ARJUNA012077: Abort called on already aborted atomic action 0:ffff0a000020:d58d:5cdad26e:81
Caused by: javax.transaction.RollbackException: ARJUNA016102: The transaction is not active! Uid is 0:ffff0a000020:d58d:5cdad26e:81
I believe that I must increase the timeout of my transactional method.
But I don't know how I can do this.
Could someone help me, please?
Thanks!
It seems that this has changed; it is now possible to set the transaction timeout:
https://quarkus.io/guides/transaction
You can configure the default transaction timeout, the timeout that applies to all transactions managed by the transaction manager, via the property:
quarkus.transaction-manager.default-transaction-timeout = 240s
It is specified as a duration (java.time.Duration format); the default is 60 seconds.
Quarkus doesn't allow you to globally configure the default transaction timeout yet (see https://github.com/quarkusio/quarkus/pull/2984).
But you should be able to do this at the user transaction level.
You can inject the UserTransaction object and set the transaction timeout in a @PostConstruct block.
Something like this should work:
@ApplicationScoped
class MyScheduler {

    @Inject
    lateinit var userTransaction: UserTransaction

    @PostConstruct
    fun init() {
        // set a timeout as high as you need (in seconds)
        userTransaction.setTransactionTimeout(3600)
    }

    @Transactional
    @Scheduled(every = "7200s")
    open fun process() {
        entityManager.persist(myObject)
    }
}
If you extract the code that runs in the transaction into a Service, you can have a service with a @Transactional annotation, inject the UserTransaction into your scheduler, and set the transaction timeout before calling the service (sketched below).
All this works, I just tested both solutions ;)
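A rough Java sketch of that second approach (the OfferService name and its processOffers method are hypothetical, not taken from the answer):
import io.quarkus.scheduler.Scheduled;
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.transaction.SystemException;
import javax.transaction.Transactional;
import javax.transaction.UserTransaction;

@ApplicationScoped
public class MySchedulerSketch {

    @Inject
    UserTransaction userTransaction;

    @Inject
    OfferService offerService; // hypothetical @Transactional service doing the slow work

    @Scheduled(every = "7200s")
    void process() {
        try {
            // Raise the timeout (in seconds) for transactions begun on this thread,
            // then let the @Transactional service method start the transaction.
            userTransaction.setTransactionTimeout(3600);
        } catch (SystemException e) {
            throw new IllegalStateException(e);
        }
        offerService.processOffers();
    }
}

// Hypothetical extracted service (would normally live in its own file).
@ApplicationScoped
class OfferService {

    @Transactional
    public void processOffers() {
        // ... the slow work and entityManager.persist(...) would go here ...
    }
}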
Thanks @loicmathieu for the answer!
I will just append some more details below.
You need to remove @Transactional and set the transaction timeout before beginning the transaction. In the end, you must commit the transaction:
import io.quarkus.scheduler.Scheduled
import javax.enterprise.context.ApplicationScoped
import javax.inject.Inject
import javax.persistence.EntityManager
import javax.transaction.UserTransaction

@ApplicationScoped
open class MyScheduler {

    @Inject
    lateinit var em: EntityManager

    @Inject
    lateinit var ut: UserTransaction

    @Scheduled(every = "3600s")
    open fun process() {
        ut.setTransactionTimeout(3600)
        ut.begin()
        offerService.processOffers()
        ut.commit()
    }
}
Use the @TransactionConfiguration annotation and specify the timeout in seconds:
@Transactional
@TransactionConfiguration(timeout = 9876)
@Scheduled(every = "7200s")
open fun process() {
    // ... my slow process goes here ...
    entityManager.persist(myObject)
}

How to fail Spring application startup if AOP Pointcut expression was not matched?

I have 2 datasources in my Spring Boot app. Whenever I take a connection and there is a user principal within the security context, I would like to set the user's id in the database package context by invoking a procedure.
To achieve this I created an AOP advice like this:
@Configuration
@Aspect
class SqlAuthAopConfig {

    @AfterReturning(
        value = "bean(myDataSource) && execution(java.sql.Connection javax.sql.DataSource+.getConnection(..))",
        returning = "connection")
    fun initUser(connection: Connection) {
        val principal = SecurityContextHolder.getContext().authentication.principal as? MyUser ?: return
        connection.prepareStatement("BEGIN P_AUTH.SET_ID(?);END;").use { ps ->
            ps.setLong(1, principal.id)
            ps.execute()
        }
    }
}
As you can see, I used the bean() pointcut designator (because I have 2 datasources). This does not seem to be type-safe. If the datasource bean name changes in the future, the pointcut expression won't match any bean, but the app will still start silently. How can I configure this aspect to fail application startup if the pointcut expression was not matched?
You can use the @AfterThrowing Spring annotation; then you can intercept it in the following way:
@AfterThrowing(value = "bean(...) && execution(...)", throwing = "ex")
public void interceptDataSourceErrors(Exception ex) {
    // Do something here with the exception.
    logger.debug(ex.getCause().getMessage());
}

Spring Boot JPA and HikariCP maintaining active connections

Brief:
Is there a way to ensure that a connection to the database is returned to the pool?
Not-brief:
Data flow:
I have some long-running tasks that could be sent to the server in large-volume bursts.
Each request is recorded in the DB to note that the submission was started, then the request is sent off for processing.
Whether it fails or succeeds, the request is recorded again after the task is completed.
The issue is that from the point the submission is recorded all the way through the long-running task, the connection pool uses an "active" connection. This could potentially use up a pool of any size if the burst is large enough.
I am using Spring Boot with the following structure:
Controller - responds at "/" and has the "service" autowired.
Service - contains all the JPA repositories and @Transactional methods to interact with the database.
Whenever the first service method call is made from the controller, it opens an active connection and doesn't release it until the controller method returns.
So, is there a way to return the connection to the pool after each service method?
Here is the service class in total:
@Service
@Slf4j
class SubmissionService {

    @Autowired
    CompanyRepository companyRepository

    @Autowired
    SubmissionRepository submissionRepository

    @Autowired
    FailureRepository failureRepository

    @Autowired
    DataSource dataSource

    @Transactional(readOnly = true)
    public Long getCompany(String apiToken){
        if(!apiToken){
            return null
        }
        return companyRepository.findByApiToken(apiToken)?.id
    }

    @Transactional
    public void successSubmission(Long id) {
        log.debug("updating submission ${id} to success")
        def submissionInstance = submissionRepository.findOne(id)
        submissionInstance.message = "successfully analyzed."
        submissionInstance.success = true
        submissionRepository.save(submissionInstance)
    }

    @Transactional
    public long createSubmission(Map properties) {
        log.debug("creating submission ${properties}")
        dataSource.pool.logPoolState()
        def submissionInstance = new Submission()
        for (key in properties.keySet()) {
            if(submissionInstance.hasProperty(key)){
                submissionInstance."${key}" = properties.get(key)
            }
        }
        submissionInstance.company = companyRepository.findOne(properties.companyId)
        submissionRepository.save(submissionInstance)
        return submissionInstance.id
    }

    @Transactional
    public Long failureSubmission(Exception e, Object analysis, Long submissionId){
        // Track the failures
        log.debug("updating submission ${submissionId} to failure")
        def submissionInstance
        if (submissionId) {
            submissionInstance = submissionRepository.findOne(submissionId)
            submissionRepository.save(submissionInstance)
        }
        def failureInstance = new Failure(submission: submissionInstance, submittedJson: JsonOutput.toJson(analysis), errorMessage: e.message)
        failureRepository.save(failureInstance)
        return failureInstance.id
    }
}
It turns out that @M.Deinum was onto the right track. Spring Boot JPA automatically turns on the "OpenEntityManagerInViewFilter" if the application property spring.jpa.open_in_view is set to true, which it is by default. I found this in the JPA Configuration Source.
After setting this to false, the database session wasn't held onto, and my problems went away.
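For reference, the canonical (dashed) form of that property in current Spring Boot versions is set in application.properties like this:
# Disable Open EntityManager (Session) in View so a connection is not
# held for the whole controller request.
spring.jpa.open-in-view=false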
