I have a situation in Spring where I am writing data to an external source. Before writing the data to the external source, I take a lock, read the object, perform some operation, write the object back, and unlock the object. The piece of code below roughly shows how I do it.
// Code begins here
private final Lock lck = new ReentrantLock();

public void manipulateData() {
    lck.lock();
    // Object obj = read the data
    // modify it
    write(obj);
    lck.unlock();
}
// Code ends here
Now, in a multi-threaded environment, what currently happens is that after the write call I call unlock, but my transaction is not committed until my method finishes executing. However, since I have already called unlock, another thread acquires the lock and reads data which is actually incorrect.
So I want the lock to be acquired by another thread only after the transaction commits.
Also, I cannot use programmatic transactions.
You can consider extracting the code that reads the object and modifies it to a different method (annotated with @Transactional). The lock should then be taken just before invoking the method and released after the method returns. Something like this:
@Autowired
private Dao dao;

private final Lock lck = new ReentrantLock();

public void manipulateData() {
    lck.lock();
    try {
        dao.update(myObj);
    } finally {
        lck.unlock();
    }
}

class Dao {

    @Transactional
    public void update(MyObject obj) {
        // read and modify
    }
}
You can implement an aspect (AOP) similar to this.
First, create your own annotation, analogous to @Transactional:
@Target({ ElementType.METHOD })
@Retention(RetentionPolicy.RUNTIME)
public @interface LockingTransactional {
}
Then the interesting part of the aspect should be similar to this:
...
private final Lock lck = new ReentrantLock();
...
@Around("@annotation(<FULL-PACKAGE-NAME-HERE>.LockingTransactional)")
public Object intercept(ProceedingJoinPoint pjp) throws Throwable {
    lck.lock();
    try {
        TransactionStatus status = createTransaction(); // create the transaction "manually"
        Object result;
        try {
            result = pjp.proceed();
        } catch (Throwable t) {
            txManager.rollback(status);
            throw t;
        }
        txManager.commit(status);
        return result;
    } finally {
        lck.unlock();
    }
}
...
private TransactionStatus createTransaction() {
    DefaultTransactionDefinition definition = new DefaultTransactionDefinition();
    definition.setIsolationLevel(<YOUR-LEVEL-HERE>);
    definition.setPropagationBehavior(<YOUR-PROPAGATION-HERE>);
    return txManager.getTransaction(definition);
}
...
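A method that needs this lock-then-commit-then-unlock behaviour would then just carry the custom annotation. A hypothetical usage sketch (the service class and the dao.read()/dao.update() methods are made-up names for illustration, not from the question):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class ExternalDataService {

    @Autowired
    private Dao dao; // hypothetical DAO, as in the first answer

    // The aspect acquires the lock, opens the transaction, runs this method,
    // commits, and only then releases the lock.
    @LockingTransactional
    public void manipulateData() {
        MyObject obj = dao.read(); // read the data
        // modify it
        dao.update(obj);           // write it back
    }
}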
Related
I have a job method in a class annotated with @Transactional. This job method calls inner methods to persist individual records. If I simulate an error in the following inner update() method somewhere in the middle of my result-set processing, I see that none of the successful records before/after this exception get saved after job completion. Why is that? All the other records should be persisted, with the exception of the individual record that failed. Only the inner update has rollbackFor.
#Service("mailService")
#Transactional
#EnableScheduling
public class MailServiceImpl implements MailService {
#Override
#Scheduled(cron = "${mail.cron.pubmed.autosynch.job}")
public void autoSynchPubMedJob() {
//... Fetch result set
for (Result r: resultset) {
try {
pubService.updatePublication(r);
} catch (Exception e) {
// Silently log and continue
log.error("Error on record: ", e);
}
}
}
The updatePublication method is the one with rollbackFor:
@Override
@Transactional(readOnly = false, rollbackFor = Exception.class)
public void updatePublication(Publication publication) throws Exception {
    dao.update1(..);
    dao.update2(..);
    // Simulate exception for a specific record for testing
    if (publication.getId() == 123) {
        throw new Exception("Test Exception");
    }
}
Result: no data is persisted at all at the end of the job. There should be partial persistence (the other, successful records).
When I remove the exception simulation, all data is persisted successfully. All data is also persisted if I remove rollbackFor from the inner call.
Probably because it joins the existing transaction. Try opening a new one with propagation = REQUIRES_NEW.
Note: a new transaction won't be opened if you call the method from the same service. You should either use a self-reference call or extract the logic to another @Service.
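A minimal sketch of what that could look like, assuming updatePublication lives in its own Spring bean so the call goes through the proxy (the class and DAO names here are illustrative, not from the question):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class PublicationServiceImpl implements PublicationService {

    @Autowired
    private PublicationDao dao; // hypothetical DAO, for illustration only

    // REQUIRES_NEW suspends the job's transaction and commits/rolls back
    // this record on its own, so a single failure no longer marks the
    // outer transaction rollback-only.
    @Override
    @Transactional(propagation = Propagation.REQUIRES_NEW, rollbackFor = Exception.class)
    public void updatePublication(Publication publication) throws Exception {
        dao.update1(publication);
        dao.update2(publication);
    }
}

With that in place, the catch block in autoSynchPubMedJob really does skip only the failed record.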
I have two transaction managers for two databases. I need to persist the same data into both databases. If one transaction fails, the other one needs to roll back. I have done it like below:
public interface DataService {
    void saveData();
}

@Service
public class DataServiceImpl implements DataService {

    @Autowired
    private DataRepository dataRepository;

    @Autowired
    private OrDataRepository orDataRepository;

    @Autowired
    @Qualifier("orService")
    private OrService orDataServiceImpl;

    @Override
    @Transactional(transactionManager = "transactionManager", rollbackFor = {RuntimeException.class})
    public void saveData() {
        Data data = new Data();
        data.setCompKey(UUID.randomUUID().toString().substring(1, 5));
        data.setName("data");
        dataRepository.save(data);
        orDataServiceImpl.save();
        //throw new RuntimeException("");
    }
}

public interface OrService {
    void save();
}

@Service("orService")
public class OrDataServiceImpl implements OrService {

    @Autowired
    private OrDataRepository orDataRepository;

    @Override
    @Transactional(rollbackFor = {RuntimeException.class})
    public void save() {
        OrData data = new OrData();
        data.setCompKey(UUID.randomUUID().toString().substring(1, 5));
        data.setName("ordata");
        orDataRepository.save(data);
    }
}
I have two transaction managers (entityManager & orEntityManager) for two different DBs.
If an exception occurs in the OrDataServiceImpl save method, data is not persisted in either DB. But if an exception occurs in the DataServiceImpl saveData method, data still gets persisted into the OrData table.
I want to roll back the data from both DBs if any exception occurs.
ChainedTransactionManager is deprecated, so I can't use it. Atomikos and Bitronix also can't be used due to some restrictions. Kindly suggest a better way to achieve a distributed transaction.
The code needs to be refactored. Edit the DataServiceImpl.saveData() method and comment out the orDataServiceImpl.save() line:
public void saveData() {
    Data data = new Data();
    data.setCompKey(UUID.randomUUID().toString().substring(1, 5));
    data.setName("data");
    dataRepository.save(data);
    //orDataServiceImpl.save();
    //throw new RuntimeException("");
}
Refactor/edit the OrDataService interface:

public interface OrDataService {
    void save(String uuid);
    void delete(String uuid); // will be used for the compensating transaction
}
Update the OrDataServiceImpl class to implement the above interface.
Write a new orchestration method and use a compensating transaction to roll back.
Pseudo code:

1. Call OrDataServiceImpl.save().
2. If step 1 was successful, call DataServiceImpl.saveData().
3. If an exception occurs at step 2, call OrDataServiceImpl.delete() to roll back (compensating transaction).
4. If an exception occurs at step 1, do nothing.
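A rough sketch of such an orchestration method, under the assumption that each save keeps its own @Transactional and runs against its own transaction manager (the orchestration class, its method name and the uuid handling are illustrative, not part of the original code):

import java.util.UUID;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Service;

@Service
public class DataOrchestrationService {

    @Autowired
    private DataService dataService;

    @Autowired
    @Qualifier("orService")
    private OrDataService orDataService;

    // Intentionally NOT @Transactional: each call below commits in its own
    // transaction, and failures are undone with a compensating action.
    public void saveToBothDatabases() {
        String uuid = UUID.randomUUID().toString().substring(1, 5);

        // Step 1: save into the second DB.
        orDataService.save(uuid);

        try {
            // Step 2: save into the first DB.
            dataService.saveData();
        } catch (RuntimeException e) {
            // Step 3: compensating transaction, manually undoing step 1.
            orDataService.delete(uuid);
            throw e;
        }
    }
}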
Let's say I have a REST call which creates some object "A" in the database. The method is marked with the @Transactional annotation. Just after creation I need to launch another asynchronous process, in another thread, through some messaging system, or in some other async way. That new process depends on the object "A" and needs to see it.
How can I make sure that the transaction is committed before the new process starts executing?
For example, in Spring there is:

TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
    @Override
    public void afterCommit() {
        // do what you want to do after commit
    }
});
Does Quarkus have something similar?
You can inject TransactionManager
@Inject
TransactionManager transactionManager;
and do
Transaction transaction = transactionManager.getTransaction();
transaction.registerSynchronization(new Synchronization() {

    @Override
    public void beforeCompletion() {
        // nothing here
    }

    @Override
    public void afterCompletion(int status) {
        // do some code after completion
    }
});
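Inside afterCompletion you would typically check the status before starting the dependent work. A small sketch (Status comes from the same javax.transaction / jakarta.transaction package as TransactionManager, depending on your Quarkus version; startAsyncProcess() is just a placeholder for your messaging or async call):

@Override
public void afterCompletion(int status) {
    if (status == Status.STATUS_COMMITTED) {
        // object "A" is now visible to other transactions,
        // so it is safe to kick off the asynchronous process
        startAsyncProcess(); // placeholder
    }
}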
I am trying to save a list of entities to an Oracle DB.
@Transactional
public void save() {
    // logic
    for (QuittanceType quittanceType : quittance) {
        quittancesService.parseQuittance(quittanceType);
    }
    // logic
}
At each step I call this method:
@Transactional
@Override
public void parseQuittance(QuittanceType quittance) {
    try {
        // logic: create payToChargeDb
        paymentToChargeService.saveAndFlush(payToChargeDb);
    } catch (Exception e) {
        log.warn("Ignore.", e);
    }
}
and the method:

@Override
public PaymentsToCharge saveAndFlushIn(PaymentsToCharge paymentsToCharge) {
    return paymentToChargeRepository.saveAndFlush(paymentsToCharge);
}
When I try to save an entity that violates a constraint, my main transaction rolls back and I get this stack trace:
Caused by: java.sql.BatchUpdateException: ORA-02290: CHECK integrity constraint violated(MYDB.PAYMENTS_TO_CHARGE_CHK1)
But I want to skip the unsuccessful entities and save the successful ones. I marked my method with

@Transactional(propagation = Propagation.REQUIRES_NEW)

and now it looks like this:
@Transactional
@Override
public void parseQuittance(QuittanceType quittance) {
    try {
        // logic: create payToChargeDb
        paymentToChargeService.saveAndFlushInNewTransaction(payToChargeDb);
    } catch (Exception e) {
        log.warn("Ignore.", e);
    }
}
and
@Transactional(propagation = Propagation.REQUIRES_NEW)
@Override
public PaymentsToCharge saveAndFlushInNewTransaction(PaymentsToCharge paymentsToCharge) {
    return paymentToChargeRepository.saveAndFlush(paymentsToCharge);
}
But when I try to save an entity that violates the constraint, I don't get an exception and never enter the catch block. Debugging just stops, the application continues to work, I don't get any errors, and it is as if a rollback is happening.
The proxy created for @Transactional does not intercept calls made within the same object.
In proxy mode (which is the default), only external method calls coming in through the proxy are intercepted. This means that self-invocation (in effect, a method within the target object calling another method of the target object) does not lead to an actual transaction at runtime even if the invoked method is marked with @Transactional. Also, the proxy must be fully initialized to provide the expected behavior, so you should not rely on this feature in your initialization code (that is, @PostConstruct).
https://docs.spring.io/spring/docs/current/spring-framework-reference/data-access.html#transaction-declarative
The same documentation recommends the use of AspectJ if you want this behaviour.
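Short of switching to AspectJ, a common workaround (not described in the quote above) is to make sure the REQUIRES_NEW method is reached through the Spring proxy, for example by calling it on a lazily self-injected reference instead of on this. A rough sketch with illustrative names (saveSkippingFailures and PaymentToChargeRepository are assumptions, not from your code):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Lazy;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class PaymentToChargeServiceImpl implements PaymentToChargeService {

    @Autowired
    private PaymentToChargeRepository paymentToChargeRepository; // assumed repository

    // Lazy self-injection yields the proxied bean instead of "this", so calls
    // made through "self" are intercepted and REQUIRES_NEW really opens
    // a new transaction.
    @Lazy
    @Autowired
    private PaymentToChargeService self;

    public void saveSkippingFailures(PaymentsToCharge paymentsToCharge) {
        try {
            self.saveAndFlushInNewTransaction(paymentsToCharge);
        } catch (Exception e) {
            // only the inner (new) transaction rolled back;
            // the caller's transaction is unaffected
        }
    }

    @Override
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public PaymentsToCharge saveAndFlushInNewTransaction(PaymentsToCharge paymentsToCharge) {
        return paymentToChargeRepository.saveAndFlush(paymentsToCharge);
    }
}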
I have one class that extends DeferredResult and implements Runnable, as shown below:
public class EventDeferredObject<T> extends DeferredResult<Boolean> implements Runnable {

    private Long customerId;
    private String email;

    @Override
    public void run() {
        RestTemplate restTemplate = new RestTemplate();
        EmailMessageDTO emailMessageDTO = new EmailMessageDTO("dineshshe@gmail.com", "Hi There");
        Boolean result = restTemplate.postForObject("http://localhost:9080/asycn/sendEmail", emailMessageDTO, Boolean.class);
        this.setResult(result);
    }

    // Constructor and getters and setters
}
Now I have a controller that returns an object of the above class. Whenever a new request comes to the controller, we check whether that request is present in the map (which stores the requests that are unprocessed at that instant). If it is not present, we create an EventDeferredObject, store it in the map, and start a thread for it. If such a request is already present, we return the existing one from the map. On completion of the request, we remove it from the map.
@RequestMapping(value="/sendVerificationDetails")
public class SendVerificationDetailsController {

    private ConcurrentMap<String, EventDeferredObject<Boolean>> requestMap = new ConcurrentHashMap<String, EventDeferredObject<Boolean>>();

    @RequestMapping(value="/sendEmail", method=RequestMethod.POST)
    public EventDeferredObject<Boolean> sendEmail(@RequestBody EmailDTO emailDTO)
    {
        EventDeferredObject<Boolean> eventDeferredObject = null;
        System.out.println("Size:" + requestMap.size());
        if (!requestMap.containsKey(emailDTO.getEmail()))
        {
            eventDeferredObject = new EventDeferredObject<Boolean>(emailDTO.getCustomerId(), emailDTO.getEmail());
            requestMap.put(emailDTO.getEmail(), eventDeferredObject);
            Thread t1 = new Thread(eventDeferredObject);
            t1.start();
        }
        else
        {
            eventDeferredObject = requestMap.get(emailDTO.getEmail());
        }
        eventDeferredObject.onCompletion(new Runnable() {
            @Override
            public void run() {
                if (requestMap.containsKey(emailDTO.getEmail()))
                {
                    requestMap.remove(emailDTO.getEmail());
                }
            }
        });
        return eventDeferredObject;
    }
}
Now, this code works fine as long as no request arrives that is identical to one already stored in the map. If we send a number of different requests at the same time, the code works fine.
Well, I do not know if I understood correctly, but I think you might have race conditions in the code, for example here:
if (!requestMap.containsKey(emailDTO.getEmail()))
{
    eventDeferredObject = new EventDeferredObject<Boolean>(emailDTO.getCustomerId(), emailDTO.getEmail());
    requestMap.put(emailDTO.getEmail(), eventDeferredObject);
    Thread t1 = new Thread(eventDeferredObject);
    t1.start();
}
else
{
    eventDeferredObject = requestMap.get(emailDTO.getEmail());
}
Think of a scenario in which you have two requests with the same key emailDTO.getEmail().
Request 1 checks whether there is an entry in the map for that key, does not find one, and puts one in.
Request 2 comes some time later, checks whether there is an entry in the map, finds it, and
goes to fetch it; however, just before that, the thread started by request 1 finishes and another thread, started by the onCompletion callback, removes the key from the map. At this point,
requestMap.get(emailDTO.getEmail())
will return null, and as a result you will have a NullPointerException.
Now, this does look like a rare scenario, so I do not know if this is the problem you see.
I would try to modify the code as follows (I did not run it myself, so I might have errors):
public class EventDeferredObject<T> extends DeferredResult<Boolean> implements Runnable {

    private Long customerId;
    private String email;
    private ConcurrentMap<String, EventDeferredObject<Boolean>> ourConcurrentMap;

    @Override
    public void run() {
        ...
        this.setResult(result);
        ourConcurrentMap.remove(this.email);
    }

    // Constructor and getters and setters
}
so the DeferredResult implementation has the responsibility of removing itself from the concurrent map. Moreover, I do not use onCompletion to set a callback, as that seems to me an unnecessary complication. To avoid the race conditions I talked about before, one needs to combine the check for an entry's presence and its retrieval into one atomic operation; this is what the putIfAbsent method of ConcurrentMap does. Therefore I change the controller into:
@RequestMapping(value="/sendVerificationDetails")
public class SendVerificationDetailsController {

    private ConcurrentMap<String, EventDeferredObject<Boolean>> requestMap = new ConcurrentHashMap<String, EventDeferredObject<Boolean>>();

    @RequestMapping(value="/sendEmail", method=RequestMethod.POST)
    public EventDeferredObject<Boolean> sendEmail(@RequestBody EmailDTO emailDTO)
    {
        EventDeferredObject<Boolean> eventDeferredObject = new EventDeferredObject<Boolean>(emailDTO.getCustomerId(), emailDTO.getEmail(), requestMap);
        EventDeferredObject<Boolean> oldEventDeferredObject = requestMap.putIfAbsent(emailDTO.getEmail(), eventDeferredObject);
        if (oldEventDeferredObject == null)
        {
            // no value was present before
            Thread t1 = new Thread(eventDeferredObject);
            t1.start();
            return eventDeferredObject;
        }
        else
        {
            return oldEventDeferredObject;
        }
    }
}
If this does not solve the problem you have, I hope that at least it gives you some ideas.