Spring, MongoDB and MySQL: rollback in the same transaction

I want to save one document and one row in a single transaction, and I still want to be able to roll the transaction back.
@Transactional(transactionManager = "chainedTransactionManager")
public void createAlienAndSpaceShip(String alienname, String spaceshipname) {
    spaceShipRepository.save(new SpaceShip(null, spaceshipname, 100.0d));
    if (true) {
        throw new RuntimeException("Something happened");
    }
    alienRepository.save(new Alien(null, alienname, 1.0d, 100.0d));
}
I tried to do this using ChainedTransactionManager, but it is deprecated.
I followed this tutorial: https://www.youtube.com/watch?v=qOfdE-cFzto
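For reference, the wiring from that kind of tutorial usually looks roughly like the sketch below: one transaction manager per datastore plus a ChainedTransactionManager bean combining them. This is only an assumption about the setup (bean names and factory types depend on the Spring Data versions in use), and ChainedTransactionManager is deprecated precisely because it only gives best-effort ordering, not a true two-phase commit.

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.MongoDatabaseFactory;
import org.springframework.data.mongodb.MongoTransactionManager;
import org.springframework.data.transaction.ChainedTransactionManager;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;

@Configuration
public class TransactionConfig {

    // MySQL (JDBC) transaction manager
    @Bean
    public DataSourceTransactionManager mysqlTransactionManager(DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }

    // MongoDB transaction manager (MongoDB transactions require a replica set)
    @Bean
    public MongoTransactionManager mongoTransactionManager(MongoDatabaseFactory mongoDatabaseFactory) {
        return new MongoTransactionManager(mongoDatabaseFactory);
    }

    // Transactions are started in the order given and committed/rolled back in
    // reverse order, so the manager most likely to fail should be listed last.
    @Bean
    public ChainedTransactionManager chainedTransactionManager(
            DataSourceTransactionManager mysqlTransactionManager,
            MongoTransactionManager mongoTransactionManager) {
        return new ChainedTransactionManager(mysqlTransactionManager, mongoTransactionManager);
    }
}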

Related

How to prevent data loss from Redis when the server is stopped forcefully, which results in RedisCommandInterruptedException

@Autowired
private StringRedisTemplate stringRedisTemplate;

public List<Object> getDataFromRedis(String redisKey) {
    try {
        long numberOfEntriesToRead = 60000;
        return stringRedisTemplate.executePipelined(
            (RedisConnection connection) -> {
                StringRedisConnection stringRedisConn = (StringRedisConnection) connection;
                for (int index = 0; index < numberOfEntriesToRead; index++) {
                    stringRedisConn.lPop(redisKey);
                }
                return null;
            });
    } catch (RedisCommandInterruptedException e) {
        LOGGER.error("Interrupted EXCEPTION :::", e);
        return Collections.emptyList(); // return something so the method still compiles
    }
}
I have a method which reads Redis content for a given key. The problem is that when my application server is stopped while this method is fetching data from Redis, I get a RedisCommandInterruptedException, which results in the loss of some data from Redis. How can I overcome this problem? Any suggestions are appreciated.
Pipelines are not atomic operations, therefore there is no guarantee that all or none of the commands are executed when an exception happens.
You can use Lua scripts or the MULTI command to run the operations in a single transaction.
You can read more about using MULTI in Spring Boot Data Redis in this SO thread and this site.
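For illustration, a minimal sketch of the MULTI/EXEC approach with Spring Data Redis, using a SessionCallback on the same StringRedisTemplate (the wrapper class and the count parameter are made up for the example):

import java.util.List;

import org.springframework.dao.DataAccessException;
import org.springframework.data.redis.core.RedisOperations;
import org.springframework.data.redis.core.SessionCallback;
import org.springframework.data.redis.core.StringRedisTemplate;

public class RedisBatchReader {

    private final StringRedisTemplate stringRedisTemplate;

    public RedisBatchReader(StringRedisTemplate stringRedisTemplate) {
        this.stringRedisTemplate = stringRedisTemplate;
    }

    @SuppressWarnings({ "unchecked", "rawtypes" })
    public List<Object> popAtomically(String redisKey, int count) {
        return stringRedisTemplate.execute(new SessionCallback<List<Object>>() {
            public List<Object> execute(RedisOperations operations) throws DataAccessException {
                operations.multi();                // start queuing commands (MULTI)
                for (int i = 0; i < count; i++) {
                    operations.opsForList().leftPop(redisKey);
                }
                return operations.exec();          // EXEC runs the queued commands atomically
            }
        });
    }
}

Either EXEC runs the whole queue or nothing is executed, which is the guarantee the answer above refers to; the popped values come back as the result list of exec().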

@Version column is not working out of the box with Spring Data JDBC

I have my version column defined like this:
@org.springframework.data.annotation.Version
protected long version;
With Spring Data JDBC it always tries to INSERT; updates are not happening. When I debug I see that PersistentEntityIsNewStrategy is being used, which is the default strategy. It has an isNew() method to determine the state of the entity being persisted, and I do see that version and id are used for this determination.
But my question is: who is responsible for incrementing the version column after every save, so that when .save() is called the second time, the isNew() method can return false?
Should we fire a BeforeSaveEvent and handle the incrementing of the version column ourselves? Would that be good enough to handle the optimistic lock?
Edit
I added an ApplicationListener to listen for BeforeSaveEvent like this:
public ApplicationListener<BeforeSaveEvent> incrementingVersion() {
    return event -> {
        Object entity = event.getEntity();
        if (BaseDataModel.class.isAssignableFrom(entity.getClass())) {
            BaseDataModel baseDataModel = (BaseDataModel) entity;
            Long version = baseDataModel.getVersion();
            if (version == null) {
                baseDataModel.setVersion(0L);
            } else {
                baseDataModel.setVersion(version + 1L);
            }
        }
    };
}
So now the version column works, but the rest of the auditable fields @CreatedAt, @CreatedBy, @LastModifiedDate and @LastModifiedBy are not set!
Edit 2
I created a new ApplicationListener like the one below. In this case both my custom listener and Spring's RelationalAuditingListener are getting called, but that still doesn't solve the problem: because of the order of the listeners (custom one followed by Spring's), markAudited invokes markUpdated instead of markCreated, since the version column has already been incremented. I tried making my listener LOWEST_PRECEDENCE, still no luck.
My custom listener:
public class CustomRelationalAuditingEventListener
        implements ApplicationListener<BeforeSaveEvent>, Ordered {

    @Override
    public void onApplicationEvent(BeforeSaveEvent event) {
        Object entity = event.getEntity();
        // handler.markAudited(entity);
        if (BaseDataModel.class.isAssignableFrom(entity.getClass())) {
            BaseDataModel baseDataModel = (BaseDataModel) entity;
            if (baseDataModel.getVersion() == null) {
                baseDataModel.setVersion(0L);
            } else {
                baseDataModel.setVersion(baseDataModel.getVersion() + 1L);
            }
        }
    }

    @Override
    public int getOrder() {
        return LOWEST_PRECEDENCE;
    }
}
Currently, you have to increment the version manually and there is no optimistic locking, i.e. the version is only used for checking whether an entity is new.
There is an open issue for optimistic locking support, and there is even an open PR for it.
It is therefore likely that this feature will be available with an upcoming 1.1 milestone.

Long-running Spring Service is locking a DB table

I have a Spring Service that goes through multiple items in a list and, for each one, makes an extra WS call to external services. The Service is called by a Job on a fixed time interval.
As a first step, the Service saves the status of the Job (STARTED) in a JOB_CONTROL table, then it iterates through the list, and at the end it saves the status (FINISHED).
There are 2 issues:
1. the JOB_CONTROL table doesn't get saved gradually: only the "FINISHED" value is saved, never "STARTED"
2. if the flush method is used to force the commit, the table gets locked, i.e. no other select can be made on it until the Service finishes
@Service
public class PromotionSchedulerService implements Runnable {

    @Autowired
    GeofencingAreaDAO storeDao;

    @Autowired
    promotionsWSClient promotionsWSClient;

    @Autowired
    private JobControlDAO jobControlDAO;

    public void run() {
        JobControl job = jobControlDAO.findByClassName(this.getClass().getSimpleName());
        job.setState(JobControlStateTypes.RUNNING.getStateType());
        job.setLastRunDate(new Date());
        // LINE BELOW DOES NOT GET COMMITTED IN DB
        jobControlDAO.save(job);

        List<GeofencingArea> stores = storeDao.findAllStores();
        for (GeofencingArea store : stores) {
            /** Call WS **/
            GetActivePromotionsResponse rsp = null;
            try {
                rsp = promotionsWSClient.getpromotions();
            } catch (Exception e) {
                e.printStackTrace();
                job.setState(JobControlStateTypes.FAILED.getStateType());
                job.setLastRunStatus("There was an error calling promagic promotions");
                jobControlDAO.save(job);
                return;
            }
            List<PromotionBean> promos = rsp.getReturn();
            for (PromotionBean promo : promos) {
                BackendPromotionPOJO backendPromotionsPOJO = new BackendPromotionPOJO();
                backendPromotionsPOJO.setDescription(promo.getDescription());
            }
        }

        // ONLY THIS JOB STATE GOES TO DB. IT ACTUALLY SEEMS TO OVERWRITE THE PREVIOUSLY SET VALUE ("RUNNING") FROM THE FIRST save() ABOVE
        job.setLastRunStatus("COMPLETED");
        job.setState(JobControlStateTypes.SUCCESS.getStateType());
        jobControlDAO.save(job);
    }
}
I would like to force the commit after changing the job state, without locking the table when doing this.
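One commonly suggested approach (a sketch, not from the original post) is to move the status update onto a separate bean and mark it with Propagation.REQUIRES_NEW, so it runs and commits in its own short transaction while the long-running loop keeps its own. JobControlService is an illustrative name; jobControlDAO, JobControl and JobControlStateTypes are taken from the question:

import java.util.Date;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class JobControlService {

    @Autowired
    private JobControlDAO jobControlDAO;

    // Runs in its own transaction, so the new state is committed (and visible to
    // other sessions) as soon as this method returns, independently of the caller's
    // long-running work. Note: REQUIRES_NEW only applies when the method is called
    // through the Spring proxy, i.e. from another bean, not via this.updateState(...).
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void updateState(JobControl job, JobControlStateTypes state) {
        job.setState(state.getStateType());
        job.setLastRunDate(new Date());
        jobControlDAO.save(job);
    }
}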

JPA testing and automatic rollback with Spring

I am referring to Spring Roo in Action (the book from Manning). Somewhere in the book it says "Roo marks the test class as @Transactional so that the unit tests automatically roll back any change."
Here is the method illustrating this:
@Test
@Transactional
public void addAndFetchCourseViaRepo() {
    Course c = new Course();
    c.setCourseType(CourseTypeEnum.CONTINUING_EDUCATION);
    c.setName("Stand-up Comedy");
    c.setDescription(
        "You'll laugh, you'll cry, it will become a part of you.");
    c.setMaxiumumCapacity(10);
    c.persist();
    c.flush();
    c.clear();

    Assert.assertNotNull(c.getId());
    Course c2 = Course.findCourse(c.getId());
    Assert.assertNotNull(c2);
    Assert.assertEquals(c.getName(), c2.getName());
    Assert.assertEquals(c2.getDescription(), c.getDescription());
    Assert.assertEquals(
        c.getMaxiumumCapacity(), c2.getMaxiumumCapacity());
    Assert.assertEquals(c.getCourseType(), c2.getCourseType());
}
However, I don't understand why changes in this method would be automatically rolled back if no RuntimeException occurs...
Quote from documentation:
By default, the framework will create and roll back a transaction for each test. You simply write code that can assume the existence of a transaction. [...] In addition, if test methods delete the contents of selected tables while running within a transaction, the transaction will roll back by default, and the database will return to its state prior to execution of the test. Transactional support is provided to your test class via a PlatformTransactionManager bean defined in the test's application context.
So, in other words, the SpringJUnit4ClassRunner that runs your tests always performs a transaction rollback after test execution.
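As an aside, if a test should keep its data, the default rollback can be switched off per method. A small sketch using the standard Spring test annotations; the context location and the test class name are only illustrative, Course and persist() come from the question:

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.annotation.Rollback;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.transaction.annotation.Transactional;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = "classpath:META-INF/spring/applicationContext.xml")
public class CoursePersistenceTest {

    @Test
    @Transactional
    @Rollback(false) // commit instead of the default rollback after the test
    public void addCourseAndKeepIt() {
        Course c = new Course();
        c.setName("Stand-up Comedy");
        c.persist();
        c.flush();
        // With @Rollback(false) the transaction is committed when the test ends,
        // so the row remains in the database.
    }
}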
I'm trying to find a method that allows me to roll back when one of the elements of a list fails for a reason within the established business rules (i.e. when my custom exception is thrown).
Example (the idea is not to record anything if one element in the list fails):
public class ControlSaveElement {

    public void saveRecords(List<MyRecord> listRecords) {
        Boolean status = true;
        for (MyRecord element : listRecords) {
            // Here are the business rules
            if (element.getStatus() == false) {
                // something
                status = false;
            }
            element.persist();
        }
        if (status == false) {
            // I need to roll back all elements persisted before
        }
    }
    ...
}
Any idea? I'm working with Roo 1.2.2.
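One common way to get that all-or-nothing behaviour (a sketch, not from the original post) is to run the whole loop inside a single Spring transaction and throw an unchecked exception when a business rule fails; everything persisted earlier in the same method is then rolled back. RecordValidationException is an illustrative custom exception:

import java.util.List;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ControlSaveElement {

    // rollbackFor is only required if the custom exception is checked;
    // unchecked exceptions cause a rollback by default.
    @Transactional(rollbackFor = RecordValidationException.class)
    public void saveRecords(List<MyRecord> listRecords) {
        for (MyRecord element : listRecords) {
            if (!element.getStatus()) {
                // Marks the transaction for rollback: nothing persisted in this
                // method call will be kept.
                throw new RecordValidationException("Business rule failed for " + element);
            }
            element.persist();
        }
    }
}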

About Spring Transaction Manager

Currently I am using Spring's declarative transaction manager in my application. During DB operations, if any constraint is violated, I want to check the error code against the database, i.e. I want to run one SELECT query after the exception happens. So I catch the DataIntegrityViolationException inside my catch block and then try to execute one more error-code query, but that query does not get executed. I am assuming that, since I am using the transaction manager, once an exception happens the next query is not executed. Is that right? I want to execute that error-code query before returning the results to the client. Is there any way to do this?
@Override
@Transactional
public LineOfBusinessResponse create(
        CreateLineOfBusiness createLineOfBusiness)
        throws GenericUpcException {
    logger.info("Start of createLineOfBusinessEntity()");
    LineOfBusinessEntity lineOfBusinessEntity =
            setLineOfBusinessEntityProperties(createLineOfBusiness);
    try {
        lineOfBusinessDao.create(lineOfBusinessEntity);
        return setUpcLineOfBusinessResponseProperties(lineOfBusinessEntity);
    }
    // Some db constraint has failed
    catch (DataIntegrityViolationException dav) {
        String errorMessage =
                errorCodesBd.findErrorCodeByErrorMessage(dav.getMessage());
        throw new GenericUpcException(errorMessage);
    }
    // General exception handling
    catch (Exception exc) {
        logger.debug("<<<<Coming inside General >>>>");
        System.out.print("<<<<Coming inside General >>>>");
        throw new GenericUpcException(exc.getMessage());
    }
}

public String findErrorCodeByErrorMessage(String errorMessage) throws GenericUpcException {
    try {
        int first = errorMessage.indexOf("[", errorMessage.indexOf("constraint"));
        int last = errorMessage.indexOf("]", first);
        String errorCode = errorMessage.substring(first + 1, last);
        //return errorCodesDao.find(errorCode);
        return errorCode;
    } catch (Exception e) {
        throw new GenericUpcException(e.getMessage());
    }
}
Please help me.
I don't think the problem you're describing has anything to do with transaction management. If DataIntegrityViolationException happens within your try block, your code within catch() should execute. Perhaps an exception different from DataIntegrityViolationException happens, or your findErrorCodeByErrorMessage() throws another exception. In general, transaction logic is applied only once you return from your method call; until then you can do whatever you like using normal Java language constructs. I suggest you put a breakpoint in your error handler or add some debug statements to see what's actually happening.
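If it does turn out that the surrounding transaction gets in the way of the follow-up SELECT (for example because the connection has been marked rollback-only on some databases), a common workaround is to run the lookup in its own transaction on a separate bean. A minimal sketch, assuming the errorCodesBd from the question is a Spring bean of an illustrative class ErrorCodesBd, called from another bean:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ErrorCodesBd {

    // REQUIRES_NEW suspends the caller's transaction and runs this lookup in a
    // fresh one, so it is unaffected by the constraint violation. This only works
    // when the method is invoked through the Spring proxy, i.e. from another bean.
    @Transactional(propagation = Propagation.REQUIRES_NEW, readOnly = true)
    public String findErrorCodeByErrorMessage(String errorMessage) {
        int first = errorMessage.indexOf("[", errorMessage.indexOf("constraint"));
        int last = errorMessage.indexOf("]", first);
        String errorCode = errorMessage.substring(first + 1, last);
        // The commented-out errorCodesDao.find(errorCode) lookup from the question
        // would now execute inside this new transaction.
        return errorCode;
    }
}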
