Prevent locking of tables during bulk/multiple table inserts using Spring Hibernate @Transactional EntityManager - spring

I'm using Spring @Transactional for multiple table inserts inside a single function.
For each entity's read/write I'm using an EntityManager. Suppose that 10 tables get updated with data in my function; in that case all 10 tables are locked until the transaction is over, which is bad for the user experience, since it causes waits/delays for users who are viewing pages that use these tables.
So how can I prevent the locking of tables for reads while the whole insert process is taking place? Is there a way to avoid using a transaction and do single, independent inserts per table?
Below is a code snippet:
@Transactional(propagation = Propagation.REQUIRED, isolation = Isolation.READ_UNCOMMITTED,
        noRollbackFor = {SQLException.class, IllegalStateException.class, PersistenceException.class})
public BaseImportDtoRoot importData(BaseImportDtoRoot baseDto) throws Exception {
    try {
        table1.fninsert(); // call each class to insert entity-wise
        table2.fninsert();
    } catch (Exception e) {
    }
}

public class Table1 {
    void fninsert() {
        MstTable tb1 = modMap.map(mstTableDto, MstTable.class);
        entityManager.persist(tb1);
        entityManager.flush();
        entityManager.clear();
    }
}

Is there a way to avoid using a transaction
You cannot; a transaction is needed whenever you persist data, so you need to either:
Use @Transactional, which wraps your DML inside a transaction
Create your transaction manually:
entityManager.getTransaction().begin();
entityManager.persist(data);
entityManager.getTransaction().commit();
(Note that getTransaction() only works with an application-managed EntityManager; a container-managed EntityManager injected via @PersistenceContext will throw IllegalStateException here.)
and do single, independent inserts per table
I think what you mean is to create a separate transaction for each insertion. You could do that by creating multiple methods annotated with @Transactional and removing the @Transactional annotation from the importData method (note that by doing so the importData method is no longer atomic). A sketch is shown below.
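A minimal sketch of that approach (the service, method, and DTO names are illustrative; the methods must live in a bean other than the caller so each call goes through the Spring proxy):
@Service
public class TableInsertService {

    @PersistenceContext
    private EntityManager entityManager;

    // Runs in its own short-lived transaction: any locks taken by this
    // insert are released as soon as the method returns, instead of being
    // held until the whole import finishes.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void insertTable1(MstTableDto dto) {
        MstTable tb1 = modMap.map(dto, MstTable.class); // modMap is the ModelMapper from the question
        entityManager.persist(tb1);
    }
}
The trade-off is that a failure partway through leaves the earlier inserts committed, so importData can no longer roll everything back.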
Please correct me if I misunderstand anything.

Related

Spring Boot: manually commit after each call to a stored procedure

I have a Spring Boot application that needs to iterate over a large number of records and call a stored procedure that inserts some data into a table for each record read.
We cannot use BatchUpdate because it takes a long time to process thousands of records, and I was asked to commit frequently (either after every record or after x records).
I looked online and did not find a good example of how to commit manually in Spring Boot while calling a stored procedure in a loop.
I am using SimpleJdbcCall and my code looks like this:
@Transactional(isolation = Isolation.READ_UNCOMMITTED, propagation = Propagation.NOT_SUPPORTED)
public class EventsProcessor {

    @Autowired
    @Qualifier("dbDatasource")
    DataSource dataSource;

    public void process(List<Event> events) throws Exception {
        SimpleJdbcCall dbTemplate = new SimpleJdbcCall(dataSource)
                .withProcedureName("UPDATE_EVENTS")
                .withSchemaName("TEST");
        DataSourceUtils.getConnection(dataSource).setAutoCommit(false);
        for (Event ev : events) {
            // fill inParams here
            outParams = dbTemplate.execute(inParams);
            DataSourceUtils.getConnection(dataSource).commit();
        }
    }
}
I tried both with and without Propagation.NOT_SUPPORTED, with the same result.
The code executes the call to the SP and there is no error when it executes commit(), but after commit(), if I query the table that the SP inserted the record into, I don't see the records in the table.
If I remove the setAutoCommit(false), the commit statement, and the Propagation.NOT_SUPPORTED, and just let Spring Boot handle transactions, then while it's processing I can see the records in the table if I do a READ UNCOMMITTED query, but they are not committed until the full job ends.
What am I doing wrong that is preventing the commit from happening after each row?
I ended up separating the call to the SP into a different method annotated with
@Transactional(propagation = Propagation.REQUIRES_NEW)
This way, when the method returns, it commits.
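A minimal sketch of that approach (the bean and method names are illustrative; the annotated method must live in a bean other than the caller so the call goes through the Spring proxy):
@Service
public class ProcedureCaller {

    private final SimpleJdbcCall updateEvents;

    public ProcedureCaller(@Qualifier("dbDatasource") DataSource dataSource) {
        this.updateEvents = new SimpleJdbcCall(dataSource)
                .withProcedureName("UPDATE_EVENTS")
                .withSchemaName("TEST");
    }

    // Each call runs in its own transaction that commits when the method returns.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public Map<String, Object> callUpdateEvents(SqlParameterSource inParams) {
        return updateEvents.execute(inParams);
    }
}
The loop in process(...) then calls procedureCaller.callUpdateEvents(inParams) once per event, so each row is committed independently.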

Deleting a record then selecting within the same Spring Transaction still returns the deleted record

I have some code within a Spring transaction with the isolation level set to SERIALIZABLE. This code does a few things: first it deletes all records from a table that have a flag set, next it performs a select to ensure invalid records cannot be written, and finally the new records are written.
The problem is that the select continues to return the records that were deleted when the code is run with the transaction annotation. My understanding is that because we are performing these operations within the same Spring transaction, the previous delete operation should be considered when performing the select.
We are using Spring Boot 2.1 and Hibernate 5.2
A summary of the code is shown below:
@HystrixCommand
public void deleteRecord(EntityObj entityObj) {
    fooRepository.deleteById(entityObj.getId());
    // Below line added as part of debugging, but I don't think I should really need it?
    fooRepository.flush();
}

public List<EntityObj> findRecordByProperty(final String property) {
    return fooRepository.findEntityObjByProperty(property);
}

@Transactional(isolation = Isolation.SERIALIZABLE)
public void debugReadWrite() {
    EntityObj entityObj = new EntityObj();
    entityObj.setId(1);
    deleteRecord(entityObj);
    List<EntityObj> results = findRecordByProperty("bar");
    if (!results.isEmpty()) {
        throw new RuntimeException("Should be no results!");
    }
}
The transaction has not committed yet; you need to complete the transaction and then find the record.
Decorating deleteRecord with propagation = Propagation.REQUIRES_NEW should solve the issue:
@Transactional(propagation = Propagation.REQUIRES_NEW)
public void deleteRecord(EntityObj entityObj) {
    fooRepository.deleteById(entityObj.getId());
    // flush not needed: fooRepository.flush();
}
A flush is not needed because when deleteRecord completes, the transaction will be committed.
Under the hood:
// start transaction
public void deleteRecord(EntityObj entityObj) {
    fooRepository.deleteById(entityObj.getId());
}
// commit transaction
Turns out the issue was due to our use of Hystrix. The transaction is started outside of Hystrix and then at a later point the call goes through a Hystrix command. The Hystrix command uses a thread pool, so the transaction is lost while executing on the new thread from the Hystrix thread pool. See this GitHub issue for more info:
https://github.com/spring-cloud/spring-cloud-netflix/issues/1381
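If the command has to run inside the surrounding transaction, one option (an assumption based on Hystrix's isolation strategies, not something stated in the original answer) is semaphore isolation, which executes the command on the calling thread:
// SEMAPHORE isolation runs the command on the caller's thread, so the
// thread-bound Spring transaction and Hibernate session stay visible.
@HystrixCommand(commandProperties = {
        @HystrixProperty(name = "execution.isolation.strategy", value = "SEMAPHORE")
})
public void deleteRecord(EntityObj entityObj) {
    fooRepository.deleteById(entityObj.getId());
}
The trade-off is that semaphore isolation cannot time out and interrupt the command the way thread-pool isolation can.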

How to roll back a transaction invoked with JPA entity listeners

I'm using JPA, Spring Data, and entity listeners to audit my entities, specifically on PostUpdate, PostPersist, and PostRemove.
This is pseudocode of my entity listener class:
public class EntityListener extends AuditingEntityListener {

    @PostUpdate
    public void postPersist(Object auditedEntity) {
        writer.saveEntity(auditedEntity, "UPDATE");
    }
}
This is pseudocode of the Writer class:
public class Writer {

    @Async
    public void saveEntity(Object auditedEntity, String action) {
        try {
            // some code to prepare the history entity
            historyDAO.save(entity);
        } catch (Exception e) {
        }
    }
}
When an exception is thrown in the Writer class, the auditedEntity is still updated or inserted, but the historyEntity where I store the audit action is not.
The problem is that I need to invoke the saveEntity method in another thread for performance reasons (@Async), but in that case a new transaction is opened instead of joining the previously opened one.
How can I solve the rollback issue for both transactions,
so that when an exception is thrown, neither the historyEntity nor the auditedEntity is persisted?
I understand that you want to roll back both the child and the parent transaction when an exception is thrown from within Writer.saveEntity.
The problem is that the thread with the original transaction would still need to wait for all these complicated operations to finish before it could mark the transaction as committed. You can't easily span a transaction across multiple threads, either.
The only thing you could probably do to speed things up is you could run the logic of generating the history entities in parallel, and then save them all just before the transaction commits.
One way of doing that that I can think of is using a Hibernate interceptor:
public class AuditInterceptor extends EmptyInterceptor {

    private List<Callable<BaseEntity>> historyEntries;
    private ExecutorService executor;
    ...

    public void beforeTransactionCompletion(Transaction tx) {
        List<Future<BaseEntity>> futures = executor.invokeAll(historyEntries);
        if (executor.awaitTermination(/* some timeout here */)) {
            futures.stream().map(Future::get).forEach(entity -> session.save(entity));
        } else {
            /* rollback */
        }
    }
}
Your listener code then becomes:
@PostUpdate
public void postPersist(Object auditedEntity) {
    interceptor.getHistoryEntries().add(new Callable<BaseEntity>() {
        /* history entry generation logic goes here */
    });
}
(note that the above code is greatly simplified, you could use any other asynchronous execution API, the basic idea is that you need to block in AuditInterceptor.beforeTransactionCompletion, waiting for all the history entries to be generated)
However, I would strongly advise against using the above technique, as it is rather complicated and error prone.
If you look here: https://docs.jboss.org/hibernate/orm/5.1/userguide/html_single/chapters/events/Events.html, you'll find that Hibernate interceptors have more interesting methods that could help you gather auditing info, and that perhaps your implementation could make use of them, possibly avoiding the need for complicated logic altogether (Hibernate already does track changes to fields of individual entities, so you get that information for free).
Why reinvent the wheel, though? If you dig even deeper, you'll find the Hibernate Envers module (http://hibernate.org/orm/envers/, works for both JPA and pure Hibernate) which gives you business auditing out of the box. Envers already digs into the above mechanism, so hopefully the performance issue would go away.
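For reference, enabling Envers is typically just the hibernate-envers dependency plus an annotation; a minimal sketch (the entity is illustrative):
import org.hibernate.envers.Audited;

@Entity
@Audited // Envers writes a revision row for every insert/update/delete
public class AuditedEntity {

    @Id
    private Long id;
    // ...
}
Because Envers writes its audit rows in the same transaction as the entity change, a rollback discards both together, which also addresses the original question.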
Final note: have you measured how long history entry generation takes? I would guess that executing for loops and if statements might be cheaper than database access operations. If I were you, I wouldn't do any of the above unless I was absolutely sure that's where the performance bottleneck was.

How to link JPA persistence context with single database transaction

Latest Spring Boot with JPA and Hibernate: I'm struggling to understand the relationship between transactions, the persistence context, and the Hibernate session, and I can't easily avoid the dreaded no-session LazyInitializationException.
I update a set of objects in one transaction and then I want to loop through those objects, processing each in a separate transaction - seems straightforward.
public void control() {
    List<Entity> entities = getEntitiesToProcess();
    for (Entity entity : entities) {
        processEntity(entity.getId());
    }
}

@Transactional(value = TxType.REQUIRES_NEW)
public List<Entity> getEntitiesToProcess() {
    List<Entity> entities = entityRepository.findAll();
    for (Entity entity : entities) {
        // Update a few properties
    }
    return entities;
}

@Transactional(value = TxType.REQUIRES_NEW)
public void processEntity(String id) {
    Entity entity = entityRepository.getOne(id);
    entity.getLazyInitialisedListOfObjects(); // throws LazyInitializationException: could not initialize proxy - no Session
}
However, I get a problem because (I think) the same Hibernate session is being used for both transactions. When I call entityRepository.getOne(id) in the 2nd transaction, I can see in the debugger that I am returned exactly the same object that was returned by findAll() in the 1st transaction, without a DB access. If I understand this correctly, it's the Hibernate cache doing this? If I then call a method on my object that requires lazy evaluation, I get a "no session" error. I thought the cache and the session were linked, so that's my first confusion.
If I drop all the @Transactional annotations, or if I put @Transactional on the control method, it all runs fine, but the database commit isn't done until the control method completes, which is obviously not what I want.
So, I have a few questions:
How can I make the Hibernate session align with my transaction scope?
What is a good pattern for running separate transactions in a loop with JPA and declarative transaction management?
I want to retain the declarative style (i.e. no XML) and don't want to do anything Hibernate-specific.
Any help appreciated!
Thanks
Marcus
Spring creates a proxy around your service class, which means @Transactional annotations are only applied when the annotated methods are called through the proxy (i.e. on an instance of the service that has been injected somewhere).
You are calling getEntitiesToProcess() and processEntity() from within control(), which means those calls do not go through the proxy and instead inherit the transactional scope of the control() method (if you aren't also calling control() from another method in the same class).
In order for @Transactional to apply, you need to do something like this:
@Autowired
private ApplicationContext applicationContext;

public void control() {
    MyService myService = applicationContext.getBean(MyService.class);
    List<Entity> entities = myService.getEntitiesToProcess();
    for (Entity entity : entities) {
        myService.processEntity(entity.getId());
    }
}
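An alternative to fetching the bean from the ApplicationContext (not part of the original answer, so treat it as a suggestion) is to inject the bean's own proxy into itself:
@Autowired
@Lazy // lazy injection avoids the circular-dependency problem of self-injection
private MyService self;

public void control() {
    for (Entity entity : self.getEntitiesToProcess()) {
        self.processEntity(entity.getId()); // goes through the proxy, so REQUIRES_NEW applies
    }
}
Either way, the point is that the call must cross the proxy boundary for the transactional advice to run.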

Why does JdbcTemplate.batchUpdate(sql[]) not roll back in Spring 4 when using the @Transactional annotation?

The code below is not working for rollback when an exception occurs while inserting records into the database. I am using the Spring 4 framework and annotations.
I am using the below code for transaction management and it will not roll back for any exception.
@Transactional(rollbackFor = RuntimeException.class)
public boolean insertBatch(List<String> query) throws SQLException {
    boolean flag = false;
    try {
        JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
        String[] sql = query.toArray(new String[query.size()]);
        jdbcTemplate.batchUpdate(sql);
        flag = true;
    } catch (DataAccessException e) {
        flag = false;
        MessageResource.setMessages("Constraints violation! CSV data value not matched with database constraints");
        LOGGER.info("CSV file data not as expected by the database table structure definition, e.g. constraint violation/data type length/null for the same data value");
        LOGGER.error("Cause for error: " + e.getRootCause().getMessage());
        LOGGER.debug("Details: " + e.toString());
        throw new RuntimeException("Roll back operation");
        // transactionManager.rollback(status);
    }
    return flag;
}
Actually, the answer was provided by M. Deinum in a comment:
Spring uses proxies to apply AOP; this only works for methods called from the outside. Internal method calls don't pass through the proxy, hence no transactions, and depending on your queries you get one large or multiple smaller commits. Make sure that the outer method (the one called to initiate everything) is transactional. - M. Deinum
@Transactional(rollbackFor = RuntimeException.class)
This will roll back only if a RuntimeException or a subclass is thrown from the annotated method. If you want to roll back for any Exception (such as SQLException, which is NOT a RuntimeException), you should use:
@Transactional(rollbackFor = Exception.class)
And if you want to attempt a rollback for any error that might happen:
@Transactional(rollbackFor = Throwable.class)
Although in this last case the runtime might be so broken that not even the rollback can complete.
Use a PreparedStatement from the connection object and then do an executeBatch(). On the connection object, use conn.setAutoCommit(false). A prepared statement has 4 times better performance than JdbcTemplate for batch insertion of 1000 records.
Reference: JdbcTemplate.batchUpdate() returns 0, on insert error for one item but inserts the remaining item into sql server db despite using @Transactional
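A minimal sketch of the suggestion above (the table and column names are illustrative, and dataSource/values are assumed to be available):
try (Connection conn = dataSource.getConnection()) {
    conn.setAutoCommit(false); // take manual control of the commit
    try (PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO my_table (col1) VALUES (?)")) {
        for (String value : values) {
            ps.setString(1, value);
            ps.addBatch();
        }
        ps.executeBatch();
        conn.commit(); // one commit for the whole batch
    } catch (SQLException e) {
        conn.rollback(); // undo the partial batch on failure
        throw e;
    }
}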
