Spring Boot: manually commit after each call to a stored procedure

I have a Spring Boot application that needs to iterate over a large number of records and call a stored procedure that inserts data into a table for each record read.
We cannot use a batch update because processing thousands of records takes a long time, and I was asked to commit frequently (either after every record or after every x records).
I looked online but did not find a good example of how to commit manually in Spring Boot while calling a stored procedure in a loop.
I am using SimpleJdbcCall and my code looks like this:
@Transactional(isolation = Isolation.READ_UNCOMMITTED, propagation = Propagation.NOT_SUPPORTED)
public class EventsProcessor
{
    @Autowired
    @Qualifier("dbDatasource")
    DataSource dataSource;

    public void process(List<Event> events) throws Exception
    {
        SimpleJdbcCall dbTemplate = new SimpleJdbcCall(dataSource)
                .withProcedureName("UPDATE_EVENTS")
                .withSchemaName("TEST");
        DataSourceUtils.getConnection(dataSource).setAutoCommit(false);
        for (Event ev : events)
        {
            // fill inParams here
            Map<String, Object> outParams = dbTemplate.execute(inParams);
            DataSourceUtils.getConnection(dataSource).commit();
        }
    }
}
I tried both with and without Propagation.NOT_SUPPORTED, but got the same result.
The code executes the call to the stored procedure, and there is no error when it executes commit(), but if I query the table the stored procedure inserted into after the commit(), I don't see the records.
If I remove setAutoCommit(false), the commit statement, and Propagation.NOT_SUPPORTED, and just let Spring Boot handle transactions, then while the job is processing I can see the records in the table with a READ UNCOMMITTED query, but they are not committed until the full job ends.
What am I doing wrong that is preventing the commit from happening after each row?

I ended up separating the call to the stored procedure into a different method annotated with
@Transactional(propagation = Propagation.REQUIRES_NEW)
This way, the transaction commits when the method returns.
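For illustration, here is a minimal sketch of that approach (the EventWriter bean and its method names are my own, not from the original post). The REQUIRES_NEW method must live in a separate Spring bean from the loop; otherwise the call bypasses Spring's transactional proxy and no new transaction is started:

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.jdbc.core.simple.SimpleJdbcCall;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class EventWriter
{
    private final SimpleJdbcCall updateEvents;

    public EventWriter(@Qualifier("dbDatasource") DataSource dataSource)
    {
        this.updateEvents = new SimpleJdbcCall(dataSource)
                .withSchemaName("TEST")
                .withProcedureName("UPDATE_EVENTS");
    }

    // Each call runs in its own transaction and commits when the method returns.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void writeOne(Map<String, Object> inParams)
    {
        updateEvents.execute(inParams);
    }
}

@Service
public class EventsProcessor
{
    @Autowired
    private EventWriter eventWriter;

    public void process(List<Event> events)
    {
        for (Event ev : events)
        {
            Map<String, Object> inParams = new HashMap<>();
            // fill inParams from ev here
            eventWriter.writeOne(inParams); // commits after each record
        }
    }
}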

Related

Prevent locking of tables during bulk/multiple-table inserts using Spring Hibernate @Transactional EntityManager

I'm using Spring @Transactional for multiple table inserts inside a single function.
For each entity's read/write I'm using EntityManager. Suppose that in my function 10 tables are updated with data; in that case all 10 tables are locked until the transaction is over. This is bad for the user experience, since it causes waits/delays for users who are viewing pages that use these tables.
So how can I prevent the tables from being locked for reads while the whole insert process is taking place? Is there a way to avoid using a transaction and do single, independent inserts per table?
Below is a code snippet:
@Transactional(propagation = Propagation.REQUIRED, isolation = Isolation.READ_UNCOMMITTED,
        noRollbackFor = {SQLException.class, IllegalStateException.class, PersistenceException.class})
public BaseImportDtoRoot importData(BaseImportDtoRoot baseDto) throws Exception {
    try {
        table1.fninsert(); // call each class to insert entity-wise
        table2.fninsert();
    } catch (Exception e) {
        // exception is swallowed here in the original snippet
    }
    return baseDto;
}
public class Table1 {
    void fninsert() {
        MstTable tb1 = modMap.map(mstTableDto, MstTable.class);
        entityManager.persist(tb1);
        entityManager.flush();
        entityManager.clear();
    }
}
Is there a way to avoid using a transaction
You cannot; a transaction is created whenever you persist data, so you need to either:
Use @Transactional, which wraps your DML inside a transaction, or
Create your transaction manually:
// Note: getTransaction() requires an application-managed EntityManager;
// a container-managed (injected) EntityManager throws IllegalStateException here.
entityManager.getTransaction().begin();
entityManager.persist(data);
entityManager.getTransaction().commit();
and do single table independent insert
I think what you mean is to create a separate transaction for each insertion. You could do that by creating multiple methods annotated with @Transactional and removing the @Transactional annotation from the importData method; by doing so, the importData method is no longer atomic. A sketch follows below.
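A minimal sketch of that idea, using hypothetical beans modeled on the question's Table1/Table2 classes (each insert gets its own transaction and commits independently):

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class Table1
{
    @PersistenceContext
    private EntityManager entityManager;

    // Runs in its own transaction; commits as soon as the method returns.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void fninsert()
    {
        MstTable tb1 = new MstTable();
        // populate tb1 from the DTO here
        entityManager.persist(tb1);
    }
}

@Service
public class ImportService
{
    @Autowired private Table1 table1;
    @Autowired private Table2 table2; // analogous to Table1

    // No @Transactional here: each insert commits on its own, so readers
    // are blocked only for the duration of a single insert. The trade-off
    // is that the import as a whole is no longer atomic.
    public void importData()
    {
        table1.fninsert();
        table2.fninsert();
    }
}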
Please correct me if I misunderstand anything.

Spring Data returns deleted object sometimes

I am working with Spring Data to access the database and do not use any cache. After deleting a record from the database, I send an event to another microservice so it re-queries and updates its list of objects. So basically my code is:
private void deleteObject(MyObject object) {
    myRepository.deleteById(object.getId());
    myRepository.flush();
    ...
    sendEventToSystemX();
}
Basically, the other microservice captures the event sent by the sendEventToSystemX method and makes a query against myRepository.
@Transactional(isolation = Isolation.READ_UNCOMMITTED, readOnly = true)
public Page<T> findAll(Specification<T> spec, Pageable pageable) {
    TypedQuery<T> query = getQuery(spec, getDomainClass(), pageable.getSort());
    if (pageable.isUnpaged()) {
        return new PageImpl<>(query.getResultList());
    }
    return readPage(query, getDomainClass(), pageable, spec);
}
Note that I am flushing the repository after the deletion, and the select query is executed by a different service, so it is not in the same transaction. So why do I still get the deleted object the first time I query after the deletion? If I re-run the findAll method, I get an up-to-date result. It also does not happen every time. What can be the reason behind it?

Deleting a record then selecting within the same Spring Transaction still returns the deleted record

I have some code within a Spring transaction with the isolation level set to SERIALIZABLE. This code does a few things: first it deletes all records from a table that have a flag set, next it performs a select to ensure invalid records cannot be written, and finally the new records are written.
The problem is that the select continues to return the records that were deleted when the code runs with the transaction annotation. My understanding was that because these operations are performed within the same Spring transaction, the select would take the previous delete into account.
We are using Spring Boot 2.1 and Hibernate 5.2.
A summary of the code is shown below:
@HystrixCommand
public void deleteRecord(EntityObj entityObj) {
    fooRepository.deleteById(entityObj.getId());
    // Below line added as part of debugging, but I don't think I should really need it?
    fooRepository.flush();
}

public List<EntityObj> findRecordByProperty(final String property) {
    return fooRepository.findEntityObjByProperty(property);
}

@Transactional(isolation = Isolation.SERIALIZABLE)
public void debugReadWrite() {
    EntityObj entityObj = new EntityObj();
    entityObj.setId(1);
    deleteRecord(entityObj);
    List<EntityObj> results = findRecordByProperty("bar");
    if (!results.isEmpty()) {
        throw new RuntimeException("Should be no results!");
    }
}
The transaction has not committed yet; you need to complete the transaction and then find the record.
Decorating deleteRecord with propagation = Propagation.REQUIRES_NEW should solve the issue:
@Transactional(propagation = Propagation.REQUIRES_NEW)
public void deleteRecord(EntityObj entityObj) {
    fooRepository.deleteById(entityObj.getId());
    // flush not needed: fooRepository.flush();
}
A flush is not needed because when deleteRecord completes, the transaction will be committed.
Under the hood:
// start transaction
public void deleteRecord(EntityObj entityObj) {
    fooRepository.deleteById(entityObj.getId());
}
// commit transaction
It turns out the issue was due to our use of Hystrix. The transaction is started outside of Hystrix and then, at a later point, execution goes through a Hystrix command. The Hystrix command uses a thread pool, so the transaction is lost while executing on the new thread from the Hystrix thread pool, because Spring binds transactions to the calling thread. See this GitHub issue for more info:
https://github.com/spring-cloud/spring-cloud-netflix/issues/1381
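As an illustrative workaround (not from the original thread, and only a sketch): Hystrix can be told to run a command on the calling thread by switching to SEMAPHORE isolation, which keeps Spring's thread-bound transaction visible inside the command, at the cost of losing the thread-pool isolation and timeout semantics of the default THREAD strategy:

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import com.netflix.hystrix.contrib.javanica.annotation.HystrixProperty;

@HystrixCommand(commandProperties = {
        @HystrixProperty(name = "execution.isolation.strategy", value = "SEMAPHORE")
})
public void deleteRecord(EntityObj entityObj)
{
    // Executes on the caller's thread, so the surrounding Spring
    // transaction (bound to that thread) is still in effect.
    fooRepository.deleteById(entityObj.getId());
}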

Why does the JdbcTemplate.batchUpdate(sql[]) method not roll back in Spring 4 using the @Transactional annotation?

The code below does not roll back when an exception occurs while inserting records into the database. I am using the Spring 4 framework with annotations.
I am using the below code for transaction management, and it does not roll back for any exception:
@Transactional(rollbackFor = RuntimeException.class)
public boolean insertBatch(List<String> query) throws SQLException {
    boolean flag = false;
    try {
        JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
        String[] sql = query.toArray(new String[query.size()]);
        jdbcTemplate.batchUpdate(sql);
        flag = true;
    } catch (DataAccessException e) {
        flag = false;
        MessageResource.setMessages("Constraint violation! CSV data value did not match database constraints");
        LOGGER.info("CSV file data does not match the database table structure definition (constraint violation / data type length / null etc.) for the given data value");
        LOGGER.error("Cause for error: " + e.getRootCause().getMessage());
        LOGGER.debug("Details: " + e.toString());
        throw new RuntimeException("Roll back operation");
        // transactionManager.rollback(status);
    }
    return flag;
}
The actual answer was provided by M. Deinum:
Spring uses proxies to apply AOP; this only works for methods called from the outside. Internal method calls don't pass through the proxy, hence no transactions, and depending on your queries you get one large commit or multiple smaller ones. Make sure that the outer method (the one called to initiate everything) is transactional. – M. Deinum
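To illustrate the proxy point with a hedged sketch (class and method names are hypothetical, not from the thread):

import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class CsvImportService
{
    // Injecting the bean into itself yields the transactional proxy.
    @Autowired
    private CsvImportService self;

    public void importCsv(List<String> queries)
    {
        // Internal call: bypasses the proxy, so @Transactional on
        // insertBatch is NOT applied.
        insertBatch(queries);

        // Call through the injected proxy: the transactional advice runs.
        self.insertBatch(queries);
    }

    @Transactional(rollbackFor = Exception.class)
    public void insertBatch(List<String> queries)
    {
        // ... batch insert logic ...
    }
}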
@Transactional(rollbackFor = RuntimeException.class)
This will roll back only if a RuntimeException or a subclass is thrown from the annotated method (which is Spring's default behavior anyway). If you want to roll back for any Exception (such as SQLException, which is NOT a RuntimeException), you should use:
@Transactional(rollbackFor = Exception.class)
And if you want to attempt a rollback for whatever error might happen:
@Transactional(rollbackFor = Throwable.class)
Although in this last case the runtime might be so broken that not even the rollback can complete.
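For illustration, a minimal hedged sketch of the difference (the method names and table are made up):

import java.sql.SQLException;
import org.springframework.transaction.annotation.Transactional;

// Default behavior (equivalent to rollbackFor = RuntimeException.class):
// a checked SQLException does NOT trigger a rollback, so the insert commits.
@Transactional
public void insertChecked() throws SQLException
{
    jdbcTemplate.update("INSERT INTO t(col) VALUES ('a')");
    throw new SQLException("checked exception: insert is committed anyway");
}

// With rollbackFor = Exception.class, the same checked exception
// rolls the insert back.
@Transactional(rollbackFor = Exception.class)
public void insertRolledBack() throws SQLException
{
    jdbcTemplate.update("INSERT INTO t(col) VALUES ('a')");
    throw new SQLException("checked exception: now triggers a rollback");
}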
Use a PreparedStatement from the Connection object and then execute the batch. On the Connection object, use conn.setAutoCommit(false). A PreparedStatement gave about four times better performance than JdbcTemplate for a batch insertion of 1000 records.
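A minimal sketch of that suggestion (the events table, column names, and Event getters are made up for illustration):

String sql = "INSERT INTO events (id, payload) VALUES (?, ?)";
try (Connection conn = dataSource.getConnection();
     PreparedStatement ps = conn.prepareStatement(sql))
{
    conn.setAutoCommit(false);
    for (Event ev : events)
    {
        ps.setLong(1, ev.getId());
        ps.setString(2, ev.getPayload());
        ps.addBatch();
    }
    ps.executeBatch(); // send all statements in one round trip
    conn.commit();     // single explicit commit for the whole batch
}
catch (SQLException e)
{
    // in real code, roll back on the same connection before rethrowing
    throw new RuntimeException(e);
}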
Reference: JdbcTemplate.batchUpdate() returns 0 on insert error for one item but inserts the remaining items into the SQL Server DB despite using @Transactional

DBUnit, Hibernate/JPA, DAO and flush

Goal
In order to test the create method of a DAO, I create an instance, insert it into the database, flush the entity manager to update the database, and then use DBUnit to compare the table against a dataset.
Code
Here is the code; it uses Spring Test, DBUnit, and JPA (via Hibernate):
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {"/WEB-INF/applicationContext-database.xml"})
public class MyEntityTest extends AbstractTransactionalJUnit4SpringContextTests {

    @PersistenceContext
    protected EntityManager em;

    @Autowired
    MyEntityDAO myEntityDAO;

    @Test
    public void createTest() throws Exception {
        // create the entity
        MyEntity record = new MyEntity();
        record.setData("test");
        myEntityDAO.insertNew(record);

        // flush to update the database
        em.flush();

        // get the actual dataset from the connection
        Session session = em.unwrap(Session.class);
        Connection conn = SessionFactoryUtils.getDataSource(
                session.getSessionFactory()).getConnection();
        DatabaseConnection connection = new DatabaseConnection(conn);
        ITable actualTable = connection.createTable("MY_ENTITY");

        // get the expected dataset
        IDataSet expectedDataSet = new FlatXmlDataSetBuilder()
                .build(getClass().getResource("dataset.xml"));
        ITable expectedTable = expectedDataSet.getTable("MY_ENTITY");

        // compare the datasets
        Assertion.assertEquals(expectedTable, actualTable);
    }
}
Problem
This code never ends; it seems to freeze (infinite loop?) during this command:
ITable actualTable = connection.createTable("MY_ENTITY");
But if I comment out the em.flush() line, the test ends (no freeze or infinite loop). In that case, however, the test fails because the database has not been updated after the insert.
Question
How can I test the create method of a DAO using a similar approach (comparing datasets with DBUnit) without a freeze when calling connection.createTable()?
I found the solution. The problem was caused by the connection.
If I replace:
Session session = em.unwrap(Session.class);
Connection conn = SessionFactoryUtils.getDataSource(
        session.getSessionFactory()).getConnection();
with
DataSource ds = (DataSource) applicationContext.getBean("dataSource");
Connection conn = DataSourceUtils.getConnection(ds);
everything runs fine.
I don't understand why, so let me know in the comments if you have any clue. (A plausible explanation: getDataSource().getConnection() hands DBUnit a second connection from the pool, which then blocks on locks held by the test's uncommitted transaction, whereas DataSourceUtils.getConnection(ds) returns the transaction-bound connection and can therefore see the flushed but uncommitted data.)
