When using Spring JdbcTemplate without the @Transactional annotation and with the dataSource autocommit property set to true, each SQL statement runs in its own transaction. But will these separate transactions share the same physical connection?
In the code snippet below (with @Transactional), all three SQL statements run in the same connection/transaction.
@Transactional
public void test() {
jdbcTemplate.batchUpdate("INSERT INTO TB_MYTEST(MYKEY, MYVALUE) VALUES ('start', 'start')");
jdbcTemplate.batchUpdate("INSERT INTO TB_MYTEST(MYKEY, MYVALUE) VALUES ('hi', 'hi')");
jdbcTemplate.batchUpdate("INSERT INTO TB_MYTEST(MYKEY, MYVALUE) VALUES ('end', 'end')");
}
But in the snippet below (without @Transactional), each SQL statement runs in a separate transaction.
What about the connection? Will all three of these separate transactions share the same physical connection, or will Spring get a different connection from the dataSource for each transaction?
public void test() {
jdbcTemplate.batchUpdate("INSERT INTO TB_MYTEST(MYKEY, MYVALUE) VALUES ('start', 'start')");
jdbcTemplate.batchUpdate("INSERT INTO TB_MYTEST(MYKEY, MYVALUE) VALUES ('hi', 'hi')");
jdbcTemplate.batchUpdate("INSERT INTO TB_MYTEST(MYKEY, MYVALUE) VALUES ('end', 'end')");
}
Is there any way to check if two transactions are in the same physical connection?
Thanks for the reply!
It may use the same connection. In Spring, each DB connection is bound to a thread or a transaction; if no connection is bound to the current thread/transaction, a new one is fetched from the connection pool.
The TransactionSynchronizationManager class manages database connections with the help of the DataSource.
When you use a transaction, it is guaranteed that the same database connection is used throughout. If you run SQL queries without a transaction, they may or may not use the same database connection; that depends on connection availability in the pool.
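As for the follow-up question ("is there any way to check if two statements run on the same physical connection?"), one rough way to verify it empirically (just a debugging sketch, not part of the original answer) is to run each insert inside a ConnectionCallback, unwrap the connection, and log its identity; whether unwrap() reaches the driver's connection depends on your pool:
public void test() {
    for (final String key : new String[] { "start", "hi", "end" }) {
        jdbcTemplate.execute(new ConnectionCallback<Object>() {
            public Object doInConnection(Connection con) throws SQLException, DataAccessException {
                // con is usually a pool proxy; unwrap it to get a stable identity for the physical connection
                Connection physical = con.unwrap(Connection.class);
                System.out.println(key + " runs on connection " + System.identityHashCode(physical));
                PreparedStatement ps = con.prepareStatement("INSERT INTO TB_MYTEST(MYKEY, MYVALUE) VALUES (?, ?)");
                ps.setString(1, key);
                ps.setString(2, key);
                ps.executeUpdate();
                ps.close();
                return null;
            }
        });
    }
}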
Related
I am using Spring transactions and Hibernate to insert data into Oracle database tables.
Here is the scenario I am facing a problem with:
I have two tables with a one-to-one mapping in Hibernate, and I insert data into them using the method calls below. The transaction propagates from one method to the other, so the inserts into both tables should happen in one transaction.
Problem: while inserting data into the second table, an exception such as a ConstraintViolationException ("cannot insert NULL into a particular column") may be thrown. Ideally no data should then be inserted into either table, i.e. the transaction should roll back. But this is not happening: when the exception is thrown while inserting into the second table, records still get inserted into the first table; the whole transaction should be rolled back instead.
Can you please help me find where I am going wrong with @Transactional, or whether there is some other reason for this (maybe on the database side, I'm not sure)?
@Transactional(readOnly = false, propagation = Propagation.REQUIRED)
public void methodA(){
// inserting data in table 1;
methodB();
}
@Transactional(readOnly = false, propagation = Propagation.REQUIRED)
public void methodB() {
// inserting data in table 2;
}
The rollbackFor attribute defines zero (0) or more exception classes, which must be subclasses of Throwable, indicating which exception types must cause a transaction rollback. Details Here
@Transactional(readOnly = false, propagation = Propagation.REQUIRED, rollbackFor = Exception.class)
public void methodA(){
try{
// inserting data in table 1;
methodB();
}
catch(Exception ex)
{
}
}
public void methodB() {
// inserting data in table 2;
}
In my Spring Boot app I'm deleting and inserting a large amount of data into my MySQL DB in a single transaction. Ideally, I want to commit the results to the database only at the end, so it's all or nothing. I'm running into issues where my deletions are committed before my insertions, so during that period any calls to the DB return no data (not good). Is there a way to manually commit the transaction?
My main logic is:
@Transactional
public void saveParents(List<Parent> parents) {
parentRepo.deleteAllInBatch();
parentRepo.resetAutoIncrement();
//I'm setting the id manually before hand
String sql = "INSERT INTO parent " +
"(id, name, address, number) " +
"VALUES ( ?, ?, ?, ?)";
jdbcTemplate.batchUpdate(sql, new BatchPreparedStatementSetter() {
@Override
public void setValues(PreparedStatement ps, int i) throws SQLException {
Parent parent = parents.get(i);
ps.setInt(1, parent.getId());
ps.setString(2, parent.getName());
ps.setString(3, parent.getAddress());
ps.setString(4, parent.getNumber());
}
@Override
public int getBatchSize() {
return parents.size();
}
});
}
ParentRepo
@Repository
@Transactional
public interface ParentRepo extends JpaRepository<Parent, Integer> {
@Modifying
@Query(
value = "alter table parent auto_increment = 1",
nativeQuery = true
)
void resetAutoIncrement();
}
EDIT:
So I changed
parentRepo.deleteAllInBatch();
parentRepo.resetAutoIncrement();
to
jdbcTemplate.update("DELETE FROM output_stream");
jdbcTemplate.update("alter table output_stream auto_increment = 1");
in order to try to avoid JPA's transaction handling, but each operation seems to commit separately no matter what I try. I have tried TransactionTemplate and implementing PlatformTransactionManager (seen here), but I can't seem to get these operations to commit together.
EDIT: It seems the issue I was having was with the ALTER TABLE statement, as it always commits (MySQL performs an implicit commit for DDL).
I'm running into issues where my deletions will be committed before my insertions, so during that period any calls to the db will return no data
Did you configure JPA and JDBC to share transactions?
If not, then you're basically using two different mechanisms to access the data (EntityManager and JdbcTemplate), each of them maintaining a separate connection to the database. What likely happens is that only the EntityManager joins the transaction created by @Transactional; the JdbcTemplate operation executes either without a transaction context (read: in AUTOCOMMIT mode) or in a separate transaction altogether.
See this question. It is a little old, but then again, using JPA and JDBC together is not exactly a common use case. Also, have a look at the Javadoc for JpaTransactionManager.
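If you want the JdbcTemplate statements to join the @Transactional transaction, one common arrangement (a sketch, assuming the JdbcTemplate and the EntityManagerFactory are built on the same DataSource) is to let a single JpaTransactionManager drive @Transactional; per its Javadoc it exposes the transaction's JDBC connection to plain JDBC code using that DataSource:
@Configuration
@EnableTransactionManagement
public class TxConfig {

    // One transaction manager drives @Transactional for both JPA and JdbcTemplate access,
    // as long as both go through the same DataSource.
    @Bean
    public PlatformTransactionManager transactionManager(EntityManagerFactory emf) {
        return new JpaTransactionManager(emf);
    }

    @Bean
    public JdbcTemplate jdbcTemplate(DataSource dataSource) {
        // Must be the same DataSource the EntityManagerFactory uses
        return new JdbcTemplate(dataSource);
    }
}
Even then the ALTER TABLE will not roll back: MySQL performs an implicit commit for DDL statements, which matches the edit above.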
I know this is going to be a repetitive question, but I feel my question is a bit different.
I have JDBC DAO classes like:
@Component
public class JdbcUserDAO implements UserDAO {
@Autowired
MyJdbc myJdbc;
}
I have defined the MyJdbc class as follows:
@Component
public class MyJdbc {
@Autowired
protected JdbcTemplate jdbc;
}
In the MyJdbc class I define the insert and batchUpdate methods and call them through the jdbc variable.
Will this cause "too many connections" exceptions?
I have defined the JDBC parameters in the application.properties file:
spring.datasource.url=#databaseurl
spring.datasource.username=#username
spring.datasource.password=#password
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.test-on-borrow=true
spring.datasource.max-active=100
spring.datasource.max-wait=10000
spring.datasource.min-idle=10
spring.datasource.validation-query=SELECT 1
spring.datasource.time-between-eviction-runs-millis= 5000
spring.datasource.min-evictable-idle-time-millis=30000
spring.datasource.test-while-idle=true
spring.datasource.test-on-borrow=true
spring.datasource.test-on-return=false
I am getting the exception:
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Too many connections
I have made many changes to the various constants in the application.properties file, but it didn't work. My DB is hosted on AWS RDS.
But for updating the blob image values I do:
blob= myJdbc.jdbc.getDataSource().getConnection().createBlob();
blob.setBytes(1, str.getBytes());
pstmt = myJdbc.jdbc.getDataSource().getConnection().prepareStatement("update user_profile set profileImage=? where user_profile.id in ( select id from user_login where email=?)");
The problem is with your code: it opens two additional connections to the database without closing them. If you open connections yourself, you must also close them. However, it is better to use a ConnectionCallback in cases like this.
myJdbc.jdbc.execute(new ConnectionCallback<Object>() {
    public Object doInConnection(Connection con) throws SQLException, DataAccessException {
        // The connection is obtained and released by the JdbcTemplate
        Blob blob = con.createBlob();
        blob.setBytes(1, str.getBytes());
        PreparedStatement pstmt = con.prepareStatement("update user_profile set profileImage=? where user_profile.id in ( select id from user_login where email=?)");
        pstmt.setBlob(1, blob);
        pstmt.setString(2, email);
        pstmt.executeUpdate();
        pstmt.close();
        return null;
    }
});
However, it is even easier to use Spring JDBC's BLOB support (see the reference guide). That way you don't need to mess around with connections and blobs yourself.
final String query = "update user_profile set profileImage=? where user_profile.id in ( select id from user_login where email=?)";
myJdbc.jdbc.execute(query, new AbstractLobCreatingPreparedStatementCallback(lobHandler) {
protected void setValues(PreparedStatement ps, LobCreator lobCreator) throws SQLException {
byte[] bytes = str.getBytes();
ps.setString(2, email);
lobCreator.setBlobAsBytes(ps, 1, bytes);
}
});
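In that snippet, lobHandler is assumed to be a field you have set up yourself; a minimal declaration (DefaultLobHandler covers most databases) would be:
// Create once and reuse, e.g. as a bean or a field of the DAO
LobHandler lobHandler = new DefaultLobHandler();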
I'm forced to use a certain NLS_LANGUAGE for accessing a database, and I don't want to change user.locale, to avoid affecting the rest of the application. Is something like this okay to do, or can it cause unexpected issues?
Also, how large is the scope of the session? Will this affect this single query only, or each call using the same entityManager, or even the whole application?
@Stateless
@Local
public class myDAOImpl implements MyDAO {
@PersistenceContext(unitName = "MyUnit")
protected EntityManager em;
public List<Object> getSomeData(){
em.createNativeQuery("alter session set nls_language = 'AMERICAN'").executeUpdate();
Query q = em.createNativeQuery("Select * from my_view");
return q.getResultList();
}
}
ALTER SESSION on Oracle affects all future statements on that connection. So if you are using a connection pool, as you should, this will affect every future session that is served by the same physical connection.
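If you still have to issue the ALTER SESSION, a defensive sketch is to put the connection back the way you found it before the method returns; the value restored below is only a placeholder, in real code you would read the original setting (e.g. from v$nls_parameters) first:
public List<Object> getSomeData() {
    // Both statements use the same connection as long as they run in the same transaction
    em.createNativeQuery("alter session set nls_language = 'AMERICAN'").executeUpdate();
    try {
        Query q = em.createNativeQuery("Select * from my_view");
        return q.getResultList();
    } finally {
        // Restore the previous language so later borrowers of this pooled connection are unaffected.
        // 'GERMAN' is just a placeholder for whatever the original value was.
        em.createNativeQuery("alter session set nls_language = 'GERMAN'").executeUpdate();
    }
}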
I'm using a SimpleJdbcDaoSupport object to access DB resources. I have a query that is frequently executed against the database to locate a record with a specific key. For some reason, after executing the same query several times I start to get an empty result even though the record exists in the database.
Any ideas what can cause this behavior?
daoSupport.getJdbcTemplate().query(this.getConsumerTokenQueryStatement(),params, this.rowMapper);
public static class TokenServicesRowMapper implements RowMapper {
public Object mapRow(ResultSet rs, int rowNum) throws SQLException {
DefaultLobHandler lobHandler = new DefaultLobHandler();
return lobHandler.getBlobAsBytes(rs, 1);
}
}
If this is not related to your code, one possible reason is that another transaction is doing something (like an update) to the row you are searching for, and due to the isolation between transactions you cannot see it. One transaction may have changed the row but not yet committed it, while at the same time the other transaction searches for it; since it can only see committed rows, it does not find your row.
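If you want to rule that out, one debugging idea (a hedged sketch, not part of the original answer) is to log the isolation level your pooled connection actually uses; under READ_COMMITTED, the usual default on Oracle and PostgreSQL, a row inserted but not yet committed by another transaction is invisible to your query:
// Returns java.sql.Connection constants, e.g. 2 = TRANSACTION_READ_COMMITTED, 4 = TRANSACTION_REPEATABLE_READ
Integer isolation = daoSupport.getJdbcTemplate().execute(new ConnectionCallback<Integer>() {
    public Integer doInConnection(Connection con) throws SQLException, DataAccessException {
        return Integer.valueOf(con.getTransactionIsolation());
    }
});
System.out.println("Transaction isolation in use: " + isolation);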