Optimizing Oracle database connection acquisition time in Spring Boot

I have a Spring Boot application that runs several services and uses an Oracle database. The database is well maintained, the relevant indexes are in place, and when I execute the SQL statements directly in SQL Developer they complete in milliseconds.
In the Spring Boot application, I use this to execute the statement:
Session session = sessionFactory.getCurrentSession();
List<?> results = session
        .createQuery("from Table where id = :id and status = 0")
        .setParameter("id", id)
        .list();
Here are the config properties for the database:
spring.jpa.properties.hibernate.enable_lazy_load_no_trans=true
hibernate.hbm2ddl.auto=none
Here is how the datasource is initialized in the datasource config:
@Bean
public DataSource dataSource() {
    DriverManagerDataSource dataSource = new DriverManagerDataSource(url, name, pw);
    dataSource.setDriverClassName(...);
    return dataSource;
}
Recently it has been taking a very long time to acquire a database connection, up to 10 seconds just to get a connection. I don't think there is any problem with the query, and the database side also looks fine. The resources of the server running this service and of the database server are fine, as is the network. The servers also auto-scale and create a new instance when memory runs low. I just can't figure out what I should do to improve the acquisition time. Could you please help?
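For context: DriverManagerDataSource is not a connection pool, so every getConnection() call opens a brand-new physical connection to Oracle, which by itself can account for multi-second acquisition times. A minimal sketch of a pooled alternative, assuming HikariCP is on the classpath and reusing the url/name/pw fields from the snippet above (the pool size is illustrative):
@Bean
public DataSource dataSource() {
    // Hikari keeps physical connections open and hands them out on getConnection(),
    // instead of opening a new one per request like DriverManagerDataSource does.
    HikariDataSource dataSource = new HikariDataSource();
    dataSource.setJdbcUrl(url);
    dataSource.setUsername(name);
    dataSource.setPassword(pw);
    dataSource.setMaximumPoolSize(10); // illustrative; size to your workload
    return dataSource;
}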

Related

HikariPool active connection is stuck and never runs

I'm running a simple update query with MyBatis in my Spring Boot project, using the Hikari connection pool.
In the testing stage, only a single thread executes the update query. When the program reaches the update statement, a Hikari connection is acquired successfully, but somehow the active connection never runs the update and stays stuck forever.
// my Hikari datasource setup
@Bean
public HikariDataSource dataSource() {
    HikariDataSource db = new HikariDataSource();
    db.setDriverClassName(driverClassName);
    db.setJdbcUrl(url);
    db.setUsername(username);
    db.setPassword(pwd);
    db.setReadOnly(false);
    db.setMaximumPoolSize(80);
    db.setConnectionTimeout(30000);
    db.setIdleTimeout(30000);
    db.setMaxLifetime(30000);
    db.setMinimumIdle(5);
    db.setValidationTimeout(500);
    return db;
}
Below is a screenshot of the stack trace.
It's weird: this situation does not occur if I use an insert statement; it only happens when I call an update statement.
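Not an answer, but one diagnostic that may help: HikariCP can log a stack trace for any connection that stays checked out longer than a configurable threshold, which shows exactly which statement is holding the connection. A sketch, added to the dataSource() bean above (the threshold value is illustrative):
// Hypothetical addition for diagnosis: HikariCP logs the owning thread's stack
// trace if a connection has been out of the pool for longer than 60 seconds.
db.setLeakDetectionThreshold(60000);
Also worth checking: an UPDATE that blocks while an INSERT succeeds often points to a row lock held by another uncommitted transaction rather than to a pool problem.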

WebSphere insert/update statements with SQL Server hang with REQUIRES_NEW propagation

We are facing an issue in our Spring Batch application when deploying it on WebSphere.
Example: one class contains a parent() method and a second class contains a child() method, where the child method requires a new transaction. After the methods have executed and the transaction is committed, the commit routine hangs and nothing happens further.
@Transactional // uses the current transaction
public void parent() {
    child();
}

@Transactional(propagation = Propagation.REQUIRES_NEW) // creates a new transaction
public void child() {
    // database save statements including updates, inserts, and deletes
}
This issue only occurs on WebSphere; the code works fine on our local machines, where we use Tomcat as the web container.
The WebSphere logs/stack trace show that the prepared statement keeps waiting for a response from the database. At the same time, updates and inserts are locked out on the affected tables, i.e. if we run an insert or update query manually against an affected table, the query doesn't execute.
We are using Spring JPA for data persistence, Spring's JpaTransactionManager for transaction management, and an MS SQL Server database.
Is it that WebSphere does not support creating a new transaction from an existing transaction?
Yes, the pattern you are describing is supported by WebSphere Application Server. Given that this involves locked entries within the database, you might be running into a difference between the application servers in which transaction isolation level is used by default. In WebSphere Application Server, the default for SQL Server is java.sql.Connection.TRANSACTION_REPEATABLE_READ, whereas I think in most other cases you end up with a default of java.sql.Connection.TRANSACTION_READ_COMMITTED (less locking). If the default value is the problem, you can change it in the data source configuration.
If you are using WebSphere Application Server Liberty, the default isolation level can be configured in server.xml as a property of the dataSource element, like this:
<dataSource isolationLevel="TRANSACTION_READ_COMMITTED" jndiName=...
If you are using WebSphere Application Server traditional, then the default isolation level can be configured as the webSphereDefaultIsolationLevel custom property, which can be set to the numeric value of the isolation level constant on java.sql.Connection (value for TRANSACTION_READ_COMMITTED is 2).
See the linked article for the steps to do this via the admin console.
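A quick, provider-neutral way to confirm which isolation level the application actually receives from the server-managed DataSource is to ask the JDBC connection directly. A sketch (the dataSource variable stands for whatever is injected or looked up from JNDI):
try (Connection con = dataSource.getConnection()) {
    // java.sql.Connection constants: TRANSACTION_READ_COMMITTED = 2, TRANSACTION_REPEATABLE_READ = 4
    int level = con.getTransactionIsolation();
    System.out.println("Default transaction isolation: " + level);
}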

Liquibase in Spring Boot application keeps 10 connections open

I'm working on a Spring Boot application with Liquibase integration to set up the database. We use a different user for the database changes, which we configured in the application.properties file:
liquibase.user=abc
liquibase.password=xyz
liquibase.url=jdbc:postgresql://something.eu-west-1.rds.amazonaws.com:5432/app?ApplicationName=${appName}-liquibase
liquibase.enabled=true
liquibase.contexts=dev,postgres
At the moment we have 3 different microservices deployed, and we noticed that for every running instance, Liquibase opens 10 connections and never closes them unless we stop the application. This basically means that in development we regularly hit the connection limit of our Amazon RDS instance.
Right now, in development, 40 of 74 active connections are occupied by Liquibase. If we ever go to production like this, with autoscaling enabled for all the microservices, we would have to over-scale the database in order not to hit any connection limits.
Is there a way to
tell Liquibase not to use a connection pool of 10 connections
tell Liquibase to stop or close the connections
So far I found no documentation on how to do this.
Thanks to Slava's response, I managed to fix the problem with the following datasource configuration class:
@Configuration
public class LiquibaseDataSourceConfiguration {

    private static final Logger LOG = LoggerFactory.getLogger(LiquibaseDataSourceConfiguration.class);

    @Autowired
    private LiquibaseDataSourceProperties liquibaseDataSourceProperties;

    @LiquibaseDataSource
    @Bean
    public DataSource liquibaseDataSource() {
        DataSource ds = DataSourceBuilder.create()
                .username(liquibaseDataSourceProperties.getUser())
                .password(liquibaseDataSourceProperties.getPassword())
                .url(liquibaseDataSourceProperties.getUrl())
                .driverClassName(liquibaseDataSourceProperties.getDriver())
                .build();
        if (ds instanceof org.apache.tomcat.jdbc.pool.DataSource) {
            ((org.apache.tomcat.jdbc.pool.DataSource) ds).setInitialSize(1);
            ((org.apache.tomcat.jdbc.pool.DataSource) ds).setMaxActive(2);
            ((org.apache.tomcat.jdbc.pool.DataSource) ds).setMaxAge(1000);
            ((org.apache.tomcat.jdbc.pool.DataSource) ds).setMinIdle(0);
            ((org.apache.tomcat.jdbc.pool.DataSource) ds).setMinEvictableIdleTimeMillis(60000);
        } else {
            // warnings or exceptions, whatever you prefer
        }
        LOG.info("Initialized a datasource for {}", liquibaseDataSourceProperties.getUrl());
        return ds;
    }
}
The documentation for these properties can be found on the Tomcat site: https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html
initialSize: The initial number of connections that are created when the pool is started
maxActive: The maximum number of active connections that can be allocated from this pool at the same time
minIdle: The minimum number of established connections that should be kept in the pool at all times
maxAge: Time in milliseconds to keep this connection. When a connection is returned to the pool, the pool will check to see if the now - time-when-connected > maxAge has been reached, and if so, it closes the connection rather than returning it to the pool. The default value is 0, which implies that connections will be left open and no age check will be done upon returning the connection to the pool.
minEvictableIdleTimeMillis: The minimum amount of time an object may sit idle in the pool before it is eligible for eviction.
So it does not appear to be a connection leak; it's just that the default configuration of the datasource is not optimal for Liquibase if you use a dedicated datasource. I don't expect this to be a problem if the Liquibase datasource is your primary datasource.
Update: this has been fixed in Spring Boot 2.5.0-M2, and Liquibase now uses a SimpleDriverDataSource without a connection pool.
Original answer: this change to connection pool management was introduced in Spring Boot 2.0.6.RELEASE and only takes effect if you use Spring Boot Actuator. There is an actuator endpoint (enabled by default) which lets you retrieve the change sets applied by Liquibase. For this to work, Liquibase keeps its database connections open. You can disable the endpoint with management.endpoint.liquibase.enabled=false, in which case the connection pool used by Liquibase will be shut down after the initial run.
GitHub issue related to this change: https://github.com/spring-projects/spring-boot/issues/13832
Spring Boot Actuator, section 12 (Liquibase): https://docs.spring.io/spring-boot/docs/2.0.6.RELEASE/actuator-api/html/
I don't know why Liquibase doesn't close the connections; maybe it's a bug and you should create an issue for it.
To set a connection pool for Liquibase, you have to create a custom data source and mark it with the @LiquibaseDataSource annotation.
Related issues provide more details:
Possibility to specify custom dataSource configuration for liquibase only
Add LiquibaseDataSource annotation

Writing APIs with JPA

In a big company, our team provides APIs for accessing data in an Oracle DB. Until now, we used plain SQL (JDBC) to read/write the data in the database.
So most of the existing APIs look like this (ok, not always that stupid :-)):
public class DummyApi {

    private final DataSource datasource;

    public DummyApi(javax.sql.DataSource datasource) {
        this.datasource = datasource;
    }

    public void doSomething() throws SQLException {
        Connection connection = datasource.getConnection();
        PreparedStatement statement = connection.prepareStatement("plain sql query");
        statement.execute();
    }
}
Using such APIs is simple; it doesn't matter whether your end application is plain Java SE, Spring, or Java EE. Furthermore, transaction APIs work properly with these APIs. We use them with Spring's TransactionManager (together with the TransactionAwareDataSourceProxy) and with JTA in CMA-Java EE applications.
Now we are evaluating the use of JPA for new APIs. And the big question we currently struggle with is the following: how can we provide a simple interface so that the end application doesn't need to know about JPA? How can we initialize the EntityManager with a DataSource (for example, provided as a constructor parameter)? And how can we roll back if there are old plain-JDBC APIs AND new JPA APIs in the same transaction (begin/rollback in the end application)?
Thanks for shedding a little light on the matter!
With JPA the datasource will normally be set in persistence.xml. If you need some sort of dynamic datasource, you can pass the DataSource as a property to Persistence.createEntityManagerFactory().
Most JPA providers provide a way to get the JDBC Connection if you want to mix JDBC. Normally this is accessed using em.unwrap(Connection.class). You could also use JTA or Spring to have the transaction share the same connection.
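A sketch of both suggestions, assuming a JPA 2.x provider that honours the standard javax.persistence.nonJtaDataSource property in the createEntityManagerFactory() properties map and supports unwrapping the EntityManager to a Connection (both are provider-dependent; the persistence-unit name is illustrative):
Map<String, Object> props = new HashMap<>();
props.put("javax.persistence.nonJtaDataSource", dataSource); // DataSource handed in by the end application
EntityManagerFactory emf = Persistence.createEntityManagerFactory("my-unit", props); // "my-unit" is illustrative

// Mixing in plain JDBC on the same connection (provider-dependent, as noted above):
EntityManager em = emf.createEntityManager();
Connection jdbcConnection = em.unwrap(Connection.class);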

Spring JDBC and Connection Object

I'm building an app using a proprietary API. To connect to the database, I call a method that returns a Connection object, and on that connection I call the appropriate methods to run queries against the database, for example:
Connection conn = JdbcServiceFactory.getInstance().getDefaultDatabase().getConnectionManager().getConnection();
PreparedStatement ps = conn.prepareStatement("select * from test");
If I choose to use Spring MVC 3 for my next project, what must I do to set up the database connection? From what I've seen in the documentation, I have to use the datasource tag in the container and pass a URL, username, and password. As shown above, I currently don't have to do that to get a connection.
At the end of the day, your proprietary API must access some database (available on some server), using some credentials; you just don't see this. In Spring you must first define a DataSource. Either use an existing library like DBCP, BoneCP, or C3P0, or take one provided by your application server via JNDI. As long as it implements the DataSource interface, it doesn't matter which approach you choose. There is too much to explain each one in detail here.
Once you have a DataSource bean set up, I strongly recommend using JdbcTemplate, which simplifies your JDBC code a lot, e.g.:
List<Map<String,Object>> res = jdbcTemplate.queryForList("select * from test");
...and much more.
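For example, a sketch of wiring a JdbcTemplate once a DataSource bean exists (the bean and query here are illustrative):
JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);

// Parameterized single-value query
Integer count = jdbcTemplate.queryForObject(
        "select count(*) from test where name = ?", Integer.class, "someName");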
UPDATE: If you want to use your existing legacy API with modern frameworks that expect a DataSource (pretty much all of them), implementing a DataSource adapter is trivial (the remaining methods can stay unimplemented, throwing UnsupportedOperationException):
public class LegacyDataSourceAdapter implements DataSource {

    @Override
    public Connection getConnection() throws SQLException {
        return JdbcServiceFactory.getInstance().getDefaultDatabase().getConnectionManager().getConnection();
    }

    @Override
    public Connection getConnection(String username, String password) throws SQLException {
        return getConnection();
    }

    // other methods are irrelevant (they can simply throw UnsupportedOperationException)
}
Now just create an instance of LegacyDataSourceAdapter (maybe as a Spring bean) and pass it to JdbcTemplate, Hibernate, myBatis...
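For instance, a usage sketch with JdbcTemplate (the adapter class is the one defined above; the query is illustrative):
DataSource legacyDataSource = new LegacyDataSourceAdapter();
JdbcTemplate template = new JdbcTemplate(legacyDataSource);
List<Map<String, Object>> rows = template.queryForList("select * from test");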
BTW, you have here a first-class example of bad API design:
Connection conn = JdbcServiceFactory.
getInstance().
getDefaultDatabase().
getConnectionManager().
getConnection();
