I'm working on a Spring Boot application with Liquibase integration to set up the database. We use a different user for the database changes, which we configured in the application.properties file:
liquibase.user=abc
liquibase.password=xyz
liquibase.url=jdbc:postgresql://something.eu-west-1.rds.amazonaws.com:5432/app?ApplicationName=${appName}-liquibase
liquibase.enabled=true
liquibase.contexts=dev,postgres
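For reference, the liquibase.* keys above are the Spring Boot 1.x names; on Spring Boot 2.x the same settings live under the spring.liquibase.* prefix, so the equivalent configuration would be:
spring.liquibase.user=abc
spring.liquibase.password=xyz
spring.liquibase.url=jdbc:postgresql://something.eu-west-1.rds.amazonaws.com:5432/app?ApplicationName=${appName}-liquibase
spring.liquibase.enabled=true
spring.liquibase.contexts=dev,postgres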
We currently have 3 different microservices deployed, and we noticed that for every running instance, Liquibase opens 10 connections and never closes them unless we stop the application. This means that in development we regularly hit the connection limit of our Amazon RDS instance.
Right now, in development, 40 of 74 active connections are occupied by Liquibase. If we ever want to go to production with this, with autoscaling enabled for all the microservices, we would have to over-scale the database just to avoid hitting connection limits.
Is there a way to
tell Liquibase not to use a connection pool of 10 connections, or
tell Liquibase to stop or close these connections?
So far I found no documentation on how to do this.
Thanks to Slava's response, I managed to fix the problem with the following data source configuration class:
@Configuration
public class LiquibaseDataSourceConfiguration {

    private static final Logger LOG = LoggerFactory.getLogger(LiquibaseDataSourceConfiguration.class);

    @Autowired
    private LiquibaseDataSourceProperties liquibaseDataSourceProperties;

    @LiquibaseDataSource
    @Bean
    public DataSource liquibaseDataSource() {
        DataSource ds = DataSourceBuilder.create()
                .username(liquibaseDataSourceProperties.getUser())
                .password(liquibaseDataSourceProperties.getPassword())
                .url(liquibaseDataSourceProperties.getUrl())
                .driverClassName(liquibaseDataSourceProperties.getDriver())
                .build();
        if (ds instanceof org.apache.tomcat.jdbc.pool.DataSource) {
            org.apache.tomcat.jdbc.pool.DataSource tomcatDs = (org.apache.tomcat.jdbc.pool.DataSource) ds;
            tomcatDs.setInitialSize(1);
            tomcatDs.setMaxActive(2);
            tomcatDs.setMaxAge(1000);
            tomcatDs.setMinIdle(0);
            tomcatDs.setMinEvictableIdleTimeMillis(60000);
        } else {
            // log a warning or throw an exception, whatever you prefer
        }
        LOG.info("Initialized a datasource for {}", liquibaseDataSourceProperties.getUrl());
        return ds;
    }
}
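The LiquibaseDataSourceProperties class is not shown in the original post; a minimal sketch of what it could look like, assuming a liquibase prefix matching the properties above plus an extra liquibase.driver key (both assumptions, not confirmed by the post):
@Component
@ConfigurationProperties(prefix = "liquibase")
public class LiquibaseDataSourceProperties {

    private String user;
    private String password;
    private String url;
    private String driver; // assumed extra key, e.g. liquibase.driver=org.postgresql.Driver

    public String getUser() { return user; }
    public void setUser(String user) { this.user = user; }

    public String getPassword() { return password; }
    public void setPassword(String password) { this.password = password; }

    public String getUrl() { return url; }
    public void setUrl(String url) { this.url = url; }

    public String getDriver() { return driver; }
    public void setDriver(String driver) { this.driver = driver; }
}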
The documentation for these properties can be found on the Tomcat site: https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html
initialSize: The initial number of connections that are created when the pool is started
maxActive: The maximum number of active connections that can be allocated from this pool at the same time
minIdle: The minimum number of established connections that should be kept in the pool at all times
maxAge: Time in milliseconds to keep this connection. When a connection is returned to the pool, the pool will check to see if the now - time-when-connected > maxAge has been reached, and if so, it closes the connection rather than returning it to the pool. The default value is 0, which implies that connections will be left open and no age check will be done upon returning the connection to the pool.
minEvictableIdleTimeMillis: The minimum amount of time an object may sit idle in the pool before it is eligible for eviction.
So it does not appear to be a connection leak; the default configuration of the data source is simply not optimal for Liquibase when you use a dedicated data source. I don't expect this to be a problem if the Liquibase data source is your primary data source.
Update: This has been fixed in Spring Boot 2.5.0-M2, and Liquibase now uses a SimpleDriverDataSource without a connection pool.
Original answer: This change to connection pool management was introduced in Spring Boot version 2.0.6.RELEASE, and it only takes effect if you use Spring Boot Actuator. There is an actuator endpoint (enabled by default) which allows you to get the change sets applied by Liquibase. For this to work, Liquibase keeps its database connections open. You can disable the endpoint with management.endpoint.liquibase.enabled=false, in which case the connection pool used by Liquibase will be shut down after the initial run.
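In application.properties that is simply:
management.endpoint.liquibase.enabled=false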
GitHub issue related to this change: https://github.com/spring-projects/spring-boot/issues/13832
Spring Boot Actuator documentation (see 12. Liquibase): https://docs.spring.io/spring-boot/docs/2.0.6.RELEASE/actuator-api/html/
I don't know why Liquibase doesn't close its connections; maybe it's a bug, and you should create an issue for it.
To configure the connection pool for Liquibase, you have to create a custom data source and mark it with the @LiquibaseDataSource annotation.
Related issues provide more details:
Possibility to specify custom dataSource configuration for liquibase only
Add LiquibaseDataSource annotation
Related
I have a Spring Boot application that runs several services and uses an Oracle database. The database is maintained properly and the indexes are in place; when executing SQL statements directly in SQL Developer, they complete in milliseconds.
In Spring Boot, I use this to execute the statement:
Session session = sessionFactory.getCurrentSession();
session.createQuery("from Table where id = :id and status = 0").setParameter("id", id);
Here are the config properties for the database:
spring.jpa.properties.hibernate.enable_lazy_load_no_trans=true
hibernate.hbm2ddl.auto=none
Here is how the data source is initialized in the data source config:
@Bean
public DataSource dataSource() {
    DriverManagerDataSource dataSource = new DriverManagerDataSource(url, name, pw);
    dataSource.setDriverClassName(...);
    return dataSource;
}
Recently, it has been taking a long time to acquire a database connection, up to 10 seconds just for acquiring the connection. I don't think there is any problem with the query, and the database side is also fine. The resources of the server running this service and of the database server are fine as well, as is the network. The servers also have an auto-scale feature to create a new instance when memory runs low. I just can't figure out what I should do to improve the acquisition time. Could you please help?
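For context: DriverManagerDataSource does no pooling at all; it opens a brand-new physical connection on every getConnection() call, which by itself can explain multi-second acquisition times under load. A minimal sketch of swapping in a pooled HikariCP data source instead, assuming the same url, name and pw fields (the pool settings are illustrative, not taken from the post):
@Bean
public DataSource dataSource() {
    HikariConfig config = new HikariConfig();
    config.setJdbcUrl(url);
    config.setUsername(name);
    config.setPassword(pw);
    config.setMaximumPoolSize(10);      // illustrative; size for the actual workload
    config.setConnectionTimeout(30000); // ms to wait for a connection from the pool
    return new HikariDataSource(config); // connections are created once and reused
}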
I am deploying my Spring Boot REST API on AWS Fargate, which connects to AWS Aurora PostgreSQL Serverless v1.
I have configured Aurora to scale the ACU down to 0 when idle, so that I am not charged too much when I don't use the API.
Initially, my Spring Boot app maintains 10 idle connections by default, so I tried to make it zero by adding this to application.properties:
spring.datasource.minimumIdle=0
And then I can see from the AWS console that the number of database connections has been reduced, but one connection remains forever.
Please suggest if you know how to make it zero.
The Spring Boot database configuration is basically like this:
@Bean
@ConfigurationProperties(prefix = "spring.datasource")
public DataSource dataSource() {
    return DataSourceBuilder.create().build();
}
Edit 1
I used the suggestion in the comment to check if the connection really comes from Spring Boot.
It turns out there is no active connection, but /actuator/metrics/hikaricp.connections.idle always returns the value 1:
{"name":"hikaricp.connections.idle","description":"Idle connections","baseUnit":null,"measurements":[{"statistic":"VALUE","value":1.0}],"availableTags":[{"tag":"pool","values":["HikariPool-1"]}]}
And it does not seem related to the health check, because I have tried running the app locally and the result of /actuator/metrics/hikaricp.connections.idle remains 1.
I set logging.level.root=trace to see what is happening.
Only two things keep printing in the log periodically:
The Hikari connection report, showing 1 idle connection:
{"level":"DEBUG","ref":"|","marker":"INTERNAL","message":"HikariPool-1 - Before cleanup stats (total=1, active=0, idle=1, waiting=0)","logger":"com.zaxxer.hikari.pool.HikariPool","timestamp":"2022-06-14 16:15:16.799","thread":"HikariPool-1 housekeeper"}
{"level":"DEBUG","ref":"|","marker":"INTERNAL","message":"HikariPool-1 - After cleanup stats (total=1, active=0, idle=1, waiting=0)","logger":"com.zaxxer.hikari.pool.HikariPool","timestamp":"2022-06-14 16:15:16.800","thread":"HikariPool-1 housekeeper"}
{"level":"DEBUG","ref":"|","marker":"INTERNAL","message":"HikariPool-1 - Fill pool skipped, pool is at sufficient level.","logger":"com.zaxxer.hikari.pool.HikariPool","timestamp":"2022-06-14 16:15:16.800","thread":"HikariPool-1 housekeeper"}
The Tomcat NioEndpoint, which I think is not relevant:
{"level":"DEBUG","ref":"|","marker":"INTERNAL","message":"timeout completed: keys processed=0; now=1655198117181; nextExpiration=1655198117180; keyCount=0; hasEvents=false; eval=false","logger":"org.apache.tomcat.util.net.NioEndpoint","timestamp":"2022-06-14 16:15:17.181","thread":"http-nio-8445-Poller"}
Thanks to the suggestion in the comments: this is caused by the actuator health check, which can be solved by the following setting:
management.health.db.enabled=false
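For reference, when relying on Spring Boot's auto-configured Hikari pool, the pool can also be told to drain to zero with the Hikari-specific keys (the idle-timeout value below is an illustrative assumption; minimum-idle=0 plus an idle timeout lets idle connections be retired entirely):
spring.datasource.hikari.minimum-idle=0
spring.datasource.hikari.idle-timeout=60000
management.health.db.enabled=false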
I was surprised to discover that Liquibase creates its own connection pool with default values and therefore holds 10 connections to the database. It doesn't use the connection pool configured in application.properties. So, I have a couple of questions:
Why does Liquibase need its own pool?
How can I configure this pool?
The linked question does indeed have a correct answer. The 10 connections, obviously, come from the default settings of HikariCP (the default database connection pool since Spring Boot 2.0).
So, here is the modified version of the same configuration but with Hikari instead of tomcat-jdbc:
@LiquibaseDataSource
@Bean
public DataSource liquibaseDataSource() {
    DataSource ds = DataSourceBuilder.create()
            .username(liquibaseDataSourceProperties.getUser())
            .password(liquibaseDataSourceProperties.getPassword())
            .url(liquibaseDataSourceProperties.getUrl())
            .driverClassName(liquibaseDataSourceProperties.getDriver())
            .build();
    if (ds instanceof HikariDataSource) {
        ((HikariDataSource) ds).setMaximumPoolSize(2); // 10 by default
    }
    return ds;
}
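If the goal is for the Liquibase pool to drop its connections entirely once it sits idle after the initial run, the idle settings can be tightened as well. A sketch, not from the original answer (the values are illustrative; 10000 ms is the lowest idle timeout HikariCP accepts):
if (ds instanceof HikariDataSource) {
    HikariDataSource hikariDs = (HikariDataSource) ds;
    hikariDs.setMaximumPoolSize(2); // 10 by default
    hikariDs.setMinimumIdle(0);     // allow the pool to drain when idle
    hikariDs.setIdleTimeout(10000); // ms before an idle connection is retired
}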
We are facing an issue in our Spring Batch application when deploying it on WebSphere.
Example: one class contains a parent() method and a second class contains a child() method, where the child method requires a new transaction. After the methods execute, when the transaction is committed, the commit routine hangs and nothing happens further.
@Transactional // uses the current transaction
public void parent() {
    childService.child(); // call on the injected second bean, so the proxy applies REQUIRES_NEW
}

// in the second class:
@Transactional(propagation = REQUIRES_NEW) // creates a new transaction
public void child() {
    // database save statements including updates, inserts and deletes
}
This issue only occurs on WebSphere; the code works fine on our local machines, where we use Tomcat as the web container.
The WebSphere logs/stack trace show that the prepared statement keeps waiting for a response from the database. At the same time, updates and inserts are locked out on the affected tables, i.e. if we run an insert or update query manually on an affected table, the query doesn't execute.
We are using Spring JPA for data persistence, Spring's JpaTransactionManager for transaction management, and an MS SQL Server database.
Does WebSphere not support creating a new transaction from within an existing transaction?
Yes, the pattern you are describing is supported by WebSphere Application Server. Given that this involves locked entries within the database, you might be running into a difference between the application servers in the default transaction isolation level. WebSphere Application Server defaults to java.sql.Connection.TRANSACTION_REPEATABLE_READ for SQL Server, whereas I think in most other cases you end up with a default of java.sql.Connection.TRANSACTION_READ_COMMITTED (less locking). If the default value is the problem, you can change it in the data source configuration.
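A quick way to verify which default a given server hands out is to inspect a connection from the configured data source. A diagnostic sketch (ds stands for whatever javax.sql.DataSource is injected in your application):
try (Connection con = ds.getConnection()) {
    // java.sql.Connection constants: TRANSACTION_READ_COMMITTED = 2, TRANSACTION_REPEATABLE_READ = 4
    System.out.println("Default isolation level: " + con.getTransactionIsolation());
}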
If you are using WebSphere Application Server Liberty, then the default isolation level can be configured in server.xml as a property of the dataSource element, like this:
<dataSource isolationLevel="TRANSACTION_READ_COMMITTED" jndiName=...
If you are using WebSphere Application Server traditional, then the default isolation level can be configured as the webSphereDefaultIsolationLevel custom property, which can be set to the numeric value of the isolation level constant on java.sql.Connection (value for TRANSACTION_READ_COMMITTED is 2).
See this linked article for the steps of doing so via the admin console.
I am using Spring in my project and instantiating dataSource as below.
@Bean(destroyMethod = "close")
public DataSource restDataSource() {
    BasicDataSource dataSource = new BasicDataSource();
    dataSource.setDriverClassName(env.getProperty("hibernate.connection.driver_class"));
    dataSource.setUrl(env.getProperty("hibernate.connection.url"));
    dataSource.setUsername(env.getProperty("hibernate.connection.username"));
    dataSource.setPassword(env.getProperty("hibernate.connection.password"));
    dataSource.setInitialSize(env.getRequiredProperty("hibernate.dbcp.initialSize", Integer.class));
    dataSource.setMaxActive(env.getRequiredProperty("hibernate.dbcp.maxActive", Integer.class));
    dataSource.setMaxIdle(env.getRequiredProperty("hibernate.dbcp.maxIdle", Integer.class));
    dataSource.setMinIdle(env.getRequiredProperty("hibernate.dbcp.minIdle", Integer.class));
    return dataSource;
}
Below is my properties file.
hibernate.dialect=org.hibernate.dialect.Oracle10gDialect
hibernate.connection.driver_class=oracle.jdbc.driver.OracleDriver
hibernate.connection.username=<>
hibernate.connection.password=<>
hibernate.connection.url=jdbc:oracle:thin:@<Host>:1521:<SID>
hibernate.show_sql=true
hibernate.cache.use_query_cache=true
cache.provider_class=org.hibernate.cache.EhCacheProvider
hibernate.cache.use_second_level_cache=true
hibernate.cache.region.factory_class=org.hibernate.cache.ehcache.EhCacheRegionFactory
net.sf.ehcache.configurationResourceName=ehcache.xml
hibernate.dbcp.initialSize=10
hibernate.dbcp.maxActive=100
hibernate.dbcp.maxIdle=30
hibernate.dbcp.minIdle=10
Please suggest:
Any changes to the pool-sizing properties (initialSize, maxActive, maxIdle, minIdle)? My application will be used concurrently by around 100 users, and the total user base is around 3000.
I am using Tomcat to deploy my application. Should I be using JNDI for connections instead of specifying connection properties directly? Is the above way of using connections good for a production system?
Instead of Commons DBCP, I would suggest using HikariCP (I've had very good experiences with it lately), or, if you are already on Tomcat, use Tomcat JDBC instead.
There is a lot written on pool sizing (see here for a nice explanation and here for a short video from Oracle). In short: large pool sizes don't work, and will probably make performance worse.
A rule of thumb/formula (also in the article mentioned) is to use
connections = ((core_count * 2) + effective_spindle_count)
where core_count is the number of (actual) cores in your server and effective_spindle_count is the number of disks you have. A server with one large disk and 4 cores would thus get a connection pool of size 9. This should be able to handle what you need; adding more will only add overhead from monitoring, thread switching, etc.
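As a concrete illustration of the formula applied to HikariCP (hypothetical numbers: 4 actual cores and one disk):
int coreCount = 4;             // physical cores, not hyper-threads
int effectiveSpindleCount = 1; // one large disk
HikariConfig config = new HikariConfig();
config.setMaximumPoolSize((coreCount * 2) + effectiveSpindleCount); // (4 * 2) + 1 = 9
config.setMinimumIdle((coreCount * 2) + effectiveSpindleCount);     // keep the pool at a fixed size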