Need inputs for database tuning using Spring and the DBCP connection pool

I am using Spring in my project and am instantiating the DataSource as below.
@Bean(destroyMethod = "close")
public DataSource restDataSource() {
    BasicDataSource dataSource = new BasicDataSource();
    dataSource.setDriverClassName(env.getProperty("hibernate.connection.driver_class"));
    dataSource.setUrl(env.getProperty("hibernate.connection.url"));
    dataSource.setUsername(env.getProperty("hibernate.connection.username"));
    dataSource.setPassword(env.getProperty("hibernate.connection.password"));
    dataSource.setInitialSize(env.getRequiredProperty("hibernate.dbcp.initialSize", Integer.class));
    dataSource.setMaxActive(env.getRequiredProperty("hibernate.dbcp.maxActive", Integer.class));
    dataSource.setMaxIdle(env.getRequiredProperty("hibernate.dbcp.maxIdle", Integer.class));
    dataSource.setMinIdle(env.getRequiredProperty("hibernate.dbcp.minIdle", Integer.class));
    return dataSource;
}
Below is my properties file.
hibernate.dialect=org.hibernate.dialect.Oracle10gDialect
hibernate.connection.driver_class=oracle.jdbc.driver.OracleDriver
hibernate.connection.username=<>
hibernate.connection.password=<>
hibernate.connection.url=jdbc:oracle:thin:@<Host>:1521:<SID>
hibernate.show_sql=true
hibernate.cache.use_query_cache=true
cache.provider_class=org.hibernate.cache.EhCacheProvider
hibernate.cache.use_second_level_cache=true
hibernate.cache.region.factory_class=org.hibernate.cache.ehcache.EhCacheRegionFactory
net.sf.ehcache.configurationResourceName=ehcache.xml
hibernate.dbcp.initialSize=10
hibernate.dbcp.maxActive=100
hibernate.dbcp.maxIdle=30
hibernate.dbcp.minIdle=10
Please suggest:
1. Any changes to the DBCP properties above (initialSize, maxActive, maxIdle, minIdle). My application will be used by around 100 concurrent users, out of roughly 3,000 total users.
2. I am using a Tomcat server to deploy my application. Should I use JNDI for connections instead of specifying the connection properties directly? Is the above way of using connections good for a production system?

Instead of Commons DBCP I would suggest using HikariCP (I have had very good experiences with it lately), or, since you are already on Tomcat, use Tomcat JDBC instead.
There is a lot written on pool sizing (see here for a nice explanation and here for a short video from Oracle). In short: large pool sizes don't work and will probably make performance worse.
A rule of thumb/formula (also in the article mentioned) is to use
connections = ((core_count * 2) + effective_spindle_count)
where core_count is the number of (actual) cores in your server and effective_spindle_count is the number of disks you have. A server with one large disk and 4 cores would thus get a connection pool of size 9. This should be able to handle what you need; adding more connections will only add overhead from monitoring, thread switching, etc.
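To make this concrete, a minimal HikariCP variant of the question's bean, sized with that formula, might look like the sketch below. This is an illustration, not the asker's code: it reuses the question's env properties and assumes the 4-core, single-disk example above, hence a pool of 9.

import javax.sql.DataSource;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

@Bean(destroyMethod = "close")
public DataSource restDataSource() {
    HikariConfig config = new HikariConfig();
    config.setDriverClassName(env.getProperty("hibernate.connection.driver_class"));
    config.setJdbcUrl(env.getProperty("hibernate.connection.url"));
    config.setUsername(env.getProperty("hibernate.connection.username"));
    config.setPassword(env.getProperty("hibernate.connection.password"));
    // connections = (core_count * 2) + effective_spindle_count = (4 * 2) + 1 = 9
    config.setMaximumPoolSize(9);
    return new HikariDataSource(config);
}

Note that HikariCP sizes the pool with maximumPoolSize alone (plus minimumIdle if you want a smaller idle floor), so most of the initialSize/maxIdle/minIdle tuning from DBCP simply disappears.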

Related

How to set properties of Tomcat JDBC Pool when using Spring Cloud Connector config?

I want to configure the properties of the Tomcat JDBC Pool with custom parameter values. The pool is bootstrapped by the spring-cloud (Spring Cloud Connector) environment (Cloud Foundry) and connected to a PostgreSQL database. In particular, I want to set the minIdle, maxIdle and initialSize properties for the given pool.
In a "spring-vanilla" environment (non-cloud) the properties can be easily set by using
application.properties / .yaml files with environment properties,
#ConfigurationProperties annotation.
However, this approach doesn't transfer to my Cloud environment, where the URL (and other parameters) are injected from the environment variable VCAP_SERVICES (via the ServiceInfo instances). I don't want to re-implement the logic which Spring Cloud already did with its connectors.
After some searching I also stumbled over some tutorials / guides which suggest making use of the PoolConfig object (e.g. http://cloud.spring.io/spring-cloud-connectors/spring-cloud-spring-service-connector.html#_relational_database_db2_mysql_oracle_postgresql_sql_server). However, that way one cannot set the properties I need, merely the following three:
minPoolSize,
maxPoolSize,
maxWaitTime.
Note that I don't want to set connection-related properties (such as charset); the properties I need are associated with the pool itself.
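For reference, the PoolConfig approach from that guide looks roughly like the sketch below. It is written against the spring-cloud-spring-service-connector API as I understand it; "my-db-service" is a placeholder service name and the values are illustrative.

import javax.sql.DataSource;
import org.springframework.cloud.config.java.AbstractCloudConfig;
import org.springframework.cloud.service.PooledServiceConnectorConfig.PoolConfig;
import org.springframework.cloud.service.relational.DataSourceConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CloudDataSourceConfig extends AbstractCloudConfig {

    @Bean
    public DataSource dataSource() {
        // the only three knobs PoolConfig exposes: minPoolSize, maxPoolSize, maxWaitTime
        PoolConfig poolConfig = new PoolConfig(5, 30, 3000);
        // null ConnectionConfig: no connection-related properties are set here
        return connectionFactory().dataSource("my-db-service", new DataSourceConfig(poolConfig, null));
    }
}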
In essence, I would like to do the configuration similarly to https://www.baeldung.com/spring-boot-tomcat-connection-pool (using spring.datasource.tomcat.* properties). The problem with that approach is that the properties are not considered if the datasource was created by Spring Cloud. The article https://dzone.com/articles/binding-data-services-spring, section "Using a CloudFactory to create a DataSource", claims that the following code snippet makes it so that the configuration "can be tweaked using application.properties via spring.datasource.* properties":
@Bean
@ConfigurationProperties(DataSourceProperties.PREFIX)
public DataSource dataSource() {
    return cloud().getSingletonServiceConnector(DataSource.class, null);
}
However, my own local test (with spring-cloud:Greenwich.RELEASE and spring-boot-starter-parent:2.1.3.RELEASE) showed that those property values are simply ignored.
I found an ugly way to solve my problem but I think it's not appropriate:
Let spring-cloud create the DataSource, which is not the pooled DataSource directly,
check that the reference is a descendant of a DelegatingDataSource,
resolve the delegate, which is then the pool itself,
change the properties programmatically directly at the pool itself.
I do not believe that this is the right way as I am using internal knowledge (on the layering of datasources). Additionally, this approach does not work for the property initialSize, which is only considered when the pool is created.
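For completeness, the workaround looks roughly like the hypothetical sketch below. It assumes a Tomcat JDBC pool behind a DelegatingDataSource, which is exactly the internal layering knowledge I'd rather not depend on.

import javax.sql.DataSource;
import org.springframework.jdbc.datasource.DelegatingDataSource;

private void tunePool(DataSource ds) {
    if (ds instanceof DelegatingDataSource) {
        DataSource target = ((DelegatingDataSource) ds).getTargetDataSource();
        if (target instanceof org.apache.tomcat.jdbc.pool.DataSource) {
            org.apache.tomcat.jdbc.pool.DataSource pool =
                    (org.apache.tomcat.jdbc.pool.DataSource) target;
            pool.setMinIdle(10); // takes effect after creation
            pool.setMaxIdle(30); // takes effect after creation
            // pool.setInitialSize(...) would be ignored here:
            // initialSize only applies when the pool is created
        }
    }
}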

Unable to set HikariCP max-pool-size

I'm using the Spring Boot 2.0.3 release. I want to increase the maximum pool size of HikariCP, which is 10 by default.
I tried changing it in application.properties with
spring.datasource.hikari.maximum-pool-size=200
but it is not working: the logs still show that the max pool size is 10.
The reason I want to change is because I am somehow reaching that limit in staging and I have no idea what's causing it.
I faced a similar issue recently (Spring Boot v2.1.3). Posting it here in case people bump into the same scenario.
Long story short: if you're initializing the DataSource using @ConfigurationProperties, those properties don't seem to require the hikari prefix for maximum-pool-size, unless I'm missing something. So spring.datasource.maximum-pool-size should work if you use @ConfigurationProperties(prefix = "spring.datasource").
Details: In my case I'm initializing the DataSource myself using org.springframework.boot.jdbc.DataSourceBuilder, so that I can later use it in org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean:
@Bean
@Primary
@ConfigurationProperties(prefix = "spring.datasource")
public DataSource primaryDataSource() {
    return DataSourceBuilder.create().build();
}
In this case the property spring.datasource.hikari.maximum-pool-size, taken from the Common Application Properties section of the Spring Boot docs, did not help. Neither did the maximumPoolSize variant suggested above.
So I went and debugged the Spring Boot code, which led me to org.springframework.boot.context.properties.bind.JavaBeanBinder and its bind method. The property name bound to the HikariDataSource setter setMaximumPoolSize was "maximum-pool-size", so just for the sake of testing I renamed my property to spring.datasource.maximum-pool-size and it finally worked.
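That is, with the DataSource bound as above, the property that finally took effect was the one below (note the missing hikari segment; this appears to be specific to binding the DataSource yourself):

spring.datasource.maximum-pool-size=200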
Hope it helps. Please let me know in the comments if I missed something or took the wrong path. Thanks!
As per the HikariCP GitHub README it's maximumPoolSize, so try using:
spring.datasource.hikari.maximumPoolSize=200
But this will work only if you allow Spring Boot to create the DataSource. If you create the DataSource yourself, the Spring Boot properties have no effect.
Do note that 200 is a very high value that may negatively impact your database, as each physical connection requires server resources. In most cases a lower value will yield better performance; see the HikariCP wiki: About Pool Sizing.

Liquibase in Spring boot application keeps 10 connections open

I'm working on a Spring Boot application with Liquibase integration to set up the database. We use a different user for the database changes, which we configured using the application.properties file:
liquibase.user=abc
liquibase.password=xyz
liquibase.url=jdbc:postgresql://something.eu-west-1.rds.amazonaws.com:5432/app?ApplicationName=${appName}-liquibase
liquibase.enabled=true
liquibase.contexts=dev,postgres
We currently have 3 different microservices in deployment, and we noticed that for every running instance Liquibase opens 10 connections and never closes them unless we stop the application. This basically means that in development we regularly hit the connection limit of our Amazon RDS instance.
Right now, in development, 40 of 74 active connections are occupied by Liquibase. If we ever want to go to production with this, with autoscaling enabled for all the microservices, we would have to over-scale the database in order not to hit any connection limits.
Is there a way to
tell Liquibase not to use a connection pool of 10 connections, or
tell Liquibase to stop or close the connections?
So far I found no documentation on how to do this.
Thanks to the response of Slava, I managed to fix the problem with the following datasource configuration class:
@Configuration
public class LiquibaseDataSourceConfiguration {

    private static final Logger LOG = LoggerFactory.getLogger(LiquibaseDataSourceConfiguration.class);

    @Autowired
    private LiquibaseDataSourceProperties liquibaseDataSourceProperties;

    @LiquibaseDataSource
    @Bean
    public DataSource liquibaseDataSource() {
        DataSource ds = DataSourceBuilder.create()
                .username(liquibaseDataSourceProperties.getUser())
                .password(liquibaseDataSourceProperties.getPassword())
                .url(liquibaseDataSourceProperties.getUrl())
                .driverClassName(liquibaseDataSourceProperties.getDriver())
                .build();
        if (ds instanceof org.apache.tomcat.jdbc.pool.DataSource) {
            org.apache.tomcat.jdbc.pool.DataSource tomcatDs = (org.apache.tomcat.jdbc.pool.DataSource) ds;
            tomcatDs.setInitialSize(1);
            tomcatDs.setMaxActive(2);
            tomcatDs.setMaxAge(1000);
            tomcatDs.setMinIdle(0);
            tomcatDs.setMinEvictableIdleTimeMillis(60000);
        } else {
            // warnings or exceptions, whatever you prefer
        }
        LOG.info("Initialized a datasource for {}", liquibaseDataSourceProperties.getUrl());
        return ds;
    }
}
The documentation of the properties can be found on the site of Tomcat: https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html
initialSize: The initial number of connections that are created when the pool is started
maxActive: The maximum number of active connections that can be allocated from this pool at the same time
minIdle: The minimum number of established connections that should be kept in the pool at all times
maxAge: Time in milliseconds to keep this connection. When a connection is returned to the pool, the pool will check to see if the now - time-when-connected > maxAge has been reached, and if so, it closes the connection rather than returning it to the pool. The default value is 0, which implies that connections will be left open and no age check will be done upon returning the connection to the pool.
minEvictableIdleTimeMillis: The minimum amount of time an object may sit idle in the pool before it is eligible for eviction.
So it does not appear to be a connection leak; it's just the default configuration of the datasource, which is not optimal for Liquibase if you use a dedicated datasource. I don't expect this to be a problem if the Liquibase datasource is your primary datasource.
Update: This has been fixed in 2.5.0-M2 and Liquibase now uses a SimpleDriverDataSource without a connection pool.
Original answer: This change to connection pool management was introduced in Spring Boot version 2.0.6.RELEASE, and it only takes effect if you use Spring Boot Actuator. There is an actuator endpoint (enabled by default) which allows you to get the change sets applied by Liquibase. For this to work, Liquibase keeps its database connections open. You can disable the endpoint with management.endpoint.liquibase.enabled=false, in which case the connection pool used by Liquibase will be shut down after the initial run.
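For example, in application.properties:

management.endpoint.liquibase.enabled=false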
GitHub issue related to this change: https://github.com/spring-projects/spring-boot/issues/13832
Spring Boot Actuator (see 12. Liquibase): https://docs.spring.io/spring-boot/docs/2.0.6.RELEASE/actuator-api/html/
I don't know why Liquibase doesn't close the connections; maybe it's a bug and you should create an issue for that.
To configure the connection pool for Liquibase you have to create a custom data source and mark it with the @LiquibaseDataSource annotation.
Related issues provide more details:
Possibility to specify custom dataSource configuration for liquibase only
Add LiquibaseDataSource annotation

Spring Boot embedded Jetty thread pool maximum size

I want to configure the thread pool size for my application, which uses the Spring Boot embedded Jetty server. Below is the code snippet I am using.
I want to know what the maximum thread pool size is that I can set for the embedded Jetty server, and whether this is the correct way of configuring it.
@Bean
public EmbeddedServletContainerFactory jettyConfigBean() {
    JettyEmbeddedServletContainerFactory jef = new JettyEmbeddedServletContainerFactory();
    jef.addServerCustomizers(new JettyServerCustomizer() {
        public void customize(org.eclipse.jetty.server.Server server) {
            final QueuedThreadPool threadPool = server.getBean(QueuedThreadPool.class); // the pool I want to size
            server.setHandler(handlers); // 'handlers' is defined elsewhere in my class
        }
    });
    return jef;
}
Speaking for spring-boot 2.0.0-SNAPSHOT:
This is configurable for Jetty, Tomcat and Undertow via
server.jetty.acceptors
server.tomcat.max-connections
server.undertow.io-threads
See https://docs.spring.io/spring-boot/docs/current-SNAPSHOT/reference/htmlsingle/#common-application-properties for a complete reference.
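For example, in application.properties (values are illustrative; see the reference above for defaults and exact semantics):

server.jetty.acceptors=2
server.tomcat.max-connections=8192
server.undertow.io-threads=4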
The maximum number of threads you can set depends on your machine, not so much on the hosting software. If you have a monster machine you can probably run hundreds of threads, whereas a regular laptop would probably only handle tens of threads. You need to tune your configuration by testing. A good way is to set up a load test, for example to see whether 50 threads at 50 req/sec will crash your service, and then check whether increasing/decreasing the number of threads helps. With such a technique you can find the limits of your application and machine.
Regarding the correctness of the configuration, you can read this blog post which explains it very well: http://jdpgrailsdev.github.io/blog/2014/10/07/spring_boot_jetty_thread_pool.html
This is what you can do:
On newer versions you need to create a server factory bean in a configuration file. I tested this on Spring Boot version 2.1.6, while the document I referred to is for version 2.3.3.
@Bean
public ConfigurableServletWebServerFactory webServerFactory() {
    JettyServletWebServerFactory factory = new JettyServletWebServerFactory();
    factory.setPort(8080);
    factory.setContextPath("/my-app");
    QueuedThreadPool threadPool = new QueuedThreadPool();
    threadPool.setMinThreads(10);
    threadPool.setMaxThreads(100);
    threadPool.setIdleTimeout(60000);
    factory.setThreadPool(threadPool);
    return factory;
}
Following is the link to Spring Docs: customizing-embedded-containers

Prevent use of CachingConnectionFactory with DefaultJmsListenerContainerFactory

I am working on a brand new project in which I need listeners that will consume messages from several queues (no need for a producer for now).
Starting from scratch, I am using the latest Spring JMS version (4.1.2).
Here is an extract of my configuration file:
<bean id="cachedConnectionFactory"
class="org.springframework.jms.connection.CachingConnectionFactory"
p:targetConnectionFactory-ref="jmsConnectionFactory"
p:sessionCacheSize="3" />
<bean id="jmsListenerContainerFactory"
class="org.springframework.jms.config.DefaultJmsListenerContainerFactory"
p:connectionFactory-ref="cachedConnectionFactory"
p:destinationResolver-ref="jndiDestinationResolver"
p:concurrency="3-5"
p:receiveTimeout="5000" />
But I think I may be wrong, since DefaultJmsListenerContainerFactory builds regular DefaultMessageListenerContainers, and, as stated in the docs, the CachingConnectionFactory should not be used with a message listener container...
Even if I am using the new Spring 4.1 DefaultJmsListenerContainerFactory class, the answer from that post is still valid (cacheConsumers = true can be an issue, and there is no need to cache sessions for listener containers because the sessions are long-lived), right?
Instead of using the CachingConnectionFactory, should I use the SingleConnectionFactory (and not directly the broker implementation one)?
If the SingleConnectionFactory class should indeed be used, should the "reconnectOnException" property be set to true (as is done in the CachingConnectionFactory), or does the new "setBackOff" method (from DefaultJmsListenerContainerFactory) deal with the same kind of issues?
Thanks for any tips
Correct.
There's not really much benefit in using a SingleConnectionFactory unless you want to share a single connection across multiple containers; the DMLC will use a single connection from the vendor factory by default for all consumer threads (cacheLevel >= CACHE_CONNECTION), unless a TransactionManager is configured.
The container(s) will handle reconnection: even before the 'new' backOff property the container would simply retry every n seconds (5 by default); backOff just adds more sophistication to the reconnection algorithm.
As stated in the answer you cited, it's ok to use a CCF as long as you disable consumer caching.
Correction: Yes, when using the SingleConnectionFactory you do need to set reconnectOnException to true so that the container can properly recover its connection. Otherwise it simply hands out the stale connection.
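Putting that advice together, a minimal sketch (mine, not from the original posts) of a SingleConnectionFactory with reconnectOnException wired into a DefaultJmsListenerContainerFactory could look like this; vendorConnectionFactory stands in for your broker's ConnectionFactory:

import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;
import org.springframework.jms.connection.SingleConnectionFactory;

@Bean
public SingleConnectionFactory connectionFactory(ConnectionFactory vendorConnectionFactory) {
    SingleConnectionFactory scf = new SingleConnectionFactory(vendorConnectionFactory);
    scf.setReconnectOnException(true); // recover a fresh connection instead of handing out a stale one
    return scf;
}

@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(SingleConnectionFactory connectionFactory) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setConcurrency("3-5");    // same concurrency as the XML above
    factory.setReceiveTimeout(5000L); // same receive timeout as the XML above
    return factory;
}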
