Default connections held by JedisPool - performance

I am integrating Redis with OFBiz using the Jedis client.
One Redis server is being used by different applications.
My questions are:
How many connections will be held by a JedisPool by default?
If I create multiple JedisPools, will it affect Redis performance?
Note: I am creating the JedisPool with the default configuration in another application.
client = new JedisPool(ip, port);
Is there a better approach? Please suggest. Thanks.
Update: I am initializing the Redis connection with the default configuration, using Spring Data:
<bean id="connectionFactory"
      class="org.springframework.data.redis.connection.jedis.JedisConnectionFactory"
      p:host-name="${app.cache.redis.host}"
      p:port="${app.cache.redis.port}"
      p:password="${app.cache.redis.password}" />

1) How many connections will be held by JedisPool by default?
By instantiating JedisPool like this,
client = new JedisPool(ip, port);
you are using a GenericObjectPoolConfig from Apache Commons Pool.
The default configuration for this generic pool is:
DEFAULT_MAX_TOTAL = 8
DEFAULT_MAX_IDLE = 8
DEFAULT_MIN_IDLE = 0
2) If I create multiple JedisPools, will it affect Redis performance?
Yes and no. If you create multiple JedisPools, you will have more clients connected to Redis, but Redis can support a large number of connected clients with very good performance.
You can set the maximum number of authorized clients in redis.conf (for example, 10000 clients max):
maxclients 10000
or at startup:
./redis-server --maxclients 10000
or with redis-cli:
127.0.0.1:6379> config set maxclients 10000
In older Redis versions the default number of authorized clients was unlimited; recent versions default to 10000.
Depending on your use case, you can have multiple JedisPools, or simply increase the size of your JedisPool (to allow more than 8 connections), as in the sketch below.
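For example, here is a minimal sketch (the host, port, and pool sizes are illustrative, not taken from the question) that raises the limits through JedisPoolConfig, which extends the Commons Pool GenericObjectPoolConfig:

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class PoolSizingExample {
    public static void main(String[] args) {
        JedisPoolConfig poolConfig = new JedisPoolConfig();
        poolConfig.setMaxTotal(32); // instead of the default 8
        poolConfig.setMaxIdle(32);
        poolConfig.setMinIdle(4);

        // "localhost" and 6379 are placeholders for your Redis host and port
        try (JedisPool pool = new JedisPool(poolConfig, "localhost", 6379);
             Jedis jedis = pool.getResource()) {
            jedis.ping();
        }
    }
}

If you stay on Spring Data Redis, the JedisConnectionFactory bean can be given a pool configuration as well; the exact setter depends on your Spring Data Redis version.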

Related

Spring Boot cannot configure database connections down to zero

I am deploying my Spring Boot REST API on AWS Fargate, and it connects to AWS Aurora PostgreSQL Serverless V1.
I have configured Aurora to scale the ACU down to 0 when idle, so that I am not charged too much when I don't use the API.
Initially, my Spring Boot app maintains 10 idle connections by default, so I tried to bring that down to zero by adding this to application.properties:
spring.datasource.minimumIdle=0
I can then see from the AWS console that the number of database connections has been reduced, but one connection remains open forever.
Please suggest how to bring it down to zero.
The Spring Boot database configuration is basically like this:
@Bean
@ConfigurationProperties(prefix = "spring.datasource")
public DataSource dataSource() {
    return DataSourceBuilder.create().build();
}
Edit 1
I used the suggestion in the comments to check whether the connection really comes from Spring Boot.
It turns out there is no active connection, but /actuator/metrics/hikaricp.connections.idle always returns a value of 1:
{"name":"hikaricp.connections.idle","description":"Idle connections","baseUnit":null,"measurements":[{"statistic":"VALUE","value":1.0}],"availableTags":[{"tag":"pool","values":["HikariPool-1"]}]}
It also does not seem to be related to the health check, because I have tried running the app locally and the result of /actuator/metrics/hikaricp.connections.idle remains 1.
I set logging.level.root=trace to see what is happening.
Only two things keep printing in the log periodically:
The HikariCP housekeeper report, showing 1 idle connection:
{"level":"DEBUG","ref":"|","marker":"INTERNAL","message":"HikariPool-1 - Before cleanup stats (total=1, active=0, idle=1, waiting=0)","logger":"com.zaxxer.hikari.pool.HikariPool","timestamp":"2022-06-14 16:15:16.799","thread":"HikariPool-1 housekeeper"}
{"level":"DEBUG","ref":"|","marker":"INTERNAL","message":"HikariPool-1 - After cleanup stats (total=1, active=0, idle=1, waiting=0)","logger":"com.zaxxer.hikari.pool.HikariPool","timestamp":"2022-06-14 16:15:16.800","thread":"HikariPool-1 housekeeper"}
{"level":"DEBUG","ref":"|","marker":"INTERNAL","message":"HikariPool-1 - Fill pool skipped, pool is at sufficient level.","logger":"com.zaxxer.hikari.pool.HikariPool","timestamp":"2022-06-14 16:15:16.800","thread":"HikariPool-1 housekeeper"}
The Tomcat NioEndpoint poller, which I think is not relevant:
{"level":"DEBUG","ref":"|","marker":"INTERNAL","message":"timeout completed: keys processed=0; now=1655198117181; nextExpiration=1655198117180; keyCount=0; hasEvents=false; eval=false","logger":"org.apache.tomcat.util.net.NioEndpoint","timestamp":"2022-06-14 16:15:17.181","thread":"http-nio-8445-Poller"}
Thanks to the suggestion in the comments: this turned out to be caused by the actuator database health check, which can be solved with the following setting:
management.health.db.enabled=false
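Putting the two settings from this question together, a minimal application.properties sketch (property names exactly as used above; adjust them to your own datasource setup):

# Allow HikariCP to shrink to zero idle connections
spring.datasource.minimumIdle=0
# Stop the actuator health check from keeping one connection alive
management.health.db.enabled=false

With minimumIdle at 0 and the DB health indicator disabled, HikariCP can let the last idle connection be retired after its idle timeout instead of keeping one open indefinitely.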

When using spring-boot-starter-data-redis, how do I set the eviction policy (LFU, LRU, etc.)?

When Redis is used as a caching technology through Spring Boot (<artifactId>spring-boot-starter-data-redis</artifactId>), I see a few properties, like the TTL, that can be set in the application.properties file.
For example:
spring.cache.cache-names=cache1,cache2
spring.cache.redis.time-to-live=600000
and a few more snippets from Appendix A. Common application properties:
spring.redis.database=0 # Database index used by the connection factory.
spring.redis.url= # Connection URL. Overrides host, port, and password. User is ignored. Example: redis://user:password@example.com:6379
spring.redis.host=localhost # Redis server host.
But I am not able to figure out how to set the cache eviction policy, such as Least Frequently Used or Least Recently Used.
How and where do I have to provide these config details?
The Redis cache documentation states:
It is possible to set the configuration directive using the redis.conf
file, or later using the CONFIG SET command at runtime.
The Redis configuration documentation shows:
maxmemory 2mb
maxmemory-policy allkeys-lru
Combining the two, the command to change the eviction policy at runtime is:
CONFIG SET maxmemory-policy allkeys-lfu
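To confirm that the change took effect, you can read the setting back with the standard CONFIG GET command:

127.0.0.1:6379> CONFIG GET maxmemory-policy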
Using Spring Data Redis:
If using a non-reactive Redis connection:
RedisConnection conn = null;
try {
    conn = connectionFactory.getConnection();
    conn.setConfig("maxmemory-policy", "allkeys-lfu");
} finally {
    if (conn != null) {
        conn.close();
    }
}
If using a reactive Redis connection:
ReactiveRedisConnection conn = connectionFactory.getReactiveConnection();
conn
    .serverCommands()
    .setConfig("maxmemory-policy", "allkeys-lfu")
    .filter(status -> status.equals("OK"))
    .doFinally(unused -> conn.close())
    .block(Duration.ofSeconds(5L));
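If you prefer the policy to be applied automatically at startup rather than by hand, one option (a sketch, not the only way; the class and bean names are illustrative) is to wrap the non-reactive snippet above in an ApplicationRunner:

import org.springframework.boot.ApplicationRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnection;
import org.springframework.data.redis.connection.RedisConnectionFactory;

@Configuration
public class RedisEvictionPolicyConfig {

    @Bean
    public ApplicationRunner applyEvictionPolicy(RedisConnectionFactory connectionFactory) {
        return args -> {
            // Set the server-wide eviction policy once the application context is up
            RedisConnection conn = connectionFactory.getConnection();
            try {
                conn.setConfig("maxmemory-policy", "allkeys-lfu");
            } finally {
                conn.close();
            }
        };
    }
}

Note that maxmemory-policy is a server-wide setting, so changing it affects every client of that Redis instance, not just this application's cache.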

Liquibase in a Spring Boot application keeps 10 connections open

I'm working on a Spring Boot application with Liquibase integration to set up the database. We use a different user for the database changes, which we configured using the application.properties file:
liquibase.user=abc
liquibase.password=xyz
liquibase.url=jdbc:postgresql://something.eu-west-1.rds.amazonaws.com:5432/app?ApplicationName=${appName}-liquibase
liquibase.enabled=true
liquibase.contexts=dev,postgres
We currently have 3 different microservices in deployment, and we noticed that for every running instance, Liquibase opens 10 connections and never closes them unless we stop the application. This basically means that in development we regularly hit the connection limit of our Amazon RDS instance.
Right now, in development, 40 of 74 active connections are occupied by Liquibase. If we ever want to go to production with this, with autoscaling enabled for all the microservices, we would have to over-scale the database in order not to hit any connection limits.
Is there a way to
tell Liquibase not to use a connection pool of 10 connections
tell Liquibase to stop or close the connections
So far I have found no documentation on how to do this.
Thanks to the response from Slava, I managed to fix the problem with the following datasource configuration class:
@Configuration
public class LiquibaseDataSourceConfiguration {

    private static final Logger LOG = LoggerFactory.getLogger(LiquibaseDataSourceConfiguration.class);

    @Autowired
    private LiquibaseDataSourceProperties liquibaseDataSourceProperties;

    @LiquibaseDataSource
    @Bean
    public DataSource liquibaseDataSource() {
        DataSource ds = DataSourceBuilder.create()
                .username(liquibaseDataSourceProperties.getUser())
                .password(liquibaseDataSourceProperties.getPassword())
                .url(liquibaseDataSourceProperties.getUrl())
                .driverClassName(liquibaseDataSourceProperties.getDriver())
                .build();
        if (ds instanceof org.apache.tomcat.jdbc.pool.DataSource) {
            ((org.apache.tomcat.jdbc.pool.DataSource) ds).setInitialSize(1);
            ((org.apache.tomcat.jdbc.pool.DataSource) ds).setMaxActive(2);
            ((org.apache.tomcat.jdbc.pool.DataSource) ds).setMaxAge(1000);
            ((org.apache.tomcat.jdbc.pool.DataSource) ds).setMinIdle(0);
            ((org.apache.tomcat.jdbc.pool.DataSource) ds).setMinEvictableIdleTimeMillis(60000);
        } else {
            // warnings or exceptions, whatever you prefer
        }
        LOG.info("Initialized a datasource for {}", liquibaseDataSourceProperties.getUrl());
        return ds;
    }
}
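The LiquibaseDataSourceProperties class referenced above is not shown in the original post; a hypothetical sketch of such a @ConfigurationProperties holder, assuming it binds the liquibase.* keys from the question plus a driver key, might look like this:

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

@Component
@ConfigurationProperties(prefix = "liquibase")
public class LiquibaseDataSourceProperties {

    private String user;
    private String password;
    private String url;
    private String driver;

    public String getUser() { return user; }
    public void setUser(String user) { this.user = user; }
    public String getPassword() { return password; }
    public void setPassword(String password) { this.password = password; }
    public String getUrl() { return url; }
    public void setUrl(String url) { this.url = url; }
    public String getDriver() { return driver; }
    public void setDriver(String driver) { this.driver = driver; }
}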
The documentation for the Tomcat JDBC pool properties used above can be found on the Tomcat site: https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html
initialSize: The initial number of connections that are created when the pool is started
maxActive: The maximum number of active connections that can be allocated from this pool at the same time
minIdle: The minimum number of established connections that should be kept in the pool at all times
maxAge: Time in milliseconds to keep this connection. When a connection is returned to the pool, the pool will check to see if the now - time-when-connected > maxAge has been reached, and if so, it closes the connection rather than returning it to the pool. The default value is 0, which implies that connections will be left open and no age check will be done upon returning the connection to the pool.
minEvictableIdleTimeMillis: The minimum amount of time an object may sit idle in the pool before it is eligible for eviction.
So it does not appear to be a connection leak; it's just that the default configuration of the datasource is not optimal for Liquibase when you use a dedicated datasource. I don't expect this to be a problem if the Liquibase datasource is your primary datasource.
Update: This has been fixed in 2.5.0-M2, and Liquibase now uses a SimpleDriverDataSource without a connection pool.
Original answer: This change to connection pool management was introduced in Spring Boot 2.0.6.RELEASE and only takes effect if you use Spring Boot Actuator. There is an actuator endpoint (enabled by default) that exposes the change sets applied by Liquibase. For this to work, Liquibase keeps its database connections open. You can disable the endpoint with management.endpoint.liquibase.enabled=false, in which case the connection pool used by Liquibase is shut down after the initial run.
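As a quick sketch, disabling that endpoint is a one-line change in application.properties (the exact behaviour depends on your Spring Boot version, as noted above):

# Don't expose the Liquibase actuator endpoint, so its connection pool can be closed after startup
management.endpoint.liquibase.enabled=false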
GitHub issue related to this change: https://github.com/spring-projects/spring-boot/issues/13832
Spring Boot Actuator documentation (see section 12, Liquibase): https://docs.spring.io/spring-boot/docs/2.0.6.RELEASE/actuator-api/html/
I don't know why Liquibase doesn't close the connection; maybe it's a bug, and you should create an issue for it.
To set the connection pool for Liquibase, you have to create a custom data source and mark it with the @LiquibaseDataSource annotation.
Related issues provide more details:
Possibility to specify custom dataSource configuration for liquibase only
Add LiquibaseDataSource annotation

Prevent use of CachingConnectionFactory with DefaultJmsListenerContainerFactory

I am working on a brand new project in which I need listeners that will consume messages from several queues (no need for a producer for now).
Starting from scratch, I am using the latest Spring JMS version (4.1.2).
Here is an extract of my configuration file:
<bean id="cachedConnectionFactory"
      class="org.springframework.jms.connection.CachingConnectionFactory"
      p:targetConnectionFactory-ref="jmsConnectionFactory"
      p:sessionCacheSize="3" />
<bean id="jmsListenerContainerFactory"
      class="org.springframework.jms.config.DefaultJmsListenerContainerFactory"
      p:connectionFactory-ref="cachedConnectionFactory"
      p:destinationResolver-ref="jndiDestinationResolver"
      p:concurrency="3-5"
      p:receiveTimeout="5000" />
But I think I may be wrong, since DefaultJmsListenerContainerFactory will build regular DefaultMessageListenerContainers. And, as stated in the docs, CachingConnectionFactory should not be used with a message listener container...
Even though I am using the new Spring 4.1 DefaultJmsListenerContainerFactory class, the answer from that post is still valid (cacheConsumers = true can be an issue, and there is no need to cache sessions for listener containers because the sessions are long-lived), right?
Instead of using the CachingConnectionFactory, should I use the SingleConnectionFactory (and not the broker implementation's factory directly)?
If the SingleConnectionFactory should indeed be used, should the reconnectOnException property be set to true (as it is in the CachingConnectionFactory), or does the new setBackOff method (from DefaultJmsListenerContainerFactory) deal with the same kind of issues?
Thanks for any tips.
Correct.
There's not really much benefit in using a SingleConnectionFactory unless you want to share a single connection across multiple containers; the DMLC will use a single connection from the vendor factory by default for all consumer threads (cacheLevel >= CACHE_CONNECTION), unless a TransactionManager is configured.
The container(s) will handle reconnection; even before the 'new' backOff property, the container would simply retry every n seconds (5 by default). backOff just adds more sophistication to the reconnection algorithm.
As stated in the answer you cited, it's OK to use a CCF as long as you disable consumer caching.
Correction: yes, when using the SingleConnectionFactory, you do need to set reconnectOnException to true so that the container can properly recover its connection. Otherwise, it simply hands out the stale connection.
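For illustration, a sketch of the question's XML rewritten around a SingleConnectionFactory with reconnectOnException enabled (the jmsConnectionFactory and jndiDestinationResolver references are taken from the question; bean ids and values are illustrative):

<bean id="singleConnectionFactory"
      class="org.springframework.jms.connection.SingleConnectionFactory"
      p:targetConnectionFactory-ref="jmsConnectionFactory"
      p:reconnectOnException="true" />
<bean id="jmsListenerContainerFactory"
      class="org.springframework.jms.config.DefaultJmsListenerContainerFactory"
      p:connectionFactory-ref="singleConnectionFactory"
      p:destinationResolver-ref="jndiDestinationResolver"
      p:concurrency="3-5"
      p:receiveTimeout="5000" />

Alternatively, per the point above about consumer caching, you can keep the CachingConnectionFactory from the question and just add p:cacheConsumers="false" to its bean definition.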

Need inputs for database tuning using Spring and a DBCP connection pool

I am using Spring in my project and instantiating the dataSource as below:
@Bean(destroyMethod = "close")
public DataSource restDataSource() {
    BasicDataSource dataSource = new BasicDataSource();
    dataSource.setDriverClassName(env.getProperty("hibernate.connection.driver_class"));
    dataSource.setUrl(env.getProperty("hibernate.connection.url"));
    dataSource.setUsername(env.getProperty("hibernate.connection.username"));
    dataSource.setPassword(env.getProperty("hibernate.connection.password"));
    dataSource.setInitialSize(env.getRequiredProperty("hibernate.dbcp.initialSize", Integer.class));
    dataSource.setMaxActive(env.getRequiredProperty("hibernate.dbcp.maxActive", Integer.class));
    dataSource.setMaxIdle(env.getRequiredProperty("hibernate.dbcp.maxIdle", Integer.class));
    dataSource.setMinIdle(env.getRequiredProperty("hibernate.dbcp.minIdle", Integer.class));
    return dataSource;
}
Below is my properties file.
hibernate.dialect=org.hibernate.dialect.Oracle10gDialect
hibernate.connection.driver_class=oracle.jdbc.driver.OracleDriver
hibernate.connection.username=<>
hibernate.connection.password=<>
hibernate.connection.url=jdbc:oracle:thin:@<Host>:1521:<SID>
hibernate.show_sql=true
hibernate.cache.use_query_cache=true
cache.provider_class=org.hibernate.cache.EhCacheProvider
hibernate.cache.use_second_level_cache=true
hibernate.cache.region.factory_class=org.hibernate.cache.ehcache.EhCacheRegionFactory
net.sf.ehcache.configurationResourceName=ehcache.xml
hibernate.dbcp.initialSize=10
hibernate.dbcp.maxActive=100
hibernate.dbcp.maxIdle=30
hibernate.dbcp.minIdle=10
Please suggest:
Any changes to the pool-sizing properties above (initialSize, maxActive, maxIdle, minIdle). My application will be used concurrently by around 100 users, and the total number of users is around 3000.
I am using a Tomcat server to deploy my application. Should I be using JNDI for connections instead of specifying the connection properties directly? Is the above way of using connections good for a production system?
Instead of Commons DBCP, I would suggest using HikariCP (I've had very good experiences with it lately), or, if you are already on Tomcat, use Tomcat JDBC instead.
There is a lot written on pool sizing (see here for a nice explanation and here for a short video from Oracle). In short, large pool sizes don't work and will probably make performance worse.
A rule of thumb/formula (also mentioned in the article) is to use
connections = ((core_count * 2) + effective_spindle_count)
where core_count is the number of (actual) cores in your server and effective_spindle_count the number of disks you have. If you have a server with one large disk and 4 cores, that leads to a connection pool of size 9. This should be able to handle what you need; adding more will only add the overhead of monitoring, thread switching, etc.
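For example, a minimal HikariCP sketch sized with that formula (the JDBC URL and credentials are placeholders in the same form as the question; 9 = 4 cores * 2 + 1 disk):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import javax.sql.DataSource;

public class HikariExample {

    public static DataSource hikariDataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:oracle:thin:@<Host>:1521:<SID>"); // placeholder
        config.setUsername("<user>");                             // placeholder
        config.setPassword("<password>");                         // placeholder
        config.setDriverClassName("oracle.jdbc.driver.OracleDriver");
        // (core_count * 2) + effective_spindle_count = (4 * 2) + 1
        config.setMaximumPoolSize(9);
        return new HikariDataSource(config);
    }
}

The same pool size can be applied to Tomcat JDBC via setMaxActive if you prefer that pool.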
