I am migrating from dbcp2 to c3p0.
We create a datasource by extending BasicDataSource from dbcp2 and setting its properties. Some properties are set at the driver level via the setConnectionProperties method.
I don't see such a provision in c3p0 when extending AbstractComboPooledDataSource. Is there another way to set the same properties?
Digging through the docs, I found something called a connectionCustomizer, but I am not sure whether it does the same thing.
This is how I am currently setting the properties with dbcp2:
this.setConnectionProperties("driver:oracle.jdbc.ReadTimeout=180000");
this.setConnectionProperties("driver:oracle.net.CONNECT_TIMEOUT=180000");
where "this" is class which extends BasicDataSource
Is there anyway in c3p0 to get the same result?
EDIT:
Just to be clear, I am able to set the properties provided by the c3p0 library; what I am looking for is setting properties at the driver level the way dbcp2 allows via the setConnectionProperties() method. Thanks.
You can get the details from the answer below:
https://stackoverflow.com/a/51838455/1529092
So basically you have to do something like this:
@Bean
public ComboPooledDataSource dataSource() {
    ComboPooledDataSource dataSource = new ComboPooledDataSource();
    try {
        dataSource.setDriverClass(env.getProperty("db.driver"));
        dataSource.setJdbcUrl(env.getProperty("db.url"));
        dataSource.setUser(env.getProperty("db.username"));
        dataSource.setPassword(env.getProperty("db.password"));
        dataSource.setMinPoolSize(Integer.parseInt(env.getProperty("minPoolSize")));
        dataSource.setMaxPoolSize(Integer.parseInt(env.getProperty("maxPoolSize")));
        dataSource.setMaxIdleTime(Integer.parseInt(env.getProperty("maxIdleTime")));
        dataSource.setMaxStatements(Integer.parseInt(env.getProperty("maxStatements")));
        dataSource.setMaxStatementsPerConnection(Integer.parseInt(env.getProperty("maxStatementsPerConnection")));
        dataSource.setMaxIdleTimeExcessConnections(10000);
    } catch (PropertyVetoException e) {
        // setDriverClass throws PropertyVetoException if the driver class cannot be resolved
        e.printStackTrace();
    }
    return dataSource;
}
Edit:
As per the documentation, the timeout properties differ between the two frameworks; in c3p0 the timeouts are handled by:
maxConnectionAge
maxIdleTime
maxIdleTimeExcessConnections
Managing Pool Size and Connection Age
Different applications have different needs with regard to trade-offs
between performance, footprint, and reliability. C3P0 offers a wide
variety of options for controlling how quickly pools that have grown
large under load revert to minPoolSize, and whether "old" Connections
in the pool should be proactively replaced to maintain their
reliability.
maxConnectionAge
maxIdleTime
maxIdleTimeExcessConnections
By default, pools will never expire Connections. If you wish Connections to be expired over time in order
to maintain "freshness", set maxIdleTime and/or maxConnectionAge.
maxIdleTime defines how many seconds a Connection should be permitted
to go unused before being culled from the pool. maxConnectionAge
forces the pool to cull any Connections that were acquired from the
database more than the set number of seconds in the past.
maxIdleTimeExcessConnections is about minimizing the number of
Connections held by c3p0 pools when the pool is not under load. By
default, c3p0 pools grow under load, but only shrink if Connections
fail a Connection test or are expired away via the parameters
described above. Some users want their pools to quickly release
unnecessary Connections after a spike in usage that forces a large
pool size. You can achieve this by setting
maxIdleTimeExcessConnections to a value much shorter than maxIdleTime,
forcing Connections beyond your set minimum size to be released if
they sit idle for more than a short period of time.
Some general advice about all of these timeout parameters: Slow down!
The point of Connection pooling is to bear the cost of acquiring a
Connection only once, and then to reuse the Connection many, many
times. Most databases support Connections that remain open for hours
at a time. There's no need to churn through all your Connections every
few seconds or minutes. Setting maxConnectionAge or maxIdleTime to
1800 (30 minutes) is quite aggressive. For most databases, several
hours may be more appropriate. You can ensure the reliability of your
Connections by testing them, rather than by tossing them. (see
Configuring Connection Testing.) The only one of these parameters that
should generally be set to a few minutes or less is
maxIdleTimeExcessConnections.
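As an illustration only (not part of the quoted docs), setting these three parameters programmatically on a ComboPooledDataSource might look like the sketch below; the values are arbitrary examples, all in seconds:
ComboPooledDataSource cpds = new ComboPooledDataSource();
cpds.setMaxConnectionAge(4 * 3600);          // retire any Connection acquired more than 4 hours ago
cpds.setMaxIdleTime(2 * 3600);               // cull Connections that sit unused for 2 hours
cpds.setMaxIdleTimeExcessConnections(120);   // after a load spike, shrink back toward minPoolSize within 2 minutes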
I found the solution in this answer:
c3p0 hangs on getConnection when there is a network failure.
cpds = new ComboPooledDataSource();
...
//--------------------------------------------------------------------------------------
// NOTE: Once you decide to use cpds.setProperties() to set some connection properties,
// all properties must be set, including user/password, otherwise an exception
// will be thrown
Properties prop = new Properties();
prop.setProperty("oracle.net.CONNECT_TIMEOUT",
Integer.toString(JDBC_CONNECTION_TIMEOUT_IN_MILLISECONDS));
prop.setProperty("oracle.jdbc.ReadTimeout",
Integer.toString(JDBC_SOCKET_TIMEOUT_IN_MILLISECONDS));
prop.setProperty("user", username);
prop.setProperty("password", password);
cpds.setProperties(prop);
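If you are wiring this up as the Spring bean shown earlier, the same idea could look roughly like the sketch below. Note that, as the comment above warns, user and password then go into the java.util.Properties object instead of setUser/setPassword; the driver-level property keys are the ones from the original dbcp2 setup, everything else is illustrative:
@Bean
public ComboPooledDataSource dataSource() throws PropertyVetoException {
    ComboPooledDataSource dataSource = new ComboPooledDataSource();
    dataSource.setDriverClass(env.getProperty("db.driver"));
    dataSource.setJdbcUrl(env.getProperty("db.url"));

    // Driver-level properties plus user/password, all in one Properties object
    Properties prop = new Properties();
    prop.setProperty("oracle.jdbc.ReadTimeout", "180000");
    prop.setProperty("oracle.net.CONNECT_TIMEOUT", "180000");
    prop.setProperty("user", env.getProperty("db.username"));
    prop.setProperty("password", env.getProperty("db.password"));
    dataSource.setProperties(prop);

    return dataSource;
}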
I was facing this issue in my Spring Boot application, which connects to a DB and MQ and uses the Atomikos transaction manager.
com.atomikos.jms.AtomikosJMSException|Connection pool exhausted - try increasing 'maxPoolSize' and/or 'borrowConnectionTimeout' on the AtomikosConnectionFactoryBean.
com.atomikos.datasource.pool.PoolExhaustedException: ConnectionPool: pool is empty - increase either maxPoolSize or borrowConnectionTimeout
at com.atomikos.datasource.pool.ConnectionPool.waitForAtLeastOneAvailableConnection(ConnectionPool.java:326)
at com.atomikos.datasource.pool.ConnectionPool.findOrWaitForAnAvailableConnection(ConnectionPool.java:144)
at com.atomikos.datasource.pool.ConnectionPool.borrowConnection(ConnectionPool.java:132)
at com.atomikos.datasource.pool.ConnectionPoolWithSynchronizedValidation.borrowConnection(ConnectionPoolWithSynchronizedValidation.java:23)
at com.atomikos.jms.AtomikosConnectionFactoryBean.createConnection(AtomikosConnectionFactoryBean.java:601)
at org.springframework.jms.support.JmsAccessor.createConnection(JmsAccessor.java:196)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.access$100(AbstractPollingMessageListenerContainer.java:77)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer$MessageListenerContainerResourceFactory.createConnection(AbstractPollingMessageListenerContainer.java:490)
at org.springframework.jms.connection.ConnectionFactoryUtils.doGetTransactionalSession(ConnectionFactoryUtils.java:325)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:281)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:245)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:1189)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.executeOngoingLoop(DefaultMessageListenerContainer.java:1179)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:1076)
at java.lang.Thread.run(Thread.java:748)
I tried printing maxPoolSize and found that it was 1. I came across this page (https://www.atomikos.com/Documentation/ConfiguringJms) and found the line where they increased maxPoolSize to 5. I tried setting it to 2 and it worked.
AtomikosConnectionFactoryBean xaConnectionFactory = new AtomikosConnectionFactoryBean();
xaConnectionFactory.setXaConnectionFactory(ibmMQXAConnectionFactory);
xaConnectionFactory.setMaxPoolSize(2);
Can someone help me understand what the ideal pool size should be, and what it is for?
In order to process messages, Atomikos uses DB and JMS connections (in your case).
These connections are taken from pools of available connections. To get an idea of why connection pools are needed, follow this link as a starting point - Connection_pool
To put it simply: to process one message at a time, Atomikos needs one DB and one JMS connection/session. So if you plan to process 10 messages in parallel, each connection pool size must be at least 10 (10 for the DB and 10 for the JMS connection pool respectively), as in the sketch below.
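As a rough sketch only (ibmMQXAConnectionFactory comes from the question; the XA DataSource, the resource names, and the value 10 are assumptions), sizing both Atomikos pools to the planned parallelism might look like this:
int parallelConsumers = 10; // assumption: up to 10 messages processed concurrently

AtomikosConnectionFactoryBean jmsPool = new AtomikosConnectionFactoryBean();
jmsPool.setUniqueResourceName("xaMq");
jmsPool.setXaConnectionFactory(ibmMQXAConnectionFactory); // from the question
jmsPool.setMaxPoolSize(parallelConsumers);                // one JMS session per message in flight

AtomikosDataSourceBean dbPool = new AtomikosDataSourceBean();
dbPool.setUniqueResourceName("xaDb");
dbPool.setXaDataSource(xaDataSource);                     // assumption: your XA-capable DataSource
dbPool.setMaxPoolSize(parallelConsumers);                 // one DB connection per message in flight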
Does HikariCP support a command timeout in a Spring Boot application similar to C#?
I am using the Hikari connection pool in my Spring Boot application. I have enabled connectionTimeout using the following configuration:
spring.datasource.hikari.connectionTimeout: 30000
If I increase the number of concurrent users, I get the following exception in the logs:
Caused by: org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection;
nested exception is java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30000ms.
I am perfectly fine with the above exception; I can increase the number of connections. But my concern is that a few endpoints take more than 2 minutes to respond. Those endpoints get a DB connection from the pool but take more time to process. Is there a setting where I can specify a timeout so that if the DB operation takes more than some amount of time (say 40 seconds), a SQLException is thrown - similar to the command timeout in C#?
Application.properties
# A list of all Hikari parameters with a good explanation is available on https://github.com/brettwooldridge/HikariCP#configuration-knobs-baby
# This property controls the minimum number of idle connections that HikariCP tries to maintain in the pool. Default: same as maximumPoolSize
spring.datasource.hikari.minimumIdle: 10
# This property controls the maximum size that the pool is allowed to reach, including both idle and in-use connections. Basically this value will determine the maximum number of actual connections to the database backend.
# Default: 10
spring.datasource.hikari.maximumPoolSize: 20
#This property controls the maximum number of milliseconds that a client (that's you) will wait for a connection from the pool. If this time is exceeded without a connection becoming available, a SQLException will be thrown.
#Lowest acceptable connection timeout is 250 ms. Default: 30000 (30 seconds)
spring.datasource.hikari.connectionTimeout: 30000
# This property controls the maximum amount of time that a connection is allowed to sit idle in the pool. This setting only applies when minimumIdle is defined to be less than maximumPoolSize
# Default: 600000 (10 minutes)
spring.datasource.hikari.idleTimeout: 600000
# This property controls the maximum lifetime of a connection in the pool. An in-use connection will never be retired, only when it is closed will it then be removed.
# Default: 1800000 (30 minutes)
spring.datasource.hikari.maxLifetime: 1800000
# This property sets a SQL statement that will be executed after every new connection creation before adding it to the pool. Default: none
spring.datasource.hikari.connectionInitSql: SELECT 1 FROM DUAL
As explained in this comment, it seems that the connectionTimeout value is used as the connection-acquisition timeout.
You can try setting it to a value higher than 30 seconds and see if that helps.
The Druid connection pool has a setting that allows you to set up such a timeout:
spring.datasource.druid.query-timeout=10000
I have the following Hikari configuration in my Spring Boot app. Queries are taking longer than the configured connection-timeout, yet the timeout never happens. I am keeping it as low as possible to simulate the connection timeout.
HikariConfig dataSourceConfig = new HikariConfig();
dataSourceConfig.setDriverClassName(config.driver);
dataSourceConfig.setJdbcUrl(config.url);
dataSourceConfig.setUsername(config.user);
dataSourceConfig.setPassword(config.password);
dataSourceConfig.setConnectionTestQuery(config.validationQuery);
dataSourceConfig.setMaximumPoolSize(config.poolMax);
dataSourceConfig.setConnectionTimeout(300);
dataSourceConfig.setIdleTimeout(10000);
dataSourceConfig.setMaxLifetime(60000);
JdbcTemplate jdbcTemplate = new JdbcTemplate(new HikariDataSource(dataSourceConfig));
Here is some log output that shows the query ran for more than 300 ms:
Time elapsed to run query......2913
Using Hikari 3.2 and MariaDB.
Thanks.
From: https://github.com/brettwooldridge/HikariCP
connectionTimeout
This property controls the maximum number of milliseconds that a client (that's you) will wait for a connection from the pool. If this time is exceeded without a connection becoming available, a SQLException will be thrown. Lowest acceptable connection timeout is 250 ms. Default: 30000 (30 seconds)
So this property is more about how long your application will wait for a connection, not how long a query is allowed to execute.
I think what you want is "max_statement_time": https://mariadb.com/kb/en/library/server-system-variables/#max_statement_time
Maximum time in seconds that a query can execute before being aborted. This includes all queries, not just SELECT statements, but excludes statements in stored procedures. If set to 0, no limit is applied.
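If you prefer (or also need) a client-side limit, plain JDBC offers Statement.setQueryTimeout, which Spring exposes on JdbcTemplate. A minimal sketch, assuming the DataSource from your configuration and an arbitrary 40-second limit (driver support is required for the timeout to take effect):
JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
// Applies Statement.setQueryTimeout to every statement this template creates;
// queries running longer than 40 seconds should fail with a timeout SQLException.
jdbcTemplate.setQueryTimeout(40);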
Here's my code using Ehcache for multi-threaded reading and writing:
write code:
try {
    targetCache.acquireWriteLockOnKey(key);
    targetCache.putIfAbsent(new Element(key, value));
}
finally {
    targetCache.releaseWriteLockOnKey(key);
}
reading code:
try {
    cache.acquireReadLockOnKey(key);
    Element ele = cache.get(key);                 // fetch the Element before reading its value
    cacheCarId = (String) ele.getObjectValue();
}
finally {
    cache.releaseReadLockOnKey(key);
}
key and value are both String.
My config is as follows:
CacheConfiguration config = new CacheConfiguration();
config.name("carCache");
config.maxBytesLocalHeap(128, MemoryUnit.parseUnit("M"));
config.eternal(false);
config.timeToLiveSeconds(60);
config.setTimeToIdleSeconds(60);
SizeOfPolicyConfiguration sizeOfPolicyConfiguration = new SizeOfPolicyConfiguration();
sizeOfPolicyConfiguration.maxDepth(10000);
sizeOfPolicyConfiguration.maxDepthExceededBehavior("abort");
config.addSizeOfPolicy(sizeOfPolicyConfiguration);
Cache memoryOnlyCache = new Cache(config);
CacheManager.getInstance().addCache(memoryOnlyCache);
Values are evicted within 60 s and are written by multiple threads. The total number of keys is less than 25,000.
Reading and writing were fine at the beginning, but after a couple of hours I started getting inconsistencies between reads and writes...
Could anybody help me with this problem? Thanks a lot.
A Cache is already a thread-safe data structure, so you should not need the explicit locking you are using.
Also, the method Cache.putIfAbsent is already an atomic operation that guarantees that only one thread will succeed with the put.
Note that eviction and expiry are two different things. With your configuration, eviction happens when the cache size grows beyond 128 MB, and expiry indeed happens after 60 seconds. However, Ehcache does expiry inline, so it is triggered when you read or write the mapping.
As for your remark on inconsistency, you will need to describe in more detail what you mean by that.
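Since the cache and putIfAbsent are already thread-safe, a simplified version of the read and write paths without explicit locking could look like this (key, value, targetCache, and cache are taken from the question; error handling omitted):
// write: putIfAbsent is atomic, so no explicit write lock is needed
targetCache.putIfAbsent(new Element(key, value));

// read: get may return null if the entry was never written or has already expired
Element ele = cache.get(key);
String cacheCarId = (ele != null) ? (String) ele.getObjectValue() : null;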
I have the following issues in our production environment (a web farm of 4 nodes behind a load balancer):
1) Timeout performing HGET key, inst: 3, queue: 29, qu=0, qs=29, qc=0, wr=0/0
at StackExchange.Redis.ConnectionMultiplexer.ExecuteSyncImpl[T](Message message, ResultProcessor`1 processor, ServerEndPoint server) in ConnectionMultiplexer.cs:line 1699
This happens 3-10 times a minute.
2) No connection is available to service this operation: HGET key
at StackExchange.Redis.ConnectionMultiplexer.ExecuteSyncImpl[T](Message message, ResultProcessor`1 processor, ServerEndPoint server) in ConnectionMultiplexer.cs:line 1666
I tried to implement what Marc suggested (maybe I interpreted it incorrectly): it is better to have fewer connections to Redis than many.
I made the following implementation:
public class SeRedisConnection
{
    private static ConnectionMultiplexer _redis;
    private static readonly object SyncLock = new object();

    public static IDatabase GetDatabase()
    {
        if (_redis == null || !_redis.IsConnected || !_redis.GetDatabase().IsConnected(default(RedisKey)))
        {
            lock (SyncLock)
            {
                try
                {
                    var configurationOptions = new ConfigurationOptions
                    {
                        AbortOnConnectFail = false
                    };
                    configurationOptions.EndPoints.Add(new DnsEndPoint(ConfigurationHelper.CacheServerHost,
                        ConfigurationHelper.CacheServerHostPort));
                    _redis = ConnectionMultiplexer.Connect(configurationOptions);
                }
                catch (Exception ex)
                {
                    IoC.Container.Resolve<IErrorLog>().Error(ex);
                    return null;
                }
            }
        }
        return _redis.GetDatabase();
    }

    public static void Dispose()
    {
        _redis.Dispose();
    }
}
Actually, Dispose is not being used right now. Also, I have some specifics of the implementation which could cause such behavior (I'm only using hashes):
1. Add, Remove hashes - async
2. Get - sync
Could somebody help me figure out how to avoid this behavior?
Thanks a lot in advance!
SOLVED - increased the client connection timeout after evaluating network capabilities.
UPDATE 2: Actually, it didn't solve the problem. Once the cache volume started to increase, e.g. past 2 GB, I saw the same pattern again: the timeouts happened about every 5 minutes, and our sites were frozen for some period of time every 5 minutes until the fork operation finished.
Then I found out that there is an option to fork (save to disk) every x seconds:
save 900 1
save 300 10
save 60 10000
In my case it was "save 300 10" - save every 5 minutes if at least 10 updates happened. I also found out that the "fork" can be very expensive. Commenting out the "save" section resolved the problem entirely. We can comment out the "save" section because we use Redis only as an in-memory cache - we don't need any persistence.
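For reference, in the 2.4-era configuration used here, disabling RDB snapshots means commenting out all the save lines in redis.conf; newer Redis versions also accept an empty save directive for the same purpose (shown below as an illustration):
# redis.conf: disable automatic RDB snapshots (no background fork for saving)
save ""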
Here is the configuration of our cache servers, the "Redis 2.4.6" Windows port: https://github.com/rgl/redis/downloads
Maybe it has been solved in recent versions of the Redis Windows port from MS Open Tech: http://msopentech.com/blog/2013/04/22/redis-on-windows-stable-and-reliable/
but I haven't tested it yet.
Anyway, StackExchange.Redis has nothing to do with this issue and runs very stably in our production environment, thanks to Marc Gravell.
FINAL UPDATE:
Redis is a single-threaded solution - it is extremely fast, but when it comes to releasing memory (removing items that are stale or expired), problems emerge because a single thread has to reclaim the memory (not a fast operation, whatever algorithm is used) while that same thread also has to handle GET and SET operations. Of course, this shows up in a moderately loaded production environment. Even if you use a cluster with slaves, when the memory limit is reached you will see the same behavior.
It looks like in most cases this exception is a client issue. Previous versions of StackExchange.Redis used the Win32 socket API directly, which sometimes has a negative impact; ASP.NET internal routing is probably somehow related to it.
The good news is that StackExchange.Redis's network infrastructure was completely rewritten recently. The latest version is 2.0.513. Try it, and there is a good chance your problem will go away.