When we configure a JMS connection factory in WebSphere Application Server (WAS), the default values for the connection pool settings are as follows:
Connection timeout: 180 seconds
Unused timeout: 1800 seconds
Considering that there are periods of time (longer than 180 seconds) when the application is not being used, wouldn't this configuration always result in a stale connection object remaining in the pool and the accessing application throwing an exception?
Shouldn't we always ensure that the unused timeout value is less than the connection timeout value?
I don't believe the Unused timeout has anything to do with the Connection timeout. If the Unused timeout is too low, the factory has to keep closing and opening connections, but that only applies to connections in the Free pool, not the Active pool. Nevertheless, you want to avoid repeated opening/closing of connections as it will impact performance.
Unused Timeout
The Connection pool property Unused timeout defines how long a JMS connection will stay in the Free pool before it is disconnected. This property has a default value of 1800 seconds (30 minutes). Basically, if a connection sits dormant in the Free pool for longer than 1800 seconds, it will be disconnected.
Connection Timeout
The time the application waits for a connection from the Free pool if the number of connections created from this factory already is equal to the factory's Maximum connections property. If a connection is put back in the free pool within this 3-minute period, the Connection Manager immediately takes it out of the pool again and passes it to the waiting application. However, if the timeout period elapses, a ConnectionWaitTimeoutException is thrown.
So the Connection timeout is basically how long your application waits for the next available connection, assuming the factory can't create new connections because it's maxed out. If you find yourself hitting this ceiling, increase the factory's Maximum connections property.
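To make the interplay concrete, here is a minimal Java sketch of that waiting behavior, modeling the pool as a Semaphore. This is only an illustration of the contract, not how WebSphere implements it, and the class and method names are made up:

import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Hypothetical model of a pool's wait-for-free-connection contract.
public class PoolWaitSketch {
    private static final int MAX_CONNECTIONS = 10;              // factory's Maximum connections
    private static final long CONNECTION_TIMEOUT_SECONDS = 180; // Connection timeout default
    private final Semaphore freeSlots = new Semaphore(MAX_CONNECTIONS);

    public Object getConnection() throws Exception {
        // Block until a connection returns to the free pool, or give up
        // once the Connection timeout elapses, as WAS does when it throws
        // a ConnectionWaitTimeoutException.
        if (!freeSlots.tryAcquire(CONNECTION_TIMEOUT_SECONDS, TimeUnit.SECONDS)) {
            throw new Exception("ConnectionWaitTimeoutException: pool exhausted for 180s");
        }
        return new Object(); // placeholder for a real JMS connection
    }

    public void returnConnection(Object connection) {
        freeSlots.release(); // a waiting caller is handed this slot immediately
    }
}

Note that the Unused timeout never appears in this wait path; it only governs how long a connection may sit unclaimed in the free pool before being physically closed.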
Related
I want a database connection that has gone idle to be dropped once its maxLifetime has been reached.
How can we do this in Quarkus? Is this achieved with these two properties?
quarkus.datasource.jdbc.idle-removal-interval
quarkus.datasource.jdbc.max-lifetime
How can I check that idle connections are actually being dropped? Any logs I can activate?
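For reference, a minimal application.properties sketch with those two properties, assuming the default Agroal-backed JDBC datasource (the duration values are illustrative, not recommendations):

# close any connection that has lived longer than this, even if it is idle in the pool
quarkus.datasource.jdbc.max-lifetime=PT30M
# how often the background task scans the pool and evicts idle connections
quarkus.datasource.jdbc.idle-removal-interval=PT5M

As for verifying the evictions, one option is to raise the log level to DEBUG for the pool's logger category (io.agroal, assuming Agroal's default logger naming, which is worth double-checking for your version) and watch for connection destruction events; another is simply to count sessions on the database side before and after the interval elapses.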
I have increased the number of concurrent tasks to 10 for the PutSQL processor.
With that setting it throws the error below, but there is no data loss:
failed to process due to org.apache.nifi.processor.exception.ProcessException: org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Timeout waiting for idle object; rolling back session:
If I remove the extra concurrent tasks, it works without those exceptions.
While googling this exception I found an answer at the link below:
I am getting Cannot get a connection, pool error Timeout waiting for idle object, When I try to create more than 250 threads in my web application
But I don't know how to avoid this issue with NiFi PutSQL.
Can anyone help me to resolve this?
This exception occurs when the pool manager cannot produce a viable connection for a waiting requester and the maxWait has passed, triggering a timeout.
There are several causes, but they usually fall into 2 main categories:
The DB is down or unreachable.
The connection pool (which is set to 100 max active) is out of connections.
The DBCPConnectionPool controller service in NiFi has 8 max connections and a 500 millisecond max wait time by default. When the PutSQL processor has occupied all 8 connections from the DBCP pool and requests a 9th connection, exceeding the max connection limit, it throws the "Cannot get a connection" exception.
You can try two things to avoid this exception (the sketch below this list shows the equivalent settings in code):
You can increase the "Max Wait Time" in the DBCPConnectionPool controller service configuration.
You can increase the "Max Total Connections" limit in the DBCPConnectionPool controller service configuration.
It might resolve your issue.
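For reference, here is what those two settings correspond to in code, sketched with Commons DBCP2 (the stack trace above shows DBCP 1.x, where the same properties are named maxActive and maxWait; the URL and credentials here are made up):

import org.apache.commons.dbcp2.BasicDataSource;

public class DbcpPoolSketch {
    static BasicDataSource newPool() {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:postgresql://localhost:5432/mydb"); // illustrative URL
        ds.setUsername("user");                             // illustrative credentials
        ds.setPassword("secret");
        ds.setMaxTotal(20);        // "Max Total Connections" (NiFi's default is 8)
        ds.setMaxWaitMillis(5000); // "Max Wait Time" (NiFi's default is 500 ms)
        return ds;
    }
}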
This exception can also occur if some connections are never closed, so they never become available in the pool again.
More and more connections then remain open until the maximum is reached.
Make sure all threads close the connections they use.
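A minimal sketch of that pattern, assuming a javax.sql.DataSource handed out by the pool; try-with-resources returns the connection even when the statement throws:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class SafeInsertSketch {
    static void insertRow(DataSource ds, int id) throws SQLException {
        // Both resources are closed automatically on every code path,
        // so the connection always goes back to the pool instead of leaking.
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO items (id) VALUES (?)")) { // illustrative SQL
            ps.setInt(1, id);
            ps.executeUpdate();
        }
    }
}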
Can someone let me know the relationship between PoolTimeout, IdleTimeout, and IdleCheckFrequency in go-redis?
Doubts:
If I specify PoolTimeout 20ms, IdleTimeout 20ms, PoolSize 100, and IdleCheckFrequency 1 min: let's say all the connections in the pool are in use and one connection finishes its operation. Will a request for a connection then have to wait until the idle check runs at its 1 minute interval?
If I specify PoolSize 100, will the client keep 100 connections to Redis open even if no client operations are being performed against Redis?
Environment:
Go - 1.7.4
Redis - 3.2.6
Go-Redis - v5.2
This has been answered on GitHub here. Just posting the relevant parts below:
PoolSize limits the max number of open connections. If the app is idle then go-redis does not open any connections.
A new connection is opened when there is a command to process and there are no idle connections in the pool. Idle connections are closed after they have been idle for IdleTimeout.
IdleCheckFrequency specifies how often we check whether a connection is expired. This is needed in case the app is idle and there is no activity. Normally idle connections are closed when go-redis asks the pool for a (healthy) connection.
Does SQL Azure allow third-party connection pools like HikariCP or BoneCP?
We configured HikariCP and it works when we first run the app, but later the DB doesn't respond to requests. Is it a HikariCP issue, or is it a common connection pool issue that isn't worth spending more time investigating?
HikariConfig config = new HikariConfig();
config.setMaximumPoolSize(50);
config.setDriverClassName(env.getProperty("jdbc.driverClassName"));
config.setJdbcUrl(env.getProperty("jdbc.url"));
config.setUsername(env.getProperty("jdbc.user"));
config.setPassword(env.getProperty("jdbc.pass"));
config.addDataSourceProperty("cachePrepStmts", env.getProperty("jdbc.cachePrepStmts"));
config.addDataSourceProperty("prepStmtCacheSize", env.getProperty("jdbc.prepStmtCacheSize"));
config.addDataSourceProperty("prepStmtCacheSqlLimit", env.getProperty("jdbc.prepStmtCacheSqlLimit"));
config.addDataSourceProperty("useServerPrepStmts", env.getProperty("jdbc.useServerPrepStmts"));
See this SQL Azure page re: Connection Constraints.
Maximum allowable durations are subject to change depending on the resource usage. A logged-in session that has been idle for 30 minutes will be terminated automatically. We strongly recommend that you use the connection pooling and always close the connection when you are finished using it so that the unused connection will be returned to the pool. For more information about connection pooling, see Connection Pooling.
See if any of these errors match up to your logs. Search that page for "terminated" and "busy" to find error codes that might be relevant to your issue.
I would suggest setting the maxLifetime property in HikariCP to 15 minutes, and the idleTimeout to 2 minutes.
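Applied to the configuration above, that suggestion would look like this; both values are in milliseconds and sit well below SQL Azure's 30-minute idle cutoff:

// with: import java.util.concurrent.TimeUnit;
config.setMaxLifetime(TimeUnit.MINUTES.toMillis(15)); // retire every connection after 15 minutes
config.setIdleTimeout(TimeUnit.MINUTES.toMillis(2));  // close connections idle for 2 minutes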
There is nothing on the SQL Azure side that would prohibit you from using a 3rd party connection pool. My guess is that the connection failed between the server and the client and the client didn't remove the connection from the pool.
Moving forward, I'd ensure that whichever third-party connection pool you end up using tests that the connection is alive before taking it out of the pool for use.
Hope that helps.
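With HikariCP specifically, connections are already validated via JDBC4's Connection.isValid() before being handed out; an explicit test query is only needed for older drivers that lack isValid() support, for example:

config.setConnectionTestQuery("SELECT 1"); // omit if the driver supports JDBC4 isValid()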
Hey I'm using Glassfish open source v4 and I'm having a weird problem.
I have defined a JDBC connection pool to Oracle 11g in the admin console and I've set:
Pool Settings
Initial and Minimum Pool Size: 500
Maximum Pool Size: 1000
Pool Resize Quantity: 750
And I've created a specific user for this connection pool. Yet sometimes when I inspect open connections in the database I see that there are more than 1000 (the maximum I've seen was 1440).
When this happens any query attempts fail, sometimes with an OutOfMemory exception; some show HTTP thread interruptions, and some don't show any logs at all and just take a long time.
What I am wondering is: how is it possible that Glassfish opens more connections than I've defined?
First, try to compare the output from netstat on the application server and the DB server side. You may have some "dangling" connections. Also try to find some documentation about DCD (Dead Connection Detection) in Oracle.
A few years ago I saw situations where the Java application server decided a connection was dead because it had not responded for a few minutes. That connection was put onto a dead-connection list and a new connection was created.
There can also be network issues, for example a firewall between the application and DB servers.
When a TCP connection is not active for one hour it may be cut on one side, while the DB server does not know about that.
The usual way to investigate this is:
compare the output of both netstat(s) (application/DB)
identify dangling TCP connections
translate each TCP connection into the Unix process ID (PID) of the Oracle session process
translate the PID into the Oracle session (SID and SERIAL#)
kill the session at the Oracle level (alter system kill session ...; see the sketch below)
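For the last two steps, the Oracle side typically looks like the following SQL sketch (the PID and the sid,serial# values are made up; v$process.spid holds the OS process ID seen in netstat):

-- map the OS process id from netstat to an Oracle session
SELECT s.sid, s.serial#
FROM v$session s
JOIN v$process p ON p.addr = s.paddr
WHERE p.spid = '12345';  -- illustrative PID

-- then kill that session
ALTER SYSTEM KILL SESSION '57,41123';  -- illustrative sid,serial# values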