Can someone let me know the relation between PoolTimeout, IdleTimeout & IdleCheckFrequency in go-redis?
Doubts:-
If I specify PoolTimeout 20ms, IdleTimeout 20ms, PoolSize 100 & IdleCheckFrequency 1 min, and all the connections in the pool are in use, then when one connection finishes its operation, will a request waiting for a connection block until the idle check runs at its 1-minute interval?
If I specify PoolSize 100, will the client keep 100 connections to Redis open even when no client operation is being performed against Redis?
Environment:-
Go - 1.7.4
Redis - 3.2.6
Go-Redis - v5.2
This has been answered on GitHub here. Just posting the relevant parts below:-
PoolSize limits the max number of open connections. If the app is idle, go-redis does not open any connections.
A new connection is opened when there is a command to process and there are no idle connections in the pool. Idle connections are closed after they have been idle for IdleTimeout.
IdleCheckFrequency specifies how often we check whether a connection has expired. This is needed in case the app is idle and there is no activity. Normally, idle connections are closed when go-redis asks the pool for a (healthy) connection.
Related
I want a database connection that has gone idle to be dropped once its maxLifetime has been reached.
How can we do this in Quarkus? Is this achieved with these 2 properties?
quarkus.datasource.jdbc.idle-removal-interval
quarkus.datasource.jdbc.max-lifetime
How can I check that idle connections are actually being dropped? Any logs I can activate?
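For the logging part of the question, one way to observe the pool is to raise the log level of Agroal, the connection pool Quarkus uses under the hood. A sketch of an application.properties, assuming the io.agroal.pool logger category and using illustrative duration values (Quarkus duration syntax: "5M" = 5 minutes):

```properties
# Illustrative values: connections are dropped after 5 minutes of life,
# and the idle-removal check runs every 2 minutes.
quarkus.datasource.jdbc.max-lifetime=5M
quarkus.datasource.jdbc.idle-removal-interval=2M

# Assumption: raising the Agroal pool's log category to DEBUG is one way
# to see connections being flushed/destroyed in the application log.
quarkus.log.category."io.agroal.pool".level=DEBUG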
I have increased the concurrent tasks to '10' for the PutSQL processor.
With that setting it shows the error below, though there is no data loss.
failed to process due to org.apache.nifi.processor.exception.ProcessException: org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Timeout waiting for idle object; rolling back session:
If I remove the concurrent tasks setting, it works without those exceptions.
While googling this exception I found an answer at the link below:
I am getting Cannot get a connection, pool error Timeout waiting for idle object, When I try to create more than 250 threads in my web application
But I don't know how to avoid this issue in NiFi PutSQL.
Can anyone help me resolve this?
This exception occurs when the pool manager cannot produce a viable connection for a waiting requester and maxWait has elapsed, triggering a timeout.
There are several causes, but they usually fall into 2 main categories:
The DB is down or unreachable.
The connection pool (which is set to 100 max active) is out of connections.
The DBCPConnectionPool controller service in NiFi defaults to 8 max connections and a 500 millisecond max wait time. When the PutSQL processor has occupied all 8 connections from the DBCP pool and requests a 9th, exceeding the max connection limit, it throws the "Cannot get a connection" exception.
You can try two things to avoid this exception:
You can increase the "Max Wait Time" in the DBCPConnectionPool controller service configuration.
You can increase the "Max Total Connections" limit in the DBCPConnectionPool controller service configuration.
It might resolve your issue.
This exception can also occur if some connections are never closed, so they never become available in the pool again.
More and more connections then remain open until the max is reached.
Make sure all threads close the connections they use.
When we configure JMS connection factory in WebSphere [WAS] the default values for connection pool settings are as below
connection timeout: 180 seconds
unused timeout: 1800 seconds
Considering that there is a period of time [>180 seconds] when the application is not being used, wouldn't this configuration always result in a stale connection object remaining in the pool and cause the accessing application to throw an exception?
Shouldn't we always ensure that the unused timeout value is less than the connection timeout value?
I don't believe the Unused timeout has anything to do with the Connection timeout. If the Unused timeout is too low, the factory has to keep closing and opening connections, but that only applies to connections in the Free pool, not the Active pool. Nevertheless, you want to avoid repeated opening/closing of connections as it will impact performance.
Unused Timeout
The Connection pool property Unused timeout defines how long a JMS connection will stay in the Free Pool before it is disconnected. This property has the default value of 1800 seconds (30 minutes). Basically, if a connection sits dormant in the Free pool for longer than 1800 seconds, it will be disconnected.
Connection Timeout
The time the application waits for a connection from the Free pool if the number of connections created from this factory already is equal to the factory's Maximum connections property. If a connection is put back in the free pool within this 3-minute period, the Connection Manager immediately takes it out of the pool again and passes it to the waiting application. However, if the timeout period elapses, a ConnectionWaitTimeoutException is thrown.
So the Connection timeout is basically how long your application waits for the next available connection, assuming the factory can't create new connections because it's maxed out. If you find yourself hitting this ceiling, increase the factory's Maximum connections property.
Hey I'm using Glassfish open source v4 and I'm having a weird problem.
I have defined a JDBC connection pool to Oracle 11g in the admin console and I've set :
Pool Settings
Initial and Minimum Pool Size: 500
Maximum Pool Size: 1000
Pool Resize Quantity: 750
And I've created a specific user for this connection pool. Yet sometimes when I inspect open connections in the database I see that there are more than 1000 (the maximum I've seen was 1440).
When this happens any query attempt fails, sometimes with an OutOfMemory exception; some show HTTP thread interruptions, and some show no logs at all and just take a long time.
What I am wondering is: how is it possible that Glassfish opens more connections than I've configured it to?
1st, try to compare the output from netstat on the application server and DB server sides. You may have some "dangling" connections. Also try to find documentation about DCD (Dead Connection Detection) in Oracle.
A few years ago I saw situations where the Java application server thought a connection was dead because it had not responded for a few minutes. That connection was put on a dead-connection list and a new connection was created.
There can also be network issues - for example, a firewall between the application and DB server.
When a TCP connection is not active for one hour, it gets cut on one side, but the DB server does not know about it.
The usual way to investigate this is:
compare the output of both netstats (appl./db)
identify dangling TCP connections
translate the TCP connection to the Unix process id (PID) of the Oracle session process
translate the PID to the Oracle session (SID and SERIAL#)
kill the session at the Oracle level (alter system kill session ...)
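The PID-to-session translation in the steps above is usually done by joining the standard v$process and v$session views. A sketch, where the SPID value ('12345' here is a placeholder) is the OS process id identified via netstat:

```sql
-- Map an OS process id (SPID, found via netstat) to the Oracle session.
SELECT s.sid, s.serial#
  FROM v$session s
  JOIN v$process p ON p.addr = s.paddr
 WHERE p.spid = '12345';

-- Then, as a privileged user, kill it with the sid/serial# pair:
-- ALTER SYSTEM KILL SESSION 'sid,serial#';
```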
When the Oracle 10 databases are up and running fine, OCILogon2() connects immediately. When the databases are turned off or inaccessible due to network issues, it fails immediately.
However, when our DBAs go into emergency maintenance and block incoming connections, it can take 5 to 10 minutes to time out.
This is problematic for me since I've found that OCILogon2() isn't thread safe, so we can only use it serially - and I connect to quite a few Oracle DBs. 3 blocked servers x 5-10 minutes = 15 to 30 minutes of lockup time.
Does anyone know how to set the OCILogon2 connection timeout?
Thanks.
I'm currently playing with OCI and it seems to me that it's impossible.
The only way I can think of is to use non-blocking mode. You'll need OCIServerAttach() and OCISessionBegin() instead of OCILogon() in this case. But when I tried this, OCISessionBegin() constantly returns OCI_ERROR with the following error code:
ORA-03123 operation would block
Cause: The attempted operation cannot complete now.
Action: Retry the operation later.
It looks strange and I don't yet know how to deal with it.
A possible workaround is to run your logon in another process, which you can kill after a timeout...
We think we found the right file setting - but it's one of those problems where we have to wait until something rare and horrible occurs before we can verify it :-/
[sqlnet.ora]
SQLNET.OUTBOUND_CONNECT_TIMEOUT=60
From the Oracle docs..
http://download.oracle.com/docs/cd/B28359_01/network.111/b28317/sqlnet.htm#BIIFGFHI
5.2.35 SQLNET.OUTBOUND_CONNECT_TIMEOUT
Purpose
Use the SQLNET.OUTBOUND_CONNECT_TIMEOUT parameter to specify the time, in seconds, for a client to establish an Oracle Net connection to the database instance.
If an Oracle Net connection is not established in the time specified, the connect attempt is terminated. The client receives an ORA-12170: TNS:Connect timeout occurred error.
The outbound connect timeout interval is a superset of the TCP connect timeout interval, which specifies a limit on the time taken to establish a TCP connection. Additionally, the outbound connect timeout interval includes the time taken to be connected to an Oracle instance providing the requested service.
Without this parameter, a client connection request to the database server may block for the default TCP connect timeout duration (approximately 8 minutes on Linux) when the database server host system is unreachable.
The outbound connect timeout interval is only applicable for TCP, TCP with SSL, and IPC transport connections.
Default
None
Example
SQLNET.OUTBOUND_CONNECT_TIMEOUT=10