We are having an issue with connections staying idle in Oracle. To give you some background, our users connect to Denodo, which in turn has a data source that connects to Oracle. This data source uses a single user name and password and creates a pool. The pool has an initial size of 4 and a maximum of 20 active connections.
Connections come in from clients using JDBC, ODBC and so on. Some clients are other servers requesting data (Spotfire and BusinessObjects), and others are regular users who have developed scripts in R, Python, C# and other languages. They can also connect with tools such as DBeaver. The Oracle user is configured to allow up to 100 idle connections.
Now, users connect with their scripts, and they have code (which we have checked) that opens a connection to Denodo, runs a query, gets the data back, and closes the connection to Denodo. Denodo in turn does the same: it opens a connection to Oracle, passes the client's query to Oracle, gets the data and relays it back to the client. And this is the part we are not too sure about. We were expecting Denodo to close the connection to Oracle, and it does not. The connection stays open in Oracle and shows as idle. Eventually we have enough idle connections to fill up the quota set up for the user (100).
Based on this, we have done some tuning of the Denodo connection to Oracle and applied these settings:
FETCHSIZE = 10000
BATCHINSERTSIZE = 200
VALIDATIONQUERY = 'SELECT COUNT(*) FROM SYS.DUAL'
INITIALSIZE = 4 #initial size of pool
MAXIDLE = 25 #max number of idle connections
MINIDLE = 5 #min number of idle connections
MAXACTIVE = 20 #max active in the pool
EXHAUSTEDACTION = 1
TESTONBORROW = true
TESTONRETURN = false
TESTWHILEIDLE = false
TIMEBETWEENEVICTION = 300000 #time between evictions in milliseconds
NUMTESTPEREVICTION = 10 #10 connections to be evaluated for eviction
MINEVICTABLETIME = 900000 #min evictable time in milliseconds
POOLPREPAREDSTATEMENTS = false
MAXSLEEPINGPS = 4
INITIALCAPACITYPS = 8
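For reference, these parameter names line up with the Apache Commons DBCP / Commons Pool knobs. A rough stand-alone equivalent, purely as a sketch (the URL, user and password are placeholders, and Denodo manages its pool internally), would be:

import org.apache.commons.dbcp2.BasicDataSource;

// Sketch: the settings above expressed as a stand-alone Commons DBCP2 BasicDataSource.
// The URL and credentials are placeholders; note that DBCP2 renames "maxActive" to maxTotal.
public class OraclePoolSketch {
    public static BasicDataSource buildPool() {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:oracle:thin:@//dbhost:1521/SERVICE");   // placeholder URL
        ds.setUsername("pool_user");                            // placeholder credentials
        ds.setPassword("secret");
        ds.setInitialSize(4);                                   // INITIALSIZE
        ds.setMaxTotal(20);                                     // MAXACTIVE
        ds.setMaxIdle(25);                                      // MAXIDLE
        ds.setMinIdle(5);                                       // MINIDLE
        ds.setValidationQuery("SELECT COUNT(*) FROM SYS.DUAL"); // VALIDATIONQUERY
        ds.setTestOnBorrow(true);
        ds.setTestOnReturn(false);
        ds.setTestWhileIdle(false);
        ds.setTimeBetweenEvictionRunsMillis(300_000);           // evictor runs every 5 minutes
        ds.setNumTestsPerEvictionRun(10);
        ds.setMinEvictableIdleTimeMillis(900_000);              // evictable after 15 minutes idle
        ds.setPoolPreparedStatements(false);
        return ds;
    }
}

Note that, at least in plain DBCP, the evictor only closes connections that have sat idle inside the pool for longer than the minimum evictable time, and it never shrinks the pool below the minimum idle size, so a handful of idle Oracle sessions is expected by design.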
After applying these settings, we thought the idle connections would be cleared. The problem is that they have not been: you can see the connections start creeping in, and eventually the quota fills up again and no further connections are allowed.
What I would like to see is Denodo opening the connections it needs, using them and releasing them, not keeping connections idle in Oracle. The Oracle connections never seem to be evicted, and eventually they reach 100 again.
Any help would be appreciated
We migrated an application from WebSphere 8.5 (bundled with OpenJPA 2) and Oracle 12.1 to WebSphere Liberty 22 with Hibernate 5.4 and Oracle 19c, and did thorough testing, but no load tests.
After going to production (some 200-300 end users) we quickly hit ORA-01000, i.e. maximum open cursors exceeded. A fix seemed to be that our kind DBA raised OPEN_CURSORS from 500 to 1000, and we restarted the app.
As I understand it, the web server's prepared statement cache leads to open cursors in Oracle by design. I found an old IBM page saying that the number of cursors needed in Oracle equals
(number of cluster members) x (max connection pool size) x (prepared statement cache size)
In our application, Hibernate's statistics suggest that we have at least 1000 different statements. That would mean the number of cursors needed is at least 1000 times the total number of connections in use (which seems a bit ridiculous).
On the other hand, Oracle's documentation rather suggests that the number of open cursors is not shared across sessions. So the statement cache size by itself would determine the maximum number of open cursors needed per session.
NB: the statement cache size in WebSphere 8.5 was set to 100 and, of course, we never encountered ORA-01000 problems with that.
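If the per-session reading is right, the actual usage can be checked directly. A small JDBC sketch (assuming a monitoring user with access to the V$ views; the connection details are placeholders) that lists open cursors per Oracle session:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Sketch: report 'opened cursors current' per session, to compare against OPEN_CURSORS.
// The JDBC URL and credentials are placeholders; requires SELECT on the V$ views.
public class OpenCursorReport {
    public static void main(String[] args) throws Exception {
        String sql = "SELECT s.sid, s.username, st.value AS open_cursors "
                   + "FROM v$sesstat st "
                   + "JOIN v$statname sn ON sn.statistic# = st.statistic# "
                   + "JOIN v$session s ON s.sid = st.sid "
                   + "WHERE sn.name = 'opened cursors current' "
                   + "ORDER BY st.value DESC";
        try (Connection c = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/SERVICE", "monitoring_user", "secret");
             PreparedStatement ps = c.prepareStatement(sql);
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.printf("sid=%d user=%s open_cursors=%d%n",
                        rs.getLong("sid"), rs.getString("username"), rs.getLong("open_cursors"));
            }
        }
    }
}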
Questions
Is there a way to keep the prepared statement cache without having open cursors flying around? I.e. did we misconfigure Liberty/Hibernate?
Is IBM or Oracle correct? That is, is the maximum number of open cursors counted per session or overall?
I'm trying to configure a C3P0 JDBC connection pool so that it does not lock an Oracle DB account. It seems that acquireRetryAttempts and acquireRetryDelay are the important settings.
Looking at Oracle 12c docs, I see:
FAILED_LOGIN_ATTEMPTS
Specify the number of consecutive failed attempts to log in to the user account before the account is locked. If you omit this clause, then the default is 10 times.
Within what time frame do the 10 attempts apply? I.e., if I set acquireRetryAttempts to 9, what value of acquireRetryDelay will avoid locking the DB account?
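For context, the two settings in question would be configured roughly as in the sketch below (the driver class, URL and credentials are placeholders, and 9 attempts with a 1000 ms delay are just the values discussed above, not a recommendation):

import com.mchange.v2.c3p0.ComboPooledDataSource;

// Sketch of the C3P0 retry settings under discussion; URL and credentials are placeholders.
public class C3p0PoolSketch {
    public static ComboPooledDataSource buildPool() throws Exception {
        ComboPooledDataSource cpds = new ComboPooledDataSource();
        cpds.setDriverClass("oracle.jdbc.OracleDriver");   // may throw PropertyVetoException
        cpds.setJdbcUrl("jdbc:oracle:thin:@//dbhost:1521/SERVICE");
        cpds.setUser("app_user");
        cpds.setPassword("secret");
        cpds.setAcquireRetryAttempts(9);    // acquisition attempts before giving up
        cpds.setAcquireRetryDelay(1000);    // milliseconds between attempts
        return cpds;
    }
}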
You are asking for a time frame after which Oracle will forget about previous invalid login attempts? There is none.
Oracle maintains a column lcount in the SYS.USER$ table that has the number of consecutive invalid login attempts. It is only reset to zero upon a successful login.
If you don't want database accounts to be locked after too many failed password attempts, why not set FAILED_LOGIN_ATTEMPTS to UNLIMITED for the profile your connection pool user belongs to?
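If you go that route, it is a one-off change by a DBA. A minimal sketch, where the profile name app_pool_profile, the account pool_user and the connection details are all hypothetical:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Sketch (requires DBA privileges): put the pool account on a profile that never
// locks on failed logins. Profile name, account name and URL are hypothetical.
public class UnlimitedFailedLogins {
    public static void main(String[] args) throws Exception {
        try (Connection admin = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/SERVICE", "system", "secret");
             Statement st = admin.createStatement()) {
            st.execute("CREATE PROFILE app_pool_profile LIMIT FAILED_LOGIN_ATTEMPTS UNLIMITED");
            st.execute("ALTER USER pool_user PROFILE app_pool_profile");
        }
    }
}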
Oracle has the Universal Connection Pool (UCP), which is feature-rich and provides HA capabilities out of the box, so you could consider using UCP.
Also, RETRY_DELAY and RETRY_COUNT can be used in the connection descriptor, as shown below.
(DESCRIPTION =
(CONNECT_TIMEOUT=90) (RETRY_COUNT=20)(RETRY_DELAY=3) (TRANSPORT_CONNECT_TIMEOUT=3)
(ADDRESS_LIST =
(LOAD_BALANCE=on)
( ADDRESS = (PROTOCOL = TCP)(HOST=primary-scan)(PORT=1521)))
(ADDRESS_LIST =
(LOAD_BALANCE=on)
( ADDRESS = (PROTOCOL = TCP)(HOST=secondary-scan)(PORT=1521)))
(CONNECT_DATA=(SERVICE_NAME = gold-cloud)))
RETRY_COUNT: specifies the number of network connect retry attempts before a failure is returned to the client. In the example above, Oracle Net retries the connection 20 times before returning an error message to the client. This increases the chances of eventually getting a connection.
RETRY_DELAY: specifies the wait time in seconds between reconnection attempts. It works in conjunction with RETRY_COUNT, so it is advisable to set RETRY_DELAY and RETRY_COUNT together to avoid wasting CPU cycles on back-to-back retries.
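For completeness, a minimal UCP setup looks roughly like the sketch below. The pool sizes, user and password are placeholders, and the URL could equally be the full (DESCRIPTION=...) string shown above:

import java.sql.Connection;
import java.sql.SQLException;
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

// Sketch of a basic UCP pool; credentials and sizes are placeholders.
public class UcpExample {
    public static void main(String[] args) throws SQLException {
        PoolDataSource pds = PoolDataSourceFactory.getPoolDataSource();
        pds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        pds.setURL("jdbc:oracle:thin:@//primary-scan:1521/gold-cloud"); // or the full descriptor above
        pds.setUser("app_user");
        pds.setPassword("secret");
        pds.setInitialPoolSize(4);
        pds.setMinPoolSize(4);
        pds.setMaxPoolSize(20);
        pds.setValidateConnectionOnBorrow(true);

        try (Connection conn = pds.getConnection()) {
            // use the connection; close() returns it to the pool rather than closing it
        }
    }
}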
We are using Maximo 6.2 on WebSphere 6.
In the systemOut.log file I can see that the total number of connected users keeps increasing day by day.
Server Host:xxxxx ,Server Name:MXServer ,Number of Users:666
Total number of users connected to the system:666
Memory Total = 1073216000 ,Free = 676880592
But only 10 users have logged into the system.
Does this indicate a DB connection leak?
This is generally a representation of what is in the maxsession table; as users time out or log out, the count should decrease. In your web.xml, do you have a session timeout defined? If it is set to 0, it's possible that users are simply closing the browser rather than signing out of the application, so their sessions are never timed out because there is no timeout. If that is the case, you should define a timeout to ensure inactive users are removed from maxsession.
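For example, a web.xml fragment along these lines would time sessions out after 30 minutes of inactivity (30 is just an illustrative value):

<session-config>
    <!-- session timeout in minutes; 0 or a negative value means sessions never expire -->
    <session-timeout>30</session-timeout>
</session-config>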
In JMeter (v2.13 r1665067), I'm using the tearDown Thread Group to delete all of the left-over records that remain after a test run.
The odd thing I can't quite figure out:
When the Thread Group is executed in isolation (i.e., by itself), I am able to see the left-over records are deleted from within the database.
When the Thread Group is run as part of the full run (i.e., the full end-to-end test plan), the left-over records are not deleted from within the database.
Looking in SQL Profiler, it "appears" the DELETE is sent, yet the records remain in the database. Could it be my Constant Throughput Timer settings or some other timing issue? Can anyone shed any light on why this happens only during a full run?
In the Test Plan, the option Run tearDown Thread Groups after shutdown of main threads is enabled.
Here's what is in my tearDown Thread Group:
JDBC Connection Configuration
Variable Name = myPool
Connection Pool Config
Max # of Connections = 10
Pool Timeout = 5000
Idle Cleanup Interval (ms) = 60000
Auto Commit = True
Transaction Isolation = TRANSACTION_SERIALIZABLE
Connection Validation by Pool
Keep-Alive = True
Max Connections Age = 5000
Validation Query = null
JDBC Request 1
Variable Name = myPool
Query Type = Prepared Update Statement
DELETE FROM Foo
WHERE Foo.QualifierObjId IN
(SELECT Bar.ObjId FROM Bar WHERE Bar.DsplyName like '%myTest%');
JDBC Request 2
Variable Name = myPool
Query Type = Prepared Update Statement
DELETE FROM Bar WHERE Bar.DsplyName like '%myTest%';
JDBC Request 3
Variable Name = myPool
Query Type = Prepared Update Statement
DELETE FROM Master WHERE Master.DsplyName like '%myTest%';
Solution
If you are using multiple JDBC Connection Configurations spread across multiple Thread Groups, be sure to bind a unique Variable Name to each pool. I was using "myPool" for every JDBC Connection Configuration (basically due to copy/paste) and that was causing the issue. (my bad!)
The solution is:
Thread Group 1, JDBC Connection Configuration, Variable Name = myFooPool
Thread Group 2, JDBC Connection Configuration, Variable Name = myBarPool
tearDown Thread Group, JDBC Connection Configuration, Variable Name = myTearDownPool
Creating a unique variable name for each pool keeps the JDBC configurations clearly separated and avoids issues like mine. Hope this helps someone else.
In WAS, when I create a datasource I can edit the connection pool properties (number of active connections, max number of active connections). Now if I set max = 20, and 1000 user requests come into WAS, each running in its own thread and each wanting a connection, then in essence I am reduced to 20 parallel threads.
Is this right? Because a connection object cannot be shared between threads.
I ask because most of the time I see this parameter set to a max of 20-30, when the peak number of simultaneous requests to the server is clearly well over a thousand. It seems we are able to service only 20 requests at a time?
Not really. Connection pooling eliminates the overhead of creating and closing connections by reusing them across database accesses.
If you have a thousand requests and a maximum pool size of 20, then 20 database accesses will be performed in parallel, and as each request releases its connection, the same connection will be reused to serve another request. This assumes that database access is short-lived, and that once data is fetched / inserted / updated, the connection is released for another request.
On the other hand, a request that cannot get a database connection because the pool is fully in use (say, the 21st request) will wait until some connection is released, so the condition is transparent on the client side. All of this assumes that the code requests and releases connections from the pool efficiently. The timeout for waiting requests is also a configurable property of the pool.
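In code terms, "requesting and releasing efficiently" usually means scoping the connection to one unit of work, for instance with try-with-resources. A sketch, where the DataSource lookup and the query are placeholders:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;

// Sketch: borrow a pooled connection only for the duration of one query.
// Closing the connection returns it to the pool, freeing it for the next waiting request.
public class CustomerDao {
    private final DataSource ds;   // injected or looked up from JNDI; details omitted

    public CustomerDao(DataSource ds) {
        this.ds = ds;
    }

    public String findName(long id) throws Exception {
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "SELECT name FROM customer WHERE id = ?")) {   // placeholder query
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("name") : null;
            }
        }   // connection, statement and result set are all released here
    }
}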
You can tune these values to get the most out of the pool, and you can also consider alternatives to avoid exhausting it with repetitive queries (e.g., always the same select for every request), such as a database cache (e.g., Open Terracotta).
Hope this helps!