Oracle UCP pool leaking cursors?

Our application has been using OracleDataSource successfully for several years and we are now evaluating switching to the new Oracle Universal Connection Pool (UCP).
With the new UCP pool, our application runs into ORA-01000: maximum open cursors exceeded after some time.
Some people seem to have had similar problems:
https://stackoverflow.com/a/4683797/217862
https://stackoverflow.com/a/29892459/217862
Is there any known workaround / fix?
Note: We do close sessions and statements correctly and follow all known JDBC/Hibernate best practices. The app runs 24/7, the data access layer code is more than 8 years old, and it has been exhaustively tested. We are using Oracle 12c.

Well, it turned out we only thought we were following all known best practices. In some places we were using ScrollableResults without closing them properly. In that case the underlying cursor apparently leaks, even after the Hibernate session is closed. We fixed all occurrences we found in the code, and as an additional defensive measure we configured the pool option maxConnectionReuseTime to ensure connections are renewed periodically.
Note: it didn't take us a year to find the problem, only a few days; I simply forgot to answer the question after we figured it out...
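The fix is to guarantee that every ScrollableResults is closed on every code path. The sketch below is a minimal, self-contained illustration of the pattern only: the `ScrollableResults` class here is a hypothetical stub standing in for `org.hibernate.ScrollableResults` (which extends AutoCloseable in Hibernate 5.2+), and the cursor count is simulated, not a real Oracle cursor.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ScrollableResultsLeakDemo {
    // Simulated count of open server-side cursors.
    static final AtomicInteger openCursors = new AtomicInteger();

    // Stub standing in for org.hibernate.ScrollableResults.
    static class ScrollableResults implements AutoCloseable {
        ScrollableResults() { openCursors.incrementAndGet(); }
        boolean next() { return false; }  // no rows in this demo
        @Override public void close() { openCursors.decrementAndGet(); }
    }

    static ScrollableResults scroll() { return new ScrollableResults(); }

    public static void main(String[] args) {
        // Leaky pattern: the cursor stays open because close() is never
        // called -- closing the Hibernate session does not release it.
        ScrollableResults leaked = scroll();
        while (leaked.next()) { /* process row */ }

        // Fixed pattern: try-with-resources closes on every path,
        // including exceptions.
        try (ScrollableResults rs = scroll()) {
            while (rs.next()) { /* process row */ }
        }
        System.out.println("open cursors: " + openCursors.get());
    }
}
```

In real code the same shape is `try (ScrollableResults rs = query.scroll()) { ... }` (or try/finally with `rs.close()` on pre-5.2 Hibernate). The defensive pool setting mentioned above is, if I recall correctly, set via `PoolDataSource.setMaxConnectionReuseTime(seconds)` in UCP.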

Related

Oracle Entity Framework/Managed Data Access Core and connection pool leak with proxy user

We have recently upgraded from Oracle.ManagedDataAccess.EntityFramework to Oracle.EntityFrameworkCore (we are on .net standard 2.0). When we connect to the database we use proxy credentials, with the following connection string:
User Id=changingUserId;Data Source=dbname;Proxy User Id=proxyUserId;Proxy Password=proxyUserPassword;
The UserID element changes based on who is connecting.
The problem we have is that the connection pools are no longer working as expected: many connections are being spawned and never closed, so we very quickly reach the pool size limit and everything grinds to a halt. Before the upgrade, pools would increase and decrease in size, but now they only grow!
Reading the Oracle docs, it appears the connection string must be identical for connection pooling to work correctly, but I don't see how that is possible when we are using proxy users. Has anyone else come across this or got around it, or am I missing something?
Thanks
Chris
We have found a workaround: adding the user's password to the connection string makes it work as expected. The connection pool no longer fills up, and connection numbers rise and fall again.
User Id=changingUserId;Password=usersPwd;Data Source=dbname;Proxy User Id=proxyUserId;Proxy Password=proxyUserPassword;
This isn't ideal for us - authentication/authorisation is handled elsewhere - but it will do for now. We are raising a call with Oracle as I suspect this is a bug in their library.

How to pool a connection in google apps script without ScriptDB?

I am looking to pool a connection in a similar way as described in this question. However I do not wish to use ScriptDB which was deprecated in May 2014.
How would this be achieved without the ScriptDB class? Not use connection pooling in Google Apps Script?
Actually, the answer in the linked question does not work: you cannot store JDBC connection objects in ScriptDB, or anywhere else for that matter. It is not possible to build a connection pool or cache in Apps Script. Even if you `stringify` a connection, it will not work when you parse it back.
You always have to create a new connection for every instance/execution of the script.

ojdbc6-11.2.0.3 and available entropy

We are having some issues with an application of ours which we are attempting to diagnose. While taking a close look at things, we think we may be having some DBCP connection pool issues.
Among other things, we discovered entropy exhaustion via a secondary support application (a small JDBC-based SQL client for monitoring the DB) that uses the same driver as the main application. After applying the fix noted in Oracle JDBC intermittent Connection Issue to this small utility, the issue went away.
At that time, we suspected the main application could be suffering from the same problem. We did not apply the same fix at this point, but rather we've started monitoring available entropy via /proc/sys/kernel/random/entropy_avail every 5 seconds to validate.
After reviewing the data for a 24-hour period, we do not see the same drop in available entropy as with the JDBC utility before it used /dev/urandom. Rather, we noticed that the available entropy never drops below 128 bits nor climbs above 191 bits (entropy_avail reports bits, not bytes). We have searched the application configuration files and can't find anything that specifies the random number source.
OS: Red Hat Enterprise Linux Server release 6.3 (Santiago)
JDBC Driver: ojdbc6-11.2.0.3
Pooling Method: Hibernate DBCP
So, my questions are:
1) If we've not knowingly told the application/driver to use /dev/urandom vs /dev/random, what would possibly explain why we don't see the same entropy drop when new pool connections are created?
2) Why would the minimum and maximum available entropy be so rigid at 128/191? I would expect a little more, pardon the pun, randomness in these values.
I hesitate to go posting a bunch of configuration files not knowing which may be relevant. If there is something particular you'd like to see, please let me know and I will share.
Does your application use JDBC connection pooling or does it make authentication attempts as frequently as your test application did/does?
Keep in mind that each authentication attempt consumes the random pool.
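If the main application does turn out to be blocking on /dev/random, the usual workaround (the same fix referenced in the question) is to point the JVM at the non-blocking device. A config sketch, assuming a standard HotSpot JVM; the jar name is a placeholder:

```shell
# JVM flag -- note the "/./": historically needed so the JDK does not
# silently map plain file:/dev/urandom back to /dev/random.
java -Djava.security.egd=file:/dev/./urandom -jar app.jar
```

The same thing can be set permanently via `securerandom.source=file:/dev/./urandom` in the JRE's `java.security` file.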

To close or not to close an Oracle Connection?

My application has performance issues, so I started to investigate from the root: the connection with the database.
The best practice says "open a connection, use it, and close it as soon as possible", but I don't know the overhead this causes, so my questions are:
1 - Is "open, use, close the connection as soon as possible" the best approach when using ODP.NET?
2 - Is there a way to use connection pooling with ODP.NET, and how?
I am thinking about creating a list to store some connection strings and writing logic to choose the "best" connection every time I need one. Is this the best way to do it?
Here is a slide deck containing Oracle's recommended best practices:
http://www.oracle.com/technetwork/topics/dotnet/ow2011-bp-performance-deploy-dotnet-518050.pdf
You automatically get a connection pool when you create an OracleConnection. For most middle tier applications you will want to take advantage of that. You will also want to tune your pool for a realistic workload by turning on Performance Counters in the registry.
Please see the ODP.NET online help for details on connection pooling. Pool settings are added to the connection string.
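For example, pool settings are plain attributes in the connection string. A sketch only: the attribute names are standard ODP.NET ones, but the credentials are placeholders and the values are arbitrary and should be tuned to your workload:

```
User Id=myUser;Password=myPwd;Data Source=dbname;Pooling=true;Min Pool Size=5;Max Pool Size=50;Incr Pool Size=5;Decr Pool Size=2;Connection Lifetime=120
```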
Another issue people run into a lot with OracleConnections is that the garbage collector does not realize how truly resource intensive they are and does not clean them up promptly. This is compounded by the fact that ODP.NET is not fully managed and so some resources are hidden from the garbage collector. Hence the best practice is to Close() AND Dispose() all Oracle ODP.NET objects (including OracleConnection) to force them to be cleaned up.
This particular issue will be mitigated in Oracle's fully managed provider (a beta will be out shortly)
(EDIT: ODP.NET, Managed Driver is now available.)
Christian Shay
Oracle
ODP.NET is a data provider for ADO.NET.
The best practice for ADO.NET is: open, get the data into memory, close, then use the in-memory data.
For example, use an OracleDataReader to load data into an in-memory DataTable and then close the connection.
[]'s
For a single transaction this is best, but for multiple statements committed together at the end it might not be: you need to keep the connection open until the transaction is either committed or rolled back. How do you manage that, and how do you check that the connection still exists in that case (i.e. after a network failure)? There is a ConnectionState.Broken property, but it does not work at this point.

Is there an Oracle Open Cursor (ORA-01000) leak in ColdFusion?

Using CFMX7 and Oracle 10g Enterprise on a query-intensive and active web site, I'm having a problem where some of the Oracle connections in my web server's connection pool accumulate open cursors. (In JDBC parlance this might be called a ResultSet object leak.)
This is a confusing situation in Oracle; read here for an explanation.
http://www.orafaq.com/node/758
Anyhow, it's not cached PreparedStatements that are leaking; it's actually ResultSets.
My DBAs have set the OPEN_CURSORS parameter to 500 per connection. Fairly frequently, my connections get up to 450+, which triggers a DBA alarm (because we hope to avoid smacking web app users with ORA-01000 cursor exhaustion errors).
Does anybody know if there's a bug in ColdFusion (MX7) that causes this problem? Is there any way to programmatically use CF to generate a ResultSet object leak (a cfquery leak, in CF terms)? Any suggestions?
Here is some information that might be helpful.
http://jehiah.cz/a/maximum-open-cursors-exceeded