I am an Oracle DBA, not a Java developer or WebSphere expert. We recently started using WebSphere in our environment, so the developers are still learning it, and I may not word my question properly. I did search the forums and saw two other questions like this; my question is more about how to troubleshoot this.
Websphere 8.5.0.2
Oracle 11.2.0.3
I see 20 open connections in the database, all inactive, so they are not processing. On the Oracle side this comes from v$session, where INACTIVE means the session is open but not executing anything; basically it is idle.
If they are inactive and not processing, they should be available for the connection pool to hand to a new requester, assuming the DAO the Java developer is using closes its connection when done (including in the try/catch block). We confirmed that he is closing his connections.
Checks so far:
1. We reviewed the developer's code. He is using standard Java DAOs and closing his connections. He has a try/catch block, and the first thing he does in the catch is close the connection.
2. My assumption is that this covers the code paths.
We don't see any errors about closing a connection raised in any log.
My understanding of how a connection pool works:
1. The pool manager opens a configurable set of connections to the database. In our case it is 20.
2. When an application requests a connection, the connection manager looks up the next available connection in the pool and passes a reference to that connection back to the requesting function (sketched below).
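To make that mental model concrete, here is a minimal, illustrative Java sketch of the borrow/return cycle a pool manager performs. All names here are hypothetical; WebSphere's real J2C pool adds validation, leak detection, and sizing logic on top of this basic idea.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Toy pool: pre-opens a fixed set of connections and hands out the next idle one.
public class ToyConnectionPool {
    private final BlockingQueue<Connection> idle;

    public ToyConnectionPool(String url, String user, String pwd, int size) throws SQLException {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(DriverManager.getConnection(url, user, pwd)); // e.g. size = 20
        }
    }

    // Hand the next idle connection to the requester, or fail if none frees up in time.
    public Connection borrow(long timeoutMs) throws InterruptedException, SQLException {
        Connection c = idle.poll(timeoutMs, TimeUnit.MILLISECONDS);
        if (c == null) {
            throw new SQLException("Timed out waiting for a free connection");
        }
        return c;
    }

    // Callers must give the connection back instead of leaking it, or the pool drains.
    public void giveBack(Connection c) {
        idle.offer(c);
    }
}

The key point for this question: a connection the database reports as INACTIVE is still unavailable to new requesters if the application borrowed it and never gave it back.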
Possibility:
1. A really slow server. We are using VMs for development/test, and we have no visibility into the hosts to see if they are busy, so another VM could be using up CPU or disk.
Though lookups for available connections are lightweight, it is possible that the server is pegged at 100% CPU and we time out. The problem is, I have no way to look at this: no privileges, and no access to someone who has them.
2. Not closing connections. We checked pretty thoroughly; we don't see any code paths (including exception paths) where he is not closing connections. The first thing he does in a catch is close the connection.
Any suggestions on where to look? I think it's an issue with a slow server, but I want to rule other things out. Again, I am not a Java developer or a WebSphere expert, so my question may be worded poorly.
the first thing he does in the catch is close the connection
Get the developer to introduce a finally block after the catch block and close the connection there instead of in the catch block. Flow moves into the catch only when an error occurs, so on the normal, successful path the connection is never closed and is not released back to the pool.
Connection conn = null;
try {
    conn = dataSource.getConnection(); // dataSource: your configured javax.sql.DataSource
    // do something with the connection
}
catch (SQLException ex) {
    // log the error
}
finally {
    // runs on both the normal and the error path
    if (conn != null) {
        try { conn.close(); } catch (SQLException ignore) { }
    }
}
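On Java 7 and later, a try-with-resources block is an even safer way to guarantee the close. A minimal sketch, assuming the connection comes from a javax.sql.DataSource named dataSource (the query is a placeholder):

try (Connection conn = dataSource.getConnection();
     PreparedStatement ps = conn.prepareStatement("SELECT 1 FROM DUAL")) {
    // use the statement; both resources are closed automatically,
    // even if an exception is thrown
}
catch (SQLException ex) {
    // log the error
}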
The symptoms you've described indicate a connection leak. Leaks are easy to fix (see ad-inf's response), but it can be hard to locate the source of the leak. Luckily, WAS comes with the ConnLeakLogic mechanism. After enabling it, the trace.log will contain entries for connections that have been retrieved from the pool by the application and not returned for a longer period of time. That information includes the stack trace of the Java thread at the time the connection was obtained; seeing those stack traces, a Java developer should be able to identify the offending code.
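If I remember correctly, the mechanism is enabled through the server's diagnostic trace specification with a string along these lines; treat the exact syntax as an assumption and verify it against the IBM documentation for your WAS version:

*=info:ConnLeakLogic=finest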
We recently upgraded from Oracle.ManagedDataAccess.EntityFramework to Oracle.EntityFrameworkCore (we are on .NET Standard 2.0). When we connect to the database we use proxy credentials, with the following connection string:
User Id=changingUserId;Data Source=dbname;Proxy User Id=proxyUserId;Proxy Password=proxyUserPassword;
The User Id element changes based on who is connecting.
The problem we have is that the connection pools no longer work as expected: many connections are spawned and never closed, so we very quickly reach the pool size limit and everything grinds to a halt. Before the upgrade, pools would grow and shrink, but now they only grow!
Reading the Oracle docs, it appears the connection string must be identical for connection pooling to work correctly, but I don't see how that is possible when we are using proxy users. Has anyone else come across this or got around it, or am I missing something?
Thanks
Chris
We have found a workaround: adding the user's password to the connection string makes it work as expected. The connection pool no longer fills up, and connection numbers rise and fall again.
User Id=changingUserId;Password=usersPwd;Data Source=dbname;Proxy User Id=proxyUserId;Proxy Password=proxyUserPassword;
This isn't ideal for us, as authentication/authorisation is handled elsewhere, but it will do for now. We are raising a call with Oracle, as I suspect this is a bug in their library.
I have a few places in my code that do not properly close database connections. This gets periodically reported in catalina.out with messages like: org.apache.commons.dbcp.AbandonedTrace$AbandonedObjectException: DBCP object created 2013-08-29 02:55:00 by the following code was never closed. These messages are then repeated for the other unclosed connections at various times over the next few hours.
By looking at other info in catalina.out, I can see that these particular messages were printed around 7:40 AM. I've seen other instances where they were reported the next day. My question is: what determines when these messages get printed to catalina.out? How does that work, exactly?
DBCP is open source, so you can look at the code yourself and find out. The way DBCP checks for abandoned connections is a form of cooperative garbage collection: when a connection is checked out from the connection pool, the pool first checks for abandoned connections and cleans them up.
So if no new connections have been requested for a few hours, abandoned connections will not be removed; when a connection is next requested from the pool (e.g. at the start of a business day), the pool will first remove all abandoned connections.
If you look at the code of borrowObject(), depending on the configuration it calls removeAbandoned(), which in turn revokes and logs the abandoned connections.
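For reference, abandoned-connection handling in DBCP 1.x is switched on through pool configuration. A hedged sketch with illustrative values (the URL and credentials are placeholders):

import javax.sql.DataSource;
import org.apache.commons.dbcp.BasicDataSource;

public class PoolConfig {
    public static DataSource create() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("oracle.jdbc.OracleDriver");
        ds.setUrl("jdbc:oracle:thin:@//dbhost:1521/ORCL");
        ds.setUsername("scott");
        ds.setPassword("tiger");
        ds.setRemoveAbandoned(true);       // reclaim connections that look abandoned...
        ds.setRemoveAbandonedTimeout(300); // ...after they have been checked out 300 seconds
        ds.setLogAbandoned(true);          // log the stack trace captured at checkout
        return ds;
    }
}

With these settings, the "was never closed" messages appear only when borrowObject() runs, which is why they cluster around the first connection requests of the day.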
My application has performance issues, so I started to investigate from the root: the connection with the database.
Best practice says "open a connection, use it, and close it as soon as possible", but I don't know the overhead this causes, so the questions are:
1. Is "open, use, close connections as soon as possible" the best approach when using ODP.NET?
2. Is there a way to use connection pooling with ODP.NET, and how?
I am thinking about creating a List to store some connection strings and writing logic to choose the "best" connection every time I need one. Is this the best way to do it?
Here is a slide deck containing Oracle's recommended best practices:
http://www.oracle.com/technetwork/topics/dotnet/ow2011-bp-performance-deploy-dotnet-518050.pdf
You automatically get a connection pool when you create an OracleConnection, and for most middle-tier applications you will want to take advantage of that. You will also want to tune your pool for a realistic workload by turning on Performance Counters in the registry.
Please see the ODP.NET online help for details on connection pooling. Pool settings are added to the connection string.
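As a hedged illustration of what those settings look like (the attribute values here are arbitrary; check the ODP.NET documentation for the full list and defaults):

User Id=scott;Password=tiger;Data Source=orcl;Min Pool Size=10;Max Pool Size=100;Incr Pool Size=5;Decr Pool Size=2;Connection Lifetime=120;Pooling=true;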
Another issue people run into a lot with OracleConnection objects is that the garbage collector does not realize how resource-intensive they really are and does not clean them up promptly. This is compounded by the fact that ODP.NET is not fully managed, so some resources are hidden from the garbage collector. Hence the best practice is to Close() AND Dispose() all Oracle ODP.NET objects (including OracleConnection) to force them to be cleaned up.
This particular issue will be mitigated in Oracle's fully managed provider (a beta will be out shortly)
(EDIT: ODP.NET, Managed Driver is now available.)
Christian Shay
Oracle
ODP.NET is a data provider for ADO.NET.
The best practice for ADO.NET is: open, get the data (into memory), close, then use the in-memory data.
For example, use an OracleDataReader to load data into a DataTable in memory, then close the connection.
Regards
For a single transaction this is best, but for multiple statements that you commit together at the end, it might not be the best solution: you need to keep the connection open until the transaction is either committed or rolled back. How do you manage that, and how do you check that the connection still exists in that case (e.g., after a network failure)? There is a ConnectionState.Broken property, but it does not work at this point.
We are running our JUnit 4 test suite against WebLogic 9 in front of an Oracle 10 database (using Hudson as a continuous integration server), and occasionally we get an ORA-12519 crash during script teardown. However, the error is very intermittent:
It usually happens for the same Test class
It doesn't always happen for the same test cases (sometimes they pass)
It doesn't happen for the same number of test cases (anywhere from 3-9)
Sometimes it doesn't happen at all, everything passes
While I can't guarantee this doesn't happen locally (when running against the same database, of course), I have run the same suite of tests multiple times with no issues.
Any ideas?
Don't know if this will be everybody's answer, but after some digging, here's what we came up with.
The error is obviously caused by the listener not accepting connections, but why would we get it when other tests could connect fine (we could also connect without problems through SQL*Plus)? The key to the issue wasn't that we couldn't connect, but that the failures were intermittent.
After some investigation, we found that some static data created during the class setup kept connections open for the life of the test class, creating new ones as it went. Now, even though all of the resources were properly released when this class went out of scope (via a finally{} block, of course), there were some cases during the run when this class would swallow up all available connections. (Okay, bad-practice alert: this was unit-test code that connected directly rather than using a pool, so the same problem could not happen in production.)
The fix was to stop creating that data statically in the class setup and instead create and release it in the per-method setUp and tearDown methods, as sketched below.
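A minimal JUnit 4 sketch of that change (the class name and connection details are hypothetical):

import java.sql.Connection;
import java.sql.DriverManager;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class CustomerDaoTest {
    private Connection conn;

    @Before
    public void setUp() throws Exception {
        // one connection per test method, instead of one held in a static field
        // for the life of the class
        conn = DriverManager.getConnection("jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
    }

    @After
    public void tearDown() throws Exception {
        if (conn != null) {
            conn.close(); // released before the next test method runs
        }
    }

    @Test
    public void daoWorks() throws Exception {
        // exercise the DAO with conn here
    }
}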
So if you get this error in your own apps, slap a profiler on that bad boy and see if you might have a connection leak. Hope that helps.
Another solution I have found, for a similar problem with the same error message, is to increase the number of service handlers. (My instance of this error was caused by too many connections in the WebLogic Portal connection pools.)
Run SQL*Plus and log in as SYSTEM. You should know what password you used during the installation of Oracle DB XE.
Run the command alter system set processes=150 scope=spfile; in SQL*Plus or any SQL-friendly IDE.
VERY IMPORTANT: restart the database to make the change in the SPFILE effective.
From here:
http://www.atpeaz.com/index.php/2010/fixing-the-ora-12519-tnsno-appropriate-service-handler-found-error/
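To see how close the instance actually is to the limit before and after the change, you can query the standard dynamic view (run as a privileged user):

SELECT resource_name, current_utilization, max_utilization, limit_value
FROM   v$resource_limit
WHERE  resource_name IN ('processes', 'sessions');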
I also had the same problem and searched many places for answers. I got many similar answers about changing the number of processes/service handlers. But I thought, what if I forget to reset it back?
Then I tried calling Thread.sleep() after each of my connection.close() calls.
I don't know how, but it works, at least for me.
If anyone wants to try it out and figure out how it works, please go ahead. I would also like to know, as I am a beginner in the programming world.
I had this problem in a unit test that opened a lot of connections to the DB via a connection pool and then "stopped" the connection pool (a ManagedDataSource, actually) to release the connections at the end of each test. I always ran out of connections at some point in the suite of tests.
Adding a Thread.sleep(500) in the teardown() of my tests resolved the issue. I think the connection pool's stop() releases the active connections on another thread, so if the main thread keeps running tests, the cleanup threads fall so far behind that the Oracle server runs out of connections. The sleep gives the background threads time to release the pooled connections.
This is much less of an issue in the real world because the DB servers are much bigger and there is a healthy mix of operations (not just endless DB connect/disconnect operations).
I had a similar issue. It happened every time I ran a pack of database (Spring JDBC) tests with SpringJUnit4ClassRunner, so I resolved it by putting the @DirtiesContext annotation on each test class in order to clean up the application context and release all resources, so that each test could run with a freshly initialized application context.
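A minimal sketch of that setup (the context location and class name are illustrative):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.annotation.DirtiesContext.ClassMode;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:test-context.xml")
@DirtiesContext(classMode = ClassMode.AFTER_EACH_TEST_METHOD)
public class RepositoryTest {
    @Test
    public void runsAgainstAFreshContext() {
        // the application context (and its pooled connections) is torn down
        // and rebuilt after each test method
    }
}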
We have a fairly standard client/server application built using MS RPC. Both client and server are implemented in C++. The client establishes a session to the server, then makes repeated calls to it over a period of time before finally closing the session.
Periodically, however, especially under heavy load conditions, we are seeing an RPC exception show up with code 1754: RPC_S_NOTHING_TO_EXPORT.
It appears that this happens in the middle of a session. The user is logged on for a while, making successful calls, then one of the calls inexplicably returns this error. As far as we can tell, the server receives no indication that anything went wrong - and it definitely doesn't see the call the client made.
The error code also appears to have permanent implications: having the client retry the connection doesn't work either. However, if the user has multiple sessions active simultaneously between the same client and server, the other connections are unaffected.
In essence, I have two questions:
Does anyone know what RPC_S_NOTHING_TO_EXPORT means? The MSDN documentation simply says: "No interfaces have been exported." ... Huh? The session was working fine for numerous instances of the same call up until this point...
Does anyone have any ideas as to how to identify the real problem? Note: Capturing network traffic is something we would rather avoid, if possible, as the problem is sporadic enough that we would likely go through multiple gigabytes of traffic before running into an occurrence.
Capturing network traffic would be one of the best ways to tackle this issue. If you can't do that, could you dump the client process and debug it with WinDBG or Visual Studio? Perhaps compare a dump taken during normal operation with one taken in the error state?