I decided to use DBCP mainly because I was getting timeouts on my database connections. In theory, once you define a "validation query", DBCP will by default run that query on the connection before using it, so you always know the connection is OK.
I set it up two weeks ago and it seemed to work. However, last night I got a timeout exception on a connection.
On my dev machine the code survives a MySQL restart without a problem, so I guess DBCP is doing something.
How should I proceed investigating this? Do you use DBCP for this purpose?
(Just deleted 50 lines or so of more detail, trying to keep the question readable. Let me know if some crucial info is missing).
EDIT: Guess I should have read this question before I started...
This is not always true. I ran into similar problems, but it turned out that DBCP has a default behavior that is not well documented.
You must set the time between eviction runs to purge idle connections. This can be set on a SharedPoolDataSource object by calling setTimeBetweenEvictionRunsMillis. If this is never set, the default value is negative, and the thread never runs!
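A minimal sketch of that idea, shown here on commons-dbcp2's BasicDataSource rather than SharedPoolDataSource (the URL, credentials and validation query are placeholders, and setter names can differ slightly between DBCP versions):

import org.apache.commons.dbcp2.BasicDataSource;

public class PoolConfig {
    public static BasicDataSource createDataSource() {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:mysql://localhost:3306/mydb"); // placeholder URL
        ds.setUsername("user");                        // placeholder credentials
        ds.setPassword("secret");
        ds.setValidationQuery("SELECT 1");             // query used to test connections
        ds.setTestOnBorrow(true);                      // validate before handing a connection out
        ds.setTestWhileIdle(true);                     // also validate idle connections
        ds.setTimeBetweenEvictionRunsMillis(300_000);  // run the evictor every 5 minutes (default is negative = never)
        ds.setMinEvictableIdleTimeMillis(600_000);     // evict connections idle for more than 10 minutes
        return ds;
    }
}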
I have a Spring Boot app that uses HikariCP for Postgres connection pooling.
Recently I've set up tracing to collect some data how time is spent when handling a request to a specific endpoint.
My assumptions are that when using HikariCP:
The first connection to the database while handling the request might be a bit slower
Subsequent connections to the database should be fast (< 10 ms)
However, as the trace shows, the first connection is fast (< 10 ms). And while some subsequent connections during the same request handling are also fast (< 10 ms), I frequently see some subsequent connections taking 50-100ms, which seems quite slow to me, although I'm not sure if this is to be expected or not.
Is there anything I can configure to improve this behavior?
Maybe good to know:
The backend in question doesn't really see any other traffic right now, so it's only handling traffic when I manually send requests to it
I've changed maximumPoolSize to 1 to rule out that the problem comes from different connections being used within a single request; the same behavior is still seen.
I use the default Hikari settings; I don't change them.
I do think something is wrong with your pool configuration or your usage of the pool if it takes roughly 10 ms to get an already initialized connection from your pool. I would expect it to be sub-millisecond... Are you sure you are using the pool correctly?
Make sure you are using the newest versions of the pool and driver that you can, and make sure that connectionTestQuery is not set, as that would execute a query every time a connection is obtained from the pool. The defaults should be good enough for the rest of the settings.
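For reference, a minimal programmatic HikariCP setup would look roughly like this sketch (the URL and credentials are placeholders); connectionTestQuery is deliberately left unset so that Hikari can validate connections via the JDBC4 Connection.isValid() check rather than an extra query:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class HikariSetup {
    public static HikariDataSource createDataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb"); // placeholder URL
        config.setUsername("user");                                 // placeholder credentials
        config.setPassword("secret");
        // connectionTestQuery is intentionally NOT set: with a JDBC4 driver,
        // Hikari validates connections with Connection.isValid() instead of
        // issuing an extra test query.
        return new HikariDataSource(config);
    }
}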
Debug logs are one thing that could help figure out what is happening; metrics on the pool are another. Have a look at Spring Boot Actuator, it will help you with that...
To answer your actual question on how you can improve the situation given it actually takes roughly 10 ms to obtain a connection: Do not obtain and return the connection to the pool for every query... If you do not want to pass the connection around in your code, and if it suits your use case, you can make this happen easily by making sure your whole request is wrapped in a transaction. See the Spring guide on managing transactions.
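As a rough sketch of that last suggestion (the service class, table names and queries below are made up for illustration), annotating the request-handling method with Spring's @Transactional makes all queries inside it share one transaction, and therefore one pooled connection:

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService { // hypothetical service

    private final JdbcTemplate jdbcTemplate;

    public OrderService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Everything inside this method runs in a single transaction, so Spring
    // binds one pooled connection to it instead of borrowing a fresh
    // connection from the pool for every query.
    @Transactional
    public void handleRequest(long orderId) {
        jdbcTemplate.queryForObject(
                "select count(*) from orders where id = ?", Long.class, orderId); // hypothetical query
        jdbcTemplate.queryForList(
                "select * from order_lines where order_id = ?", orderId);         // hypothetical query
        // ... more queries, same connection
    }
}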
Our application has been using OracleDataSource successfully for several years and we are now evaluating switching to the new Oracle Universal Connection Pool (UCP).
With the new UCP pool, our application runs into ORA-01000: maximum open cursors exceeded after some time.
Some people seem to have had similar problems:
https://stackoverflow.com/a/4683797/217862
https://stackoverflow.com/a/29892459/217862
Is there any known workaround / fix?
Note: We do close sessions and statements correctly and follow all known JDBC/Hibernate best practices. The app runs 24/7, and the data access layer code is more than 8 years old and has been exhaustively tested. We are using Oracle 12c.
Well, it turned out that we only thought we were following all known best practices. In some places we were using ScrollableResults without closing them properly, which apparently leaks the underlying cursor even after the Hibernate session is closed. We fixed all occurrences we found in the code, and as an additional defensive measure we configured the pool option maxConnectionReuseTime to ensure connections are renewed periodically.
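For anyone hitting the same thing, this is roughly what the two fixes look like (the entity, query and connection details are made up; the code assumes a Hibernate 5-style API and oracle.ucp.jdbc.PoolDataSource):

import java.sql.SQLException;

import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;
import org.hibernate.ScrollMode;
import org.hibernate.ScrollableResults;
import org.hibernate.Session;

public class CursorLeakFixes {

    // 1. Always close ScrollableResults, otherwise the underlying cursor leaks.
    static void iterate(Session session) {
        ScrollableResults results = session.createQuery("from Order") // hypothetical entity
                .scroll(ScrollMode.FORWARD_ONLY);
        try {
            while (results.next()) {
                Object row = results.get(0);
                // ... process row
            }
        } finally {
            results.close(); // releases the server-side cursor
        }
    }

    // 2. Defensive measure: recycle pooled connections periodically.
    static PoolDataSource createPool() throws SQLException {
        PoolDataSource pds = PoolDataSourceFactory.getPoolDataSource();
        pds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        pds.setURL("jdbc:oracle:thin:@//dbhost:1521/ORCL"); // placeholder URL
        pds.setUser("app");                                  // placeholder credentials
        pds.setPassword("secret");
        pds.setMaxConnectionReuseTime(3600); // renew each connection after an hour of use
        return pds;
    }
}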
Note: it didn't take us one year to find the problem, only a few days; I simply forgot to come back and answer the question after we figured it out...
I am an Oracle DBA, not a Java developer or WebSphere expert. We recently started using WebSphere in our environment, so the developers are still learning it, and I may not word my question properly. I did search the forums and saw two other questions like this. My question is more about how to troubleshoot this.
Websphere 8.5.0.2
Oracle 11.2.0.3
I see 20 open connections in the database, and all are inactive, so they are not processing. This is from Oracle's v$session; INACTIVE means the session is open but not doing anything, basically idle.
If they are inactive and not processing, they should be available for the connection pool to give to a new requester, assuming the DAO the Java developer is using closes the connection when done (including in the try/catch block). We confirmed that he is closing his connections.
Checks so far:
1. We reviewed the developers code. He is using standard java DAOs. He is closing his connection. He has a try/catch block and the first thing he does in the catch is close the connection.
2. My assumption is that this should cover the code path.
We don't see any errors raised in a log about 'closing' a connection.
My understanding of how a connection pool works:
1. Pool Manager opens a configurable set of connections to the database. In our case it is 20.
2. When an application requests a connection, the connection manager looks up the next available connection in the pool, then passes a reference to that connection to the requesting function.
Possibilities:
1. A really slow server. We are using VMs for development/test and do not have visibility into the servers to see if they are busy, so another VM could be using up CPU or disk.
Though lookups for available connections are lightweight, it is possible that the server is pegged at 100% CPU and we time out. The problem is, I don't have a way to look at this: no privileges and no access to someone who does.
2. Not closing connections. We checked pretty thoroughly and don't see any code paths (including exceptions) where he is not closing connections; the first thing he does in a catch is close the connection.
Any suggestions on where to look? I think it's an issue with a slow server, but I want to rule other things out. I would like to state again that I am not a Java developer or a WebSphere expert, so my question may be worded poorly.
the first thing he does in the catch is close the connection
Get the developer to introduce a finally block after the catch block and close the connection there, instead of in the catch block. Flow moves into catch only when an error occurs; on the normal flow the connection is never closed, so it is not released back to the pool.
try {
    // do something with the connection
}
catch (Exception ex) {
    // log the error
}
finally {
    // runs on both the normal and the error path
    if (conn != null) {
        conn.close();
    }
}
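On Java 7 or newer, a try-with-resources block is arguably cleaner, because the connection (and statement and result set) are closed automatically on both the normal and the error path. A sketch, assuming a javax.sql.DataSource named dataSource and the usual java.sql imports:

try (Connection conn = dataSource.getConnection();
     PreparedStatement ps = conn.prepareStatement("select 1"); // placeholder query
     ResultSet rs = ps.executeQuery()) {
    // do something with the results
}
catch (SQLException ex) {
    // log the error; conn, ps and rs are already closed at this point
}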
The symptoms you've described indicate a connection leak. Leaks are easy to solve (see ad-inf's response), but it can be hard to locate the source of the leak. Luckily, WAS comes with the ConnLeakLogic mechanism. After enabling it, you'll find entries in trace.log for connections that have been retrieved from the pool by the application and not returned for a longer period of time. Each entry also includes a stack trace of the Java thread from the time the connection was obtained; seeing those stack traces, the Java developer should be able to identify the offending code.
The c3p0 pool has an acquireRetryDelay parameter to set the time between connection acquire attempts.
Does tomcat7 jdbc-pool have the same functionality?
No, the Tomcat jdbc-pool does not automatically retry when an attempt to create a connection fails. It simply throws the exception it got as an SQLException (wrapping it if it was not one in the first place).
If it's your code that needs the retry, you can do it yourself by trying to get a connection several times, as in the sketch below. Or you might be able to write a patch; it does not seem too hard to do.
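A rough sketch of such a retry loop around DataSource.getConnection() (the attempt count and delay are arbitrary placeholders):

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

public class RetryingConnector {

    // Try to obtain a connection up to maxAttempts times, sleeping between attempts.
    public static Connection getConnectionWithRetry(DataSource ds, int maxAttempts, long delayMillis)
            throws SQLException, InterruptedException {
        SQLException lastFailure = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return ds.getConnection();
            } catch (SQLException ex) {
                lastFailure = ex;
                if (attempt < maxAttempts) {
                    Thread.sleep(delayMillis); // wait before the next attempt
                }
            }
        }
        throw lastFailure; // every attempt failed
    }
}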
You can also try a different library; they say BoneCP is quite good (although I have never tried it).
If I open up a JDBC connection in my Java code and forget to close it, will it remain open forever? Or is there a default timeout value that I can specify somewhere?
First of all, you really need to fix the code to close it. No excuses.
As to the concrete question: that depends on the DB server used. It's the DB server that will time out and reclaim the connection. Refer to the DB server-specific administration manual for the default and how to change it. In the case of MySQL, for example, it's the wait_timeout setting, which defaults to 28800 seconds (8 hours).
It depends on the specifics of the JDBC driver implementation. Some might have a timeout. Some might close the connection when the connection object is garbage collected and finalize is called, but (1) you don't know how long after you stop referencing the object it gets garbage collected, and (2) finalize is not reliably called.