I have an ASP.NET MVC application connecting to an Oracle DB. I am using LINQ in my controller to pull data from the Oracle DB.
After the page is loaded and sits idle for several minutes, it gives the above error.
Now I can't ask my DBA to increase the idle time. In my research I saw mentions of pooling in the Web.config file. My understanding is that, because of pooling, some of these connections are still held open. I have removed this portion:
Min Pool Size=1;Max Pool Size=20;Pooling=true
Do I have to explicitly set this in my Web.config:
Pooling=false
I also have a Dispose method in my controller, as below, but that doesn't help:
protected override void Dispose(bool disposing)
{
    if (disposing)
    {
        // Dispose the data context so its underlying connection is released.
        db.Dispose();
    }
    base.Dispose(disposing);
}
Please help.
If the DBA sets an idle timeout on the server for connections, then configure your connection pool with this option:
Min Pool Size=0;
The default is 1. With a minimum pool size of 0, the ODP.NET client will not hold any open idle connections in the pool while the application is idle. It will still grow the pool when connection requests come in, and it will likely be slightly less efficient at satisfying initial requests since it has to create those connections, but it will work and will not run into the idle-timeout issue.
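As a sketch, the relevant part of the Web.config connection string might then look like this (the data source alias and credentials are placeholders):
Data Source=MyTnsAlias;User Id=app_user;Password=secret;Pooling=true;Min Pool Size=0;Max Pool Size=20;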
I agree with some of the comments that there shouldn't be an idle timeout set on the server for these cases, but I've found that some organizations insist on doing this for security reasons.
Here's an approach. When you resume using your connection after a potential idle delay, such as waiting for an incoming request, do this:
1. Run some cheap no-op query like SELECT 1 FROM DUAL;
2. If you get the error you mentioned, make sure the connection is completely closed, then open a new one.
3. Use the connection as normal.
This is a bit of a hack compared to organizing your connection pool properly, but it's better than opening up a new connection every time you need one.
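A minimal sketch of that check-then-reopen flow, written here with plain JDBC for illustration (the URL and credentials are placeholders; the same sequence applies with ODP.NET):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class ValidatedConnection {
    // Placeholder connection details; substitute your own URL and credentials.
    private static final String URL = "jdbc:oracle:thin:@//dbhost:1521/ORCL";
    private Connection con;

    // Returns a connection that has just answered a cheap no-op query;
    // if the idle connection was killed by the server, close it and reopen.
    synchronized Connection getValidConnection() throws SQLException {
        if (con != null) {
            try (Statement st = con.createStatement()) {
                st.executeQuery("SELECT 1 FROM DUAL"); // cheap validation query
                return con;                            // still alive, reuse it
            } catch (SQLException dead) {
                try { con.close(); } catch (SQLException ignore) { }
            }
        }
        con = DriverManager.getConnection(URL, "app_user", "secret");
        return con;
    }
}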
Can someone help me understand the HikariCP implementation better? We currently have the following settings in use:
spring.datasource.hikari.maxLifetime=600000
spring.datasource.hikari.maximumPoolSize=3
spring.datasource.hikari.minimumIdle=1
What we see is that after 10 minutes the application's connections are cleared, and then it struggles to make new connections to the DB, resulting in sessions-exceeded errors from the DB. We understand what those are, but it seems that rather than reuse the existing sessions, Hikari is trying to create new ones, and the DB is not allowing that, as it is supposed to.
Why is Hikari trying to establish new connections instead of using the connections from the sessions already available on the DB?
How does maxLifetime work? Reading the documentation on the HikariCP GitHub page, it seems straightforward enough.
We have also set
spring.datasource.hikari.idle-timeout=10000
in the hope that it would not kill those connections quite so quickly, but it still seems to remove them...
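For reference, a configuration sketch assuming the goal is to retire connections a little before the database-side limit kicks in (the values below are placeholders, not tuned recommendations):

spring.datasource.hikari.maximumPoolSize=3
spring.datasource.hikari.minimumIdle=1
# retire and replace connections shortly before any database-side limit (placeholder value)
spring.datasource.hikari.maxLifetime=540000
# idle connections above minimumIdle are kept for 5 minutes instead of 10 seconds
spring.datasource.hikari.idle-timeout=300000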
Thanks
I am an Oracle DBA, not a Java developer or WebSphere expert. We recently started using WebSphere in our environment, so the developers are still learning it, and I may not word my question properly. I did search the forums and saw two other questions like this. My question is more about how to troubleshoot it.
Websphere 8.5.0.2
Oracle 11.2.0.3
I see 20 open connections in the database. All are inactive, so they are not processing. In Oracle this is v$session; INACTIVE means the session is open but not doing anything. Basically it is idle.
If they are inactive and not processing, they should be available for the connection pool to hand to a new requester, assuming the DAO the Java developer is using is being closed when done (this includes the try/catch block). We confirmed that he is closing his connections.
Checks so far:
1. We reviewed the developer's code. He is using standard Java DAOs. He is closing his connection. He has a try/catch block, and the first thing he does in the catch is close the connection.
2. My assumption is that this should cover the code path.
We don't see any errors raised in a log about 'closing' a connection.
My understanding of how a connection pool works:
1. The pool manager opens a configurable set of connections to the database. In our case it is 20.
2. When an application requests a connection, the connection manager looks up the next available connection in the pool and passes a reference to that connection back to the requesting function.
Possibilities:
1. A really slow server. We are using VMs for development/test. We do not have visibility into the servers to see if they are busy, so another VM could be using up the CPU or disk.
Though lookups for available connections are lightweight, it is possible that the server is pegged at 100% CPU and we time out. The problem is that I don't have a way to look at this: no privileges, and no access to someone who has them.
2. Not closing connections: we checked pretty thoroughly. We don't see any code paths (including exceptions) where he is not closing connections. The first thing he does in a catch is close the connection.
Any suggestions on where to look? I think it's an issue with a slow server, but I want to rule other things out. I would like to state again that I am not a Java developer or a WebSphere expert, so my question may be worded poorly.
the first thing he does in the catch is close the connection
Get the developer to introduce a finally block after the catch block and close the connection in the finally block instead of in the catch block. The flow only moves into the catch in case of an error, so on the normal flow the connection is never released.
Connection con = null;
try {
    con = dataSource.getConnection(); // dataSource: the container-managed, pooled DataSource
    // do something with the connection
}
catch (Exception ex) {
    // log the error
}
finally {
    // release the connection back to the pool on both the normal and the error path
    if (con != null) {
        try { con.close(); } catch (Exception ignore) { }
    }
}
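On Java 7 and later, a try-with-resources block does the same job with less boilerplate; a minimal sketch, where the DataSource and the query are placeholders:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class DaoExample {
    // try-with-resources closes rs, ps and con in reverse order even when an
    // exception is thrown, so the connection always goes back to the pool.
    static int fetchOne(DataSource ds) throws SQLException {
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT 1 FROM DUAL");
             ResultSet rs = ps.executeQuery()) {
            return rs.next() ? rs.getInt(1) : -1;
        }
    }
}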
The symptoms you've described indicate a connection leak. Leaks are easy to fix (see ad-inf's response), but it can be hard to locate the source of the leak. Luckily, WAS comes with the ConnLeakLogic mechanism. After enabling it, you will find entries in trace.log related to connections that have been retrieved from the pool by the application and not returned for a longer period of time. That connection information also includes the stack trace of the Java thread at the time the connection was obtained. Looking at those stack traces, the Java developer should be able to identify the offending code.
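If I remember the WebSphere documentation correctly, it is enabled by adding something along the lines of *=info:ConnLeakLogic=finest to the server's diagnostic trace specification; please verify the exact trace string against the documentation for your WAS version.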
I'm getting ActiveRecord::ConnectionTimeoutError in a daemon that runs independently from the rails app. I'm using Passenger with Apache and MySQL as the database.
Passenger's default pool size is 6 (at least that's what the documentation tells me), so it shouldn't use more than 6 connections.
I've set ActiveRecord's pool size to 10, even though I thought my daemon should only need one connection. The daemon is one process with multiple threads that call ActiveRecord here and there to save stuff to the database it shares with the Rails app.
What I need to figure out is whether the threads simply can't share one connection, or whether they keep asking for new connections without releasing the old ones. I know I could just increase the pool size and postpone the problem, but the daemon can have hundreds of threads, and sooner or later the pool will run out of connections.
The first thing I would like to verify is that Passenger is indeed using just 6 connections and that the problem lies with the daemon. How do I test that?
Second, I would like to figure out whether every thread needs its own connection or whether they just need to be told to reuse the connection they already have. If they do need their own connections, maybe they just need to be told not to hold on to them when they're not using them? The threads are, after all, sleeping most of the time.
You can get to the connection pools that ActiveRecord is using through ActiveRecord::Base.connection_handler.connection_pools. It should be an array of connection pools; you will probably have only one in there, and it has a connections method on it that returns an array of the connections it knows about.
You can also call ActiveRecord::Base.connection_handler.connection_pools.each(&:clear_stale_cached_connections!) and it will check in any checked-out connections whose thread is no longer alive.
Don't know if that helps or confuses more.
As of February 2019, clear_stale_cached_connections! has been deprecated and replaced by reap.
Commit
Previous accepted answer updated:
ActiveRecord::Base.connection_handler.connection_pools.each(&:reap)
We have a web service written in Java that connects to an Oracle database for data extraction. Recently, we encountered too many inactive sessions in the Oracle database coming from JDBC, i.e. from our web service.
We are very sure that every connection is being closed and set to null after each process.
Can anyone help us with this? Why is it causing inactive sessions in the database, and what can be the solution?
Thank you.
What, exactly, is the problem?
Normally, the middle tier application server creates a connection pool. When your code requests a connection, it gets an already open connection from the pool rather than going through the overhead of spawning a new connection to the database. When your code closes a connection, the connection is returned to the pool rather than going through the overhead of physically closing the connection. That means that there will be a reasonable number of connections to the database where the STATUS in V$SESSION is "INACTIVE" at any given point in time. That's perfectly normal.
Even under load, most database connections from a middle tier are "INACTIVE" most of the time. A status of "INACTIVE" merely means that at the instant you ran the query, the session was not executing a SQL statement. Most connections will spend most of their time either sitting in the connection pool waiting for a Java session to open them or waiting on the Java session to do something with the data or waiting on the network to transfer data between the machines.
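To illustrate, here is a minimal sketch of the usual pattern (the JNDI name and the query are placeholders): closing the connection at the end of the try block only returns it to the pool rather than tearing down the database session, which is why the session then shows up as INACTIVE.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class PooledLookupExample {
    public static void main(String[] args) throws Exception {
        // "jdbc/AppDataSource" is a placeholder; the app server binds the pooled DataSource here.
        DataSource ds = InitialContext.doLookup("jdbc/AppDataSource");

        // getConnection() hands out an already-open physical connection from the pool.
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT sysdate FROM dual");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.println(rs.getTimestamp(1));
            }
        }
        // Leaving the try block "closes" the connection, but that only returns it to
        // the pool; the Oracle session stays open and reports STATUS = INACTIVE.
    }
}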
Are you actually getting an error (i.e. ORA-00020: maximum number of processes exceeded)? Or are you just confused by the number of entries in V$SESSION?
I am using JDBC with the Proxool connection pool to connect to a MySQL DB.
I am selecting a large number of rows from multiple threads, and after some time I get an error saying 'communication link failure, last packet sent to the server was ... ago'.
I am closing the connection, statement and result set in every thread.
The fetching time increases gradually, and the exception occurs after 5-10 minutes.
I doubt it is a memory leak, but I can't find any clue.
Please let me know the possible reasons.
Thanks,
Kaka
It may be related to your connection timeout; try increasing it:
con.setConnectionTimeout(X);