Connection Pool Behavior with ODP.NET (Oracle)

I'm trying to work out the behavior of connection pooling with ODP.NET. I get the basics, but I don't understand what's happening. I have an application that spins up a thread every X seconds; that thread connects, performs a number of searches against the database, then disconnects. Everything is disposed and disconnected as you would expect. With the defaults in the connection string, and X set to a number high enough that each batch of searches completes before the next begins, I get an exception, not on connect as I would have expected, but on OracleDataAdapter.Fill(). The exception is:
'ORA-00604: error occurred at recursive SQL level 1 ORA-01000: maximum open cursors exceeded'
It happens after the 9th connection, every time. After that, the application runs indefinitely without another error. It's definitely related to connection pooling: if I turn pooling off, it works without error, and if I turn Min Pool Size up, the error takes longer to appear but eventually happens.
My expectation for connection pooling was that the call to connect would wait for a free connection, not that Fill would fail on an adapter that's already connected (although I understand the connection object is drawing from a pool, so maybe that's not what's happening). Either way, it's odd behavior.

Your error relates not to a maximum number of connections but to a maximum number of open cursors.
A cursor is effectively a handle to state held inside the database server; it lets the server look up the query the cursor is executing and the cursor's current position.
Your code is connecting and then opening cursors but, for whatever reason, it is not closing them. When you close a connection, the server automatically closes all of its cursors; however, when you return a connection to a connection pool, the connection is kept open so it can be reused, and because it is not actually closed, its cursors are not automatically closed either.
It is best practice to ensure that every cursor you open is closed when you finish reading from it, and that if an error during execution prevents the normal path, the cursor is still closed when you catch the exception.
You need to debug your code and make sure you close every cursor you open.
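In ODP.NET terms, each OracleCommand or OracleDataReader left undisposed can pin a server-side cursor even after its connection goes back to the pool. Below is a minimal sketch of the safe pattern, assuming the managed driver; the connection string and query are placeholders.

using System.Data;
using Oracle.ManagedDataAccess.Client; // assumption: managed ODP.NET; the unmanaged Oracle.DataAccess.Client behaves the same

class CursorSafeSearch
{
    public static DataTable RunSearch(string connString, string sql)
    {
        // Disposing the command and adapter releases the server-side
        // cursor even though the pooled connection itself stays open.
        using (var conn = new OracleConnection(connString))
        using (var cmd = new OracleCommand(sql, conn))
        using (var adapter = new OracleDataAdapter(cmd))
        {
            var table = new DataTable();
            adapter.Fill(table); // Fill opens and closes the connection as needed
            return table;
        }
    }
}

With this shape, each search releases its cursor as soon as it completes, so the pooled session's open-cursor count stays flat no matter how often the threads run.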

Related

Detect when SqlDataAdapter fill method completes

I have a global SqlDataAdapter that serves multiple SqlCommands. The problem is that sometimes the SqlDataAdapter's Fill method raises an error saying:
There is already an open DataReader associated with this Command...
I'm wondering whether there is some way to know if the Fill method is still executing.
I've heard that SqlDataAdapter uses a DataReader internally.
Can I get at that DataReader?
"I have a global SqlDataAdapter" - that's your mistake. Do away with that notion and make a DataAdapter whenever you want to use one. Put it in a using statement so you don't forget to Dispose of it.
Also, if you're caching connections and opening/closing them manually, don't do that either: just give the adapter (that you make on the spot) a SQL string and a connection string, and let it create the connection and command objects itself, as in the sketch below. The only time you might want to create and open a connection yourself is when you have a lot of operations to perform in sequence using different adapters, perhaps as part of a transaction. Don't forget that opening and closing a connection does not necessarily mean making a TCP connection to the database (slow); it means leasing and returning an already-connected, usable connection from a pool, which is a trivial operation.
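A minimal sketch of that throwaway-adapter pattern (the query and connection string parameters are placeholders):

using System.Data;
using System.Data.SqlClient;

class AdHocQuery
{
    public static DataTable Load(string connectionString, string sql)
    {
        // The adapter builds its own connection and command, opens the
        // connection for Fill, and closes it again afterwards, which
        // simply returns it to the pool.
        using (var adapter = new SqlDataAdapter(sql, connectionString))
        {
            var table = new DataTable();
            adapter.Fill(table);
            return table;
        }
    }
}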
The more you try to micromanage this, the worse it will get, and jiggling around, detecting this and waiting for that, is (a) unnecessary, (b) a hacky mess to work around the corner you've painted yourself into, and (c) going to give a substandard UI experience.

Handling concurrent neo4j connections with goroutines

We've been tasked with migrating quite a bit of XML data (1.27 million XML files, one node with properties per file) into a Neo4j graph, and we've been using goroutines to chew through the files, parse the XML, prepare Cypher queries for inserts, and so on. Because of the architecture needed to process the XML, we run goroutines concurrently with channels, processing each file in its own worker and throttling the number of workers running at one time.
The issue I'm having is that I'm running into errors like "tcp connection reset by peer" and also "panic: Couldn't read expected bytes for message length. Read: 0 Expected: 2.", and I can only imagine this is due to running connections and statements concurrently in our workers. Our throttling allows 100 concurrent workers; I wouldn't think this would be a major problem for Neo4j, but I just can't figure out why it's choking on us.
Are there any architecture recommendations for handling a use case like this, where we have to run single Cypher statements from a large number of worker routines (in our case, 100 at a time)?
Currently, we walk a file tree to build up a queue of files to process; after the walk is done, we iterate that queue and fire off goroutines to process each file, using a buffered throttle channel to block the firing of new routines until previous routines have finished. Within each routine I'm spinning up a new connection, preparing a statement, executing it, closing it, and so on.
I see this package offers pipelines, but I'm just not sure how to use them within the processing/queue/channel architecture we've currently got going:
https://github.com/johnnadratowski/golang-neo4j-bolt-driver
I've also tried using:
https://github.com/go-cq/cq
But I keep getting "tcp connection reset by peer" errors when trying to connect to Neo4j concurrently.
It is possible you are using thread-unsafe functionality in neo4j-bolt-driver.
There are two kinds of driver provided by neo4j-bolt-driver:
Driver - a plain driver
DriverPool - a driver that manages a connection pool
The driver objects themselves are thread-safe, but the Conn objects, which represent the underlying connections, are not. You may be using a Conn object in a way it's not meant to be used.
With goroutines, it's best to create Conn objects using the DriverPool methods. When Close is called on such a connection, it doesn't necessarily close the underlying connection; it reclaims the connection for reuse.

JDBC connection not available

I appreciate everybody's solutions and suggestions on my post.
Environment: Portlet, IBM WebSphere, Java.
Scenario: In the portal application, whenever I hit a menu item (or portlet) the server goes down, usually within an hour. It doesn't matter whether I remain in the same menu item (or portlet) or go to another one. Once the server is down, we get a "backside connection cannot be established" error.
Connection pool size in server = 50.
In the application: database calls are made inside a for loop with 900 iterations. Checking the log, I found that for the first 50 iterations the operation completes within seconds. But from the 51st iteration onward there is a connection timeout stating "JDBC connection not available", and thereafter every iteration takes 3 minutes (it keeps waiting for a database connection but never gets one).
Sample code:
int listSize = 900;
for (int i = 0; i < listSize; i++) {
    // database query for setting a status message
}
We suspected that this might be due to open database connections, so that no connection is available for the 51st iteration once the pool size of 50 is exhausted. But the application uses Spring's JdbcTemplate, which should automatically open and close connections.
Question(s):
What is the exact cause of this scenario? Does making the DB calls inside a for loop cause the performance issue and starve the threads of connections from the 51st iteration onward?
If Spring automatically closes the connections, why is no connection given to the iterations from the 51st onward?
Are the for-loop iterations faster than Spring's connection closure, so that the first 50 iterations get connections but later ones do not?
When you said "Connection pool size in server = 50", I assume you mean max connections is set to 50. As you suspected, the behavior you're seeing indicates that the free pool is being exhausted by connection requests. Based on your for loop, the first 50 connection requests caused by the queries succeed, but since the connections are not being returned to the free pool, the 51st connection request goes to the waiter pool and eventually times out after 180 seconds. You're correct that the Spring JdbcTemplate configuration is supposed to close() the connection when complete, thus returning it to the pool, so you will need to investigate why that is not happening. Turning on WebSphere Application Server tracing with a trace spec of rra=all might give you some insight; see the IBM Knowledge Center topic on enabling trace. Additional trace can be obtained with WAS.j2c=all, but it's going to be verbose.
Check the article "Default behavior of managed connections in WebSphere Application Server". You are probably using shareable connections and local transactions. Try configuring the resource reference used by your application to make its connections unshareable. Put something like the following in your web.xml, and use java:comp/env/jdbc/datasourceRef in your Spring configuration.
<resource-ref>
    <res-ref-name>jdbc/datasourceRef</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Application</res-auth>
    <res-sharing-scope>Unshareable</res-sharing-scope>
</resource-ref>
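On the Spring side, one way to pick up that resource reference is a JNDI lookup along these lines (the bean id is illustrative, and the jee namespace is assumed to be declared in your context file):

<jee:jndi-lookup id="dataSource" jndi-name="java:comp/env/jdbc/datasourceRef"/>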

Is calling conn.rollback() redundant when doing a transaction in JDBC?

This particular question has been asked multiple times already, but I am still unsure about it. My setup is something like this: I am using JDBC with autocommit set to false. Say I have three insert statements that I want to execute as a transaction, followed by conn.commit().
Sample code:
Connection conn = getConnection();
try {
    conn.setAutoCommit(false);
    insertStatement(); // #1
    insertStatement(); // #2
    insertStatement(); // #3, could throw an error
    conn.commit();
} catch (SQLException e) {
    conn.rollback(); // why is it needed?
}
Say I have two scenarios:
Either there is no error: conn.commit() is called and everything is updated.
Or the first two statements work fine but there is an error in the third. conn.commit() is then never called, and the database is left in a consistent state. So why do I need to call conn.rollback()?
I have noticed people mention that rollback matters in the case of connection pooling. Could anyone explain how it has an effect?
A rollback() is still necessary. Failing to either commit or roll back a transaction may keep resources in use on the database server (transaction handles, logs, record versions, etc.). Explicitly committing or rolling back makes sure those resources are released.
Not doing an explicit rollback may also have bad effects when you continue to use the connection and then commit: changes that completed successfully inside the open transaction (#1 and #2 in your example) will then be persisted along with whatever you did afterwards.
The Connection apidoc, however, does say "If auto-commit mode has been disabled, the method commit must be called explicitly in order to commit changes; otherwise, database changes will not be saved.", which should be interpreted as: Connection.close() causes a rollback. However, I believe there have been JDBC driver implementations that committed on connection close instead.
The impact on connection pooling should not exist for correct implementations: closing the logical connection obtained from the pool should have the same effect as closing a physical connection, i.e. an open transaction is rolled back. However, connection pools are sometimes implemented incorrectly, have bugs, or take shortcuts for performance reasons, any of which could mean a transaction is already open when you are handed a logical connection from the pool.
Therefore: be explicit and call rollback.

Performance issues with a stored proc - closing a connection too slow

I previously asked a question about a stored proc that was executing too slowly on a SQL Server box even though, if I ran the sproc in Query Analyzer, it would return in under one second. The client is a .NET 1.1 WinForms app.
I was able to VNC into the user's box and, of course, they did not have SQL tools installed, so I cranked up Excel, went into VBA, and wrote a quick function to call the sproc with the exact same params.
It turns out the sproc does return in under a second, and I can loop through all the rows in no time at all. However, closing the connection is what takes a really long time, ranging from 5 seconds to 30.
Why would closing a connection take that long?
The symptoms you describe are almost always due to an 'incorrect' cached query plan. While this is a large topic (see parameter sniffing here on SO), you can often (but not always) alleviate the problem by rebuilding the database's indexes and ensuring that all statistics are up to date.
If you're using a SqlDataReader, one thing you can try is to call Cancel on the SqlCommand once you have all the data you need, before calling Close on the SqlDataReader. This prevents the server from running the rest of the batch just to fill in the output parameters and return value, which might be the cause of the slow close. Do it in a try/catch block because it can throw a 'cancelled by user' exception.
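A sketch of that sequence (the connection string is a placeholder and dbo.SomeProc is a hypothetical procedure):

using System.Data.SqlClient;

class EarlyCancel
{
    public static void ReadFirstRows(string connString)
    {
        using (var conn = new SqlConnection(connString))
        using (var cmd = new SqlCommand("EXEC dbo.SomeProc", conn))
        {
            conn.Open();
            SqlDataReader reader = cmd.ExecuteReader();
            try
            {
                while (reader.Read())
                {
                    // ... consume only the rows you actually need ...
                }
            }
            finally
            {
                // Cancel before Close so the server abandons the rest of
                // the batch instead of running it to completion just to
                // populate output parameters and the return value.
                cmd.Cancel();
                try { reader.Close(); }
                catch (SqlException) { /* "cancelled by user" can surface here */ }
            }
        }
    }
}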
Connection pooling?
That, or I'd check for any service packs or KB articles for the client library.
