I have a global SqlDataAdapter that serves multiple SqlCommand objects. The problem is that sometimes the SqlDataAdapter Fill method raises an error saying:
There is already an open DataReader associated with this Command...
I'm wondering if there is some way to know whether the Fill method is still executing.
I've heard that SqlDataAdapter uses a DataReader internally.
Can I get at that DataReader?
"I have a global SqlDataAdapter" - that's your mistake. Do away with that notion and make a DataAdapter whenever you want to use one. Put it in a using statement so you don't forget to Dispose of it.
Also, if you're caching connections and opening and closing them manually, don't do that either - just give the adapter (that you make on the spot) an SQL string and a connection string and let it make the connection and command objects. The only time you might want to create a connection and open it yourself is if you have a lot of operations to perform in sequence using different adapters, perhaps as part of a transaction. Don't forget that opening and closing a connection doesn't necessarily mean making a TCP connection to the database (slow) - it's leasing and returning a currently connected, usable connection from a pool of connections, and it's a trivial operation.
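A minimal sketch of that pattern (the query, table and connection string here are placeholders, not anything from the question):

using System.Data;
using System.Data.SqlClient;

DataTable LoadCustomers(string connectionString)
{
    var table = new DataTable();
    // Make the adapter on the spot and let it create the connection and command for us;
    // Fill opens and closes the (pooled) connection automatically.
    using (var adapter = new SqlDataAdapter("SELECT Id, Name FROM Customers", connectionString))
    {
        adapter.Fill(table);
    }
    return table;
}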
The more you try to micromanage this, the worse it will get. Trying to juggle things around, detecting this and waiting for that, is a) unnecessary, b) a hacky mess to work around the corner you've painted yourself into, and c) going to give a substandard UI experience.
Related
I'm trying to work out the behavior of connection pooling with ODP.NET. I get the basics, but I don't understand what's happening. I have an application that spins up a thread every X seconds; that thread connects, performs a number of searches against the database, and then disconnects, and everything is being disposed and disconnected as you would expect. With the defaults in the connection string, and X set to a number high enough to ensure the searches complete before the next batch starts, I get an exception - not on connect, as I would have expected, but on OracleDataAdapter.Fill(). I get the following exception:
'ORA-00604: error occurred at recursive SQL level 1 ORA-01000: maximum open cursors exceeded'
It happens after the 9th connection, every time. After that, the application will run indefinitely without another error. It's definitely related to connection pooling: if I turn off pooling, it works without error, and if I turn Min Pool Size up it takes longer for the error to appear, but it eventually happens.
My expectation for connection pooling would be a wait on the call to connect to get a new connection, not Fill failing on an adapter that's already connected (although I get that the connection object is using a pool, so maybe that's not what's happening). Anyway it's odd behavior.
Your error does not relate to a maximum number of connections but to a maximum number of cursors.
A cursor is effectively a pointer to a memory address within the database server that lets the server look up the query the cursor is executing and the current state of the cursor.
Your code is connecting and then opening cursors but, for whatever reason, it is not closing the cursors. When you close a connection it will automatically close all the cursors; however, when you return a connection to a connection pool it keeps the connection open so it can be reused (and because it does not close the connection it does not automatically close all the cursors).
It is best practice to make sure that when you open a cursor it is closed once you finish reading from it, and that if an error during the cursor's execution prevents the normal path from completing, the cursor is still closed when you catch the exception.
You need to debug your code and make sure you close all the cursors you open.
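In ODP.NET terms that usually means disposing every OracleDataReader and OracleCommand you create; returning a connection to the pool will not do it for you. A rough sketch of the shape to aim for (the query, schema and managed-provider namespace are assumptions, not taken from the question):

using Oracle.ManagedDataAccess.Client;  // or Oracle.DataAccess.Client for the unmanaged provider

void RunSearch(string connectionString, string searchTerm)
{
    using (var conn = new OracleConnection(connectionString))
    using (var cmd = new OracleCommand("SELECT id, name FROM items WHERE name LIKE :term", conn))
    {
        cmd.Parameters.Add("term", OracleDbType.Varchar2).Value = "%" + searchTerm + "%";
        conn.Open();
        using (var reader = cmd.ExecuteReader())   // opens a cursor on the server
        {
            while (reader.Read())
            {
                // process reader["id"], reader["name"] ...
            }
        }   // disposing the reader closes the cursor, even though the pooled connection stays open
    }       // disposing the connection returns it to the pool; it does not close cursors you leaked
}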
I have a session object that gets passed around a whole lot and at some point the following lines of code are called (this is unavoidable):
import transaction
transaction.commit()
This renders the session unusable (by closing it I think).
My question is two part:
How do I check if a session is still alive and well?
Is there a quick way to revitalize a dead session?
For 2: the only way I currently know of is to use sqlalchemy.orm.scoped_session and then call query(...).get(id) many times to recreate the necessary model instances, but this seems pretty darn inefficient.
EDIT
Here's an example of the sequence of events that causes the error:
modelInstance = DBSession.query(ModelClass).first()
import transaction
transaction.commit()
modelInstance.some_relationship
And here is the error:
sqlalchemy.orm.exc.DetachedInstanceError: Parent instance <CategoryNode at 0x7fdc4c4b3110> is not bound to a Session; lazy load operation of attribute 'children' cannot proceed
I don't really want to turn off lazy loading.
EDIT
DBSession.is_active seems to be no indication of whether or not the session is in fact alive and well in this case:
transaction.commit()
print(DBSession.is_active)
this prints True...
EDIT
This seemed too big for a comment so I'm putting it here.
zzzeek said:
"An expired object will automatically load new state from the database, via the Session, as soon as you access anything on it, so there's no need to tell the Session to do anything here."
So how do I get things committed in such a way that this will happen? Calling transaction.commit() seems to be the wrong way, so what is the correct one?
So the first thing to observe here is that "import transaction" refers to a package called zope.transaction. This is a generic transaction that takes hold of any number of sub-tasks, of which the SQLAlchemy Session is one, via the zope.sqlalchemy extension.
What zope.sqlalchemy is going to do here is call the begin()/rollback()/commit() methods of the Session itself, in response to its own management of the "transaction".
The Session itself works in such a way that it is almost always ready for use, even if its internal transaction has been committed. When this happens, the Session just keeps going upon next use, either starting a new transaction if autocommit=False, or continuing in "autocommit" mode if autocommit=True. Basically, it is auto-revitalizing.
The one time the Session is not able to proceed is if a flush has failed and the rollback() method has not been called, which, in autocommit=False mode, the Session would like you to do explicitly when flush() fails. The session.is_active property will return False when the Session is in that specific state.
I'm not 100% sure what the implications are of continuing to use the Session when zope.transaction is in use. I think it depends on how you're using zope.transaction in the bigger scheme.
Which leads us to where lots of these questions end up: what are you really trying to do? "Recreate the necessary model instances", for example, is not something the Session does, unless you are referring to existing instances which have been expired (their guts emptied out). An expired object will automatically load new state from the database, via the Session, as soon as you access anything on it, so there's no need to tell the Session to do anything here.
It's of course an option to turn off auto-expiration entirely, but the fact that you're even arriving at a problem here implies something is not working as it should - like there's some error message you're getting. More detail would be needed to understand exactly what the issue you're having is.
I have a WCF service that uses ODP.NET to read data from an Oracle database. The service also writes to the database, but indirectly, as all updates and inserts are achieved through an older layer of business logic that I access via COM+, which I wrap in a TransactionScope. The older layer connects to Oracle via ODBC, not ODP.NET.
The problem I have is that because Oracle uses a two-phase-commit, and because the older business layer is using ODBC and not ODP.NET, the transaction sometimes returns on the TransactionScope.Commit() before the data is actually available for reads from the service layer.
I see a similar post about a Java user having trouble like this as well on Stack Overflow.
A representative from Oracle posted that there isn't much I can do about this problem:
This may be due to the way the OLETx ITransaction::Commit() method behaves. After phase 1 of the 2PC (i.e. the prepare phase), if all is successful, commit can return even if the resource managers haven't actually committed. After all, a successful "prepare" is a guarantee that the resource managers cannot arbitrarily abort after this point. Thus even though a resource manager couldn't commit because it didn't receive a "commit" notification from the MSDTC (due to, say, a communication failure), the component's commit request returns successfully. If you select rows from the table(s) immediately, you may sometimes see the actual commit occur in the database after you have already executed your select. Your select will therefore not see the new rows, due to consistent read semantics. There is nothing we can do about this in Oracle, as the "commit success after successful phase 1" optimization is part of the MSDTC's implementation.
So, my question is this:
How should I go about dealing with the possible delay (the "async" of the title) in figuring out when the second phase of the 2PC actually occurs, so I can be sure that the data I inserted (indirectly) is actually available to be selected after the Commit() call returns?
How do big systems deal with the fact that the data might not be ready for reading immediately?
I assume that the whole transaction has prepared and that a commit outcome has been decided by the Transaction Manager, so eventually (barring heuristic damage) the Resource Managers will receive their commit message and complete. However, there are no guarantees as to how long that might take - it could be days; no timeouts apply. Having voted "commit" in the prepare phase, the Resource Manager must wait to hear the collective outcome.
Under these conditions, the simplest approach is to take an "understood, we're thinking about it" approach: the request has been understood, but you genuinely don't know the outcome yet, and that's what you tell the user. Yes, in all sane circumstances the request will complete, but under some conditions operators could actually choose to intervene in the transaction manually (and maybe cause heuristic damage in doing so).
To go one step further, you could start a new transaction and perform some queries to see whether the data is there. If you are populating a result screen you will naturally be doing such a query anyway; the question is what to do if the expected results are not there yet. So again, tell the user "your recent request is being processed, hit refresh to see if it's complete", or retry automatically (I don't much like auto-retry - I prefer to educate the user that it's effectively an asynchronous operation).
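One hedged way to implement that check after TransactionScope.Commit() returns is to poll in a new, separate connection/transaction until the expected row shows up, or give up and tell the user the request is still processing. Everything here (table, key column, timings) is invented for illustration:

using System;
using System.Threading;
using Oracle.ManagedDataAccess.Client;  // the ODP.NET provider the service already uses for reads

// Polls in its own transaction until the row written through the COM+ layer is actually visible.
bool WaitForRow(string connectionString, int orderId, int maxAttempts = 10)
{
    for (int attempt = 0; attempt < maxAttempts; attempt++)
    {
        using (var conn = new OracleConnection(connectionString))
        using (var cmd = new OracleCommand("SELECT COUNT(*) FROM orders WHERE order_id = :id", conn))
        {
            cmd.Parameters.Add("id", OracleDbType.Int32).Value = orderId;
            conn.Open();
            if (Convert.ToInt32(cmd.ExecuteScalar()) > 0)
                return true;           // phase 2 of the 2PC has reached the database
        }
        Thread.Sleep(500);             // back off briefly before asking again
    }
    return false;                      // still not visible: report "your request is being processed"
}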
An example:
Say I have an AJAX chat on a page where people can talk to each other.
How is it possible to display (send) the message sent by person A to persons B, C and D while they have the chat open?
I understand that technically it works a bit differently: the chat (AJAX) reads from the DB (or another source), say every second, to find out whether there are new messages to display.
But I wonder if there is a method to send the new message to the rest of the people at the moment it is sent, rather than loading the DB with thousands of reads every second.
Please note that the AJAX chat example is just an example to explain what I want, not something I actually want to build. I just need to know if there is a method to let all the browsers open at a specific (AJAX) page know that there is new content on the server that should be fetched.
{sorry for my English}
Since the server cannot respond to a client without a corresponding request, you need to keep state for each user's queued messages. However, this is exactly what the database accomplishes; you cannot get around it by replacing the database with something else that just accomplishes the same thing in a different way. That said, there are surely optimizations you could make. Keep in mind, however, that you shouldn't prematurely optimize situations like this; databases are designed to handle extremely high traffic, and it's very possible (in fact, likely) that the scenario described will be handled just fine by the database out of the box.
What you're describing is generally referred to as the 'Comet' concept. See the Wikipedia article for details, especially implementation options (long polling, etc.).
Another answer is to have the server push changes to connected clients; that way there is just one call to the database and the server then pushes the change to all the clients. This article indicates it is possible; however, I have never tried this myself.
It's very basic, but if you want to stick with a standard AJAX solution, a simple means of reducing load on the server when polling would be to get the AJAX call to forward the last collected comment ID for that client - you then use that (with the appropriate escaping) in the lookup query on the server side to ensure you only return new comments.
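A hedged sketch of that last-ID approach on the server side; the table, columns and types are made up, and the client is assumed to send back the highest CommentId it has already rendered:

using System.Collections.Generic;
using System.Data.SqlClient;

// Returns only the comments the client has not seen yet, so each poll stays cheap.
List<string> GetNewComments(string connectionString, long lastSeenId)
{
    var comments = new List<string>();
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "SELECT CommentId, Body FROM ChatComments WHERE CommentId > @lastSeenId ORDER BY CommentId", conn))
    {
        cmd.Parameters.AddWithValue("@lastSeenId", lastSeenId);  // parameterized, so no manual escaping needed
        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
                comments.Add(reader.GetString(1));
        }
    }
    return comments;
}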
I have previously asked a question about a stored proc that was executing too slowly on a SQL Server box; however, if I ran the sproc in Query Analyzer, it would return in under one second. The client is a .NET 1.1 WinForms app.
I was able to VNC into the user's box and, of course, they did not have the SQL tools installed, so I cranked up Excel, went into VBA and wrote a quick function to call the sproc with the exact same params.
It turns out that the sproc does return in under a second and I can loop through all the rows in no time at all. However, closing the connection is what takes a really long time, ranging from 5 to 30 seconds.
Why would closing a connection take that long?
The symptoms you describe are almost always due to an 'incorrect' cached query plan. While this is a large topic (see parameter sniffing here on SO), you can often (but not always) alleviate the problem by rebuilding the database's indexes and ensuring that all statistics are up to date.
If you're using a SqlDataReader, one thing you can try is, once you have all the data you need, to call Cancel on the SqlCommand before calling Close on the SqlDataReader. This prevents the output parameters and return values from being filled in, which might be the cause of the slowness in closing the connection. Do it in a try/catch block, because it can throw a "cancelled by user" exception.
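A hedged sketch of that Cancel-before-Close idea; the stored procedure name and connection string are placeholders, and the exception handling simply follows the advice above:

using System.Data;
using System.Data.SqlClient;

void ReadWithEarlyCancel(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("dbo.MySlowProc", conn) { CommandType = CommandType.StoredProcedure })
    {
        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                // consume only the rows you actually need ...
            }
            try
            {
                cmd.Cancel();   // stop the server from populating the return value / output parameters
            }
            catch (SqlException)
            {
                // the "cancelled by user" error mentioned above; safe to swallow here
            }
        }                       // closing the reader (and then the connection) should now be quick
    }
}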
Connection pooling?
That, or I'd check for any service packs or KB articles for the client library.