Is there any workaround to remove deadlock without killing the session?
From the Concepts Guide:
Oracle automatically detects deadlock situations and resolves them by rolling back one of the statements involved in the deadlock, thereby releasing one set of the conflicting row locks.
You don't have to do anything to remove a deadlock; Oracle takes care of it automatically. The session is not killed: it is rolled back to a point just before the statement that triggered the deadlock. The other session is unaffected (i.e. it still waits for the lock until the rolled-back session either commits or rolls back its transaction).
In most situations deadlocks should be exceptionally rare. You can prevent all deadlocks by using FOR UPDATE NOWAIT statements instead of FOR UPDATE.
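For example, a minimal sketch (the table name and predicate are illustrative): lock the rows you need up front, and fail fast if another session already holds them.

    -- Illustrative: NOWAIT raises ORA-00054 ("resource busy") immediately if
    -- another session holds a lock on one of these rows, instead of queueing
    -- behind it, where a deadlock could otherwise form.
    SELECT * FROM accounts WHERE id IN (1, 2) FOR UPDATE NOWAIT;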
See also
Discussion about removing deadlock on AskTom
Deadlocks are automatically cleared in Oracle by cancelling one of the locked statements; you need not do it manually. One of the sessions will receive "ORA-00060" and must decide whether to retry or roll back.
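A minimal sketch of the retry side (the table and DML are illustrative; only the error code -60 is Oracle's):

    DECLARE
      deadlock_detected EXCEPTION;
      PRAGMA EXCEPTION_INIT(deadlock_detected, -60);  -- map ORA-00060
    BEGIN
      FOR attempt IN 1 .. 3 LOOP
        BEGIN
          UPDATE accounts SET balance = balance - 100 WHERE id = 1;  -- illustrative DML
          EXIT;  -- success, stop retrying
        EXCEPTION
          WHEN deadlock_detected THEN
            ROLLBACK;                         -- release our locks before retrying
            IF attempt = 3 THEN RAISE; END IF;
        END;
      END LOOP;
      COMMIT;
    END;
    /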
But from your description it looks like you have a blocking lock, not a deadlock.
Either way, the blocking session has to release its lock somehow, by committing or rolling back its transaction. You can simply wait for it (possibly for a long time). If you can change your application's code, you can probably rewrite it to release the lock sooner or avoid it entirely. Otherwise, you have to kill the session to free the resources immediately.
No, Oracle 10g does not seem to resolve deadlocks automatically in practice. We did have deadlocks, and we had to clear the sessions manually.
This page can help you identify whether you have deadlocks:
Identifying and Resolving Oracle ITL Deadlock
A Realm holds a read lock on the version of the data accessed by it, so that changes made to the Realm on different threads do not modify or delete the data seen by this Realm. Calling this method releases the read lock, allowing the space used on disk to be reused by later write transactions rather than growing the file.
Is there a matching function in Xamarin.Realm like Objective-C/Swift's RLMRealm invalidate?
If not, is this a backlog item, or is it simply not required with the C# wrapper?
I think calling Realm.Refresh() would be a workaround: it will cause the Realm instance to relinquish the read lock it has at the moment and move it to the latest version, which would free up the old version for compaction.
Ordinarily moving the read lock to the latest version would happen automatically if the thread you run on has a running CFRunLoop or ALooper, but on a dedicated worker thread you'd be responsible for calling Refresh() on your own to advance the read lock.
Please open an issue on https://github.com/realm/realm-dotnet for Invalidate() if Refresh() doesn't work for you.
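A minimal sketch of that workaround on a dedicated worker thread (only Realm.GetInstance(), Refresh(), and Close() come from the API; the loop structure is illustrative):

    // C#: advance the read lock by hand on a thread with no run loop.
    using System.Threading;
    using Realms;

    var keepRunning = true;
    var worker = new Thread(() =>
    {
        var realm = Realm.GetInstance();
        while (Volatile.Read(ref keepRunning))
        {
            // ... query the Realm here ...
            // No CFRunLoop/ALooper on this thread, so move the read lock to
            // the latest version manually; old versions can then be reused.
            realm.Refresh();
        }
        realm.Close();
    });
    worker.Start();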
I think you would use Realm.Close(). See:
https://realm.io/docs/xamarin/latest/api/class_realms_1_1_realm.html#a7f7a3199c392465d0767c6506c1af5b4
Closes the Realm if not already closed. Safe to call repeatedly. Note that this will close the file. Other references to the same database on the same thread will be invalidated.
I have inherited a database, and every night I get buzzed in the middle of the night for locking issues. This database has severe locking problems, and the usual drill is to bounce the application tier one node at a time so the locks get released. I am tired of doing this, and I came across documentation saying I can kill the blocking session instead.
I am just wondering: if I kill the blocking session once it has been blocking for longer than a predefined threshold,
do I risk corrupting the database?
If so, how?
Even if I assume that killing sessions could corrupt the database, restarting the application servers is equally risky, and more painful for me too.
So which option do I choose here: automatically kill the blocking session until the developers fix the code that is causing the blocking?
regards
Nick
Seems like the exact purpose the Resource Manager directive MAX_IDLE_BLOCKER_TIME was created for.
Example:
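A minimal sketch (the plan name is illustrative; OTHER_GROUPS is the mandatory catch-all consumer group). The directive kills a session once it has idled for the given time while blocking another session:

    BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.CREATE_PLAN(
        plan    => 'LIMIT_IDLE_BLOCKERS',
        comment => 'Kill sessions that idle while holding blocking locks');
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan                  => 'LIMIT_IDLE_BLOCKERS',
        group_or_subplan      => 'OTHER_GROUPS',
        comment               => 'Allow 5 minutes of idle time while blocking',
        max_idle_blocker_time => 300);                -- seconds
      DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
    END;
    /
    ALTER SYSTEM SET resource_manager_plan = 'LIMIT_IDLE_BLOCKERS';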
No, killing a session won't corrupt the database: the killed session's transaction is rolled back using undo, and the session is shown as "marked for kill" until the rollback completes.
Do it the normal way, ALTER SYSTEM KILL SESSION 'sid,serial#', not with "kill -9 ..." from the OS.
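If you need to find the blocker first, a sketch (10g and later expose BLOCKING_SESSION in V$SESSION; the sid/serial# literal is illustrative):

    -- Sessions that are currently blocking someone else:
    SELECT sid, serial#, username, status
    FROM   v$session
    WHERE  sid IN (SELECT blocking_session
                   FROM   v$session
                   WHERE  blocking_session IS NOT NULL);

    ALTER SYSTEM KILL SESSION '123,45678';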
I have the following situation:
One process is reading from a SQLite database.
Another process is updating the database. Updates do not happen very often, and all transactions are short (less than 0.1 ms on average).
The process that is reading should have low latencies for the queries (around 0.1 ms).
If the locking of SQLite would work like a mutex or readers-writer lock, everything would be ok.
From reading http://www.sqlite.org/lockingv3.html, this should be possible: on Windows, SQLite uses LockFileEx(), sometimes without the LOCKFILE_FAIL_IMMEDIATELY flag, which would block the calling process as desired.
However, I could not figure out how to use or configure SQLite to achieve this behavior. Using a busy handler would involve polling, which is not acceptable because the minimum sleep time is usually 15 ms on Windows.
I would like the query to be executed as soon as the update transaction ends.
Is this possible without changing the source code of SQLite? If not, is there such a patch available somewhere?
SQLite does not use a synchronization mechanism that waits just until a lock is released.
It never makes a blocking locking call; when it finds that the database is locked, it waits for some time and tries again.
(You could install your own busy handler to wait for a shorter time.)
The easiest way to prevent readers and a writer from blocking each other is to use WAL mode.
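For example (assuming SQLite 3.7.0 or later, where WAL was introduced), readers then work from a snapshot and neither block nor are blocked by the writer:

    PRAGMA journal_mode=WAL;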
If you cannot use WAL mode, you can synchronize your transactions by implementing your own synchronization mechanism: use a common named mutex in all processes, and lock it around all your transactions.
(This would reduce concurrency if you had multiple readers.)
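A minimal sketch of that mechanism in C# (the mutex name, connection string, and the Microsoft.Data.Sqlite binding are illustrative assumptions; any SQLite binding works the same way):

    using System.Threading;
    using Microsoft.Data.Sqlite;

    // The named mutex is shared across processes; waiting on it blocks until
    // the other process releases it, with no polling.
    using var gate = new Mutex(initiallyOwned: false, name: @"Global\MyApp.SqliteLock");
    gate.WaitOne();
    try
    {
        using var conn = new SqliteConnection("Data Source=app.db");
        conn.Open();
        using var tx = conn.BeginTransaction();
        // ... run the query or the update here ...
        tx.Commit();
    }
    finally
    {
        gate.ReleaseMutex();   // wakes the waiting process immediately
    }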
Normally, billings should execute in the background on a scheduled date (I haven't figured out how to do that yet, but that's another topic).
But occasionally, the user may wish to execute a billing manually. Once clicked, I would like to be sure the operation runs to completion regardless of what happens on the user side (e.g. the user closes the browser, the machine dies, the network goes down, whatever).
I'm pretty sure db.SaveChanges() wraps its DB operations in a transaction, so from a server perspective I believe the whole thing will either finish or fail, with no partial effect.
But what about all the work between the POST and the db.SaveChanges()? Is there a way to be sure the user can't inadvertently or intentionally stop that from completing?
I guess a corollary to this question is what happens to a running Asynchronous Controller or a running Task or Thread if the user disconnects?
My previous project was actually a billing system in MVC. I distinctly remember testing what would happen if I used a Task and then quickly exited the site. It did all of the calculations just fine, ran a stored procedure in SQL Server, and sent me an e-mail when it was done.
So, to answer your question: if you wrap the operations in a Task, it should finish anyway with no problems.
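A minimal sketch of that approach (BillingController, BillingContext, and the action are illustrative placeholders, not the original poster's code):

    using System.Threading.Tasks;
    using System.Web.Mvc;

    public class BillingController : Controller
    {
        [HttpPost]
        public ActionResult RunBilling(int billingId)
        {
            // The Task is not tied to the HTTP response, so it keeps running
            // on the server even if the client disconnects mid-request.
            Task.Run(() =>
            {
                using (var db = new BillingContext())   // hypothetical EF context
                {
                    // ... compute charges and build invoice rows here ...
                    db.SaveChanges();   // one transaction: finishes or fails whole
                }
            });
            return RedirectToAction("Index");
        }
    }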
A number of stored procedures I support query remote databases over a WAN. The network occasionally goes down, but the worst that ever happened was that the procedures failed and had to be restarted.
The last couple of weeks it's taken a sinister turn. Instead of failing, the procedures hang in a weird locked state. They can't be killed inside of Oracle, and as long as they exist, any attempt to run other copies of the procedure will hang too. The only solution we've found is to kill the offending procedures with a "kill -9" from the OS. Some of these procedures haven't been changed for months, even years, so I suspect a root cause in the DB or its configuration.
Does anyone have any ideas about what we can do to fix the problem? Or does PL/SQL have a time-out mechanism I can add to the code, so that I get an exception I can handle programmatically?
What database version? Are they stuck running SQL or in PL/SQL?
Has anyone added exception handling to the routines recently?
I remember in 9iR2 we were told that, instead of raising an exception to the calling routine, we were to catch all exceptions and keep running (basically, try to process all the items in the job even if some fail).
We inevitably had jobs get stuck in an infinite loop: SQL statements failing, getting caught by the exception handler, and being tried again. And the jobs couldn't be killed, as the WHEN OTHERS also caught the 'your session has been killed' exception. I think the latter changed in 10g, so that exception no longer gets caught.
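Schematically, the pattern looked something like this (the procedure name is illustrative):

    BEGIN
      LOOP
        BEGIN
          process_next_item;   -- hypothetical worker procedure
          EXIT;                -- leave the loop on success
        EXCEPTION
          WHEN OTHERS THEN
            NULL;              -- swallow every error and retry, forever if it
                               -- keeps failing; pre-10g this also swallowed
                               -- the "session killed" exception
        END;
      END LOOP;
    END;
    /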
We were never able to determine what caused this to happen. We believe it was a defect in the October 2008 cumulative patch; perhaps a later patch has fixed it. It hasn't happened for a couple of months (and we've had some network outages), so hopefully the problem has gone away.