does oracle always throw ora-00060 on deadlock?

Is there any scenario in which Oracle won't throw ora-00060 (Deadlock detected while waiting for resource) error on deadlock in db?
I have an issue where the app freezes on multithreaded record inserts, and I wonder what might have caused that.
I'm not sure what other relevant information I can provide so please ask in case of any doubts.

At times I have had the feeling that Oracle did not throw a deadlock error every single time one actually occurred. Having said that, if you are experiencing locks with multithreaded inserts, it is more likely that the sessions are temporarily waiting on each other than truly deadlocking.
To find out for sure, you can query v$session, paying particular attention to STATUS, BLOCKING_SESSION, WAIT_CLASS, EVENT, P1TEXT, P2TEXT and P3TEXT. That should paint the picture of which sessions are holding each other up and why. A true deadlock would have session A blocking session B and session B blocking session A, which is relatively rare.
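For example, a minimal JDBC sketch of that diagnostic query (the connection details are placeholders; any user with SELECT access to v$session will do):

import java.sql.*;

public class BlockingCheck {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details -- substitute your own.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1", "scott", "tiger");
             Statement st = conn.createStatement();
             // Sessions that are currently blocked, and who is blocking them.
             ResultSet rs = st.executeQuery(
                 "SELECT sid, status, blocking_session, wait_class, event, " +
                 "       p1text, p2text, p3text " +
                 "  FROM v$session WHERE blocking_session IS NOT NULL")) {
            while (rs.next()) {
                System.out.printf("SID %d (%s) blocked by SID %d on %s / %s%n",
                        rs.getInt("sid"), rs.getString("status"),
                        rs.getInt("blocking_session"),
                        rs.getString("wait_class"), rs.getString("event"));
            }
        }
    }
}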
There is also a chance that the application is hanging due to some multithreading mishap, not a database one.

Related

Solana transaction never gets picked up by the cluster

For the last few weeks or so, we have been having the following issue:
Some of our transactions, when sent via sendRawTransaction() never get picked up by the network (if we look up the txid in the explorer, it's never there), and yet web3js doesn't error out.
We use "#solana/web3.js": "^1.44.1"
This started happening to us about 2-3 weeks ago:
This issue affects some sets of transactions that all share the same instructions + amount of signers and accounts.
It repros 100% of the time for all transactions in those sets. No matter the state of the network or how many times we retry, they never get picked up.
We don't get any error back from web3.js, such as transaction limit hit
They all work in devnet, but not in mainnet!
For one of these tx, I removed one instruction+signer and it started working, so I imagine there's some limit we're hitting, but I can't tell which or how to even determine the limit.
When network congestion is high, validators can drop transactions without any error. To fix your issue you could re-send the same signed transaction on some interval while you're waiting for confirmation and while your transaction's blockhash is still valid. This way you'll raise the chances of your transaction being processed by a validator.
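A minimal, library-agnostic sketch of that re-send loop (Java is used only for illustration; sendRawTx, isConfirmed and isBlockhashValid are hypothetical callbacks you would wire up to your Solana client):

import java.util.function.BooleanSupplier;

public final class ResendLoop {
    // Keep re-submitting the exact same signed transaction until it is confirmed
    // or its blockhash expires; returns true if the cluster picked it up.
    public static boolean sendWithResend(Runnable sendRawTx,
                                         BooleanSupplier isConfirmed,
                                         BooleanSupplier isBlockhashValid,
                                         long intervalMillis) throws InterruptedException {
        while (isBlockhashValid.getAsBoolean()) {
            sendRawTx.run();                  // re-submit the same raw transaction
            Thread.sleep(intervalMillis);     // small pause before checking again
            if (isConfirmed.getAsBoolean()) {
                return true;
            }
        }
        return false;                         // blockhash expired: re-sign with a fresh one
    }
}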

Is que safe for writing jobs that handle a lot of data?

I am using Que with the Sequel gem. I am interested in whether it is safe to write jobs that need to handle a lot of data, more than can safely be placed in one database transaction, such as imports/exports of 80k+ rows on a regular basis (I currently process records in 1k-record transaction batches).
The thing I'm concerned about is whether the gem/Postgres wraps the background worker execution in some kind of implicit transaction, which could make the rollback segment get out of hand and drag the DB down into swappy hell.
The reason I'm asking is this line from the docs:
Safety - If a Ruby process dies, the jobs it's working won't be lost, or left in a locked or ambiguous state - they immediately become available for any other worker to pick up.
To me this screams "nested in a transaction", which, if my fears are true, could potentially result in silently wrapping my 80k records into the same rollback segment. I could try it on my laptop, but my laptop is far stronger than the production VM, so I'm afraid it might successfully crunch through it in my dev environment and then gloriously crash in deployment.
Can someone with similar Que experience help out?
Link: the same question on GH
Answered by the Que devs:
There's no implicit transaction around each job - that guarantee is offered by locking the job id with an advisory lock. Postgres takes care of releasing the advisory lock for us if the client connection is lost, regardless of transactional state.
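To illustrate that mechanism, here is a small sketch of the Postgres side over plain JDBC (Que itself is Ruby; the connection details and job id below are placeholders):

import java.sql.*;

public class AdvisoryLockDemo {
    public static void main(String[] args) throws SQLException {
        long jobId = 42L;  // placeholder job id
        // Placeholder connection details -- substitute your own.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:postgresql://localhost:5432/app", "app", "secret")) {
            try (PreparedStatement lock =
                     conn.prepareStatement("SELECT pg_try_advisory_lock(?)")) {
                lock.setLong(1, jobId);
                try (ResultSet rs = lock.executeQuery()) {
                    rs.next();
                    System.out.println("locked: " + rs.getBoolean(1));
                }
            }
            // The advisory lock is tied to the session, not to any transaction, so you
            // can open and commit as many 1k-row batch transactions as you like while
            // holding it. If the process dies and the connection drops, Postgres
            // releases the lock automatically -- no implicit wrapping transaction needed.
        }
    }
}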

Is calling conn.rollback redundant while doing transaction in jdbc?

This particular question has been asked multiple times already, but I am still unsure about it. My setup is something like this: I am using JDBC and have autocommit set to false. Let's say I have three insert statements that I want to execute as a transaction, followed by conn.commit().
sample code:
Connection conn = null;
try {
    conn = getConnection();
    conn.setAutoCommit(false);
    insertStatement(conn); // #1
    insertStatement(conn); // #2
    insertStatement(conn); // #3, could throw an error
    conn.commit();
} catch (SQLException e) {
    conn.rollback(); // why is it needed?
}
Say I have two scenarios:
Either there won't be any error, we will call conn.commit(), and everything will be updated.
Or the first two statements work fine but there is an error in the third one. In that case conn.commit() is not called and our database is in a consistent state, so why do I need to call conn.rollback()?
I noticed that some people mentioned that rollback has an impact in the case of connection pooling. Could anyone explain how that comes into play?
A rollback() is still necessary. Not committing or rolling back a transaction might keep resources in use on the database (transaction handles, logs, record versions, etc.). Explicitly committing or rolling back makes sure those resources are released.
Not doing an explicit rollback can also have bad effects when you continue to use the connection and later commit: the changes that did complete successfully in the still-open transaction (#1 and #2 in your example) will then be persisted along with whatever you did afterwards.
The Connection apidoc however does say "If auto-commit mode has been disabled, the method commit must be called explicitly in order to commit changes; otherwise, database changes will not be saved." which should be interpreted as: a Connection.close() causes a rollback. However I believe there have been JDBC driver implementations that used to commit on connection close.
The impact on connection pooling should not exist for correct implementations. Closing the logical connection obtained from the connection pool should have the same effect as closing a physical connections: an open transaction should be rolled back. However sometimes connection pools are not correctly implemented or have bugs or take shortcuts for performance reasons, all of which could lead to an open transaction being already started when you get handed a logical connection from a pool.
Therefore: be explicit and call rollback.
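For example, one common shape for this (a sketch only; getConnection() and insertStatement() stand in for your own code, and the enclosing method is assumed to declare throws SQLException):

Connection conn = null;
try {
    conn = getConnection();
    conn.setAutoCommit(false);
    insertStatement(conn); // #1
    insertStatement(conn); // #2
    insertStatement(conn); // #3
    conn.commit();
} catch (SQLException e) {
    if (conn != null) {
        try {
            conn.rollback();            // release locks/undo now, don't rely on close()
        } catch (SQLException suppressed) {
            e.addSuppressed(suppressed);
        }
    }
    throw e;
} finally {
    if (conn != null) {
        conn.close();                   // hand the (possibly pooled) connection back clean
    }
}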

mongodb many inserts\updates performance

I am using mongodb to store user's events, there's a document for every user, containing an array of events. The system processes thousands of events a minute and inserts each one of them to mongo.
The problem is that I get poor performance for the update operation. Using a profiler, I noticed that WriteResult.getError is what incurs the performance impact.
That makes sense: the update is async, but if one wants to retrieve the operation result, one needs to wait until the operation has completed.
My question: is there a way to keep the update async, but only get an exception if an error occurs (99.999% of the time there is no error, so the system waits for nothing)? I understand it means the exception will be raised somewhere further down the process flow, but I can live with that.
Any other suggestions?
The application is written in Java so we're using the Java driver, but I am not sure it's related.
Have you created indexes on your collection? If not, that may be the cause of your performance problem. You should create an index on the fields you query, for example:
db.collectionName.ensureIndex({"event.type": 1})
For more help visit http://www.mongodb.org/display/DOCS/Indexes
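Since the application is in Java, the equivalent with a current Java driver would look something like this (a sketch; the connection string, database and collection names are placeholders):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Indexes;
import org.bson.Document;

public class CreateEventIndex {
    public static void main(String[] args) {
        // Placeholder connection string, database and collection names.
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> events =
                    client.getDatabase("mydb").getCollection("userEvents");
            // Ascending index on the embedded event type field.
            events.createIndex(Indexes.ascending("event.type"));
        }
    }
}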

What's a good way to handle "async" commits?

I have a WCF service that uses ODP.NET to read data from an Oracle database. The service also writes to the database, but indirectly, as all updates and inserts are achieved through an older layer of business logic that I access via COM+, which I wrap in a TransactionScope. The older layer connects to Oracle via ODBC, not ODP.NET.
The problem I have is that because Oracle uses a two-phase-commit, and because the older business layer is using ODBC and not ODP.NET, the transaction sometimes returns on the TransactionScope.Commit() before the data is actually available for reads from the service layer.
I see a similar post about a Java user having trouble like this as well on Stack Overflow.
A representative from Oracle posted that there isn't much I can do about this problem:
This may be due to the way the OLETx ITransaction::Commit() method behaves. After phase 1 of the 2PC (i.e. the prepare phase), if all is successful, commit can return even if the resource managers haven't actually committed. After all, the successful "prepare" is a guarantee that the resource managers cannot arbitrarily abort after this point. Thus even though a resource manager couldn't commit because it didn't receive a "commit" notification from the MSDTC (due to, say, a communication failure), the component's commit request returns successfully. If you select rows from the table(s) immediately you may sometimes see the actual commit occur in the database after you have already executed your select. Your select will not therefore see the new rows due to consistent read semantics. There is nothing we can do about this in Oracle as the "commit success after successful phase 1" optimization is part of the MSDTC's implementation.
So, my question is this:
How should I go about dealing with the possible delay (the "async" in the title): figuring out when the second phase of the 2PC actually occurs, so I can be sure that the data I inserted (indirectly) is actually available to be selected after the Commit() call returns?
How do big systems deal with the fact that the data might not be ready for reading immediately?
I assume that the whole transaction has prepared and a commit outcome decided by the TransactionManager, therefore eventually (barring heuristic damage) the Resource Managers will receive their commit message and complete. However, there are no guarantees as to how long that might take - could be days, no timeouts apply, having voted "commit" in the Prepare the Resource Manager must wait to hear the collective outcome.
Under these conditions, the simplest approach is to take an "understood, we're working on it" approach. The request has been understood, but you don't actually know the outcome yet, and that's what you tell the user. Yes, in all sane circumstances the request will complete, but under some conditions operators could actually choose to intervene in the transaction manually (and maybe cause heuristic damage in doing so).
To go one step further, you could start a new transaction and perform some queries to see if the data is there. If you are populating a result screen, you will naturally be doing such a query anyway. The question then is what to do if the expected results are not there. So again, tell the user "your recent request is being processed, hit refresh to see if it's complete". Or retry automatically (I don't much like auto-retry; I prefer to educate the user that it's effectively an asynchronous operation).
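A minimal sketch of that "query in a new transaction and retry" idea (JDBC is used purely for illustration since the original setup is .NET/ODP.NET; url, user, password, expectedOrderId and the table/key names are placeholders, and the enclosing method is assumed to declare throws SQLException, InterruptedException):

boolean visible = false;
long deadline = System.currentTimeMillis() + 10_000;            // give up after ~10 seconds
while (!visible && System.currentTimeMillis() < deadline) {
    // Fresh connection/transaction each time, so each poll sees the latest committed state.
    try (Connection conn = DriverManager.getConnection(url, user, password);
         PreparedStatement ps = conn.prepareStatement(
                 "SELECT 1 FROM orders WHERE order_id = ?")) {   // placeholder table/key
        ps.setLong(1, expectedOrderId);                          // placeholder value
        try (ResultSet rs = ps.executeQuery()) {
            visible = rs.next();   // row appears only once the resource manager has really committed
        }
    }
    if (!visible) {
        Thread.sleep(500);         // brief back-off before asking again
    }
}
// If it is still not visible, tell the user the request is being processed and let them refresh.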
