Performance of ROLLBACK on a connection with nothing to commit - Oracle

I have a connection pool in which connections are freed with no guarantee of having been committed or rolled back.
Given that, I'm considering having the pool itself issue a rollback on every connection that gets freed, but I wonder what performance impact that would have.
This question is specifically about Oracle DB. What does Oracle actually do when a rollback is performed in a transaction with no pending inserts or updates? For example, what happens, performance-wise, if you roll back twice in a row, or commit and immediately roll back?

There will be no meaningful performance impact from unnecessarily rolling back every session. Compared to opening and closing a session, a rollback is practically free. For example, the PL/SQL block below rolls back one million times in 6 seconds on my machine.
begin
  for i in 1 .. 1000000 loop
    rollback;
  end loop;
end;
/
Oracle writes every change to the UNDO tablespace before anything is committed or rolled back. There are some significant costs associated with that approach, but it means that on rollback Oracle does not have to check every table in the database; it only has to check the UNDO data for anything related to the current transaction. If nothing is found, nothing needs to be done. I would guess that every rollback requires at worst one index lookup, which is not something you need to worry about if it only happens once per session.
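A pool-level rollback-on-release can be sketched as follows. This is a minimal illustration using a stub connection class standing in for a real driver connection (e.g. one from python-oracledb), not a production pool; the point is simply that the pool, not the borrower, guarantees no transaction leaks across checkouts:

```python
import queue

class StubConnection:
    """Stands in for a real driver connection; a real pool would call
    the driver's rollback(), which is cheap when nothing is pending."""
    def __init__(self):
        self.rolled_back = 0

    def rollback(self):
        self.rolled_back += 1

class Pool:
    def __init__(self, size):
        self._free = queue.Queue()
        for _ in range(size):
            self._free.put(StubConnection())

    def acquire(self):
        return self._free.get()

    def release(self, conn):
        # Roll back unconditionally so the next borrower never inherits
        # an open transaction; a no-op rollback is practically free.
        conn.rollback()
        self._free.put(conn)

pool = Pool(size=2)
c = pool.acquire()
pool.release(c)
print(c.rolled_back)  # 1: each release triggers exactly one rollback
```

The same pattern applies to any pool implementation: put the rollback in the release path, so correctness does not depend on every caller remembering to clean up.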

Related

Is commit required in an Oracle stored procedure which is called from Java class?

I have an Oracle Stored Procedure that does some inserts and updates on a table in DB.
There is no explicit Commit or Rollback statement at the end of the procedure.
However, when I call this SP through a java class, I see that the inserts and updates are committed into the DB.
So can anyone help me understand if we really need a commit statement at the end of the stored procedure in Oracle?
I am not experienced with Java, but as far as I know, when you close the database connection cleanly, the pending data is committed (unless you roll it back first). Now, back to your question: when to use COMMIT in a stored procedure.
When you perform a DML operation (INSERT, UPDATE, DELETE) on a table in the procedure, the affected rows are locked, so any other session that tries to modify those rows has to wait until you commit or roll back. If your procedure takes a long time, due to a long loop or a badly optimized query, those sessions will stay blocked; committing earlier releases the locks sooner.
Another reason is the undo tablespace, where all uncommitted changes are kept until you commit: if, for example, you insert a lot of data (millions of rows), your undo may fill up, depending on its size, and you will get an error.
So, the short answer: if your procedure doesn't perform many operations on big tables and runs fast, you can skip the COMMIT; otherwise it is better to add commits.
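To make the transaction-boundary point concrete, here is a toy in-memory model (plain Python, not Oracle; purely an illustration of the semantics): uncommitted changes are discarded on rollback and only persist on commit, so a procedure that omits COMMIT leaves the decision entirely to the caller:

```python
class ToyDb:
    """Toy model of commit/rollback semantics: committed state plus a
    list of pending (uncommitted) changes."""
    def __init__(self):
        self.committed = []
        self.pending = []

    def insert(self, row):
        self.pending.append(row)

    def commit(self):
        self.committed.extend(self.pending)
        self.pending.clear()

    def rollback(self):
        self.pending.clear()

def stored_proc_without_commit(db):
    # Mimics a procedure that inserts but never commits:
    # the caller decides the transaction's fate.
    db.insert("row1")
    db.insert("row2")

db = ToyDb()
stored_proc_without_commit(db)
db.rollback()          # caller rolls back: nothing persists
print(db.committed)    # []
stored_proc_without_commit(db)
db.commit()            # caller commits: rows persist
print(db.committed)    # ['row1', 'row2']
```

In the asker's situation, the commit is happening on the caller's side (the Java layer or the connection close), not inside the procedure.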

Are inserts with sequence numbers nextval atomic for this number?

If I am inserting rows into a table in auto-commit mode, where one column is defined by a sequence's NEXTVAL, are these values guaranteed to become visible in the order they are inserted? I am wondering whether the following scenario, with three concurrent connections, is possible:
Connection 1 inserts foo
Connection 2 inserts bar
Connection 3 selects all rows and observes bar with sequence number 2 but not foo with sequence number 1.
Oracle sequences are thread safe and always generate numbers in order. The numbers produced are guaranteed to be unique.
But you might not see the insert of another session immediately if that session still has an open transaction. This can create a temporary gap in the sequence as seen from your SELECTs.
Furthermore, if a transaction that has called NEXTVAL is rolled back, this causes a permanent gap in the sequence: sequences are not affected by rollbacks or commits. An increment is always immediate and definitive.
See: CREATE SEQUENCE (Oracle Help Center)
"Auto-commit" is not a concept of the Oracle database. That is, there is no "auto-commit" mode or feature in the database -- it is only implemented in tools (like SQL*Plus).
A tool can implement "auto-commit" in different ways, but in most cases, it's probably along the lines of this:
(user's command, e.g., INSERT INTO ...)
<success response from Oracle server>
COMMIT;
In this case, the COMMIT does not get issued by the tool until there is a positive response from the server that the user's command has been executed. In a networked environment with >10ms latency, plus the vagaries of multithreading on the Oracle server itself, I would say there could be situations where session #2's automatic COMMIT gets processed on the server before session #1's and that, therefore, it is possible for session #3 to observe "bar" but not "foo".
The COMMIT timing of each session relative to the time at which session #3 starts its query is the only thing that matters. Session #3 will see whatever work session #1 and/or session #2 have committed as of the time session #3's query starts.
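The gap the answers describe can be simulated directly. This is a toy simulation (plain Python, not Oracle) of the interleaving: sequence numbers are handed out in request order, but visibility follows commit order, and the two need not agree:

```python
import itertools

seq = itertools.count(1)        # plays the role of the Oracle sequence
committed = {}                  # rows visible to other sessions

# Session 1 inserts "foo" and gets sequence number 1...
foo_id = next(seq)
# ...session 2 inserts "bar", gets number 2, and its automatic COMMIT
# reaches the server first.
bar_id = next(seq)
committed[bar_id] = "bar"

# Session 3 queries now: it sees bar (2) but not the lower-numbered foo (1).
visible = sorted(committed)
print(visible)            # [2]

# Session 1's commit arrives later; only now does foo become visible.
committed[foo_id] = "foo"
print(sorted(committed))  # [1, 2]
```

So the scenario in the question is possible: sequence order guarantees uniqueness and monotonic assignment, not visibility order.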

Finding all statements involved in a deadlock from an Oracle trace file?

As I understand it, the typical case of a deadlock involving row-locking requires four SQL statements. Two in one transaction to update row A and row B, and then a further two in a separate transaction to update the same rows, and require the same locks, but in the reverse order.
Transaction 1 gets the lock on row A before transaction 2 can request it, transaction 2 gets the lock on row B before transaction 1 can get it, and neither can get the remaining required locks. One or the other transaction has to be rolled back so the other can complete.
When I review an Oracle trace file after a deadlock, it only seems to highlight two queries. These seem to be the last one out of each transaction.
How can I identify the other statements involved in each transaction, or is this missing in an Oracle trace file?
I can include relevant bits of the specific trace file if required.
You're correct, in a typical row-level deadlock, you'll have session 1 execute sql_a that will lock row 1. Then session 2 will execute sql_b that will lock row 2. Then session 1 will execute sql_c to attempt to lock row 2, but session 2 has not committed, and so session 1 starts waiting. Finally, session 2 comes along, and it issues sql_d, attempting to lock row 1, but, since session 1 holds that lock, it starts waiting. Three seconds later, the deadlock is detected, and one of the sessions will catch ORA-00060 and the trace file is written.
In this scenario, the trace file will contain sql_c and sql_d, but not sql_a or sql_b.
The problem is that information just really isn't available anywhere. Consider that you execute a DML, it starts a transaction if one doesn't exist, generates a bunch of undo and redo, and the change is made. But, once that happens, the session is no longer associated with that SQL statement. There's really no clean way to go back and find that information.
sql_c and sql_d, on the other hand, are the statements that were associated with those sessions when the deadlock occurred, so, clearly, Oracle can identify them, and include that in the trace file.
So, you're correct, the information about sql_a and sql_b is not in the trace, and it's really not readily available.
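The interleaving above can be modeled to show why only the last statement of each session reaches the trace: each session tracks only its current statement, so at detection time those are sql_c and sql_d. This is a schematic model, not how Oracle actually stores session or trace data:

```python
class Session:
    def __init__(self, name):
        self.name = name
        self.current_sql = None   # only the executing statement is tracked

    def execute(self, sql):
        # Once a statement completes, the session moves on; the
        # association with earlier statements (sql_a, sql_b) is lost.
        self.current_sql = sql

s1, s2 = Session("s1"), Session("s2")
s1.execute("sql_a")   # locks row 1, completes
s2.execute("sql_b")   # locks row 2, completes
s1.execute("sql_c")   # blocks waiting on row 2
s2.execute("sql_d")   # blocks waiting on row 1 -> deadlock detected

trace = [s1.current_sql, s2.current_sql]
print(trace)  # ['sql_c', 'sql_d'] -- sql_a and sql_b are gone
```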
Hope that helps.

Does the timing of COMMIT and ROLLBACK affect performance?

Suppose I have a set of IDs. For each ID, I will insert many records into many different tables based on that ID. Between inserts into different tables, various business checks are performed. If any check fails, all the records inserted for that ID are rolled back. This bulk insert is done using PL/SQL. Does the timing of the COMMIT and ROLLBACK affect performance, and how? For example, should I COMMIT after finishing the process for one ID, or COMMIT after finishing all IDs?
This is not so much of a performance decision but a process design decision. Do you want the other IDs to stay in the database when you have to roll back a faulty ID?
For obvious reasons, rollback takes longer when more rows must be rolled back. Rollback usually takes longer (sometimes much longer!) than the operations that have to be rolled back. Commit is always fast in Oracle, so it probably doesn't matter how often you commit in that regard.
Your problem description indicates you have a large set of smaller logical transactions (each new ID is a transaction). You should commit each logical transaction. The two reasons to wait to commit the entire set of transactions are:
If the entire set of transactions is in fact a transaction itself - all inserts must succeed for any rows to be committed. In that context, your smaller "transactions" aren't truly transactions.
You don't have a restart capability in your bulk load process, which in effect makes this a special case of item 1. If your bulk load process aborts, you need a way to skip successfully applied ID's.
Tom Kyte's advice is to commit each logical unit of work - the transaction.
Don't let the transaction run longer than necessary; keep it as short as possible. Your statements create locks, and those locks may cause performance issues, so process the IDs one by one.
There are two "forces" at work:
Locking: during your open transaction, Oracle puts locks on the changed rows. Whenever another transaction needs to update any of the locked rows, it has to wait. In the worst case, you can even build a deadlock.
Synchronous write: every commit performs a synchronous write (there are ways to disable that, but it is usually what everybody wants: integrity). That synchronous write can take much longer than a regular write, which can be buffered. Not to forget that there is usually an additional network round trip involved with a commit.
So, one force says "commit as soon as possible (considering your integrity requirements)"; the other says "commit as seldom as possible".
There are some other issues to consider as well, e.g. the maximum transaction size: every uncommitted transaction needs some undo space, and the bigger the transaction gets, the more it needs. You can also run into ORA-01555 "snapshot too old".
If there is any advice to give, it is to implement a configurable "commit frequency" so that you can easily change it as needed.
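A configurable commit frequency can be sketched like this (the insert itself is stubbed out; commit_every is the tuning knob the answer suggests making configurable):

```python
def bulk_load(rows, commit_every, commit):
    """Process rows, committing every `commit_every` rows and once
    more at the end for any remainder."""
    pending = 0
    for _ in rows:
        # ... insert the row here ...
        pending += 1
        if pending == commit_every:
            commit()
            pending = 0
    if pending:
        commit()   # final commit for the tail of the batch

commits = []
bulk_load(range(10), commit_every=3, commit=lambda: commits.append(1))
print(len(commits))  # 4 commits: after rows 3, 6, 9 and the final row
```

Exposing commit_every as a parameter (or configuration value) lets you trade lock duration and undo usage against commit overhead without changing code.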
One option, if you need to control the individual sets but retain the ability to commit or roll back the entire transaction, is to use savepoints. You can set a savepoint at the beginning of the outermost loop, then roll back to it if an error occurs. You might end up with something like this:
begin
  --Initial batch logging
  for r_record in cur_cursor loop
    savepoint s_cursor;
    begin
      --Process rows
    exception
      when others then
        rollback to s_cursor;
    end;
  end loop;
  --Final batch logging
exception
  when others then
    rollback;
    raise;
end;

ORA-1555: snapshot too old: rollback segment number

Any idea about
ORA-1555: snapshot too old: rollback segment number
I am getting this error and nothing seems to be wrong. Please state under what conditions it may occur and how it can be avoided?
Frequent commits can be the cause of ORA-1555.
It's all about read consistency. When you start a query, Oracle notes the point in time at which it began, so the result of your query is not altered by DML that takes place in the meantime (your big transaction). To reconstruct the values of data changed after the query started, Oracle uses the rollback (undo) segments.
By committing in your big transaction, you tell Oracle that the rollback data of that transaction can be overwritten.
If your query needs rollback data that has since been overwritten, you get this error. The less often you commit, the less chance there is that the rollback data you need has been overwritten.
One common cause of ORA-1555 is a procedure that does all of this by itself: it opens a cursor on a table, loops through the records, updates/deletes the same table, and commits every x records.
As guigui told: let the rollback segments grow to contain your whole transaction.
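The mechanism can be illustrated with a toy undo buffer (purely schematic, not Oracle's actual undo management): undo space is finite, committed transactions' undo becomes reusable, and a long-running query fails with a simulated ORA-01555 when the undo it needs has been overwritten:

```python
class UndoSegment:
    """Toy fixed-size undo: keeps at most `capacity` of the most recent
    undo records; older committed undo gets overwritten by new changes."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.records = []          # (change_id, old_value)

    def write(self, change_id, old_value):
        self.records.append((change_id, old_value))
        if len(self.records) > self.capacity:
            self.records.pop(0)    # oldest committed undo is overwritten

    def read_consistent(self, change_id):
        # A long-running query rebuilding its snapshot needs old values.
        for cid, old in self.records:
            if cid == change_id:
                return old
        raise RuntimeError("ORA-01555: snapshot too old (simulated)")

undo = UndoSegment(capacity=2)
undo.write(1, "old_row_1")   # a long query will need this record later
undo.write(2, "old_row_2")
undo.write(3, "old_row_3")   # frequent commits let this overwrite record 1

try:
    undo.read_consistent(1)
except RuntimeError as e:
    print(e)   # the query's snapshot can no longer be rebuilt
```

This is why committing inside the loop makes the error more likely: each commit releases undo that the still-open cursor may need.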
I suggest you read Tom's answer :
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1441804355350
"The ORA-1555 happens when people try to save space typically. They'll have small rollback segments that could grow if they needed (and will shrink using OPTIMAL). So, they'll start with say 10 or so 1meg rollback segments. These rollback segments COULD grow to 100meg each if we let them (in this example) however, they will NEVER grow unless you get a big transaction."
Typically this occurs when code commits inside a cursor loop, e.g.:
for x in (select ... from ...) loop
  -- do something
  commit;
end loop;
See the AskTom link from guigui42, though, for other examples.