ORA-1555: snapshot too old: rollback segment number - oracle

Any idea about
ORA-1555: snapshot too old: rollback segment number
I am getting this error and nothing seems to be wrong. Under what conditions can it occur, and how can it be avoided?

Frequent commits can be the cause of ORA-1555.
It's all about read consistency. When you start a query, Oracle fixes a point in time (a before image) so that the result of your query is not altered by DML that takes place in the meantime (your big transaction). To reconstruct that before image, Oracle reads the rollback segments to get the old values of data that changed after the query started.
By committing inside your big transaction you tell Oracle that the rollback data of that transaction can be overwritten.
If your query needs rollback data that has been overwritten, you get this error. The less often you commit, the smaller the chance that the rollback data you need has been overwritten.
One common cause of ORA-1555 is a procedure that does all of this by itself: it opens a cursor on a table, loops through the records, updates/deletes the same table, and commits every x records.
As guigui said: let the rollback segments grow large enough to contain your whole transaction.
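If you are on a version that uses automatic undo management rather than manually sized rollback segments, the equivalent advice is to make sure undo retention and the undo tablespace are large enough to cover your longest-running query. A rough sketch (the 3600-second value is purely illustrative):
-- Check how long Oracle tries to retain undo, and how the undo tablespace is configured.
show parameter undo_retention
select tablespace_name, retention
  from dba_tablespaces
 where contents = 'UNDO';
-- Example only: ask Oracle to keep (or at least try to keep) one hour of undo.
alter system set undo_retention = 3600;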

I suggest you read Tom's answer :
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1441804355350
"The ORA-1555 happens when people try to save space typically. They'll have small
rollback segments that could grow if they needed (and will shrink using OPTIMAL). So,
they'll start with say 10 or so 1meg rollback segments. These rollback segments COULD
grow to 100meg each if we let them (in this example) however, they will NEVER grow unless
you get a big transaction.
"

Typically this occurs when code commits inside a cursor loop, e.g.:
for x in (select ... from ...)
loop
  -- do something with the row
  commit;
end loop;
See the AskTom link from guigui42, though, for other examples.
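One common fix, sketched here with hypothetical names (mytable, needs_processing), is to keep the cursor loop but commit only once, after the cursor is closed, so the query's consistent read never depends on undo that your own commits have already freed:
begin
  for x in (select id from mytable where needs_processing = 'Y') loop
    update mytable
       set needs_processing = 'N'
     where id = x.id;
  end loop;
  commit;  -- single commit after the loop has finished
end;
/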

Related

Performance of rollback on a connection with nothing to commit

I have a connection pool in which connections are freed with no guarantee of having been committed or rolled back.
In this context I'm thinking of implementing, in the pool itself, a rollback on every connection that gets freed, but I wonder what performance impact this could have.
This question is specifically about Oracle. What does Oracle actually do when a rollback is performed in a transaction with no pending inserts or updates? For example, what happens (performance-wise) if you roll back two consecutive times, or commit and immediately roll back?
There will be no performance impact from unnecessarily rolling back every session. Compared to opening and closing a session, a rollback is practically free. For example, the PL/SQL block below rolls back one million times in 6 seconds on my machine.
begin
  for i in 1 .. 1000000 loop
    rollback;
  end loop;
end;
/
Oracle writes every change to the UNDO tablespace before anything is committed or rolled back. There are significant costs associated with that approach, but it means that when you roll back, Oracle does not have to check every table in the database; it only has to check the UNDO data for anything related to the current transaction. If nothing is found, nothing needs to be done. I would guess that every rollback requires at worst one index lookup, which is not something you need to worry about if it only happens once per session.
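As a rough illustration (assuming you have SELECT access to the v$ views), you can check whether your own session actually has any pending undo before rolling back; no rows returned means the ROLLBACK has essentially nothing to do:
select t.used_urec, t.used_ublk
  from v$transaction t
  join v$session     s on s.taddr = t.addr
 where s.sid = sys_context('userenv', 'sid');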

Is commit required in an Oracle stored procedure which is called from a Java class?

I have an Oracle stored procedure that does some inserts and updates on a table in the DB.
There is no explicit COMMIT or ROLLBACK statement at the end of the procedure.
However, when I call this SP from a Java class, I see that the inserts and updates are committed to the DB.
So can anyone help me understand whether we really need a COMMIT statement at the end of a stored procedure in Oracle?
I don't have Java experience, but as far as I know the data is committed when you close the database connection (unless you roll it back first). Now, back to your question of when to use COMMIT in a stored procedure.
When you run DML (INSERT, UPDATE, DELETE) on a table inside the procedure, the affected rows are locked, so any other user who tries to change those rows has to wait until you commit or roll back your operation. If your procedure takes a long time, because of a long loop or a badly optimized query, those users stay blocked; committing earlier releases the locks sooner.
The other reason is the undo tablespace: all uncommitted data sits there until you commit it, so if you insert a lot of data (millions of rows), your undo may fill up, depending on its size, and you will get an error.
Short answer: if your procedure doesn't perform a lot of operations on big tables and is fast, you can skip the commit; otherwise it is better to add commits.
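For illustration, a minimal sketch (the procedure, table and column names are hypothetical) of a procedure that deliberately contains no COMMIT, leaving transaction control to the caller. Note that a JDBC connection auto-commits by default unless setAutoCommit(false) is used, which would explain the behaviour you are seeing from Java:
create or replace procedure set_order_status(p_order_id in number, p_status in varchar2) as
begin
  update orders
     set status = p_status
   where order_id = p_order_id;
  -- No COMMIT here: the caller decides when (or whether) the change becomes permanent.
end;
/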

PostgreSQL vs. Oracle default transaction management

In PostgreSQL, if you encounter an error in a transaction (for example, when your insert statement violates a unique constraint), the whole transaction is aborted; you cannot commit it and no rows are inserted:
database=# begin;
BEGIN
database=# insert into table (id, something) values ('1','whatever');
INSERT 0 1
database=# insert into table (id, something) values ('1','whatever');
ERROR: duplicate key value violates unique constraint "table_id_key"
Key (id)=(1) already exists.
database=# insert into table (id, something) values ('2','whatever');
ERROR: current transaction is aborted, commands ignored until end of transaction block
database=# rollback;
database=# select * from table;
 id | something
----+-----------
(0 rows)
You can change that by setting ON_ERROR_ROLLBACK to "on" or "interactive"; after that you can issue multiple inserts, ignoring errors, commit, and end up with only the successfully inserted rows in the table at the end of the transaction.
database=# \set ON_ERROR_ROLLBACK interactive
In Oracle, this is the default transaction management behaviour, which surprises me. Isn't this completely counterintuitive and dangerous?
When I start a transaction I want to be sure that all the statements were successful. What if my multiple inserts make up some kind of object or data structure? I would end up completely unaware of the state of the data in my database and would have to check it after the commit.
If one of the inserts fails, I want to be sure that the other inserts are rolled back or not even executed after the first error, which is exactly how it's done in PostgreSQL.
Why does Oracle have such way of transaction management as a default, and why is it considered good practice?
For example, from some random guy here in the comments:
This is a very neat feature.
I don't understand this, though: "Normally, any error you make will throw an exception and cause your current transaction to be marked as aborted. This is sane and expected behavior..."
No, it's really not. Oracle doesn't work this way, nor does MySQL. I have no experience with MSSQL or DB2 but I'll bet a dollar each they don't work this way either. There's no intuitive reason why a syntax error, or any other error for that matter, should abort a transaction. I can only assume there's either some limitation deep in the Postgres guts that requires this behavior, or that it conforms to some obscure part of the SQL standard that everyone else sensibly ignores. There's certainly no API / UX reason why it should work this way.
We really shouldn't be too proud of any workarounds we've developed for this pathological behavior. It's like IT Stockholm Syndrome.
Doesn't this even violate the definition of a transaction?
Transactions provide an "all-or-nothing" proposition, stating that each work-unit performed in a database must either complete in its entirety or have no effect whatsoever.
I agree with you. I think it's a mistake not to abort the whole tx. But people are used to that, so they think it's reasonable and correct. Like people who use MySQL think that the DBMS should accept 0000-00-00 as a date, or people using Oracle expect that '' IS NULL.
The idea that there's a clear distinction between a syntax error and something else is flawed.
If I write
BEGIN;
CREATE TABLE new_customers (...);
INSET INTO new_customers (...)
SELECT ... FROM customers;
DROP TABLE customers;
COMMIT;
I don't care that it's a typo resulting in a syntax error that caused me to lose my data. I care that the transaction didn't successfully execute all its statements but still committed.
It'd be technically feasible to allow soft rollback in PostgreSQL before any rows are actually written by a statement - probably before we even enter the executor. So failures in the parse and parameter binding phases could allow the tx not to be aborted. We have a statement memory context we could use to clean up.
However, once the statement starts changing rows, it's doing so on disk with the same transaction ID as the prior statements in the tx. So you can't roll it back without rolling back the whole tx. To allow statement rollback, Pg needs to assign a new subtransaction ID. That costs resources. You can do it explicitly with SAVEPOINTs when you want to, and internally that's what psql is doing.
In theory we could allow the server to do this implicitly for each statement to implement statement rollback, just at a performance cost. But I doubt any patch implementing this would get committed, at least not without a LOT of argument, because most of the PostgreSQL team are (IMO reasonably) not fond of "whoops, that broke but we'll continue anyway" transaction semantics.
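For reference, a sketch of the explicit SAVEPOINT approach mentioned above, using the placeholder table from the psql example earlier; rolling back to the savepoint un-aborts the transaction, so the remaining statements can still run and commit:
begin;
insert into table (id, something) values ('1','whatever');
savepoint before_second_insert;
insert into table (id, something) values ('1','whatever');  -- fails: duplicate key
rollback to savepoint before_second_insert;                 -- undoes only the failed statement
insert into table (id, something) values ('2','whatever');  -- now succeeds
commit;                                                      -- rows 1 and 2 are kept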

Detect if data in an Oracle table has changed

I'm looking for the best-performing solution to detect whether data in an Oracle table has changed. This will be used to kick off a calculation that uses lots of data from the same tables. Polling the data itself to track changes would be too expensive. The changes happen rarely.
I have analyzed the following solutions to the problem:
ORA_ROWSCN: too slow, it will do a full table scan.
Oracle Audit: not possible to set up in my environment.
DBMS_ALERT: writers are not able to signal.
Then I came up with the following simple idea: add a trigger to the tables that increments a sequence on insert, update or delete. I know the sequence increment will persist even if the transaction is rolled back, but I can afford some false positives. My calculation service then polls the sequence's current value to detect possible changes (a very cheap query; see the sketch after the EDIT below).
CREATE OR REPLACE TRIGGER trg_change_tracker
before insert or update or delete on mytable
declare
  dummy number;
begin
  select seq_event_seqno.nextval into dummy from dual;
end;
/
How does this sound? Any pitfalls?
EDIT: Yes, there is a major pitfall: the writer may hold back its commit, so the reader can see the new sequence value and query for changes before the writer has committed, and thus miss those changes.
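For what it's worth, a sketch of what that cheap polling query could look like from another session (CURRVAL cannot be used, because it is session-specific). Note that USER_SEQUENCES.LAST_NUMBER jumps ahead by the sequence cache size, so it adds a few more harmless false positives:
select last_number
  from user_sequences
 where sequence_name = 'SEQ_EVENT_SEQNO';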
I can suggest an addition to your trigger solution:
When your polling sees a change in the sequence, check for open transactions on the table of interest; if any exist, skip the current recalculation.
UPD:
Also, you can save the minimum SCN of the open transactions, so that in case of frequent table changes you will not freeze.
UPD2:
This is a heuristic improvement, not a full solution to the problem. If you skip the recalculation every time you see an open transaction, you can stall for a long time when there is frequent (and possibly long-running) DML on the table.
What I mean is: when the sequence has changed and the polling sees open transactions on your table, store MIN(start_scn) from v$transaction. The next time the polling again sees open transactions, compare the current MIN(start_scn) with the stored one; if the current value is greater, there is a good chance it is time to recalculate.
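A rough sketch of that check, assuming the polling job can read v$transaction, v$locked_object and dba_objects, and that the table of interest is called MYTABLE. A count of 0 means it should be safe to recalculate; otherwise store the returned MIN(start_scn) and compare it on the next poll as described above:
select min(t.start_scn) as oldest_open_scn,
       count(*)         as open_transactions
  from v$transaction   t
  join v$locked_object lo on lo.xidusn  = t.xidusn
                         and lo.xidslot = t.xidslot
                         and lo.xidsqn  = t.xidsqn
  join dba_objects     o  on o.object_id = lo.object_id
 where o.object_name = 'MYTABLE';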
You can use (for a particular XTABLE):
SELECT * FROM dba_tab_modifications WHERE TABLE_NAME = 'XTABLE';
It will allow you (among other data) to see:
INSERTS  UPDATES  DELETES
 43,708    1,845        0
There are issues (bugs in some Oracle versions), but if you test thoroughly in your environment you may find this useful.
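One more caveat if you test this: the *_TAB_MODIFICATIONS views are populated from in-memory DML monitoring counters, so a polling job may want to flush them first to see recent changes. A sketch, using the XTABLE name from above:
exec dbms_stats.flush_database_monitoring_info
select inserts, updates, deletes, timestamp
  from dba_tab_modifications
 where table_name = 'XTABLE';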
As this question is pretty old, it would be nice to know how you ended up implementing the change detection mechanism.

Does the timing of COMMIT and ROLLBACK affect performance?

Suppose I have a set of IDs. For each ID, I will insert many records into many different tables based on that ID. Between inserting into the different tables, various business checks are called. If any check fails, all the records inserted for this ID are rolled back. This bulk insert is done using PL/SQL. Does the timing of COMMIT and ROLLBACK affect performance, and how? For example, should I COMMIT after finishing the processing for one ID, or COMMIT after finishing all IDs?
This is not so much of a performance decision but a process design decision. Do you want the other IDs to stay in the database when you have to roll back a faulty ID?
For obvious reasons, rollback takes longer when more rows must be rolled back. Rollback usually takes longer (sometimes much longer!) than the operations that have to be rolled back. Commit is always fast in Oracle, so it probably doesn't matter how often you commit in that regard.
Your problem description indicates you have a large set of smaller logical transactions (each new ID is a transaction). You should commit each logical transaction. The two reasons to wait to commit the entire set of transactions are:
If the entire set of transactions is in fact a transaction itself - all inserts must succeed for any rows to be committed. In that context, your smaller "transactions" aren't truly transactions.
You don't have restart capability in your bulk load process, which in effect makes this a special case of item 1. If your bulk load process aborts, you need a way to skip the IDs that have already been applied successfully (see the sketch below).
Tom Kyte's advice is to commit each logical unit of work - the transaction.
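On the restart-capability point, a minimal sketch (staging_ids, target_table and source_id are hypothetical names) of a driver query that simply skips IDs that were already applied in a previous run:
select s.id
  from staging_ids s
 where not exists (select 1
                     from target_table t
                    where t.source_id = s.id);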
Don't let the transaction run longer than necessary; make it as short as you can. According to your description, locks are being taken, and those locks may cause performance issues, so do it ID by ID.
There are two "forces" at work....
locking
during your open transaction, oracle puts locks on the changed rows.
whenever another transaction needs to update any of the locked rows,
it has to wait.
in the worst case, you can even build a deadlock.
synchronous write
every commit performs a synchronous write.
(there are ways to disable that, but it is usually the thing everybody wants: integrity).
that synchronous write can take (much) longer then the a regular write (that can be buffered).
Not to forget that there is usually an additional network round trip involved with an commit.
so, the one force says "commit as soon as possible (considering your integrity requirements)" the other says "commit as as less often as possible".
There are some other issues to consider as well, e.g. the maximum transaction size. every uncommited transaction needs some temporary space. the bigger the transaction gets, the more you need. You can also run into ORA-01555 "snapshot too old".
If there is any advice to give, then it is to implement a configurable "commit frequency" so that you can easily change it as needed.
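A minimal sketch of such a configurable commit frequency (the cursor, the table name staging_ids and the value 1000 are placeholders, not recommendations):
declare
  commit_every constant pls_integer := 1000;  -- make this configurable
  processed    pls_integer := 0;
begin
  for r in (select id from staging_ids) loop
    -- ... insert and validate the records for r.id here ...
    processed := processed + 1;
    if mod(processed, commit_every) = 0 then
      commit;
    end if;
  end loop;
  commit;  -- commit the final partial batch
end;
/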
One option, if you need to control the individual sets but retain the ability to commit or roll back the entire transaction, is to use savepoints. You can set a savepoint at the beginning of the outermost loop, then roll back to it if an error occurs. You might end up with something like this:
begin
  --Initial batch logging
  for r_record in cur_cursor loop
    savepoint s_cursor;
    begin
      --Process rows
      null;
    exception
      when others then
        rollback to s_cursor;
    end;
  end loop;
  --Final batch logging
exception
  when others then
    rollback;
    raise;
end;

Resources