Rollback time for insert and delete comparison - Oracle

With respect to Oracle RDBMS, which rollback is faster?
Rollback1 : Insert 1000000 records and then rollback
or
Rollback2 : Delete 1000000 records and then rollback

You can find out the % of completion of a query using this (note: sys.dm_exec_requests is actually a SQL Server DMV, not an Oracle view):
SELECT session_id, percent_complete, estimated_completion_time
FROM sys.dm_exec_requests
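On the Oracle side, a rough equivalent is V$SESSION_LONGOPS, which also reports long-running rollbacks (OPNAME = 'Transaction Rollback'). A minimal sketch, assuming you can query the V$ views:

SELECT sid, opname, sofar, totalwork,
       ROUND(100 * sofar / totalwork, 1) AS pct_done,
       time_remaining
FROM   v$session_longops
WHERE  totalwork > 0
AND    sofar < totalwork;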

To answer this question we should know how Oracle handles insert and delete operations internally. When you insert something, the rows are first changed in memory; when you commit, Oracle makes sure the change (via the redo log) is written to disk.
For the delete operation I found this: http://www.dba-oracle.com/t_oracle_soft_logical_deletes.htm - so Oracle usually deletes logically (marking rows as deleted in the block) and only reclaims the space physically when it can.
Now we should talk about rollback. When you roll back an insert, Oracle just has to remove the newly inserted rows, which requires relatively little undo to apply. When you roll back a delete, Oracle has to go to the undo segments, read the before-image of every deleted row, and insert those rows back into the table.
So, if my reasoning is right, rolling back a delete should take more time than rolling back an insert.
Also, if you are deleting with a WHERE condition, the delete itself should take more time than the insert by itself.
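If you want to measure it yourself, here is a minimal benchmark sketch (the table big_t and its row layout are placeholders I made up, and SET TIMING ON is SQL*Plus/SQLcl syntax):

CREATE TABLE big_t (id NUMBER, pad VARCHAR2(100));
SET TIMING ON

-- Rollback1: insert 1,000,000 rows, then roll back
INSERT INTO big_t
  SELECT level, RPAD('x', 100, 'x') FROM dual CONNECT BY level <= 1000000;
ROLLBACK;

-- Rollback2: load and commit the rows, then delete them all and roll back
INSERT INTO big_t
  SELECT level, RPAD('x', 100, 'x') FROM dual CONNECT BY level <= 1000000;
COMMIT;
DELETE FROM big_t;
ROLLBACK;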
P.S. Thanks for the question by the way, it's interesting and made me do some research on Oracle internals.

Related

Oracle 11g Deleting large amount of data without generating archive logs

I need to delete a large amount of data from my database on a regular basis. The process generates a huge volume of archive logs. We had a database crash at one point because there was no storage space available on the archive destination. How can I avoid generating logs while I delete data?
The data to be deleted is already marked as inactive in the database. Application code ignores inactive data. I do not need the ability to rollback the operation.
I cannot partition the data in such a way that inactive data falls in one partition that can be dropped. I have to delete the data with delete statements.
I can ask DBAs to set certain configuration at table level/schema level/tablespace level/server level if needed.
I am using Oracle 11g.
What proportion of the data in the table would be deleted, and what volume? Are there any referential integrity constraints to manage, or is this table childless?
Depending on the answers, you might consider:
"CREATE TABLE keep_data UNRECOVERABLE AS SELECT * FROM ... WHERE
[keep condition]"
Then drop the original table
Then rename keep_table to original table
Rebuild the indexes (again with unrecoverable to prevent redo),constraints etc.
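Spelled out, a sketch of that sequence (my_table, the keep condition, and the index name are hypothetical placeholders; NOLOGGING is the current spelling of UNRECOVERABLE):

CREATE TABLE keep_data NOLOGGING AS
  SELECT * FROM my_table WHERE status = 'ACTIVE';   -- your [keep condition]

DROP TABLE my_table PURGE;

ALTER TABLE keep_data RENAME TO my_table;

CREATE INDEX my_table_ix1 ON my_table (some_col) NOLOGGING;
-- re-create constraints, grants, triggers and gather statistics as needed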
The problem with this approach is that it is a multi-step DDL process, which you will have a job to make fault tolerant and reversible.
A safer option might be to use data-pump to:
Data-pump expdp to extract the "Keep" data
TRUNCATE the table
Data-pump impdp import of data from step 1, with direct-path
At this point I suggest you read the Oracle manual on Data Pump, particularly the section on Direct Path Loads to be sure this will work for you.
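A rough sketch of that sequence, assuming the DATA_PUMP_DIR directory object exists and using placeholder credentials and object names (Data Pump will use a direct-path load automatically when it can):

keep.par:
  directory=DATA_PUMP_DIR
  dumpfile=keep.dmp
  tables=MY_TABLE
  query=MY_TABLE:"WHERE status = 'ACTIVE'"

$ expdp scott/tiger parfile=keep.par
SQL> TRUNCATE TABLE my_table;
$ impdp scott/tiger directory=DATA_PUMP_DIR dumpfile=keep.dmp tables=MY_TABLE content=DATA_ONLY table_exists_action=APPEND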
My preferred option would be partitioning.
Of course, the best way would be TenG's solution (CTAS, drop and rename table), but it seems that's impossible for you.
Your only problem is the amount of archive logs and the database crash problem. In this case, maybe you could split your delete into batches (for example, 10,000 rows at a time).
Something like:
declare
  e number;
  f number;
begin
  select count(*) into e from myTable where [delete condition];
  f := trunc(e / 10000) + 1;
  for i in 1 .. f
  loop
    delete from myTable where [delete condition] and rownum <= 10000;
    commit;
    dbms_lock.sleep(600); -- pause so old archive logs can be purged if possible
  end loop;
end;
/
After this operation, you should reorganize your table, which will surely be fragmented.
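One possible reorganization sketch (the index name is a placeholder; SHRINK SPACE assumes the tablespace uses ASSM):

ALTER TABLE myTable ENABLE ROW MOVEMENT;
ALTER TABLE myTable SHRINK SPACE;       -- or: ALTER TABLE myTable MOVE;
ALTER INDEX myTable_ix1 REBUILD;        -- MOVE leaves indexes unusable, so rebuild them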
Alter the table to set NOLOGGING, delete the rows, then turn logging back on.

Is there something similar to commit for DDL?

When I want to update, delete, insert I need to commit. That's helpful most of the time, I might update wrong information or delete something by mistake and I can undo that.
When dropping a column, I don't need a commit. Is there something like rollback (not flashback) which enables me to undo my changes quickly? Dropping a column, even after a long analysis, can probably cause damage to the table (PK, FK).
Why did Oracle provide commit for DML but not for DDL?
Why did Oracle provide commit for DML but not for DDL?
When you issue a DDL statement, you basically start a transaction against the Oracle data dictionary, and this transaction, to eliminate any overhead, has to be as short as possible and take effect as soon as possible. Because of this, a DDL statement issues an implicit commit before it executes and another right after it completes (or a rollback, if something went wrong). This behavior means Oracle's DDL is not transactional, and you cannot commit or roll it back explicitly. It's just the way it is.
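A quick sketch of the implicit commit in action (table t and its column x are hypothetical):

INSERT INTO t (x) VALUES (1);    -- not committed yet
CREATE INDEX t_ix ON t (x);      -- DDL: implicitly commits the pending insert
ROLLBACK;                        -- too late, the inserted row stays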
Having said that, if you dropped a table, then starting from 10g you can use the flashback table technology to get it back in one statement, because after you issue a DROP TABLE statement Oracle won't actually drop it; it puts it in the recycle bin instead:
flashback table <<table_name>> to before drop
Unfortunately you cannot use flashback table to restore a dropped column of a table, simply because a dropped column won't be placed in the recycle bin. You will have to perform a point-in-time recovery of your full database or of a single tablespace, or, if there is a logical backup (a *.dmp file), restore the table from it by using the imp or impdp utility.
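For the logical-backup route, a sketch of pulling one table back out of an existing Data Pump export (file, directory and table names are hypothetical; REMAP_TABLE avoids overwriting the live table):

impdp system/manager directory=DATA_PUMP_DIR dumpfile=nightly.dmp tables=HR.EMPLOYEES remap_table=HR.EMPLOYEES:EMPLOYEES_RESTORED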

Keeping tables consistent during trigger execution?

I have a trigger that checks another couple of tables before allowing a row to be inserted. However, between the time I check the other tables and the time I insert the row, the other tables may get updated.
How do I ensure the tables I'm checking remain in a consistent state until after the new row is inserted? I was thinking of taking out locks, but everything I've read boils down to: if you are not leaving locking to Oracle, you're almost certainly doing it wrong.
Oracle is already doing this for you: when you perform a select, it sees the tables in a consistent state as of a single point in time (by default the start of the statement; the start of the transaction if you use SERIALIZABLE). This won't stop the data from being changed under you, though; your transaction just won't see it being changed. If you want to stop that data from being changed, then you can use SELECT ... FOR UPDATE as Justin Cave suggests.
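A minimal sketch of the SELECT ... FOR UPDATE approach inside a trigger (orders, customers and their columns are hypothetical; the lock on the parent row is held until your transaction commits or rolls back):

CREATE OR REPLACE TRIGGER orders_check_trg
BEFORE INSERT ON orders
FOR EACH ROW
DECLARE
  v_status customers.status%TYPE;
BEGIN
  -- lock the parent row so it cannot change until this transaction ends
  SELECT status
  INTO   v_status
  FROM   customers
  WHERE  customer_id = :NEW.customer_id
  FOR UPDATE;

  IF v_status <> 'ACTIVE' THEN
    RAISE_APPLICATION_ERROR(-20001, 'Customer ' || :NEW.customer_id || ' is not active');
  END IF;
END;
/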
I would seriously question what you are doing though, triggers, except in the most trivial cases, almost always lead to unexpected side effects.

Can you use joins with direct path inserts?

I have tried to find examples, but they are all simple with a single where clause. Here is the situation. I have a bunch of legacy data transferred from another database. I also have the "good" tables in that same database. I need to transfer (data-conversion) data from the legacy tables to the new tables. Because this is a different set of tables, the data conversion requires complex joins to put the old data into the new tables correctly.
So, the old tables hold the old data.
The new tables must have the old data, but it requires lots of joins to get that old data into the new tables correctly.
Can I use direct path with lots of joins like this? INSERT SELECT (lots of joins)
Does direct path apply to tables that are already in the same database (transfer between tables)? Or is it only for loading tables from, say, a text file?
Thank you.
The query in your SELECT can be as complex as you'd like with a direct-path insert. The direct-path refers only to the destination table. It has nothing to do with the way that data is read or processed.
If you're doing a direct-path insert, you're asking Oracle to insert the new data above the high water mark of the table so you bypass the normal code that reuses space in existing blocks for new rows to be inserted. It also has to block other inserts since you can't have the high water mark of the table change during a direct-path insert. This probably isn't a big deal if you've got a downtime window in which to do the load but it would be quite problematic if you wanted the existing tables to be available for other applications during the load.
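An easy way to confirm you actually got a direct-path insert (the table names here are placeholders): until you commit, the session that performed the APPEND insert cannot read the table again.

INSERT /*+ APPEND */ INTO target_table
  SELECT a.id, b.val FROM legacy_a a JOIN legacy_b b ON b.id = a.id;

SELECT COUNT(*) FROM target_table;   -- raises ORA-12838 if the insert really was direct-path
COMMIT;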
No, on the contrary, it means you need to do a backup after a NOLOGGING load, not that you can't back up the database.
Allow me to elaborate a bit. Normally, when you do DML in Oracle, the before images of the changes you are making get logged in UNDO, and all the changes (including the UNDO changes) are first written to REDO. This is how Oracle manages transactions, instance recovery, and database recovery. If a transaction is aborted or rolled back, Oracle uses the information in UNDO to undo the changes your transaction made. If the instance crashes, then on instance restart, Oracle will use the information in REDO and UNDO to recover up to the last committed transaction. First, Oracle will read the REDO and roll forward, then use UNDO to roll back all the transactions that were not committed at the time of the crash. In this way, Oracle is able to recover up to the last committed transaction.
Now, when you specify an APPEND hint on an insert statement, Oracle will execute the INSERT with direct load. This means that data is loaded into brand new, never before used blocks, from above the highwater mark. Because the blocks being loaded are brand new, there is no "before image", so Oracle can avoid writing UNDO, which improves performance. If the database is in NOARCHIVELOG mode, then Oracle will also not write REDO. On a database in ARCHIVELOG mode, Oracle will still write REDO, unless, before you do the insert /*+ append */, you set the table to NOLOGGING (i.e. alter table tab_name nologging;). In that case, REDO logging is disabled for the table. However, this is where you could run into backup/recovery implications. If you do a NOLOGGING direct load, and then you suffer a media failure, and the datafile containing the segment with the nologging operation is restored from a backup taken before the nologging load, then the redo log will not contain the changes required to recover that segment. So, what happens? Well, when you do a NOLOGGING load, Oracle writes extent invalidation records to the redo log, instead of the actual changes. Then, if you use that redo in recovery, those data blocks will be marked logically corrupt. Any subsequent queries against that segment will get an ORA-26040 error.
So, how to avoid this? Well, you should always take a backup immediately following any NOLOGGING direct load. If you restore/recover from a backup taken after the nologging load, there is no problem, because the data will be in the datablocks in the file that was restored.
Hope that's clear,
-Mark
Yes, there should not be any arbitrary limits on query complexity.
If you do
insert /*+ APPEND */ into target_table select .... from source1, source2..., sourceN where
It should work fine. Consider, though, that the performance of the load will be limited by the performance of that query, so be sure it's well tuned if you're expecting good performance.
Finally, consider whether setting NOLOGGING on the target table would improve performance significantly. But also consider the backup/recovery implications if you decide to implement NOLOGGING.
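Putting the pieces together, a sketch of the whole load (table and column names are placeholders; the closing backup step is whatever your normal RMAN routine is):

ALTER TABLE target_table NOLOGGING;

INSERT /*+ APPEND */ INTO target_table (id, name, amount)
  SELECT l.id, l.name, d.amount
  FROM   legacy_master l
  JOIN   legacy_detail d ON d.master_id = l.id;

COMMIT;

ALTER TABLE target_table LOGGING;
-- take a fresh backup of the affected datafiles/tablespace right after the load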
Hope that helps,
-Mark

Do we need to truncate a large table before dropping?

It is said that we should always truncate a large table before dropping it, as it improves performance. Is this true?
IMO in general if you simply want to drop a table then DROP is appropriate. It will release space the same way as TRUNCATE would and it will have the advantage of being atomic (no query will have the opportunity to see the table "empty").
From 10g onwards, however, a dropped table won't be deleted immediately: if there is sufficient space it will be put in the recycle bin. If you truncate a table first, no data will remain in the recycle bin. This may be why you have been told to truncate first (?).
In any case, if you want to bypass the recycle bin you could issue DROP TABLE your_table PURGE and this statement will be atomic.
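To illustrate the difference (table names are placeholders):

DROP TABLE big_table;                 -- 10g+: the table is renamed into the recycle bin
SELECT original_name, droptime
FROM   user_recyclebin;               -- it shows up here and can still be flashed back

DROP TABLE other_table PURGE;         -- bypasses the recycle bin; space is released immediately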
It entirely depends on whether you want to be able to roll back if something goes wrong.
Deleting data records the deletion of every row in the database's transaction logs (undo/redo) until you commit the change.
Truncation removes all the data from the table without logging the individual rows, so there can be a significant performance improvement in doing this. Just be sure you know what you are doing, as there's no way back.
It may be a good idea in order to reset the high water mark.
