Handling data in global temporary tables in case of transaction rollback - oracle

I have a job which runs with multiple instances, i.e. the code base for all instances is the same, but each instance works on a set of data allocated to it, so as to achieve parallelism and better throughput for the application.
These jobs use a global temporary table for working through the data, as multiple complex operations are performed before the final output is computed.
In case of failure, the transaction is rolled back (as it should be), but with this I'm also losing the data in the GTT.
Is there a way that the records in the GTT can be copied over to another permanent table while rolling back the transaction?
I know it sounds weird, but this is a practical problem I'm facing.
I need to somehow preserve the data in the session table when any SQL fails, while still rolling back the transaction because one of the SQL statements has failed.
Thanks.

Hm, maybe something like this:
create a permanent table which will hold the GTT data in case of failure
create an autonomous transaction procedure which does an insert into permanent_table select * from gtt and then commits
in the exception handler section, call that procedure and then roll back
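A minimal sketch of that idea (the backup table gtt_backup and the GTT name gtt are assumptions; note that, as discussed further down, an autonomous transaction is a separate transaction, so it only sees GTT rows that are visible outside the calling transaction):

-- one-off setup: permanent table with the same shape as the GTT
-- CREATE TABLE gtt_backup AS SELECT * FROM gtt WHERE 1 = 0;

CREATE OR REPLACE PROCEDURE save_gtt_on_failure IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  -- copy whatever the autonomous transaction can see from the GTT
  INSERT INTO gtt_backup
  SELECT * FROM gtt;
  COMMIT;  -- commits only this autonomous transaction
END save_gtt_on_failure;
/

-- in the job's exception handler (illustrative):
-- EXCEPTION
--   WHEN OTHERS THEN
--     save_gtt_on_failure;
--     ROLLBACK;
--     RAISE;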

The only way is to write out the required data before your rollback.
You can use UTL_FILE to store the data in a file. Later, you can use Oracle's external table feature to retrieve the data back into a table.
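A rough sketch of that approach (the directory object DATA_DUMP_DIR, the file name, and the GTT columns are assumptions):

DECLARE
  l_file UTL_FILE.FILE_TYPE;
BEGIN
  l_file := UTL_FILE.FOPEN('DATA_DUMP_DIR', 'gtt_failure.csv', 'w');
  FOR r IN (SELECT field1, field2 FROM gtt) LOOP
    UTL_FILE.PUT_LINE(l_file, r.field1 || ',' || r.field2);
  END LOOP;
  UTL_FILE.FCLOSE(l_file);
END;
/

The file written this way survives the rollback, and an external table pointed at it can expose the saved rows back to SQL.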
Cheers!!

Related

benefits of temporary table in oracle over an ordinary table

I came across creating temporary tables in Oracle, but could not understand the best use of them.
Can someone help me to understand the features and benefits of using a temporary table in Oracle (create temporary table temp_table) over an ordinary table (create table temp_table)?
From the concepts guide:
A temporary table definition persists in the same way as a permanent
table definition, but the data exists only for the duration of a
transaction or session. Temporary tables are useful in applications
where a result set must be held temporarily, perhaps because the
result is constructed by running multiple operations.
And:
Data in a temporary table is private to the session, which means that
each session can only see and modify its own data.
So one aspect is that the data is private to your session. Which is also true of uncommitted data in a permanent table, but with a temporary table the data can persist and yet stay private across a commit (based on the on commit clause on creation).
Another aspect is that they use temporary segments, which means you generate much less redo and undo overhead using a temporary table than you would if you put the same data temporarily into a permanent table, optionally updated it, and then removed it when you'd finished with it. You also avoid contention and locking issues if more than one session needs its own version of the temporary data.
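For illustration, the on commit clause mentioned above looks like this (table and column names are just examples):

-- rows survive a COMMIT but disappear at the end of the session
CREATE GLOBAL TEMPORARY TABLE temp_session_data (
  id    NUMBER,
  label VARCHAR2(100)
) ON COMMIT PRESERVE ROWS;

-- rows are removed at the end of each transaction
CREATE GLOBAL TEMPORARY TABLE temp_txn_data (
  id    NUMBER,
  label VARCHAR2(100)
) ON COMMIT DELETE ROWS;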
Given below are some points on why and when we should use a temporary table:
1) Temporary tables are created for storing data in a tabular form, for easy retrieval of it when needed, within that particular session.
2) They also add a degree of security by keeping the data available only to that particular session.
3) When the code gets long and a lot of cursors are opened, it is better to put the data in a temporary table so that it can be easily fetched when needed.

Can you use joins with direct path inserts?

I have tried to find examples but they are all simple with a single where clause. Here is the situation. I have a bunch of legacy data transferred from another database. I also have the "good" tables in that same database. I need to transfer (data-conversion) data from the legacy tables to the new tables. Because this is a different set of tables, the data conversion requires complex joins to put the old data into the new tables correctly.
So, old tables old data.
New tables must have the old data but it requires lots of joins to get that old data into the new tables correctly.
Can I use direct path with lots of joins like this? INSERT SELECT (lots of joins)
Does direct path apply to tables that are already on the same database (transfer between tables)? Is it only for loading tables from say a text file?
Thank you.
The query in your SELECT can be as complex as you'd like with a direct-path insert. The direct-path refers only to the destination table. It has nothing to do with the way that data is read or processed.
If you're doing a direct-path insert, you're asking Oracle to insert the new data above the high water mark of the table so you bypass the normal code that reuses space in existing blocks for new rows to be inserted. It also has to block other inserts since you can't have the high water mark of the table change during a direct-path insert. This probably isn't a big deal if you've got a downtime window in which to do the load but it would be quite problematic if you wanted the existing tables to be available for other applications during the load.
No, on the contrary, it means you need to do a backup after a NOLOGGING load, not that you can't backup the database.
Allow me to elaborate a bit. Normally, when you do DML in Oracle, the before images of the changes you are making get logged in UNDO, and all the changes (including the UNDO changes) are first written to REDO. This is how Oracle manages transactions, instance recovery, and database recovery. If a transaction is aborted or rolled back, Oracle uses the information in UNDO to undo the changes your transaction made. If the instance crashes, then on instance restart, Oracle will use the information in REDO and UNDO to recover up to the last committed transaction. First, Oracle will read the REDO and roll forward, then use UNDO to roll back all the transactions that were not committed at the time of the crash. In this way, Oracle is able to recover up to the last committed transaction.
Now, when you specify an APPEND hint on an insert statement, Oracle will execute the INSERT with direct load. This means that data is loaded into brand new, never before used blocks, from above the highwater mark. Because the blocks being loaded are brand new, there is no "before image", so Oracle can avoid writing UNDO, which improves performance. If the database is in NOARCHIVELOG mode, then Oracle will also not write REDO. On a database in ARCHIVELOG mode, Oracle will still write REDO, unless, before you do the insert /*+ append */, you set the table to NOLOGGING, (i.e. alter table tab_name nologging;). In that case, REDO logging is disabled for the table. However, this is where you could run into backup/recovery implications. If you do a NOLOGGING direct load, and then you suffer a media failure, and the datafile containing the segment with the nologging operation is restored from a backup taken before the nologging load, then the redo log will not contain the changes required to recover that segment. So, what happens? Well, when you do a NOLOGGING load, Oracle writes extent invalidation records to the redo log, instead of the actual changes. Then, if you use that redo in recovery, those data blocks will be marked logically corrupt. Any subsequent queries against that segment will get an ORA-26040 error.
So, how to avoid this? Well, you should always take a backup immediately following any NOLOGGING direct load. If you restore/recover from a backup taken after the nologging load, there is no problem, because the data will be in the datablocks in the file that was restored.
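A hedged sketch of that workflow on an ARCHIVELOG database (all object and column names are placeholders):

-- disable redo logging for the target table before the load
ALTER TABLE target_table NOLOGGING;

-- direct-path load above the high water mark; joins are fine in the SELECT
INSERT /*+ APPEND */ INTO target_table (id, amount)
SELECT s1.id, s2.amount
FROM   source1 s1
JOIN   source2 s2 ON s2.id = s1.id;

COMMIT;

-- restore normal logging and take a backup straight away,
-- so a later restore does not hit ORA-26040 on these blocks
ALTER TABLE target_table LOGGING;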
Hope that's clear,
-Mark
Yes, there should not be any arbitrary limits on query complexity.
If you do
insert /*+ APPEND */ into target_table select .... from source1, source2..., sourceN where
It should work fine. Consider though, that the performance of the load will be limited by the performance of that query, so, be sure it's well-tuned, if you're expecting good performance.
Finally, consider whether setting NOLOGGING on the target table would improve performance significantly. But, also consider the backup recovery implications, if you decide to implement NOLOGGING.
Hope that helps,
-Mark

Calling Oracle autonomous stored procedure from trigger

I have an Oracle trigger which is calling a stored procedure that has PRAGMA AUTONOMOUS_TRANSACTION defined. The values that are passed from the trigger have been committed already but it appears that the values are not available in the stored procedure? I'm not positive of this since the ability to debug/log/commit is difficult and the timing of the output is confusing me a bit. I'd like to know if it's expected that any passed values are simply available in the stored procedure regardless of the AUTONOMOUS_TRANSACTION?
Thanks
Values passed in to a stored procedure as parameters will always be available to the stored procedure. It doesn't matter whether the procedure is declared using an autonomous transaction.
Code running in an autonomous transaction cannot see changes made by the calling transaction. 9 times out of 10, when people are describing problems seeing the data they expect, this is the source of the problem.
If your stored procedure is doing anything other than writing something to a log table, I would be exceptionally cautious about using autonomous transactions. If you are using autonomous transactions for anything other than logging, you are almost certainly using them incorrectly. And you are probably introducing a whole host of bugs related to race conditions and transactional integrity.
"The trigger logic is conditionally
updating Table B which calls the
stored procedure to select from the
values on Table A so that Table B can
be updated with a calculated value. "
Perhaps Table B really ought to be a Materialized View derived from Table A? We can build a lot of complexity into the WHERE clauses of the queries which populate MViews. Find out more.
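Along the lines of that suggestion, a minimal sketch (all names and columns are hypothetical):

-- materialized view log so the MView can fast-refresh on commit
CREATE MATERIALIZED VIEW LOG ON table_a
  WITH SEQUENCE, ROWID (id, amount) INCLUDING NEW VALUES;

-- "Table B" maintained by Oracle as a materialized view over Table A
CREATE MATERIALIZED VIEW table_b_mv
  BUILD IMMEDIATE
  REFRESH FAST ON COMMIT
AS
SELECT a.id,
       SUM(a.amount)   AS total_amount,
       COUNT(a.amount) AS amount_cnt,
       COUNT(*)        AS cnt
FROM   table_a a
GROUP  BY a.id;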
If you have a row level trigger on table_x, then that trigger can be fired multiple times by the same statement as different rows are impacted by that statement.
The order in which those rows are impacted is indeterminate. As such, the state of table_x is indeterminate during the execution of a row level trigger. This is why the MUTATING TABLE exception is raised.
An autonomous transaction 'cheats' by looking at the committed state of the table (ie excluding all changes made by that statement, and other statements in the transaction).
If you want a stored procedure to look at the state of table_x in response to activity on that table, then it needs to be done after all the row changes have been made (ie in a statement level trigger, not a row level trigger).
The design pattern for this is often to set a flag (package level variable) in a row level trigger, check the flag in an AFTER statement level trigger, and if necessary action it and reset it.
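A bare-bones sketch of that pattern (package, trigger, and procedure names are made up):

CREATE OR REPLACE PACKAGE table_x_trg_pkg IS
  g_changed BOOLEAN := FALSE;  -- flag set by the row-level trigger
END table_x_trg_pkg;
/

CREATE OR REPLACE TRIGGER table_x_row_trg
AFTER INSERT OR UPDATE OR DELETE ON table_x
FOR EACH ROW
BEGIN
  table_x_trg_pkg.g_changed := TRUE;  -- only record that something changed
END;
/

CREATE OR REPLACE TRIGGER table_x_stmt_trg
AFTER INSERT OR UPDATE OR DELETE ON table_x
BEGIN
  IF table_x_trg_pkg.g_changed THEN
    recalc_table_x_summary;              -- hypothetical follow-up procedure; table_x is safe to query here
    table_x_trg_pkg.g_changed := FALSE;  -- reset the flag for the next statement
  END IF;
END;
/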

How do Oracle temporary tables exactly work in a stored procedure like this?

Suppose I'm using the following Oracle code in a stored procedure:
CREATE GLOBAL TEMPORARY TABLE temp_table (
field1 NUMBER,
field2 NUMBER
)
ON COMMIT DELETE ROWS
This particular stored procedure may be called concurrently by different users at any single moment. As I understand it, the data visible to the user in the temporary table will be private to him or her, and these rows are deleted on a COMMIT.
However, how do the following work with respect to this:
Is it safe to call the CREATE statement above every single time the stored procedure is called? Would this result in an error because there already "exists" a temporary table (possibly) created by a different user (/session)? Or would this be OK, since the server treats them privately anyway?
What exactly happens with the ON COMMIT DELETE ROWS? I assume that this only deletes the rows specific to the particular user session, leaving the data by other sessions unharmed, correct?
Any help would be appreciated. :)
Q1: Is it safe to call the CREATE statement above every single time the stored procedure is called?
The main point of a global temporary table (GTT) is that you create it once (not inside the procedure) and use it as a private table for each session. The CREATE will throw an error if the table already exists.
Q2: What exactly happens with the ON COMMIT DELETE ROWS?
Yes. The data gets deleted once you commit. This happens only for the session you are operating in.
See the documentation on creating GTTs and their use.
I'd just leave the table there. There's no sense in dropping and recreating it all the time; it will cause concurrency issues, as you say.
Yes.
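To make the intended usage concrete, a small sketch (the procedure body is illustrative):

-- run once as a deployment/DDL step, not inside the procedure
CREATE GLOBAL TEMPORARY TABLE temp_table (
  field1 NUMBER,
  field2 NUMBER
) ON COMMIT DELETE ROWS;

-- the procedure only inserts into and reads from the already-existing GTT;
-- each session sees just the rows it inserted itself
CREATE OR REPLACE PROCEDURE use_temp_table IS
BEGIN
  INSERT INTO temp_table (field1, field2) VALUES (1, 2);

  FOR r IN (SELECT field1, field2 FROM temp_table) LOOP
    DBMS_OUTPUT.PUT_LINE(r.field1 || ' ' || r.field2);
  END LOOP;

  COMMIT;  -- ON COMMIT DELETE ROWS: this session's rows are now gone
END use_temp_table;
/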

Global Temporary Table Concurrency

I have a global temp table which is set to delete on commit. How does it behave when there are concurrency issues? I mean, what happens if another session wants to use that global temporary table? The answer is probably not going to be "they share the same data".
Now, if my guess is correct :), is the table locked until the first connection commits, or does the dbms create a global temp table for each connection? ( something like an instance of the table? )
From the documentation:
The data in a temporary table is visible only to the session that inserts the data into the table.
Each session will have its logical independent copy of the temporary table.
Since you can not see other sessions' data and since Oracle deals with locks at the row level, you can not be blocked by other sessions' DML. Concurrent DML (Insert, Delete, Update) won't affect other sessions.
Only DDL will need a lock on the table (ie: ALTER TABLE...)
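A small illustration of that behaviour, assuming a GTT like the temp_table defined in the previous question (ON COMMIT DELETE ROWS):

-- Session 1
INSERT INTO temp_table (field1, field2) VALUES (1, 1);
SELECT COUNT(*) FROM temp_table;   -- 1: sees only its own row

-- Session 2, running at the same time, is not blocked
INSERT INTO temp_table (field1, field2) VALUES (2, 2);
SELECT COUNT(*) FROM temp_table;   -- 1: session 1's row is not visible

-- Session 1
COMMIT;
SELECT COUNT(*) FROM temp_table;   -- 0: ON COMMIT DELETE ROWS purged its rows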
