I have a master stored procedure which calls multiple stored procedures to write data to multiple tables, around 9-10 in one go. Once all the inserts are done, there is a single commit for all of them.
I want to manage concurrency by locking tables in the individual sub-procedures, which do not contain any commit, so
LOCK TABLE table_name IN lock_mode MODE;
will work, but it will hold the table until the rest of the data has been inserted into the tables populated after this one and the final commit or rollback is issued, which is not a good idea. Also, I don't have access to dbms_lock.
Is locking all the tables in my master stored procedure, or locking the tables in the respective sub-procedures, the only option?
My master stored procedure looks like this:
PROCEDURE POPULATE_ALL(P_ASOFDATE DATE, P_ENTITY VARCHAR2) IS
  V_ERROR VARCHAR2(4000); -- local variable to capture SQLERRM
BEGIN
  POPULATE_ABC_BOOK(P_ASOFDATE);
  POPULATE_XYZ(P_ASOFDATE, P_ENTITY);
  POPULATE_DEF(P_ASOFDATE, P_ENTITY);
  POPULATE_AAA(P_ASOFDATE, P_ENTITY);
  COMMIT;
EXCEPTION
  WHEN OTHERS THEN
    ROLLBACK;
    V_ERROR := SQLERRM;
    RAISE_APPLICATION_ERROR(-20001,
      '*** Unexpected Error in POPULATE_ALL --> ' || V_ERROR);
END POPULATE_ALL;
where POPULATE_XYZ populates the XYZ table, and so on.
It looks like you don't need locking at all. Just do your inserts without any explicit locking.
There is no way to unlock a table in the middle of a transaction. Such an operation makes no sense in Oracle; it would only be useful if the database supported dirty reads. A table lock is released only when one of the following is executed:
COMMIT,
ROLLBACK, or
ROLLBACK TO a savepoint established before the lock was taken.
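For illustration, a sketch against a hypothetical table abc_book, showing the savepoint case:
SAVEPOINT before_lock;
LOCK TABLE abc_book IN EXCLUSIVE MODE;
-- ... inserts against abc_book ...
ROLLBACK TO before_lock; -- releases the table lock, because the savepoint predates it
-- a COMMIT or a full ROLLBACK would release it as well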
Is there a way to find out the IP address of whoever performed DML operations on a certain table in Oracle?
If the IP is not available, then how do I find the SID?
Thanks.
I am not sure exactly what you need, however.
If you want to know your own SID:
SELECT sys_context('USERENV', 'SID') FROM DUAL;
However, if you want to know who performed DML operations on tables, then look at Streams, which replicates DML:
A capture process or an application creates one or more logical change
records (LCRs) and enqueues them into a queue. An LCR is a message
with a specific format that describes a database change. A capture
process reformats changes captured from the redo log into LCRs, and
applications can construct LCRs. If the change was a data manipulation
language (DML) operation, then each LCR encapsulates a row change
resulting from the DML operation to a shared table at the source
database. If the change was a data definition language (DDL)
operation, then an LCR encapsulates the DDL change that was made to a
shared database object at a source database.
You can create a trigger on your table like this:
CREATE TABLE AUDIT_TABLE (USER_NAME VARCHAR2(30), IP_ADDRESS VARCHAR2(15), SID NUMBER);

CREATE OR REPLACE TRIGGER AUDIT_DML_CERTAIN_TABLE
BEFORE INSERT OR DELETE OR UPDATE ON CERTAIN_TABLE
FOR EACH ROW
BEGIN
  INSERT INTO AUDIT_TABLE (USER_NAME, IP_ADDRESS, SID)
  VALUES (USER,
          SYS_CONTEXT('USERENV', 'IP_ADDRESS'),
          SYS_CONTEXT('USERENV', 'SID')); -- 'SESSIONID' would give the audit session id, not the SID
END;
/
But be aware: it is quite easy to bypass or counterfeit this kind of logging if someone has bad intentions. The more secure way is to enable the audit trail.
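That might look like this (certain_table stands in for your table; changing audit_trail requires a restart):
-- enable the standard audit trail (takes effect after a restart)
ALTER SYSTEM SET audit_trail = db SCOPE = SPFILE;

-- record every INSERT, UPDATE and DELETE against the table
AUDIT INSERT, UPDATE, DELETE ON certain_table BY ACCESS;

-- review the captured activity
SELECT username, userhost, action_name, timestamp
  FROM dba_audit_trail
 WHERE obj_name = 'CERTAIN_TABLE';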
I read that savepoints in Oracle global temporary tables delete all the data, but when I tested on Oracle 11g they worked like heap tables. Can anybody explain?
insert into table_1 values('one');
insert into table_1 values('two');
savepoint f1;
insert into table_1 values('three');
insert into table_1 values('four');
rollback to f1;
-- the table now holds 2 records, just like a heap table, but I read that
-- savepoints on GTTs truncate all the data
Where did you read this? I suspect not in the Oracle SQL Reference. So the explanation is simple: the author of that assertion hadn't tested the behaviour of global temporary tables. Either that or you were reading a description of some other SQL implementation, such as DerbyDB.
For the sake of completeness, let's rule out the role of transaction or session scope. Here are two global temporary tables:
create global temporary table gtt1
( col1 varchar2(30) )
ON COMMIT PRESERVE ROWS
/
create global temporary table gtt2
( col1 varchar2(30) )
ON COMMIT DELETE ROWS
/
Let's run your experiment for the one with session scope:
SQL> insert into gtt1 values('one');
1 row created.
SQL> insert into gtt1 values('two');
1 row created.
SQL> savepoint f1;
Savepoint created.
SQL> insert into gtt1 values('three');
1 row created.
SQL> insert into gtt1 values('four');
1 row created.
SQL> rollback to f1;
Rollback complete.
SQL> select * from gtt1;
COL1
------------------------------
one
two
SQL>
Same result for the table with transaction scope:
SQL> insert into gtt2 values('five');
1 row created.
SQL> insert into gtt2 values('six');
1 row created.
SQL> savepoint f2;
Savepoint created.
SQL> insert into gtt2 values('seven');
1 row created.
SQL> insert into gtt2 values('eight');
1 row created.
SQL> rollback to f2;
Rollback complete.
SQL> select * from gtt2;
COL1
------------------------------
five
six
SQL>
Actually this is not surprising. The official Oracle documentation states:
"The temporary table definition persists in the same way as the definitions of regular tables"
Basically they are heap tables. The differences are:
the scope (visibility) of the data
the tablespace used to persist data (global temporary tables write to a temporary tablespace).
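The scope difference is easy to demonstrate: rows inserted into gtt1 by one session are invisible to every other session, even after a commit.
-- session 1
SQL> insert into gtt1 values('one');
1 row created.
SQL> commit;
Commit complete.

-- session 2
SQL> select * from gtt1;
no rows selected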
I think you misunderstand - if you rollback to a savepoint then Oracle should undo all the work done after the savepoint (while still keeping any uncommitted work that was done prior to the savepoint).
For a temporary table, Oracle lazily allocates storage (a temporary segment for your session) when you put data in; when the data is finished with (either at the end of the session, or at the end of the transaction, depending on the type), it can simply deallocate the storage rather than deleting the rows individually, rather like what happens when you TRUNCATE a normal table.
I was interested to find out what happened if you had a savepoint before any data was put in, and rolled back to that savepoint - would Oracle deallocate the storage or would it keep the storage and delete the rows from within it?
It turns out the former - it behaves like a truncate.
SAVEPOINT f0;
SELECT * FROM v$tempseg_usage; -- Should show nothing for your session
insert into table_1 values('one');
insert into table_1 values('two');
SELECT * FROM v$tempseg_usage; -- Should show a DATA row for your session
savepoint f1;
insert into table_1 values('three');
insert into table_1 values('four');
rollback to f1; -- Undo three and four but preserve one and two
SELECT * FROM v$tempseg_usage; -- Still shows a DATA row for your session
rollback to f0; -- Undo all the inserts
SELECT * FROM v$tempseg_usage; -- row for your session has gone
The reason this matters is that when you do a normal delete - rather than a truncate - any full scan of the table still has to sift through all the data blocks to see if they contain any data. Queries against an "empty" table can therefore incur a lot of I/O if the table held a lot of data at some time in the past!
I am trying to speed up some code that is doing exactly that - it is shoving some stuff into a temporary table as a scratchpad, partly so it can join to a permanent table, and returning a result to its caller. The temporary table is only for the benefit of this routine, so it's safe to clear it down at the end of the routine, but it might be called many times within a parent transaction, so I can't truncate (TRUNCATE is DDL and so commits the transaction), but I can't not clear it down either, or invocations within the same transaction will pick up one another's rows. Clearing down with a DELETE is causing quite a bit of overhead, especially as there is no index on the table, so selects against it always full scan.
The option I am exploring is to have a SAVEPOINT at the start of the routine, do my temporary work, and then roll back to the savepoint just before it returns the result. Another option might be to put the routine inside an autonomous transaction, but it would mean porting C code to a PL/SQL stored procedure, and wouldn't work anyway if the temporary table needs to be joined to uncommitted data inserted by the caller.
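A minimal sketch of the savepoint approach, assuming a hypothetical scratch table scratch_gtt and source table source_table:
PROCEDURE scratchpad_work(p_result OUT NUMBER) IS
BEGIN
  SAVEPOINT before_scratch;
  -- populate the session-private scratchpad
  INSERT INTO scratch_gtt (id, val)
  SELECT id, val FROM source_table;
  -- compute the result while the scratch rows exist
  SELECT SUM(s.val) INTO p_result FROM scratch_gtt s;
  -- clear the scratchpad cheaply; work done before the savepoint is untouched
  ROLLBACK TO before_scratch;
END scratchpad_work;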
Note that I did my research in 12c - there have been some improvements to temporary tables in this release (see https://oracle-base.com/articles/misc/temporary-tables) but I don't think that affects the behaviour with respect to savepoints.
I dropped a table and tried to roll back, but to no avail. Will it ever work like this, or am I going about it wrong?
From most of the comments it is clear to me that DDL statements cannot be undone by ROLLBACK, only by FLASHBACK.
I also tried undoing
DELETE FROM STUDENT;
but it still can't be undone.
My order of execution was
INSERT,
DELETE FROM ,
ROLLBACK.
I don't believe rollback will undo schema changes.
ROLLBACK without a savepoint qualifier will roll back the entire current transaction.
For DDL statements, there is no current transaction to rollback. The DDL statement implicitly generates a COMMIT before the statement starts and after it completes. So if you issue a ROLLBACK following a DROP, no work has been done in the current transaction so there is nothing to roll back.
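You can see the implicit commit with a small experiment (assuming an empty table foo with a single column col1):
SQL> insert into foo values( 1 );       -- a transaction begins
SQL> create index foo_i on foo( col1 ); -- implicit COMMIT before and after the DDL
SQL> rollback;                          -- nothing left to roll back
SQL> select * from foo;                 -- the inserted row is still there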
For DML statements, you'll roll back the entire current transaction. If you do
INSERT
DELETE
ROLLBACK
your transaction begins when you execute the INSERT operation. So when you issue the ROLLBACK, you are rolling back both the INSERT and the DELETE, so you're back to having no data in the table (assuming you started with no data). If you COMMIT after the INSERT, then the next transaction would begin with the DELETE and your ROLLBACK will only roll back the DELETE operation. Alternatively, you can declare a savepoint after the INSERT and roll back to the savepoint:
SQL> create table foo( col1 number );
Table created.
SQL> insert into foo values( 1 );
1 row created.
SQL> savepoint after_insert;
Savepoint created.
SQL> delete from foo;
1 row deleted.
SQL> rollback to savepoint after_insert;
Rollback complete.
SQL> select * from foo;
COL1
----------
1
Rollback does not undo schema changes, but to undo drop table operations you can check:
http://docs.oracle.com/cd/B19306_01/backup.102/b14192/flashptr004.htm
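For example, provided the recycle bin is enabled, a dropped table can be restored with:
FLASHBACK TABLE student TO BEFORE DROP;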
From the documentation:
Oracle Database implicitly commits the current transaction before and after every DDL statement.
This means that you cannot ROLLBACK a DDL statement (that is, a schema change).
ROLLBACK will never undo data definition commands such as DROP TABLE, ALTER TABLE, etc.
Dropping a table changes the structure of the database (using DDL statements like CREATE, DROP, ...).
COMMIT and ROLLBACK only work on the data which is exchanged with the database using DML statements (like INSERT, UPDATE, ...).
So, no, it will never work like this.
To roll back DDL changes you need to use Flashback.
Rollback:
Discard all pending changes by using the ROLLBACK statement. Following a ROLLBACK statement:
Data changes are undone.
The previous state of the data is restored.
The locks on the affected rows are released.
Example
Suppose that while attempting to remove a single record from the TEST table, you accidentally empty the table. You can correct the mistake, reissue the proper statement, and make the data change permanent.
DELETE FROM test;
25,000 rows deleted.
ROLLBACK;
Rollback complete.
DELETE FROM test
WHERE id = 100;
1 row deleted.
SELECT *
FROM test
WHERE id = 100;
No rows selected.
COMMIT;
Commit complete.
After a COMMIT, the changes can no longer be rolled back.
I have an Oracle trigger which calls a stored procedure that has PRAGMA AUTONOMOUS_TRANSACTION defined. The values that are passed from the trigger have already been committed, but it appears that the values are not available in the stored procedure? I'm not positive of this, since debugging and logging are difficult here and the timing of the output is confusing me a bit. I'd like to know whether passed values are simply available in the stored procedure regardless of the AUTONOMOUS_TRANSACTION?
Thanks
Values passed in to a stored procedure as parameters will always be available to the stored procedure. It doesn't matter whether the procedure is declared using an autonomous transaction.
Code running in an autonomous transaction cannot see changes made by the calling transaction. 9 times out of 10, when people are describing problems seeing the data they expect, this is the source of the problem.
If your stored procedure is doing anything other than writing something to a log table, I would be exceptionally cautious about using autonomous transactions. If you are using autonomous transactions for anything other than logging, you are almost certainly using them incorrectly. And you are probably introducing a whole host of bugs related to race conditions and transactional integrity.
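The one legitimate pattern looks something like this (table and procedure names are illustrative); the log row survives even if the caller subsequently rolls back:
CREATE OR REPLACE PROCEDURE log_message( p_text VARCHAR2 ) IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO message_log( logged_at, message )
  VALUES ( SYSTIMESTAMP, p_text );
  COMMIT; -- ends only this autonomous transaction, not the caller's
END log_message;
/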
"The trigger logic is conditionally
updating Table B which calls the
stored procedure to select from the
values on Table A so that Table B can
be updated with a calculated value. "
Perhaps Table B really ought to be a materialized view derived from Table A? We can build a lot of complexity into the WHERE clauses of the queries which populate MViews.
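A sketch of what that could look like, with hypothetical columns grp_key and val; the materialized view log is what allows fast refresh on commit:
CREATE MATERIALIZED VIEW LOG ON table_a
  WITH ROWID, SEQUENCE ( grp_key, val ) INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW table_b_mv
  REFRESH FAST ON COMMIT
AS
SELECT grp_key,
       SUM(val)   AS calc_value,
       COUNT(val) AS cnt_val, -- COUNT columns are required for fast refresh of aggregates
       COUNT(*)   AS cnt
  FROM table_a
 GROUP BY grp_key;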
If you have a row level trigger on table_x, then that trigger can be fired multiple times by the same statement as different rows are impacted by that statement.
The order in which those rows are impacted is indeterminate. As such, the state of table_x is indeterminate during the execution of a row level trigger. This is why the MUTATING TABLE exception is raised.
An autonomous transaction 'cheats' by looking at the committed state of the table (i.e. excluding all changes made by that statement, and by other statements in the transaction).
If you want a stored procedure to look at the state of table_x in response to activity on that table, then it needs to be done after all the row changes have been made (i.e. in a statement level trigger, not a row level trigger).
The design pattern for this is often to set a flag (package level variable) in a row level trigger, check the flag in an AFTER statement level trigger, and if necessary action it and reset it.
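A sketch of that pattern (all names are illustrative, and process_table_x_change is a hypothetical procedure):
CREATE OR REPLACE PACKAGE table_x_state AS
  g_changed BOOLEAN := FALSE;
END table_x_state;
/

CREATE OR REPLACE TRIGGER table_x_row_trg
  AFTER INSERT OR UPDATE OR DELETE ON table_x
  FOR EACH ROW
BEGIN
  table_x_state.g_changed := TRUE; -- only set the flag; never query table_x here
END;
/

CREATE OR REPLACE TRIGGER table_x_stmt_trg
  AFTER INSERT OR UPDATE OR DELETE ON table_x
BEGIN
  IF table_x_state.g_changed THEN
    table_x_state.g_changed := FALSE;
    -- table_x is no longer mutating, so it is safe to query it here
    process_table_x_change;
  END IF;
END;
/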
I need to execute a bunch of (up to ~1000000) SQL statements on an Oracle database. These statements should result in a referentially consistent state at the end, and all the statements should be rolled back if an error occurs. The statements do not come in referential order, so if foreign key constraints are enabled, one of the statements may cause a foreign key violation even though that violation would be fixed by a statement executed later on.
I tried disabling the foreign keys first and enabling them after all the statements had executed. I thought I would be able to roll back when there was an actual foreign key violation. I was wrong, though: I found out that every DDL statement in Oracle issues an implicit commit, so there was no way to roll back the statements this way. Here is my script for disabling the foreign keys:
begin
  for i in (select constraint_name, table_name
              from user_constraints
             where constraint_type = 'R'
               and status = 'ENABLED')
  loop
    execute immediate 'alter table ' || i.table_name ||
                      ' disable constraint ' || i.constraint_name;
  end loop;
end;
After some research, I found out that it was recommended to execute DDL statements, like in this case, in an autonomous transaction. So I tried to run DDL statements in an autonomous transaction. This resulted in the following error:
ORA-00054: resource busy and acquire with NOWAIT specified
I am guessing this is because the main transaction still holds locks on the tables.
Am I doing something wrong here, or is there any other way to make this scenario work?
There's several potential approaches.
The first thing to consider is that whatever you do at the table level will apply to all sessions using that table. If you haven't got exclusive access to that table, you probably don't want to drop/recreate constraints, or disable/enable them.
The second thing to consider is that you probably don't want to be in a position of rolling back a million inserts/updates. Rolling back can be SLOW.
Generally I would load into a temporary table. Then do a single INSERT from the temporary table into the destination table. As a single statement, Oracle will apply all the check constraints at the end.
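For example, with hypothetical staging and destination tables:
-- load the raw rows into an unconstrained staging table first
INSERT INTO staging_gtt (id, parent_id, val) VALUES (:id, :parent_id, :val);

-- then one set-based insert: the foreign keys are checked at the end of the
-- statement, so parent and child rows can arrive together in any order
INSERT INTO destination (id, parent_id, val)
SELECT id, parent_id, val
  FROM staging_gtt;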
If you can't go through a temporary table (e.g. updates to existing data), then before starting, recreate the constraints as DEFERRABLE INITIALLY IMMEDIATE. Then, within your session:
SET CONSTRAINTS emp_job_nn, emp_salary_min DEFERRED;
You can then apply the changes and, when you commit, the constraints will be validated.
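Making an existing constraint deferrable means recreating it; a sketch against hypothetical emp and dept tables:
ALTER TABLE emp DROP CONSTRAINT emp_dept_fk;

ALTER TABLE emp ADD CONSTRAINT emp_dept_fk
  FOREIGN KEY (deptno) REFERENCES dept (deptno)
  DEFERRABLE INITIALLY IMMEDIATE;

SET CONSTRAINT emp_dept_fk DEFERRED;
-- ... apply the statements in any order; violations are tolerated until commit ...
COMMIT; -- deferred constraints are validated here; a violation rolls the transaction back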
You should acquaint yourself with DML error logging, as it can help identify any rows causing violations.
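A minimal sketch, reusing the hypothetical destination table from above:
-- one-off setup: creates ERR$_DESTINATION to capture rejected rows
BEGIN
  DBMS_ERRLOG.CREATE_ERROR_LOG( dml_table_name => 'DESTINATION' );
END;
/

INSERT INTO destination
SELECT * FROM staging_gtt
LOG ERRORS INTO err$_destination ('load_1') REJECT LIMIT UNLIMITED;

-- inspect the offending rows and their error messages
SELECT ora_err_mesg$, id FROM err$_destination;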