What would happen when this database trigger is fired?
Command (as user SYS):
SQL> CREATE OR REPLACE TRIGGER audit-table-DEPT AFTER
INSERT OR UPDATE OR DELETE ON DEPT FOR EACH ROW
declare
audit_data DEPT$audit%ROWTYPE;
begin
if inserting then audit_data.change_type := 'I';
elsif updating then audit_data.change_type := 'U';
else audit_data.change_type := 'D';
end if;
audit_data.changed_by := user;
audit_data.changed_time := sysdate;
case audit_data.change_type
when 'I' then
audit_data.DEPTNO := :new.DEPTNO;
audit_data.DNAME := :new.DNAME;
audit_data.LOC := :new.LOC;
else
audit_data.DEPTNO := :old.DEPTNO;
audit_data.DNAME := :old.DNAME;
audit_data.LOC := :old.LOC;
end case;
insert into DEPT$audit values audit_data;
end;
/
How can this affect normal database operations?
What will happen if you run this command? Nothing. The trigger won't compile, because you have given it an invalid object name. (Replace those dashes with underscores.)
After that, you have a trigger which inserts an audit record for DML activity on the DEPT table. When inserting you get a DEPT$AUDIT record with the values of the inserted DEPT record. When deleting you get a DEPT$AUDIT record with the values of the deleted DEPT record. When updating you get a somewhat less useful DEPT$AUDIT record: it captures the old column values, so you can see which DEPT row was touched, but not what it was changed to.
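If you want the update audit rows to show the new state instead, one option (a sketch, keeping the same DEPT$audit layout; normally :old.DEPTNO and :new.DEPTNO match on update) is to treat updates like inserts:
case audit_data.change_type
when 'D' then
  -- deletes: only the old values exist
  audit_data.DEPTNO := :old.DEPTNO;
  audit_data.DNAME := :old.DNAME;
  audit_data.LOC := :old.LOC;
else
  -- inserts and updates: record the row as it now stands
  audit_data.DEPTNO := :new.DEPTNO;
  audit_data.DNAME := :new.DNAME;
  audit_data.LOC := :new.LOC;
end case;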
How can this affect normal database operations?
It won't cause anything to fail. However, you are executing an additional insert statement every time you execute DML on the DEPT table. You probably won't notice the impact on single-row statements, but you might notice slower response times if you insert, update or delete a large number of DEPT records. You will need to benchmark it.
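For example, a crude way to measure it in SQL*Plus (a sketch; the statements are illustrative and assume the corrected trigger name with underscores):
SQL> SET TIMING ON
SQL> UPDATE DEPT SET LOC = LOC;        -- with the trigger enabled
SQL> ROLLBACK;
SQL> ALTER TRIGGER audit_table_DEPT DISABLE;
SQL> UPDATE DEPT SET LOC = LOC;        -- same statement, no auditing
SQL> ROLLBACK;
SQL> ALTER TRIGGER audit_table_DEPT ENABLE;
Comparing the two elapsed times gives you a rough idea of the trigger's overhead.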
One last observation:
Command (as user SYS):
Uh oh.
The better interpretation of this statement is that the trigger won't compile, because the SYS schema doesn't have a DEPT table. Connect as the user which owns the DEPT table, then run the CREATE TRIGGER statement.
The worrying option is that the trigger compiles because you have put a DEPT table in its schema. This is bad practice. The SYS schema is maintained by Oracle to run the database internal software. Changing the SYS schema without authorisation from Oracle could corrupt your database and invalidate any support contract you have. What you should do is use SYS (or SYSTEM) to create a user to host your application's objects, then connect as that user to build tables, triggers and whatever else you need.
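For example (a sketch; the schema name and password are illustrative):
-- As SYS or SYSTEM: create a schema to own the application objects.
CREATE USER app_owner IDENTIFIED BY "ChangeMe123"
  DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
GRANT CREATE SESSION, CREATE TABLE, CREATE TRIGGER TO app_owner;
-- Then connect as app_owner to create the DEPT table, the audit table
-- and the trigger.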
Related
I am working on PL/SQL-based procedures in Oracle 11g & 12c.
I want to keep a log of table names and row counts when I issue a commit command in one of my procedures/functions.
This is for audit logs.
Can you please suggest how I can accomplish this?
Your PL/SQL code will need to keep track of its activity and log it. There is no way to ask Oracle "how many rows are you committing right now and to which tables?"
So, e.g.,
DECLARE
l_row_count NUMBER;
BEGIN
UPDATE table_1 SET column_a = 'whatever' WHERE column_b = 'some condition';
l_row_count := SQL%ROWCOUNT;
INSERT INTO my_audit ( action, cnt ) VALUES ('Updated table_1', l_row_count);
-- Notice the audit is part of the transaction; if I don't commit the UPDATE,
-- I won't commit the log of the update.
-- ... do other similar updates / inserts / deletes, using SQL%ROWCOUNT
-- to determine the number of rows affected, and log each one ...
COMMIT;
END;
Again, it is not practical to do a bunch of DML statements (inserts, updates, deletes) and then ask Oracle after the fact "what have I done so far in this transaction?" You need to record it as you go.
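If you log in many places, a small helper procedure keeps it tidy (a sketch; MY_AUDIT and its columns are carried over from the example above):
CREATE OR REPLACE PROCEDURE log_action (p_action IN VARCHAR2,
                                        p_cnt    IN NUMBER) IS
BEGIN
  -- Deliberately no COMMIT here: the log rides along with the caller's
  -- transaction, as discussed above.
  INSERT INTO my_audit (action, cnt) VALUES (p_action, p_cnt);
END;
/
-- Usage, immediately after each DML statement (before anything else
-- overwrites SQL%ROWCOUNT):
--   log_action('Updated table_1', SQL%ROWCOUNT);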
I have an 'after create on database' trigger to provide select access on newly created tables within specific schemas to different Oracle roles.
If I execute a CREATE TABLE ... AS SELECT statement and then query the new table in the same block of code (in TOAD or a different UI), I encounter an error, but it works if I run the commands separately:
create table schema1.table1 as select * from schema2.table2 where rownum < 2;
select count(*) from schema1.table1;
If I execute them as one block of code I get:
ORA-01031: insufficient privileges
If I execute them individually, I don't get an error and am able to obtain the correct count.
Sample snippet of AFTER CREATE trigger
CREATE OR REPLACE TRIGGER TGR_DATABASE_AUDIT AFTER
CREATE OR DROP OR ALTER ON Database
DECLARE
vOS_User VARCHAR2(30);
vTerminal VARCHAR2(30);
vMachine VARCHAR2(30);
vSession_User VARCHAR2(30);
vSession_Id INTEGER;
l_jobno NUMBER;
BEGIN
SELECT sys_context('USERENV', 'SESSIONID'),
sys_context('USERENV', 'OS_USER'),
sys_context('USERENV', 'TERMINAL'),
sys_context('USERENV', 'HOST'),
sys_context('USERENV', 'SESSION_USER')
INTO vSession_Id,
vOS_User,
vTerminal,
vMachine,
vSession_User
FROM Dual;
insert into schema3.event_table VALUES (vSession_Id, SYSDATE,
vSession_User, vOS_User, vMachine, vTerminal, ora_sysevent,
ora_dict_obj_type,ora_dict_obj_owner,ora_dict_obj_name);
IF ora_sysevent = 'CREATE' THEN
IF (ora_dict_obj_owner = 'SCHEMA1') THEN
IF DICTIONARY_OBJ_TYPE = 'TABLE' THEN
dbms_job.submit(l_jobno,'sys.execute_app_ddl(''GRANT SELECT
ON '||ora_dict_obj_owner||'.'||ora_dict_obj_name||' TO
Role1,Role2'');');
END IF;
END IF;
END IF;
END;
Jobs are asynchronous. Your code is not.
Ignoring for the moment the fact that if you're dynamically granting privileges, something is creating new tables live in production without going through a change control process (at which point a human reviewer would ensure that appropriate grants were included), which implies that you have a much bigger problem...
When you run the CREATE TABLE statement, the trigger fires and a job is scheduled to run. That job runs in a separate session and can't start until your CREATE TABLE statement issues its final implicit commit and returns control to the first session. Best case, that job runs a second or two after the CREATE TABLE statement completes. But it could be longer depending on how many background jobs are allowed to run simultaneously, what other jobs are running, how busy Oracle is, etc.
The simplest approach would be to add a dbms_lock.sleep call between the CREATE TABLE and the SELECT that waits a reasonable amount of time to give the background job time to run. That's trivial to code (and useful to validate that this is, in fact, the only problem you have) but it's not foolproof. Even if you put in a delay that's "long enough" for testing, you might encounter a longer delay in the future. The more complicated approach would be to query dba_jobs, look to see if there is a job related to the table you just created, and sleep in a loop while there is.
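A sketch of that polling loop (it assumes you can see dba_jobs and that, as in the trigger above, the job's WHAT text contains the new table's name; TABLE1 is a placeholder):
declare
  l_count number;
begin
  loop
    -- Is the grant job for the new table still queued?
    select count(*)
    into   l_count
    from   dba_jobs
    where  what like '%TABLE1%';
    exit when l_count = 0;   -- the job has run and been removed from the queue
    dbms_lock.sleep(1);      -- wait a second, then check again
  end loop;
end;
/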
I'm using Oracle, and I want to create a new database user whenever I insert a new row into a table "users".
If it is possible, how can I do that? I'm new to databases, please help me!
As JGreenwell has mentioned, yes, you can do it, but it is not a good idea. The main problem is not the password stored in clear text; it is the transactional side effects.
Let's look at the clear-text password problem first:
It is easy to solve (do not store the password in clear text; encrypt or hash it). If you have DBMS_CRYPTO installed and have privileges on it, then you can create a procedure (I assume you have a table USERS with two columns, USERNAME and USERPASS):
CREATE OR REPLACE PROCEDURE MYCREATEUSER( NAME VARCHAR, PASS VARCHAR )
IS
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
EXECUTE IMMEDIATE 'CREATE USER ' || NAME || ' IDENTIFIED BY "' || PASS || '"';
END;
and a trigger:
CREATE OR REPLACE TRIGGER TUSERS BEFORE INSERT ON USERS FOR EACH ROW
BEGIN
MYCREATEUSER(:NEW.USERNAME, :NEW.USERPASS);
:NEW.USERPASS := DBMS_CRYPTO.HASH(:NEW.USERPASS, DBMS_CRYPTO.HASH_SH1);
END;
Your insert statement uses the username and the password, the trigger creates the user, but before inserting the row it replaces the password with its SHA1 hash value. You must remember this when you query the table.
One example:
INSERT INTO USERS VALUES('Mary', 'Mary123');
To check:
SELECT COUNT(*)
FROM USERS
WHERE USERNAME = 'Mary' AND
USERPASS = DBMS_CRYPTO.HASH('Mary123', DBMS_CRYPTO.HASH_SH1);
If count = 1 then the user and password exist; if count = 0 the user/password does not exist.
This example only works if you have DBMS_CRYPTO installed with enough privileges on it, and you have the CREATE USER privilege granted directly (remember that when a procedure executes statements it only has explicitly granted privileges; privileges that come from roles are disabled, so it is not enough to have a role, you need the explicit CREATE USER privilege).
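For example, as a DBA (APP_OWNER is an illustrative schema name):
GRANT EXECUTE ON DBMS_CRYPTO TO app_owner;
GRANT CREATE USER TO app_owner;   -- a direct grant, not via a role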
But as I have mentioned, it is not a good idea, because you can be affected by transactional side effects. Let's look at the main problem:
To maintain the ACID principles (Atomicity, Consistency, Isolation and Durability), Oracle needs to ensure that each statement is viewed as an atomic operation. If I execute INSERT INTO USERS SELECT * FROM OTHER_USER_TABLES, this statement must be viewed as an atomic unit. The atomic unit starts when you send the statement and ends when Oracle notifies you of the error/OK code.
But the trigger is executed inside the statement (inside its temporal scope). The trigger controls the insert statement (if it fails, the whole statement fails; if it says OK, the statement is OK). Note that the statement ends only after the trigger finalizes. This means it is not possible to commit or roll back the current transaction before the statement ends (because in that case atomicity would not be ensured). So a trigger cannot COMMIT/ROLLBACK the current transaction (it cannot commit, and it cannot call a procedure to commit it; nobody can do it before the statement ends).
CREATE USER (like any other DDL, e.g. CREATE TABLE) is an autocommit statement (it forces a commit), so it is not possible to execute it inside the trigger scope.
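You can see this directly (a sketch; the exact error may vary, but you should get something like ORA-04092, cannot COMMIT in a trigger):
CREATE OR REPLACE TRIGGER TUSERS_BAD BEFORE INSERT ON USERS FOR EACH ROW
BEGIN
  -- No autonomous transaction: the DDL's implicit commit is rejected,
  -- because it would end the triggering statement's transaction mid-flight.
  EXECUTE IMMEDIATE 'CREATE USER ' || :NEW.USERNAME ||
                    ' IDENTIFIED BY "' || :NEW.USERPASS || '"';
END;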
But we said "you can" and we have a working example. How?
The answer is the PRAGMA AUTONOMOUS_TRANSACTION.
This pragma forces Oracle to execute the affected PL/SQL block in a new, separate transaction. So, in our example, Oracle executes the MYCREATEUSER procedure in a new transaction.
Oracle does not have a nested transaction model, so the inner AUTONOMOUS_TRANSACTION does not depend on the outer trigger's transaction; they are both flat transactions. This means that if the procedure ends (the user is created) and the insert then fails (for any reason), the row is not in the table but the new user exists.
Let's see an example:
INSERT INTO USERS VALUES ('Anne', 'An232131');
INSERT INTO USERS VALUES ('Mike', 'ABC123');
ROLLBACK;
This example inserts two rows (creating two users) and rolls back the transaction. So the inserts are canceled and the rows (Anne and Mike) are not in the USERS table. But the Oracle users Anne and Mike exist: the rollback does not affect the user creation, because the users were created by a different transaction that committed (the autonomous transaction).
This problem is not easy to solve without a nested transactional model (in a nested transactional model, an inner transaction only finally commits when its outer transaction does).
I have a table which can hold many records for one account: different amounts.
ACCOUNTID | AMOUNT
id1 | 1
id1 | 2
id2 | 3
id2 | 4
Every time a record in this table is inserted/updated/deleted, we need to evaluate an overall amount in order to know whether or not to trigger an event (by inserting data into another table). The amount is computed based on the sum of the records (per account) present in this table.
The computation of the amount should use the new values of the records, but we also need the old values in order to check some conditions (e.g. old value was X, new value is Y: if X <= threshold and Y > threshold, then trigger the event by inserting a record into another table).
So in order to compute and trigger the event, we created a trigger on this table. Something like this:
CREATE OR REPLACE TRIGGER <trigger_name>
AFTER INSERT OR UPDATE OF AMOUNT OR DELETE ON <table_name>
FOR EACH ROW
DECLARE
varSumAmounts NUMBER;
varAmount NUMBER;
BEGIN
-- 1.
SELECT SUM(AMOUNT) INTO varSumAmounts FROM <table_name> WHERE accountid = :NEW.accountid;
-- 2.
varAmount := stored_procedure(varSumAmounts);
END <trigger_name>;
The issue is that statement 1. throws the following error: 'ORA-04091: table is mutating, trigger/function may not see it'.
To work around this, we tried the following, without success (same error), selecting all records whose rowid differs from the rowid of the current row:
SELECT SUM(AMOUNT)
INTO varSumAmounts
FROM <table_name>
WHERE accountId = :NEW.accountid
AND rowid <> :NEW.rowid;
The idea was to compute the amount as the sum of the amounts of all rows besides the current row, plus the amount of the current row (which we have in the context of the trigger).
We searched for other solutions and found a few, but I don't know which of them is better and what the downside of each one is (although they are somewhat similar):
Use compound trigger
http://www.oracle-base.com/articles/9i/mutating-table-exceptions.php
http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551198119097816936
To avoid the 'table is mutating' error, based on solutions 1 and 2, I used a combination of compound triggers and global temporary tables.
Now we have a compound trigger which uses some global temporary tables to store the relevant data from the :OLD and :NEW pseudo-records. Basically we do the following:
CREATE OR REPLACE TRIGGER trigger-name
FOR trigger-action ON table-name
COMPOUND TRIGGER
-------------------
BEFORE STATEMENT IS
BEGIN
-- Delete data from global temporary table (GTT) for which source is this trigger
-- (we use same global temporary tables for multiple triggers).
END BEFORE STATEMENT;
-------------------
AFTER EACH ROW IS
BEGIN
-- Here we have access to :OLD and :NEW objects.
-- :NEW and :OLD objects are defined only inside ROW STATEMENTS.
-- Save relevant data regarding :NEW and :OLD into GTT table to use it later.
END AFTER EACH ROW;
--------------------
AFTER STATEMENT IS
BEGIN
-- In this block DML operations can be made safely on table-name (the same
-- table on which the trigger is created).
-- The mutating-table error no longer appears, because this block is not
-- FOR EACH ROW specific.
-- But we can't access the :OLD and :NEW objects; this is why we saved them
-- into the GTT in 'AFTER EACH ROW'.
-- Because we saved the :OLD and :NEW data earlier, we can now continue with
-- our business logic:
-- if (oldAmount <= threshold and newAmount > threshold) then
--    trigger the event by inserting a record into another table
END AFTER STATEMENT;
END trigger-name;
/
The global temporary tables used are created with the option ON COMMIT DELETE ROWS; this way I make sure that data from these tables is cleaned up at the end of the transaction.
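For reference, such a table is created like this (a sketch; the columns are just an illustration of what one might store):
CREATE GLOBAL TEMPORARY TABLE trigger_audit_gtt (
  source_trigger VARCHAR2(30),
  accountid      NUMBER,
  old_amount     NUMBER,
  new_amount     NUMBER
) ON COMMIT DELETE ROWS;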
Yet, this error occurred: 'ORA-14450: attempt to access a transactional temp table already in use'.
The problem is that the application uses distributed transactions, and the Oracle documentation mentions that:
"A variety of internal errors can be reported when using Global Temporary Tables (GTTs) in conjunction with Distributed or XA transactions.
...
Temporary tables are not supported in any distributed, and therefore XA, coordinated transaction.
The safest option is to not use temporary tables within distributed or XA transactions as their use in this context is not officially supported.
...
A global temporary table can be safely used if there is only single branch transaction at the database using it, but if there are loopback database links or XA transactions involving multiple branches, then problems can occur including block corruption as per Bug 5344322.
"
It's worth mentioning that I can't avoid XA transactions or DML on the same table that is the subject of the trigger (fixing the data model is not a feasible solution). Instead of the global temporary table, I've tried using a trigger variable, a collection (table of objects), but I am not sure about this approach. Is it safe regarding distributed transactions?
Which other solutions would be suitable in this case to fix either the initial issue ('ORA-04091: table is mutating, trigger/function may not see it') or the second one ('ORA-14450: attempt to access a transactional temp table already in use')?
You should carefully check that your code doesn't use autonomous transactions to access temporary table data:
SQL> create global temporary table t (x int) on commit delete rows
2 /
SQL> insert into t values(1)
2 /
SQL> declare
2 pragma autonomous_transaction;
3 begin
4 insert into t values(1);
5 commit;
6 end;
7 /
declare
*
ERROR at line 1:
ORA-14450: attempt to access a transactional temp table already in use
ORA-06512: at line 4
If you do a DELETE FROM <temp-table-name> in BEFORE STATEMENT and AFTER STATEMENT, it should not matter whether your GTT is defined with ON COMMIT PRESERVE ROWS or ON COMMIT DELETE ROWS.
Alternatively, in your trigger you can define a RECORD/TABLE variable. You can initialize this variable in the BEFORE STATEMENT block and loop over it in the AFTER STATEMENT block.
It would be something like this:
CREATE OR REPLACE TRIGGER TRIGGER-NAME
FOR TRIGGER-action ON TABLE-NAME
COMPOUND TRIGGER
TYPE GTT_RECORD_TYPE IS RECORD (ID NUMBER, price NUMBER, affected_row ROWID);
TYPE GTT_TABLE_TYPE IS TABLE OF GTT_RECORD_TYPE;
GTT_TABLE GTT_TABLE_TYPE;
-------------------
BEFORE STATEMENT IS
BEGIN
GTT_TABLE := GTT_TABLE_TYPE(); -- init the table variable
END BEFORE STATEMENT;
-------------------
AFTER EACH ROW IS
BEGIN
GTT_TABLE.EXTEND;
-- PL/SQL RECORD types have no constructor (prior to 18c); assign field by field.
GTT_TABLE(GTT_TABLE.LAST).ID := :OLD.ID;
GTT_TABLE(GTT_TABLE.LAST).PRICE := :OLD.PRICE;
GTT_TABLE(GTT_TABLE.LAST).AFFECTED_ROW := :OLD.ROWID;
END AFTER EACH ROW;
--------------------
AFTER STATEMENT IS
BEGIN
FOR i IN 1 .. GTT_TABLE.COUNT LOOP   -- 1..COUNT is safe even when the collection is empty
-- do something with values
END LOOP;
END AFTER STATEMENT;
END TRIGGER-NAME;
/
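One note on the XA question: a collection variable like GTT_TABLE above lives in PL/SQL session memory (the PGA), not in a transactional segment, so the GTT restrictions quoted from the documentation should not apply to it; its contents only need to survive a single execution of the triggering statement.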
We have an Oracle database, and the customer account table has about a million rows. Over the years, we've built four different UIs (two in Oracle Forms, two in .Net), all of which remain in use. We have a number of background tasks (both persistent and scheduled) as well.
Something is occasionally holding a long lock (say, more than 30 seconds) on a row in the account table, which causes one of the persistent background tasks to fail. The background task in question restarts itself once the update times out. We find out about it a few minutes after it happens, but by then the lock has been released.
We have reason to believe that it might be a misbehaving UI, but haven't been able to find a "smoking gun".
I've found some queries that list blocking locks, but those are for when you've got two sessions contending for a row. I want to know which rows are locked even when there's not necessarily a second session trying to obtain the lock.
We're on 11g, but have been experiencing the problem since 8i.
Oracle's locking concept is quite different from that of the other systems.
When a row in Oracle gets locked, the record itself is updated with the new value (if any) and, in addition, a lock (essentially a pointer to the transaction lock that resides in the rollback segment) is placed right into the record.
This means that locking a record in Oracle means updating the record's metadata and issuing a logical page write. For instance, you cannot do SELECT FOR UPDATE on a read-only tablespace.
More than that, the records themselves are not updated after commit: instead, the rollback segment is updated.
This means that each record holds some information about the transaction that last updated it, even if the transaction itself has long since died. To find out if the transaction is alive or not (and, hence, if the record is alive or not), it is required to visit the rollback segment.
Oracle does not have a traditional lock manager, which means that obtaining a list of all locks would require scanning every record in every object. This would take too long.
You can obtain some specific lock information, such as locked objects (using v$locked_object) and lock waits (using v$session), but not a list of all row locks on all objects in the database.
You can find the locked tables in Oracle with the following query:
select
c.owner,
c.object_name,
c.object_type,
b.sid,
b.serial#,
b.status,
b.osuser,
b.machine
from
v$locked_object a ,
v$session b,
dba_objects c
where
b.sid = a.session_id
and
a.object_id = c.object_id;
Rather than locks, I suggest you look at long-running transactions, using v$transaction. From there you can join to v$session, which should give you an idea about the UI (try the program and machine columns) as well as the user.
Look at the dba_blockers, dba_waiters and dba_locks for locking. The names should be self explanatory.
You could create a job that runs, say, once a minute and logs the values in dba_blockers along with the currently active sql_id for each of those sessions (via v$session and v$sqlstats).
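A minimal sketch with dbms_scheduler (BLOCKER_LOG is an assumed table with a date column and a number column):
begin
  dbms_scheduler.create_job(
    job_name        => 'LOG_BLOCKERS',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin
                          insert into blocker_log
                          select sysdate, holding_session from dba_blockers;
                          commit;
                        end;',
    repeat_interval => 'FREQ=MINUTELY',
    enabled         => TRUE);
end;
/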
You may also want to look at v$sql_monitor. By default this logs all SQL that takes longer than 5 seconds to run. It is also visible on the "SQL Monitoring" page in Enterprise Manager.
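For example (a sketch; the exact columns vary a little by version, and Real-Time SQL Monitoring requires the Tuning Pack licence):
select sql_id, status, username, elapsed_time, sql_text
from   v$sql_monitor
order  by last_refresh_time desc;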
The PL/SQL block below finds all locked rows in a table. The other answers only find the blocking session; finding the actual locked rows requires reading and testing each row.
(However, you probably do not need to run this code. If you're having a locking problem, it's usually easier to find the culprit using GV$SESSION.BLOCKING_SESSION and other related data dictionary views. Please try another approach before you run this abysmally slow code.)
First, let's create a sample table and some data. Run this in session #1.
--Sample schema.
create table test_locking(a number);
insert into test_locking values(1);
insert into test_locking values(2);
commit;
update test_locking set a = a+1 where a = 1;
In session #2, create a table to hold the locked ROWIDs.
--Create table to hold locked ROWIDs.
create table locked_rowids(the_rowid rowid);
--Remove old rows if table is already created:
--delete from locked_rowids;
--commit;
In session #2, run this PL/SQL block to read the entire table, probe each row, and store the locked ROWIDs. Be warned, this may be ridiculously slow. In your real version of this query, change both references to TEST_LOCKING to your own table.
--Save all locked ROWIDs from a table.
--WARNING: This PL/SQL block will be slow and will temporarily lock rows.
--You probably don't need this information - it's usually good enough to know
--what other sessions are locking a statement, which you can find in
--GV$SESSION.BLOCKING_SESSION.
declare
v_resource_busy exception;
pragma exception_init(v_resource_busy, -00054);
v_throwaway number;
type rowid_nt is table of rowid;
v_rowids rowid_nt := rowid_nt();
begin
--Loop through all the rows in the table.
for all_rows in
(
select rowid
from test_locking
) loop
--Try to lock each row.
begin
select 1
into v_throwaway
from test_locking
where rowid = all_rows.rowid
for update nowait;
--If it doesn't lock, then record the ROWID.
exception when v_resource_busy then
v_rowids.extend;
v_rowids(v_rowids.count) := all_rows.rowid;
end;
rollback;
end loop;
--Display count:
dbms_output.put_line('Rows locked: '||v_rowids.count);
--Save all the ROWIDs.
--(Row-by-row because ROWID type is weird and doesn't work in types.)
for i in 1 .. v_rowids.count loop
insert into locked_rowids values(v_rowids(i));
end loop;
commit;
end;
/
Finally, we can view the locked rows by joining to the LOCKED_ROWIDS table.
--Display locked rows.
select *
from test_locking
where rowid in (select the_rowid from locked_rowids);
A
-
1
Given some table, you can find which rows are not locked with SELECT ... FOR UPDATE SKIP LOCKED.
For example, this query will lock (and return) every unlocked row:
SELECT * FROM mytable FOR UPDATE SKIP LOCKED
References
Ask TOM "How to get ROWID for locked rows in oracle".