Oracle 'after create' trigger to grant privileges

I have an 'after create on database' trigger to provide select access on newly created tables within specific schemas to different Oracle roles.
If I execute a create table ... as select statement and then query the new table in the same block of code within TOAD or a different UI I encounter an error, but it works if I run the commands separately:
create table schema1.table1 as select * from schema2.table2 where rownum < 2;
select count(*) from schema1.table1;
If I execute them as one block of code I get:
ORA-01031: insufficient privileges
If I execute them individually, I don't get an error and am able to obtain the correct count.
Sample snippet of AFTER CREATE trigger
CREATE OR REPLACE TRIGGER TGR_DATABASE_AUDIT
AFTER CREATE OR DROP OR ALTER ON DATABASE
DECLARE
  vOS_User      VARCHAR2(30);
  vTerminal     VARCHAR2(30);
  vMachine      VARCHAR2(30);
  vSession_User VARCHAR2(30);
  vSession_Id   INTEGER;
  l_jobno       NUMBER;
BEGIN
  SELECT sys_context('USERENV', 'SESSIONID'),
         sys_context('USERENV', 'OS_USER'),
         sys_context('USERENV', 'TERMINAL'),
         sys_context('USERENV', 'HOST'),
         sys_context('USERENV', 'SESSION_USER')
    INTO vSession_Id, vOS_User, vTerminal, vMachine, vSession_User
    FROM dual;

  INSERT INTO schema3.event_table
  VALUES (vSession_Id, SYSDATE, vSession_User, vOS_User, vMachine, vTerminal,
          ora_sysevent, ora_dict_obj_type, ora_dict_obj_owner, ora_dict_obj_name);

  IF ora_sysevent = 'CREATE' THEN
    IF ora_dict_obj_owner = 'SCHEMA1' THEN
      IF ora_dict_obj_type = 'TABLE' THEN  -- was DICTIONARY_OBJ_TYPE; the event attribute is ora_dict_obj_type
        dbms_job.submit(l_jobno,
          'sys.execute_app_ddl(''GRANT SELECT ON ' || ora_dict_obj_owner || '.'
          || ora_dict_obj_name || ' TO Role1,Role2'');');
      END IF;
    END IF;
  END IF;
END;

Jobs are asynchronous. Your code is not.
Ignoring for the moment the fact that dynamically granting privileges means something is creating new tables live in production without going through a change control process (at which point a human reviewer would ensure that appropriate grants were included), which implies that you have a much bigger problem...
When you run the CREATE TABLE statement, the trigger fires and a job is scheduled to run. That job runs in a separate session and can't start until your CREATE TABLE statement issues its final implicit commit and returns control to the first session. Best case, that job runs a second or two after the CREATE TABLE statement completes. But it could be longer depending on how many background jobs are allowed to run simultaneously, what other jobs are running, how busy Oracle is, etc.
The simplest approach would be to add a dbms_lock.sleep call between the CREATE TABLE and the SELECT that waits a reasonable amount of time to give the background job time to run. That's trivial to code (and useful to validate that this is, in fact, the only problem you have) but it's not foolproof. Even if you put in a delay that's "long enough" for testing, you might encounter a longer delay in the future. The more complicated approach would be to query dba_jobs in a loop, checking whether a job related to the table you just created is still queued, and sleep until it is gone.
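A sketch of that second approach (assumptions: the grant job was queued via dbms_job as in the trigger above, the creating session can see the job in user_jobs, EXECUTE on dbms_lock is granted, and a half-second poll interval is acceptable):

```sql
DECLARE
  v_pending NUMBER;
BEGIN
  EXECUTE IMMEDIATE
    'CREATE TABLE schema1.table1 AS SELECT * FROM schema2.table2 WHERE rownum < 2';
  -- The CREATE TABLE's implicit commit lets the queued grant job start;
  -- wait until no job mentioning the new table remains in the queue.
  LOOP
    SELECT COUNT(*)
      INTO v_pending
      FROM user_jobs
     WHERE what LIKE '%GRANT SELECT ON SCHEMA1.TABLE1%';
    EXIT WHEN v_pending = 0;
    dbms_lock.sleep(0.5);  -- brief pause between polls
  END LOOP;
END;
/
```

Once the loop exits, the subsequent select count(*) from schema1.table1 should succeed. This is still not fully foolproof: the job could fail rather than complete, so a retry limit inside the loop would be prudent.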

Related

Oracle - DBA_SCHEDULER_JOB_RUN_DETAILS gets different results (owners) from script and procedure

On one of my servers I've detected weird behaviour.
I have a table TEST_OWNERS (owner VARCHAR2(100)).
When I execute this statement:
INSERT INTO TEST_OWNERS
select distinct owner from DBA_SCHEDULER_JOB_RUN_DETAILS;
commit;
I have all owners in the table.
But when I create a procedure with the same statement:
CREATE OR REPLACE PROCEDURE TEST_OWNERS_P IS
BEGIN
EXECUTE IMMEDIATE('TRUNCATE TABLE TEST_OWNERS');
INSERT INTO TEST_OWNERS
select distinct owner from DBA_SCHEDULER_JOB_RUN_DETAILS;
commit;
END;
And execute it from the same window and the same session, I see only the user who executed the procedure.
On another server everything is OK - the script and the procedure return the same results.
What can be wrong?
The query that you are using to populate the table is dependent on the jobs that have been run in the database. If there are different jobs running in DB1 than in DB2, then your query will likely have different results.
If you are just trying to get a list of users in the database, then you can use select username from dba_users.

Audit-table-DEPT Trigger

What would happen when this database trigger is fired?
Command (as user SYS):
SQL> CREATE OR REPLACE TRIGGER audit-table-DEPT
  AFTER INSERT OR UPDATE OR DELETE ON DEPT
  FOR EACH ROW
DECLARE
  audit_data DEPT$audit%ROWTYPE;
BEGIN
  IF inserting THEN
    audit_data.change_type := 'I';
  ELSIF updating THEN
    audit_data.change_type := 'U';
  ELSE
    audit_data.change_type := 'D';
  END IF;
  audit_data.changed_by   := user;
  audit_data.changed_time := sysdate;
  CASE audit_data.change_type
    WHEN 'I' THEN
      audit_data.DEPTNO := :new.DEPTNO;
      audit_data.DNAME  := :new.DNAME;
      audit_data.LOC    := :new.LOC;
    ELSE
      audit_data.DEPTNO := :old.DEPTNO;
      audit_data.DNAME  := :old.DNAME;
      audit_data.LOC    := :old.LOC;
  END CASE;
  INSERT INTO DEPT$audit VALUES audit_data;
END;
/
How can this affect normal database operations?
What will happen if you run this command? Nothing: the trigger won't compile, as you have given it an invalid object name. (Replace those dashes with underscores.)
After that, you have a trigger which inserts an audit record for DML activity on the DEPT table. When inserting you get a DEPT$audit record with the values of the inserted DEPT record. When deleting you get a DEPT$audit record with the values of the deleted DEPT record. When updating you get a somewhat useless DEPT$audit record which tells you a DEPT record was updated but doesn't identify which one.
How can this affect normal database operations?
It won't cause anything to fail. However, you are executing an additional insert statement every time you execute DML on the DEPT table. You probably won't notice the impact on single-row statements, but you might notice a slower response time if you insert, update or delete a large number of DEPT records. You will need to benchmark it.
One last observation:
Command (as user SYS):
Uh oh.
The better interpretation of this statement is that the trigger won't compile, because the SYS schema doesn't have a DEPT table. Connect as the user that owns the DEPT table, then run the CREATE TRIGGER statement.
The worrying option is that the trigger compiles because you have put a DEPT table in its schema. This is bad practice. The SYS schema is maintained by Oracle to run the database internal software. Changing the SYS schema unless authorised by Oracle could corrupt your database and invalidate any support contract you have. What you should do is use SYS (or SYSTEM) to create a user to host your application's objects, then connect as that user to build tables, triggers and whatever else you need.
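A minimal sketch of that last suggestion (the user name, password and tablespace are placeholders):

```sql
-- Run as SYSTEM or another DBA account; do not modify the SYS schema.
CREATE USER app_owner IDENTIFIED BY "ChangeMe123"
  DEFAULT TABLESPACE users
  QUOTA UNLIMITED ON users;
GRANT CREATE SESSION, CREATE TABLE, CREATE TRIGGER TO app_owner;
-- Then connect as app_owner and create DEPT, DEPT$audit and the trigger there.
```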

Translation of DB2 Trigger to Oracle triggers ORA-04091 Error

So I have two tables: JOBS and TASKS.
The TASKS table has a TASK_STATUS column that stores a progression of states (e.g. 'Submitting', 'Submitted', 'Authored', 'Completed').
The JOBS table has a JOB_STATUS column which is a roll-up (i.e. the minimum state) of the TASK_STATUS values in the TASKS table. The JOBS table also has a TASK_COUNT value that contains the number of tasks associated with a job.
Jobs can have one or more tasks; JOB_ID in each table links them.
In DB2, I have a series of simple triggers to roll-up this state; here's one in the middle:
create or replace trigger JOB_AUTHORED
after update of TASK_STATUS on TASKS
referencing NEW as N
for each row
when (TASK_STATUS = 'Authored')
update JOBS
set JOB_STATUS = 'Authored'
where JOBS.JOB_ID = N.JOB_ID
and TASK_COUNT=(
select count(0) from TASKS
where TASKS.JOB_ID = N.JOB_ID
and TASKS.TASK_STATUS in ('Authored','Completed'))
This works fine in DB2 because the trigger runs in the same work unit as the triggering event and thus it can see the work unit's uncommitted changes and can count the TASK_STATUS change that just occurred without hitting a row-lock.
Here's the translated trigger in Oracle:
create or replace trigger JOB_AUTHORED
after update of TASK_STATUS on TASKS
for each row
when (NEW.TASK_STATUS = 'Authored')
BEGIN
update JOBS
set JOB_STATUS='Authored'
where JOBS.JOB_ID = :NEW.JOB_ID and TASK_COUNT=(
select count(0) from TASKS
where TASKS.JOB_ID = :NEW.JOB_ID
and TASKS.TASK_STATUS in ('Authored','Completed'));
END;
In Oracle this fails with:
ORA-04091: table MYSCHEMA.TASKS is mutating, trigger/function may not see it
ORA-06512: at "MYSCHEMA.JOB_AUTHORED", line 1
ORA-04088: error during execution of trigger 'MYSCHEMA.JOB_AUTHORED'
[query: UPDATE TASKS SET TASK_STATUS=:1 where TASK_ID=:2]
Apparently Oracle's triggers do not run in the same context, cannot see the uncommitted triggering updates, and thus will never be able to count the number of tasks in specific states when those states include the triggering row.
I guess I could change the AFTER Trigger to an INSTEAD OF Trigger and update the TASK_STATUS (as well as the JOB_STATUS) within the Trigger (so the Job Update can see the Task Update), but would I run into the same error? Maybe not the first Task Update, but what if the triggering program is updating a bunch of TASKS before committing: what happens when the second Task is updated?
I've also considered removing the trigger and having a program scan active jobs for the status of their tasks, but that seems inelegant.
What is the Best Practice with something like this in Oracle?
The best practice is to avoid triggers if you can.
See these links for an answer on why one should not use triggers:
http://www.oracle.com/technetwork/testcontent/o58asktom-101055.html
http://rwijk.blogspot.com/2007/09/database-triggers-are-evil.html
Use a procedure (an API) in place of triggers - you can create a package with a few procedures like add_new_job, add_new_task, change_task_status etc., and place all the logic of these procedures (checking, changing tasks' states, changing jobs' states etc.) in one place. Easy to understand, easy to maintain, easy to debug and easy to track errors.
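For illustration, such an API might look like this sketch (the procedure name, parameter types and the 'Authored' roll-up rule are assumptions based on the question's schema):

```sql
CREATE OR REPLACE PACKAGE job_api AS
  PROCEDURE change_task_status (p_task_id tasks.task_id%TYPE,
                                p_status  tasks.task_status%TYPE);
END job_api;
/
CREATE OR REPLACE PACKAGE BODY job_api AS
  PROCEDURE change_task_status (p_task_id tasks.task_id%TYPE,
                                p_status  tasks.task_status%TYPE) IS
    v_job_id jobs.job_id%TYPE;
  BEGIN
    UPDATE tasks
       SET task_status = p_status
     WHERE task_id = p_task_id
    RETURNING job_id INTO v_job_id;

    IF p_status = 'Authored' THEN
      -- Unlike a row-level trigger, this block may query TASKS freely
      -- (no mutating-table restriction); the UPDATE on JOBS also locks
      -- the parent row, serializing concurrent roll-ups of the same job.
      UPDATE jobs
         SET job_status = 'Authored'
       WHERE job_id = v_job_id
         AND task_count = (SELECT COUNT(*) FROM tasks
                            WHERE job_id = v_job_id
                              AND task_status IN ('Authored','Completed'));
    END IF;
  END change_task_status;
END job_api;
/
```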
If you insist on using triggers, then you can create a compound trigger, as Tom Kyte mentioned in the first link above, as a workaround. For example:
create or replace TRIGGER JOB_AUTHORED
FOR UPDATE OF TASK_STATUS ON TASKS
COMPOUND TRIGGER
  TYPE JOB_ID_TABLE_TYPE IS TABLE OF PLS_INTEGER INDEX BY PLS_INTEGER;
  JOB_ID_TABLE JOB_ID_TABLE_TYPE;
  dummy CHAR;

  AFTER EACH ROW IS
  BEGIN
    -- SELECT null INTO dummy FROM JOBS WHERE job_id = :NEW.JOB_ID FOR UPDATE;
    JOB_ID_TABLE( :NEW.JOB_ID ) := :NEW.JOB_ID;
  END AFTER EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    FORALL x IN INDICES OF JOB_ID_TABLE
      UPDATE JOBS
         SET JOB_STATUS = 'Authored'
       WHERE JOBS.JOB_ID = JOB_ID_TABLE( x )
         AND TASK_COUNT = ( SELECT COUNT(0) FROM TASKS
                             WHERE TASKS.JOB_ID = JOBS.JOB_ID
                               AND TASKS.TASK_STATUS IN ('Authored','Completed') );
  END AFTER STATEMENT;
END JOB_AUTHORED;
but...
I am not sure whether there are pitfalls in this example that we are not aware of at this time.
For example, one pitfall is this scenario (suppose the job's TASK_COUNT is 20):
Suppose there are 18 committed tasks with status Authored.
At time X, user A runs UPDATE TASKS SET task_status = 'Authored' WHERE task_id = 2. The trigger fires and sees 18 + 1 = 19 tasks with status Authored.
At time X+10ms, user B runs UPDATE TASKS SET task_status = 'Authored' WHERE task_id = 4. The trigger fires and also sees 18 + 1 = 19 tasks with status Authored, because it cannot see user A's uncommitted change.
At time X+20ms, user A commits.
At time X+30ms, user B commits.
At the end we have 20 tasks with status Authored, but the job's status has not been changed to Authored (though it should have been, since the number of tasks = 20).
To avoid this trap you can use SELECT null INTO dummy FROM JOBS WHERE job_id = :NEW.JOB_ID FOR UPDATE; in the AFTER EACH ROW part of the trigger to place a lock on the corresponding record in the JOBS table and serialize access (it is commented out in the example above).
But I'm still not sure this is the correct solution - it may in turn cause deadlocks in scenarios I cannot imagine or predict at this time.
In short: in Oracle, a row-level trigger cannot select from the table on which it is defined; if it does, you may get the mutating-table error (ORA-04091).
You have several options:
1) No triggers at all - in my humble opinion this is the best; I would do it this way (but the topic is probably wider and we don't know everything).
Create view, which replaces the need of triggers, something like:
create or replace view v_jobs_done as
select * from jobs
 where not exists (
   select 1 from tasks
    where tasks.job_id = jobs.job_id
      and tasks.task_status not in ('Authored','Completed')
 );
2) Instead of an increasing count, use a decreasing one - when jobs.task_count reaches zero you know everything is done. In this case you have to build/rebuild your other triggers.
3) A proposition close to yours - you can use a modern compound trigger. I have doubts about its performance here, but it works:
create or replace trigger Job_Authored
for update of task_status on tasks
compound trigger
  type t_ids is table of tasks.job_id%type;
  type t_cnts is table of number;
  type t_job_counts is table of number index by varchar2(10);
  v_ids        t_ids;
  v_cnts       t_cnts;
  v_job_counts t_job_counts;

  before statement is
  begin
    select job_id, count(1)
      bulk collect into v_ids, v_cnts
      from tasks
     where tasks.task_status in ('Authored','Completed')
     group by job_id;
    for i in 1..v_ids.count() loop
      v_job_counts(v_ids(i)) := v_cnts(i);
    end loop;
  end before statement;

  after each row is
  begin
    if :new.task_status = 'Authored' then
      update jobs
         set job_status = 'Authored'
       where job_id = :new.job_id
         and task_count = v_job_counts(:new.job_id);
    end if;
  end after each row;
end Job_Authored;

How to create a table and insert in the same statement

I'm new to Oracle and I'm struggling with this:
DECLARE
cnt NUMBER;
BEGIN
SELECT COUNT(*) INTO cnt FROM all_tables WHERE table_name like 'Newtable';
IF(cnt=0) THEN
EXECUTE IMMEDIATE 'CREATE TABLE Newtable ....etc';
END IF;
COMMIT;
SELECT COUNT(*) INTO cnt FROM Newtable where id='something'
IF (cnt=0) THEN
EXECUTE IMMEDIATE 'INSERT INTO Newtable ....etc';
END IF;
END;
This keeps crashing and gives me the "PL/SQL: ORA-00942:table or view does not exist" on the insert-line. How can I avoid this? Or what am I doing wrong? I want these two statements (in reality it's a lot more of course) in a single transaction.
It isn't the insert that is the problem, it's the select two lines before. You have three statements within the block, not two. You're selecting from the same new table that doesn't exist yet. You've avoided that in the insert by making that dynamic, but you need to do the same for the select:
EXECUTE IMMEDIATE q'[SELECT COUNT(*) FROM Newtable where id='something']'
INTO cnt;
Creating a table at runtime seems wrong though. You said 'for safety issues the table can only exist if it's filled with the correct dataset', which doesn't entirely make sense to me - even if this block is creating and populating it in one go, anything that relies on it will fail or be invalidated until this runs. If this is part of the schema creation then making it dynamic doesn't seem to add much. You also said you wanted both to happen in one transaction, but the DDL will do an implicit commit, you can't roll back DDL, and your manual commit will start a new transaction for the insert(s) anyway. Perhaps you mean the inserts shouldn't happen if the table creation fails - but they would fail anyway, whether they're in the same block or not. It seems a bit odd, anyway.
Also, using all_tables for the check could still cause this to behave oddly. If that table exists in another schema, your create will be skipped, but your select and insert might still fail as they might not be able to see, or won't look for, the other schema's version. Using user_tables, or adding an owner check, might be a bit safer. Note too that the dictionary stores unquoted identifiers in upper case, so table_name LIKE 'Newtable' will never match - compare against 'NEWTABLE'.
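A sketch of that safer check (the table and column definitions are placeholders):

```sql
DECLARE
  cnt NUMBER;
BEGIN
  -- user_tables only lists tables in your own schema, and the dictionary
  -- stores unquoted identifiers in upper case.
  SELECT COUNT(*) INTO cnt
    FROM user_tables
   WHERE table_name = 'NEWTABLE';
  IF cnt = 0 THEN
    EXECUTE IMMEDIATE 'CREATE TABLE Newtable (id VARCHAR2(30))';
  END IF;
END;
/
```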
Try the following approach, i.e. the create and the insert are in two different blocks:
DECLARE
cnt NUMBER;
BEGIN
SELECT COUNT (*)
INTO cnt
FROM all_tables
WHERE table_name = 'NEWTABLE';
IF (cnt = 0)
THEN
EXECUTE IMMEDIATE 'CREATE TABLE Newtable(c1 varchar2(256))';
END IF;
END;
DECLARE
cnt2 NUMBER;
BEGIN
SELECT COUNT (*)
INTO cnt2
FROM newtable
WHERE c1 = 'jack';
IF (cnt2 = 0)
THEN
EXECUTE IMMEDIATE 'INSERT INTO Newtable values(''jill'')';
END IF;
END;
Oracle handles the execution of a block in two steps:
First it parses the block and compiles it into an internal representation (so-called "P-code").
It then runs the P-code (which may be interpreted or compiled to machine code, depending on your architecture and Oracle version).
For compiling the code, Oracle must know the names (and the schema!) of the referenced tables. Your table doesn't exist yet, hence there is no schema and the code does not compile.
To your intention to create the tables in one big transaction: This will not work. Oracle always implicitly commits the current transaction before and after a DDL statement (create table, alter table, truncate table(!) etc.). So after each create table, Oracle will commit the current transaction and starts a new one.
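That behaviour is easy to demonstrate (a sketch; t and t2 are placeholder table names):

```sql
INSERT INTO t VALUES (1);      -- row is not yet committed
CREATE TABLE t2 (c NUMBER);    -- implicitly commits the INSERT above
ROLLBACK;                      -- too late: the row in t stays committed
```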

DDL statements in PL/SQL?

I am trying the code below to create a table in PL/SQL:
DECLARE
V_NAME VARCHAR2(20);
BEGIN
EXECUTE IMMEDIATE 'CREATE TABLE TEMP(NAME VARCHAR(20))';
EXECUTE IMMEDIATE 'INSERT INTO TEMP VALUES(''XYZ'')';
SELECT NAME INTO V_NAME FROM TEMP;
END;
/
The SELECT statement fails with this error:
PL/SQL: ORA-00942: table or view does not exist
Is it possible to CREATE, INSERT and SELECT all in a single PL/SQL Block one after other?
I assume you're doing something like the following:
declare
  v_temp varchar2(20);
begin
  execute immediate 'create table temp(name varchar(20))';
  execute immediate 'insert into temp values(''XYZ'')';
  select name into v_temp from temp;
end;
At compile time the table, TEMP, does not exist. It hasn't been created yet. As it doesn't exist you can't select from it; you therefore also have to do the SELECT dynamically. There isn't actually any need to do a SELECT in this particular situation, though: you can use the RETURNING INTO syntax.
declare
  v_temp varchar2(20);
begin
  execute immediate 'create table temp(name varchar2(20))';
  execute immediate 'insert into temp
                     values(''XYZ'')
                     returning name into :1'
    returning into v_temp;
end;
However, needing to dynamically create tables is normally an indication of a badly designed schema. It shouldn't really be necessary.
I can recommend René Nyffenegger's post "Why is dynamic SQL bad?" for reasons why you should avoid dynamic SQL, if at all possible, from a performance standpoint. Please also be aware that you are much more open to SQL injection and should use bind variables and DBMS_ASSERT to help guard against it.
If you run the program multiple times you will get an error even after modifying it to run the SELECT as dynamic SQL or to use a RETURNING INTO clause.
The first run creates the table without any issue, but on subsequent runs the table already exists and, since there is no DROP statement, the CREATE fails with an error along the lines of "name is already used by an existing object" (ORA-00955).
So my suggestion is: before creating a table in a PL/SQL program, always check whether a table with the same name already exists. You can do this check using the data dictionary views / system tables that store metadata, depending on your database type.
For example, in Oracle you can use the following views to decide whether a table needs to be created:
DBA_TABLES,
ALL_TABLES,
USER_TABLES
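Alternatively to checking the dictionary first, a common Oracle idiom is to attempt the DROP and ignore only the "table or view does not exist" error (ORA-00942); a sketch:

```sql
BEGIN
  EXECUTE IMMEDIATE 'DROP TABLE temp';
EXCEPTION
  WHEN OTHERS THEN
    IF SQLCODE != -942 THEN  -- re-raise anything other than ORA-00942
      RAISE;
    END IF;
END;
/
```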
