How can I handle uniqueness in this situation? - oracle

I have a table like this:
create table my_table
(
type1 varchar2(10 char),
type2 varchar2(10 char)
);
I want uniqueness to work like this:
If the type1 column has the value 'GENERIC', then type2 must be unique across the whole table. For example:
if a row has type1 = 'GENERIC' and type2 = 'value_x', then no other row may have type2 = 'value_x'.
For all other rows, uniqueness is checked on both columns, i.e. the combination of type1 and type2 must be unique (with the first rule still applying, of course).
I tried to do it with a trigger:
CREATE OR REPLACE TRIGGER my_trigger
BEFORE INSERT OR UPDATE
ON my_table
FOR EACH ROW
DECLARE
  lvn_count      NUMBER := 0;
  lvn_count2     NUMBER := 0;
  errormessage   CLOB;
  MUST_ACCUR_ONE EXCEPTION;
  -- PRAGMA AUTONOMOUS_TRANSACTION; -- without this it gives a mutating-table error, but I can't use it because it will conflict with simultaneous connections
BEGIN
  IF :NEW.type1 = 'GENERIC' THEN
    SELECT count(1) INTO lvn_count FROM my_table
     WHERE type2 = :NEW.type2;
  ELSE
    SELECT count(1) INTO lvn_count2 FROM my_table
     WHERE type1 = :NEW.type1 AND type2 = :NEW.type2;
  END IF;
  IF (lvn_count >= 1 OR lvn_count2 >= 1) THEN
    RAISE MUST_ACCUR_ONE;
  END IF;
END;
But it raises a mutating-table error without the pragma (because the trigger queries the same table it is defined on), and I do not want to use the pragma because it will cause conflicts between simultaneous connections.
I also tried to do it with a unique index, but I couldn't manage it:
CREATE UNIQUE INDEX my_table_unique_ix
ON my_table (case when type1 = 'GENERIC' then 'some_logic_here' else type1 end, type2); -- I know this does not make sense as written, but maybe there is something along these lines that I could use here.
Examples:
**Example 1**
insert into my_table (type1,type2) values ('a','b');       -- OK, no problem
insert into my_table (type1,type2) values ('a','c');       -- OK, no problem
insert into my_table (type1,type2) values ('c','b');       -- OK, no problem
insert into my_table (type1,type2) values ('GENERIC','b'); -- should fail, because 'b' already exists (only the second column is checked, since the first column is 'GENERIC')
**Example 2**
insert into my_table (type1,type2) values ('GENERIC','b'); -- OK, no problem
insert into my_table (type1,type2) values ('a','c');       -- OK, no problem
insert into my_table (type1,type2) values ('d','c');       -- OK, no problem
insert into my_table (type1,type2) values ('d','b');       -- should fail, because the second column may not equal the type2 of a row whose type1 is 'GENERIC'

What you're trying to do is not really straightforward in Oracle. One possible (although somewhat cumbersome) approach is to use a combination of

- an additional materialized view with REFRESH ON COMMIT
- a windowing function to compute the number of distinct type1 values per group
- a windowing function to compute the number of GENERIC rows per group
- a check constraint to ensure that each group either has only one distinct type1 value or contains no GENERIC row

This should work:
create materialized view mv_my_table
refresh on commit
as
select
  type1,
  type2,
  count(distinct type1) over (partition by type2) as distinct_type1_cnt,
  count(case when type1 = 'GENERIC' then 1 else null end)
    over (partition by type2) as generic_cnt
from my_table;

alter table mv_my_table add constraint chk_type1
  CHECK (distinct_type1_cnt = 1 or generic_cnt = 0);
Now, INSERTing a duplicate won't fail immediately, but the subsequent COMMIT will fail because it triggers the materialized view refresh, and that will cause the check constraint to fire.
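For illustration, a hedged sketch of how that plays out in a session (the exact error stack may vary by version; ORA-12008 wrapping an ORA-02290 check-constraint violation is typical):

insert into my_table (type1, type2) values ('a', 'b');        -- succeeds
insert into my_table (type1, type2) values ('GENERIC', 'b');  -- also succeeds for now
commit;
-- the on-commit refresh of mv_my_table fails here with something like:
-- ORA-12008: error in materialized view refresh path
-- ORA-02290: check constraint (CHK_TYPE1) violated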
Disadvantages

- duplicate INSERTs won't fail immediately (making debugging more painful)
- depending on the size of your table, the MView refresh might slow down COMMITs considerably
Links
For a more detailed discussion of this approach, see AskTom on cross-row constraints

Try it like this:
CREATE TABLE my_table (
  type1 VARCHAR2(10 CHAR),
  type2 VARCHAR2(10 CHAR),
  type1_unique VARCHAR2(10 CHAR) GENERATED ALWAYS AS ( NULLIF(type1, 'GENERIC') ) VIRTUAL
);

ALTER TABLE my_table ADD (CONSTRAINT my_table_unique_ix UNIQUE (type1_unique, type2) USING INDEX);
Or an index like this should also work:
CREATE UNIQUE INDEX my_table_unique_ix ON MY_TABLE (NULLIF(type1, 'GENERIC'), type2);
Or doing it in your style (you only missed the END):
CREATE UNIQUE INDEX my_table_unique_ix ON my_table (case when type1= 'GENERIC' then null else type1 end, type2);

Unless I'm missing something obvious, the logic in the answer from @Frank Schmitt can also be implemented using a statement-level trigger. It is a lot simpler to implement and does not have the disadvantages that Frank mentions.
create or replace TRIGGER my_table_t
AFTER INSERT OR UPDATE OR DELETE
ON my_table
DECLARE
  l_dummy        NUMBER;
  MUST_ACCUR_ONE EXCEPTION;
BEGIN
  WITH constraint_violated AS
  (
    select
      type1,
      type2,
      count(distinct type1) over (partition by type2) as distinct_type1_cnt,
      count(case when type1 = 'GENERIC' then 1 else null end)
        over (partition by type2) as generic_cnt
    from my_table
  )
  SELECT 1 INTO l_dummy
    FROM constraint_violated
   WHERE NOT (distinct_type1_cnt = 1 or generic_cnt = 0)
   FETCH FIRST 1 ROWS ONLY;
  RAISE MUST_ACCUR_ONE;
EXCEPTION WHEN NO_DATA_FOUND THEN
  NULL;
END;
/
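A quick behaviour sketch, assuming the statement-level trigger above is in place (an unhandled user-defined exception surfaces as ORA-06510 together with ORA-04088 naming the trigger):

insert into my_table (type1, type2) values ('a', 'b');        -- succeeds
insert into my_table (type1, type2) values ('GENERIC', 'b');  -- fails: the AFTER-statement check sees type2 = 'b' in both a GENERIC and a non-GENERIC row and raises MUST_ACCUR_ONE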

Related

Cursor for loop using a selection instead of a table ( Oracle )

I'm writing a procedure to fill up a child table from a parent table. The child table, however, has more fields than the parent table (as it should be). I've conjured a cursor which points to a selection, which is essentially a join of multiple tables.
Here's the code I got so far :
CREATE OR REPLACE PROCEDURE Pop_occ_lezione
AS
x Lezione%rowtype;
CURSOR cc IS
WITH y as(
SELECT Codice_corso,
nome_modulo,
Data_inizio_ed_modulo diem,
Giorno_lezione,
ora_inizio_lezione o_i,
ora_fine_lezione o_f,
anno,
id_cdl,
nome_sede,
locazione_modulo loc
FROM lezione
join ( select id_cdl, anno, codice_corso from corso ) using (codice_corso)
join ( select codice_corso, locazione_modulo from modulo ) using (codice_corso)
join ( select nome_sede, id_cdl from cdl ) using (id_cdl)
WHERE
case
when extract (month from Data_inizio_ed_modulo) < 9 then extract (year from Data_inizio_ed_modulo) - 1
else extract (year from Data_inizio_ed_modulo)
end = extract (year from sysdate+365)
)
SELECT *
FROM y
WHERE sem_check(y.diem,sysdate+365) = 1;
--
BEGIN
FETCH cc into x;
EXIT when cc%NOTFOUND;
INSERT INTO Occr_lezione
VALUES (
x.Codice_corso,
x.Nome_modulo,
x.diem,x.giorno_lezione,
x.Ora_inizio_lezione,
to_date(to_char(next_day(sysdate,x.Giorno_lezione),'DD-MM-YYYY') || to_char(x.Ora_inizio_lezione,' hh24:mi'),'dd-mm-yyyy hh24:mi'),
to_date(to_char(next_day(sysdate,x.Giorno_lezione),'DD-MM-YYYY') || to_char(x.Ora_fine_lezione,' hh24:mi'),'dd-mm-yyyy hh24:mi'),
x.nome_sede,
0,
x.loc
);
END LOOP;
END;
/
But of course it won't work, because the variable x has the type of my initial table row, which has fewer columns than my selection. Unfortunately, as far as I know, a rowtype variable is needed to cycle through a cursor in order to fetch data from it. Can you see the contradiction? How can I change the code? Is there a type of variable that can be crafted to reflect a row of my query result? Or maybe a way to cycle through the data in the cursor without using a support variable? Or maybe something entirely different? Please let me know.
Ok, so as suggested I tried something like this:
INSERT INTO Occr_lezione(
Codice_corso,
Nome_modulo,
Data_inizio_ed_modulo,
Giorno_lezione,
Ora_inizio_lezione,
Ora_fine_lezione,
Anno,
Id_cdl,
Nome_sede,
Locazione_modulo
)
WITH y as(
SELECT Codice_corso,
Nome_modulo,
Data_inizio_ed_modulo,
Giorno_lezione,
Ora_inizio_lezione,
Ora_fine_lezione,
Anno,
Id_cdl,
Nome_sede,
Locazione_modulo
FROM Lezione
join ( select Id_cdl, Anno, Codice_corso from Corso ) using (codice_corso)
join ( select Codice_corso, Locazione_modulo from Modulo ) using (Codice_corso)
join ( select Nome_sede, Id_cdl from Cdl ) using (id_cdl)
WHERE
case
when extract (month from Data_inizio_ed_modulo) < 9 then extract (year from Data_inizio_ed_modulo) - 1
else extract (year from Data_inizio_ed_modulo)
end = extract (year from sysdate+365)
)
SELECT *
FROM y
WHERE sem_check(y.Data_inizio_ed_modulo,sysdate+365) = 1;
END;
/
But it says PL/SQL: ORA-00904: "LOCAZIONE_MODULO": invalid identifier,
which doesn't seem right to me, because the query returns a result set in which such a column is present... am I missing something?
The code compiles with no errors; the error occurs when I try to fire the procedure.
In the table Occr_lezione as you can see:
CREATE TABLE Occr_lezione (
Codice_corso varchar2(20) NOT NULL,
Nome_modulo varchar2(50) NOT NULL,
Data_inizio_ed_modulo date NOT NULL,
Giorno_lezione number(1) NOT NULL,
Ora_inizio_lezione date NOT NULL,
Data_inizio_occr_lezione date,
Data_fine_occr_lezione date NOT NULL,
Nome_sede varchar2(30) NOT NULL,
Num_aula varchar2(3) NOT NULL,
Tipo_aula varchar2(20) NOT NULL,
--
CONSTRAINT fk_Occr_lezione_lezione FOREIGN KEY (Codice_corso,Nome_modulo,Data_inizio_ed_modulo,Giorno_lezione,Ora_inizio_lezione) REFERENCES Lezione(Codice_corso,Nome_modulo,Data_inizio_ed_modulo,Giorno_lezione,Ora_inizio_lezione) ON DELETE CASCADE,
CONSTRAINT fk_Occr_lezione_aula FOREIGN KEY (Nome_sede,Num_aula,Tipo_aula) REFERENCES Aula(Nome_sede,Num_aula,Tipo_aula) ON DELETE SET NULL,
CONSTRAINT pk_Occr_lezione PRIMARY KEY (Codice_corso,Nome_modulo,Data_inizio_ed_modulo,Giorno_lezione,Ora_inizio_lezione,Data_inizio_occr_lezione),
CHECK ( trunc(Data_inizio_occr_lezione) = trunc(Data_fine_occr_lezione) ), -- start date = end date // daily booking
CHECK ( Data_inizio_occr_lezione < Data_fine_occr_lezione ) -- start time < end time // temporal consistency
);
there is no column named Locazione_modulo; however, the last column, Tipo_aula, has the same type and size as Locazione_modulo:
CREATE TABLE Modulo (
Codice_corso varchar2(20) NOT NULL,
Nome_modulo varchar2(50),
Locazione_modulo varchar2(20) NOT NULL,
--
CONSTRAINT fk_Modulo_Corso FOREIGN KEY(Codice_corso) REFERENCES Corso(Codice_corso) ON DELETE CASCADE,
CONSTRAINT pk_Modulo PRIMARY KEY(Codice_corso,Nome_modulo),
CHECK (Locazione_modulo IN ('Aula','Laboratorio','Conferenze'))
);
So it should be irrelevant, right?
If you really want to use explicit cursors, you can declare x to be of type cc%rowtype
CREATE OR REPLACE PROCEDURE Pop_occ_lezione
AS
CURSOR cc IS ...
x cc%rowtype;
...
Unless you are using explicit cursors because you want to be able to explicitly fetch the data into local collections that you can leverage later on in your procedure, code using implicit cursors tends to be preferable. That eliminates the need to FETCH and CLOSE the cursor or to write an EXIT condition, and it implicitly does a bulk fetch to minimize context shifts.
BEGIN
FOR x IN cc
LOOP
INSERT INTO Occr_lezione ...
END LOOP;
END;
Of course, in either case, I would hope that you'd choose more meaningful names for your local variables-- x and cc don't tell you anything about what the variables are doing.
If all you are doing is taking data from one set of tables and inserting it into another table, it would be more efficient to write a single INSERT statement rather than coding a PL/SQL loop.
INSERT INTO Occr_lezione( <<column list>> )
SELECT <<column list>>
FROM <<tables you are joining together in the cursor definition>>
WHERE <<conditions from your cursor definition>>

insert multiple rows into table using select however table has primary key in oracle SQL [duplicate]

This question already has answers here:
How to create id with AUTO_INCREMENT on Oracle?
(18 answers)
Closed 8 years ago.
I am facing an issue while inserting multiple rows in one go into a table, because the id column is a primary key and is created based on a sequence.
For example:
create table test (
iD number primary key,
name varchar2(10)
);
insert into test values (123, 'xxx');
insert into test values (124, 'yyy');
insert into test values (125, 'xxx');
insert into test values (126, 'xxx');
The following statement creates a constraint violation error:
insert into test
(
select (SELECT MAX (id) + 1 FROM test) as id,
name from test
where name='xxx'
);
This query should insert 3 rows in table test (having name=xxx).
You're saying that your query inserts rows with primary key ID based on a sequence. Yet, in your insert/select there is select (SELECT MAX (id) + 1 FROM test) as id, which clearly is not based on sequence. It may be the case that you are not using the term "sequence" in the usual, Oracle way.
Anyway, there are two options for you:

1. Create a sequence, e.g. seq_test_id, starting with the value of select max(id) from test, and use it (i.e. seq_test_id.nextval) in your query instead of the select max(id)+1 from test.
2. Fix the actual subselect to nvl((select max(id) from test),0)+rownum instead of (select max(id)+1 from test).
Please note, however, that the option 2 (as well as your original solution) will cause you huge troubles whenever your code runs in multiple concurrent database sessions. So, option 1 is strongly recommended.
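A minimal sketch of option 1, purely for illustration (the sequence name and START WITH value are assumptions; pick the start value from select max(id) from test at creation time):

create sequence seq_test_id start with 127 increment by 1;

insert into test (id, name)
select seq_test_id.nextval, name
  from test
 where name = 'xxx';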
Use
insert into test (
select (SELECT MAX (id) FROM test) + rownum as id,
name from test
where name='xxx'
);
as a workaround.
Of course, you should be using sequences for integer primary keys.
If you want to insert an ID/primary key value generated by a sequence, you should use the sequence instead of selecting max(ID)+1.
Usually this is done using a trigger on your table which is executed for each row. See the sample below:
CREATE TABLE "MY_TABLE"
(
"MY_ID" NUMBER(10,0) CONSTRAINT PK_MY_TABLE PRIMARY KEY ,
"MY_COLUMN" VARCHAR2(100)
);
/
CREATE SEQUENCE "S_MY_TABLE"
MINVALUE 1 MAXVALUE 999999999999999999999999999
INCREMENT BY 1 START WITH 10 NOCACHE ORDER NOCYCLE NOPARTITION ;
/
CREATE OR REPLACE TRIGGER "T_MY_TABLE"
BEFORE INSERT
ON
MY_TABLE
REFERENCING OLD AS OLDEST NEW AS NEWEST
FOR EACH ROW
WHEN (NEWEST.MY_ID IS NULL)
DECLARE
IDNOW NUMBER;
BEGIN
SELECT S_MY_TABLE.NEXTVAL INTO IDNOW FROM DUAL;
:NEWEST.MY_ID := IDNOW;
END;
/
ALTER TRIGGER "T_MY_TABLE" ENABLE;
/
insert into MY_TABLE (MY_COLUMN) values ('DATA1');
insert into MY_TABLE (MY_COLUMN) values ('DATA2');
insert into MY_TABLE (MY_ID, MY_COLUMN) values (S_MY_TABLE.NEXTVAL, 'DATA3');
insert into MY_TABLE (MY_ID, MY_COLUMN) values (S_MY_TABLE.NEXTVAL, 'DATA4');
insert into MY_TABLE (MY_COLUMN) values ('DATA5');
/
select * from MY_TABLE;

auditing 50 columns using oracle trigger

I need to create a trigger in Oracle 11g for auditing a table.
I have a table with 50 columns that need to be audited.
For every new insert into the table, I need to put one entry in the audit table (1 row).
For every update, suppose I update the 1st and 2nd columns; then it should create two records in the audit table, each with its old value and new value.
The structure of the audit table will be:
id NOT NULL
attribute NOT NULL
OLD VALUE NOT NULL
NEW VALUE NOT NULL
cre_date NOT NULL
upd_date NULL
cre_time NOT NULL
upd_time NULL
In case of an insert, only the primary key of the main table (i.e. the id), cre_date and cre_time need to be populated, with attribute equal to *. In case of an update, suppose colA and colB are being updated; then everything needs to be populated: two records will be created, the first with attribute colA and its corresponding old and new values, and likewise for colB.
My current auditing solution is not very optimized: I have created a row-level trigger which checks each of the 50 columns of the table for whether it has changed, based on its new and old values (if-else), and populates the audit table accordingly.
I am not satisfied with my solution; that's why I am posting here.
Another solution, which I have seen in the link below:
http://stackoverflow.com/questions/1421645/oracle-excluding-updates-of-one-column-for-firing-a-trigger
This is not working in my case; I have done a POC for it as shown below:
create table temp12(id number);
create or replace trigger my_trigger
after update or insert on temp12
for each row
declare
TYPE tab_col_nt IS table of varchar2(30);
v_tab_col_nt tab_col_nt;
begin
v_tab_col_nt := tab_col_nt('id','name');
for r in v_tab_col_nt.first..v_tab_col_nt.last
loop
if updating(r) then
insert into data_table values(1,'i am updating'||r);
else
insert into data_table values(2,'i am inserting'||r);
end if;
end loop;
end;
In the case of an update it is calling the else part, and I don't know why.
Could this be done with a compound trigger?
Your immediate problem with the else always being called is because you're using your index variable r directly, rather than looking up the relevant column name:
for r in v_tab_col_nt.first..v_tab_col_nt.last
loop
if updating(v_tab_col_nt(r)) then
insert into data_table values(1,'i am updating '||v_tab_col_nt(r));
else
insert into data_table values(2,'i am inserting '||v_tab_col_nt(r));
end if;
end loop;
You're also only showing an id column in your table creation, so when r is 2, it will always say it's inserting name, never updating. More importantly, if you did have a name column and were only updating that for a given id, this code would show the id as inserting when it hadn't changed. You need to split the insert/update into separate blocks:
if updating then
for r in v_tab_col_nt.first..v_tab_col_nt.last loop
if updating(v_tab_col_nt(r)) then
insert into data_table values(1,'i am updating '||v_tab_col_nt(r));
end if;
end loop;
else /* inserting */
for r in v_tab_col_nt.first..v_tab_col_nt.last loop
insert into data_table values(2,'i am inserting '||v_tab_col_nt(r));
end loop;
end if;
This will still say it's inserting name even if the column doesn't exist, but I assume that's a mistake, and I guess you'd be trying to populate the list of names from user_tab_columns anyway if you really want to try to make it dynamic.
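Purely as an illustration of that last point, a hedged sketch of populating the column list from the data dictionary instead of hard-coding it (the table name TEMP12 and the collection type are assumptions; doing this once per statement rather than per row is advisable):

declare
  type tab_col_nt is table of varchar2(30);
  v_tab_col_nt tab_col_nt;
begin
  -- load every column name of the audited table, in positional order
  select column_name
    bulk collect into v_tab_col_nt
    from user_tab_columns
   where table_name = 'TEMP12'
   order by column_id;

  for r in v_tab_col_nt.first .. v_tab_col_nt.last loop
    dbms_output.put_line(v_tab_col_nt(r));
  end loop;
end;
/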
I agree with (at least some of) the others that you'd probably be better off with an audit table that takes a copy of the whole row, rather than individual columns. Your objection seems to be the complication of individually listing which columns changed. You could still get this information, with a bit of work, by unpivoting the audit table when you need column-by-column data. For example:
create table temp12(id number, col1 number, col2 number, col3 number);
create table temp12_audit(id number, col1 number, col2 number, col3 number,
action char(1), when timestamp);
create or replace trigger temp12_trig
before update or insert on temp12
for each row
declare
l_action char(1);
begin
if inserting then
l_action := 'I';
else
l_action := 'U';
end if;
insert into temp12_audit(id, col1, col2, col3, action, when)
values (:new.id, :new.col1, :new.col2, :new.col3, l_action, systimestamp);
end;
/
insert into temp12(id, col1, col2, col3) values (123, 1, 2, 3);
insert into temp12(id, col1, col2, col3) values (456, 4, 5, 6);
update temp12 set col1 = 9, col2 = 8 where id = 123;
update temp12 set col1 = 7, col3 = 9 where id = 456;
update temp12 set col3 = 7 where id = 123;
select * from temp12_audit order by when;
ID COL1 COL2 COL3 A WHEN
---------- ---------- ---------- ---------- - -------------------------
123 1 2 3 I 29/06/2012 15:07:47.349
456 4 5 6 I 29/06/2012 15:07:47.357
123 9 8 3 U 29/06/2012 15:07:47.366
456 7 5 9 U 29/06/2012 15:07:47.369
123 9 8 7 U 29/06/2012 15:07:47.371
So you have one audit row for each action taken, two inserts and three updates. But you want to see separate data for each column that changed.
select distinct id, when,
case
when action = 'I' then 'Record inserted'
when prev_value is null and value is not null
then col || ' set to ' || value
when prev_value is not null and value is null
then col || ' set to null'
else col || ' changed from ' || prev_value || ' to ' || value
end as change
from (
select *
from (
select id,
col1, lag(col1) over (partition by id order by when) as prev_col1,
col2, lag(col2) over (partition by id order by when) as prev_col2,
col3, lag(col3) over (partition by id order by when) as prev_col3,
action, when
from temp12_audit
)
unpivot ((value, prev_value) for col in (
(col1, prev_col1) as 'col1',
(col2, prev_col2) as 'col2',
(col3, prev_col3) as 'col3')
)
)
where value != prev_value
or (value is null and prev_value is not null)
or (value is not null and prev_value is null)
order by when, id;
ID WHEN CHANGE
---------- ------------------------- -------------------------
123 29/06/2012 15:07:47.349 Record inserted
456 29/06/2012 15:07:47.357 Record inserted
123 29/06/2012 15:07:47.366 col1 changed from 1 to 9
123 29/06/2012 15:07:47.366 col2 changed from 2 to 8
456 29/06/2012 15:07:47.369 col1 changed from 4 to 7
456 29/06/2012 15:07:47.369 col3 changed from 6 to 9
123 29/06/2012 15:07:47.371 col3 changed from 3 to 7
The five audit records have turned into seven output rows; the three update statements show the five columns modified. If you'll be using this a lot, you might consider making that into a view.
So let's break that down just a little bit. The core is this inner select, which uses lag() to get the previous value of the row, from the previous audit record for that id:
select id,
col1, lag(col1) over (partition by id order by when) as prev_col1,
col2, lag(col2) over (partition by id order by when) as prev_col2,
col3, lag(col3) over (partition by id order by when) as prev_col3,
action, when
from temp12_audit
That gives us a temporary view which has all the audit table's columns plus the lag columns, which are then used for the unpivot() operation (which you can use, as you've tagged the question as 11g):
select *
from (
...
)
unpivot ((value, prev_value) for col in (
(col1, prev_col1) as 'col1',
(col2, prev_col2) as 'col2',
(col3, prev_col3) as 'col3')
)
Now we have a temporary view which has id, action, when, col, value, prev_value columns; in this case as I only have three columns, that has three times the number of rows in the audit table. Finally the outer select filters that view to only include the rows where the value has changed, i.e. where value != prev_value (allowing for nulls).
select
...
from (
...
)
where value != prev_value
or (value is null and prev_value is not null)
or (value is not null and prev_value is null)
I'm using case to just print something, but of course you can do whatever you want with the data. The distinct is needed because the insert entries in the audit table are also converted to three rows in the unpivoted view, and I'm showing the same text for all three from my first case clause.
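As suggested above, this can be wrapped in a view for reuse; a hedged sketch (the view name temp12_changes is an assumption, and the body is just the query shown earlier without the final ORDER BY):

create or replace view temp12_changes as
select distinct id, when,
       case
         when action = 'I' then 'Record inserted'
         when prev_value is null and value is not null
           then col || ' set to ' || value
         when prev_value is not null and value is null
           then col || ' set to null'
         else col || ' changed from ' || prev_value || ' to ' || value
       end as change
from (
  select *
  from (
    select id,
           col1, lag(col1) over (partition by id order by when) as prev_col1,
           col2, lag(col2) over (partition by id order by when) as prev_col2,
           col3, lag(col3) over (partition by id order by when) as prev_col3,
           action, when
    from temp12_audit
  )
  unpivot ((value, prev_value) for col in (
    (col1, prev_col1) as 'col1',
    (col2, prev_col2) as 'col2',
    (col3, prev_col3) as 'col3')
  )
)
where value != prev_value
   or (value is null and prev_value is not null)
   or (value is not null and prev_value is null);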
Why not make life easier and insert the entire row when any data in any column is updated? So any update (or delete, typically) on the main table has the original row copied to the audit table first. So your audit table will have the same layout as the main table, but with a few extra tracking fields, something like:
create or replace trigger my_tab_tr
before update or delete
on my_tab
referencing new as new and old as old
for each row
declare
l_type varchar2(3);
begin
if (updating) then
l_type := 'UPD';
else
l_type := 'DEL';
end if;
insert into my_tab_audit(
col1,
col2,
audit_type,
audit_date)
values (
:old.col1,
:old.col2,
l_type,
sysdate
);
end;
Add additional columns to the audit table as you like; this is just a typical example.
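For completeness, a minimal sketch of the audit table the trigger above assumes (the col1/col2 types are assumptions, mirroring a hypothetical my_tab):

create table my_tab_audit (
  col1       number,
  col2       number,
  audit_type varchar2(3),
  audit_date date
);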
The only way I've seen field-by-field audits done is to check each of the fields :OLD and :NEW values against each other and write the appropriate records to the audit table. You can semi-automate this by having a subroutine in the trigger to which you pass the appropriate values, but one way or another I believe you're going to have to write code for each individual field. Unless someone else has a brilliant way to do this with some sort of reflective API of which I'm not aware (and "what I'm not aware of" is applicable to more stuff each day, or so it seems :-).
The choice of whether to audit individual fields or to audit the entire row (which I usually call "history" tables) depends on how you intend to use the data. In this case, where individual fields changes need to be reported, I agree that a field-by-field audit seems to be a better fit. In other cases (for example, where a data extract must be reproducible for any given date) a row-by-row audit or "history table" approach is a better fit.
Regardless of the audit level (field-by-field or row-by-row), the comparison logic needs to be carefully written to handle the NULL/NOT NULL cases, so that you don't get bitten by comparing :OLD.FIELD1 = :NEW.FIELD1 where one of the values (or both) is NULL, and end up not taking the appropriate action because NULL is not equal to anything, even itself. Don't ask me how I know... :-)
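One common NULL-safe pattern for that comparison, sketched here purely as an illustration (field1 is a placeholder column name):

if (:old.field1 <> :new.field1)
   or (:old.field1 is null and :new.field1 is not null)
   or (:old.field1 is not null and :new.field1 is null)
then
  -- field1 really changed; write the audit row here
  null;
end if;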
Just out of curiosity, what will be put in for OLD_VALUE and NEW_VALUE in the single row which will be created when an INSERT occurs?
Share and enjoy.
The way I like to do it:

- Create an audit table that is parallel to your existing original table.
- Add timestamp and user columns to this audit table.
- Whenever the original table is inserted into or updated, just insert into the audit table.
- The audit table should have a trigger to set the timestamp and user values; all other values come in as the new values.

Then you can query at any time who did what, and when.
A very unorthodox solution:
(only if you have access to system tables, at least the SELECT privilege)
You know your table's NAME. Identify the ID of the owner of the table. Look it up in SYS.USER$ by the user's (=owner's) name.
Look up your table's object-ID (= OBJ#) in SYS.OBJ$ by OWNER# (= owner's ID) and NAME (=table's name).
Look up the columns that compose the table in SYS.COL$ by OBJ#. You will find all the columns, their IDs (COL#) and names (NAME).
Write an UPDATE trigger with a cursor that moves on the set of those columns. You will have to write the nucleus of the loop only once.
And at the end of it: I don't provide code, because the details may differ from Oracle version to Oracle version.
This is real dynamic SQL programming. I happened to use it even on fairly large enterprise systems (the team leaders did not know about it) and it worked. It is fast and reliable.
Drawbacks: {privileges; transportability; bad consideration from responsible people}.

Update or insert based on whether employee exists in table

I want to create a stored procedure which updates a row in a table, or inserts it if the row does not already exist.
This is what I have come up with so far:
PROCEDURE SP_UPDATE_EMPLOYEE
(
SSN VARCHAR2,
NAME VARCHAR2
)
AS
BEGIN
IF EXISTS(SELECT * FROM tblEMPLOYEE a where a.ssn = SSN)
--what ? just carry on to else
ELSE
INSERT INTO pb_mifid (ssn, NAME)
VALUES (SSN, NAME);
END;
Is this the way to achieve this?
This is quite a common pattern. Depending on what version of Oracle you are running, you could use the merge statement (it has been available since Oracle 9i).
create table test_merge (id integer, c2 varchar2(255));
create unique index test_merge_idx1 on test_merge(id);
merge into test_merge t
using (select 1 id, 'foobar' c2 from dual) s
on (t.id = s.id)
when matched then update set c2 = s.c2
when not matched then insert (id, c2)
values (s.id, s.c2);
Merge is intended to merge data from a source table, but you can fake it for individual rows by selecting the data from dual.
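A hedged sketch of the same pattern applied to the procedure from the question (the target table is assumed to be tblEMPLOYEE, and the parameters are renamed p_ssn/p_name to avoid clashing with the column names, as the other answer below also recommends):

create or replace procedure sp_update_employee
(
  p_ssn  varchar2,
  p_name varchar2
)
as
begin
  merge into tblEMPLOYEE t
  using (select p_ssn as ssn, p_name as name from dual) s
  on (t.ssn = s.ssn)
  when matched then
    update set t.name = s.name
  when not matched then
    insert (ssn, name) values (s.ssn, s.name);
end;
/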
If you cannot use merge, then optimize for the most common case. Will the proc usually not find a record and need to insert it, or will it usually need to update an existing record?
If inserting will be most common, code such as the following is probably best:
begin
  insert into t (columns)
  values (...);
exception
  when dup_val_on_index then
    update t set cols = ...;
end;
If update is the most common, then turn the procedure around:
begin
  update t set cols = ...;
  if sql%rowcount = 0 then
    -- nothing was updated, so the record doesn't exist; insert it.
    insert into t (columns)
    values (...);
  end if;
end;
You should not issue a select to check for the row and make the decision based on the result - that means you will always need to run two SQL statements, when you can get away with one most of the time (or always if you use merge). The fewer SQL statements you use, the better your code will perform.
BEGIN
INSERT INTO pb_mifid (ssn, NAME)
select SSN, NAME from dual
where not exists(SELECT * FROM tblEMPLOYEE a where a.ssn = SSN);
END;
UPDATE:
Attention: you should name your parameter p_ssn (to distinguish it from the column SSN), and the query becomes:
INSERT INTO pb_mifid (ssn, NAME)
select P_SSN, NAME from dual
where not exists(SELECT * FROM tblEMPLOYEE a where a.ssn = P_SSN);
because this always exists:
SELECT * FROM tblEMPLOYEE a where a.ssn = SSN

Resources