two triggers on one table - oracle

I am very new to Oracle; my office is using Oracle 10g. My question is:
I have two tables. One is current_cases, with columns case_id, col1, col2, col3, ...; the other is backup_cases, with columns backup_id, case_id, col1, col2, col3, ...,
where case_id of current_cases is the same as case_id of backup_cases.
I would like to create a BEFORE UPDATE trigger on current_cases that inserts the row's data into backup_cases, but there is already a trigger on backup_cases that fills backup_id with the sequence's next value. How should I create the update trigger: will the nextval trigger on backup_cases fill the value automatically, or should I override it and insert sequence.nextval into backup_cases myself? Please give me some idea about this small problem.

...will the nextval trigger on backup_cases fill the value automatically?
The trigger on backup_cases will work, but you must explicitly list the columns and values you insert, not insert ... select * ....
Test (everything is simplified: no primary keys, indexes, foreign keys or constraints, just to address your question in a short, readable way):
-- tables creation
create table current_cases (case_id number, col1 varchar2(20),
col2 varchar2(20));
create table backup_cases (backup_id number, case_id number, col1 varchar2(20),
col2 varchar2(20));
-- sequences creation
create sequence cc_seq;
create sequence bc_seq;
-- triggers
create or replace trigger bc_trg before insert on backup_cases
for each row
begin
  -- fill backup_id from the sequence for every row inserted into backup_cases
  select bc_seq.nextval into :new.backup_id from dual;
end;
/
create or replace trigger cc_trg before insert or update on current_cases
for each row
begin
  if inserting then
    -- on insert, fill case_id from the sequence
    select cc_seq.nextval into :new.case_id from dual;
  else
    -- on update, copy the old row into backup_cases; backup_id is filled by bc_trg
    insert into backup_cases (case_id, col1, col2)
    values (:old.case_id, :old.col1, :old.col2);
  end if;
end;
/
-- inserts and update sample data
insert into current_cases (col1, col2) values ('a1', 'a1');
insert into current_cases (col1, col2) values ('b1', 'b1');
insert into current_cases (col1, col2) values ('c1', 'c1');
update current_cases set col1 = 'b2a', col2='b2b' where case_id=2;
Results:
select * from current_cases;
CASE_ID COL1 COL2
---------- -------------------- --------------------
1 a1 a1
2 b2a b2b
3 c1 c1
select * from backup_cases;
BACKUP_ID CASE_ID COL1 COL2
---------- ---------- -------------------- --------------------
1 2 b1 b1

It doesn't look like you have anything to worry about. I assume there is an insert trigger on the backup table to generate the backup ID. I also assume there is an insert trigger on the current table that also inserts the incoming row into the backup table; it may also be generating the current ID.
If you add an update trigger on the current table, it can write the NEW row to the backup table and everything should work normally. You don't need to make any changes to any existing trigger on either table.
If you have any doubts, this is a very easy operation to test.
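A minimal sketch of such an update trigger, assuming the table layout from the question (the column names are illustrative) and relying on the existing insert trigger on backup_cases to generate backup_id:
create or replace trigger cc_backup_upd_trg
after update on current_cases
for each row
begin
  -- backup_id is deliberately not supplied: the existing insert trigger on backup_cases fills it
  insert into backup_cases (case_id, col1, col2)
  values (:new.case_id, :new.col1, :new.col2);
end;
/
Use :old.case_id, :old.col1, :old.col2 instead if you prefer to keep the pre-update values, as in the first answer's test.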

Related

Convert MERGE to UPDATE

I have the following MERGE statement. Is there a way to convert this into an UPDATE statement without using MERGE?
MERGE INTO tab1
USING (SELECT tab1.col1, tab2.col2
         FROM tab1, tab2
        WHERE tab1.col1 = tab2.col1) tab3
ON (tab1.col1 = tab3.col1)
WHEN MATCHED THEN UPDATE SET col2 = tab3.col2
What you are asking about is called "update through join", and contrary to a widely held belief, it is possible in Oracle. But there is a catch.
Obviously, the update - no matter how you attempt to perform it - is not well defined unless column col1 is unique in table tab2. That column is used for lookup in the update process; if its values are not unique, the update is ambiguous. I will ignore here quibbles such as "uniqueness is needed only for those values also found in tab1.col1", or "there is no ambiguity as long as all values in tab2.col2 are equal when the corresponding values in tab2.col1 are equal".
The "catch" is this. The uniqueness of tab2.col1 may be a matter of data (you know it when you inspect the data), or a matter of metadata (there is a unique constraint, or a unique index, or a PK constraint, etc., on tab2.col1, which the parser can inspect without ever looking at the actual data).
merge will work even when uniqueness is only known by inspecting the data. It will still throw an error if uniqueness is violated - but that will be a runtime error (only after the data in tab2 is accessed from disk). By contrast, updating through a join requires the same uniqueness to be known ahead of time, through the metadata (or in other ways: for example if the second rowset - not a table but the table-like result of a query - is the result of an aggregation grouping on the join column; then the uniqueness is guaranteed by the definition of "aggregation").
Here is a brief example to show the difference.
Test data:
create table tab1 (col1 number, col2 number);
insert into tab1 (col1, col2) values (1, 3);
create table tab2 (col1 number, col2 number);
insert into tab2 (col1, col2) values (1, 6);
commit;
merge statement (with check at the end):
merge into tab1
using (select tab1.col1,
              tab2.col2
         from tab1, tab2
        where tab1.col1 = tab2.col1) tab3
on (tab1.col1 = tab3.col1)
when matched then
  update set col2 = tab3.col2;
1 row merged.
select * from tab1;
COL1 COL2
---------- ----------
1 6
Now let's restore table tab1 to its original data for the next test(s):
rollback;
select * from tab1;
COL1 COL2
---------- ----------
1 3
Update through join - with no uniqueness guaranteed in the metadata (will result in error):
update
( select t1.col2 as t1_c2, t2.col2 as t2_c2
from tab1 t1 join tab2 t2 on t1.col1 = t2.col1
)
set t1_c2 = t2_c2;
Error report -
SQL Error: ORA-01779: cannot modify a column which maps to a non key-preserved table
01779. 00000 - "cannot modify a column which maps to a non key-preserved table"
*Cause: An attempt was made to insert or update columns of a join view which
map to a non-key-preserved table.
*Action: Modify the underlying base tables directly.
Now let's add a unique constraint on the lookup column:
alter table tab2 modify (col1 unique);
Table TAB2 altered.
and try the update again (with the same update statement), plus verification:
update
( select t1.col2 as t1_c2, t2.col2 as t2_c2
from tab1 t1 join tab2 t2 on t1.col1 = t2.col1
)
set t1_c2 = t2_c2;
1 row updated.
select * from tab1;
COL1 COL2
---------- ----------
1 6
So - you can do it, if you use the correct syntax (as I have shown here) AND - very important - you have a unique or PK constraint or a unique index on column tab2.col1.
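To round out the runtime-versus-parse-time point from above, here is a quick sketch (continuing the same test tables): drop the unique constraint again, add a duplicate lookup value, and the same merge statement still parses, but fails only when it actually runs, typically with ORA-30926 (unable to get a stable set of rows in the source tables):
alter table tab2 drop unique (col1);
insert into tab2 (col1, col2) values (1, 99);
merge into tab1
using (select tab1.col1, tab2.col2
         from tab1, tab2
        where tab1.col1 = tab2.col1) tab3
on (tab1.col1 = tab3.col1)
when matched then
  update set col2 = tab3.col2;
The update-through-join form, by contrast, is rejected at parse time with ORA-01779 as soon as the metadata no longer guarantees uniqueness.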

How to select column value from a nested table

I created an object type.
create type tab_billing as object(invoice_no number,
customername varchar2(100)
);
Now I created a table with the object as a column.
CREATE TABLE tab1 (col1 number,COL2 tab_billing);
Is there any way I can select ONLY invoice_no from tab1?
select col2 from tab1;
is giving me both invoice_no and customername. The SUBSTR function is not working here.
You can query the column value's object field directly, but to avoid confusing the object name resolution steps you have to supply and use a table alias:
select t1.col2.invoice_no from tab1 t1;
This is mentioned in the documentation:
To avoid inner capture and similar problems resolving references, Oracle Database requires you to use a table alias to qualify any dot-notational reference to subprograms or attributes of objects.
Qualifying the column with the table name isn't enough; using select tab1.col2.invoice_no from tab1 gets ORA-00904. You have to use a table alias - although, slightly bizarrely, it still works if the alias is the same as the table name, so select tab1.col2.invoice_no from tab1 tab1 (i.e. aliasing tab1 as tab1, which is normally redundant) works too.
Quick demo:
create type tab_billing as object(invoice_no number,
customername varchar2(100)
);
/
Type TAB_BILLING compiled
CREATE TABLE tab1 (col1 number,COL2 tab_billing);
Table TAB1 created.
insert into tab1 values (1, tab_billing(42, 'Test'));
1 row inserted.
select t1.col2.invoice_no from tab1 t1;
COL2.INVOICE_NO
---------------------------------------
42
You can use TREAT:
SQL> create type tab_billing as object(invoice_no number,
2 customername varchar2(100)
3 );
4 /
Type created.
SQL> CREATE TABLE tab1 (col1 number,COL2 tab_billing);
Table created.
SQL> insert into tab1 values (1, tab_billing(10, 'ten')) ;
1 row created.
SQL> select col1,
2 TREAT(col2 AS tab_billing).invoice_no as invoice_no,
3 TREAT(col2 AS tab_billing).customername as customername
4 from tab1;
COL1 INVOICE_NO CUSTOMERNAME
------ ---------- --------------------
1 10 ten

How to insert data in one table from another two tables using triggers in oracle

I have been trying to insert data into a table, say admin, dynamically when data is inserted into two other tables, namely table_1 and table_2. I am able to get the desired output only for one table, but not for multiple tables. How can I achieve this using triggers in Oracle?
You need to create two separate INSERT triggers, one on table_1 and another on table_2, to insert the data into the admin table.
Trigger 1:
CREATE OR REPLACE TRIGGER table_1_after_insert
AFTER INSERT
ON table_1
FOR EACH ROW
BEGIN
-- Insert record into admin table
INSERT INTO admin
( column1,
column2,
column3,
column4,
column5 )
VALUES
( :new.column1,
:new.column2,
:new.column3,
:new.column4,
:new.column5 );
END;
/
Trigger 2:
CREATE OR REPLACE TRIGGER table_2_after_insert
AFTER INSERT
ON table_2
FOR EACH ROW
BEGIN
-- Insert record into admin table
INSERT INTO admin
( column1,
column2,
column3,
column4,
column5 )
VALUES
( :new.column1,
:new.column2,
:new.column3,
:new.column4,
:new.column5 );
END;
/
Create 2 triggers on the 2 tables, as Lalit suggested:
CREATE or replace TRIGGER TRG_TAB1
BEFORE INSERT ON tab1...
... insert into admin (name) values (:new.name)
on the other table
CREATE or replace TRIGGER TRG_TAB2
BEFORE INSERT ON tab2...
... insert into admin (name) values (:new.name)

delete row by id

I'm in the middle of creating a tool similar to the SQL Developer table data viewer. My db is Oracle based.
I simply need to delete, e.g., 'row number 3' from a SELECT result. That table doesn't have any PK nor unique records. I've tried various techniques with ROWNUM etc. but no luck.
Oracle has a ROWID pseudocolumn that you can use for this purpose in simple cases.
select rowid, ... from your_table where ... ;
delete from your_table where rowid = <what you got above>;
If your interface allows the user to make complex views/joins/aggregates, then knowing what the user intended to delete (so knowing what set of rowids to gather and what set of tables to delete from) is going to be tricky.
Warning: rowids are unique only within a given table, and, quoting the above documentation:
If you delete a row, then Oracle may reassign its rowid to a new row inserted later.
So be very, very careful if you do this.
Assuming that it is a standard heap-organized table (index-organized tables and clusters potentially introduce additional complexity), if you don't have any other way to identify a row, you can use the ROWID pseudocolumn. This gives you information about the physical location of a row on disk. This means that the ROWID for a particular row can change over time and the ROWID can and will be reused when you delete a row and then a subsequent INSERT operation inserts a new row that happens to be in the same physical location on disk. For most applications, it is reasonable to assume that the ROWID will remain constant between the time that you execute the query and the time that you issue the DELETE but you shouldn't try to store the ROWID for any period of time.
For example, if we create a simple two-column table and a few rows
SQL> create table foo( col1 number, col2 varchar2(10) );
Table created.
SQL> insert into foo values( 1, 'Justin' );
1 row created.
SQL> insert into foo values( 1, 'Justin' );
1 row created.
SQL> insert into foo values( 2, 'Bob' );
1 row created.
SQL> insert into foo values( 2, 'Charlie' );
1 row created.
SQL> commit;
Commit complete.
We can SELECT the ROWID and then DELETE the third row using the ROWID
SQL> select *
2 from foo;
COL1 COL2
---------- ----------
1 Justin
1 Justin
2 Bob
2 Charlie
SQL> select rowid, col1, col2
2 from foo;
ROWID COL1 COL2
------------------ ---------- ----------
AAAfKXAAEAABt7vAAA 1 Justin
AAAfKXAAEAABt7vAAB 1 Justin
AAAfKXAAEAABt7vAAC 2 Bob
AAAfKXAAEAABt7vAAD 2 Charlie
SQL> delete from foo where rowid = 'AAAfKXAAEAABt7vAAC';
1 row deleted.
SQL> select * from foo;
COL1 COL2
---------- ----------
1 Justin
1 Justin
2 Charlie
Try using ROWID instead of ROWNUM.

Oracle Equivalent to MySQL INSERT IGNORE?

I need to update a query so that it checks that a duplicate entry does not exist before insertion. In MySQL I can just use INSERT IGNORE so that if a duplicate record is found it just skips the insert, but I can't seem to find an equivalent option for Oracle. Any suggestions?
If you're on 11g you can use the hint IGNORE_ROW_ON_DUPKEY_INDEX:
SQL> create table my_table(a number, constraint my_table_pk primary key (a));
Table created.
SQL> insert /*+ ignore_row_on_dupkey_index(my_table, my_table_pk) */
2 into my_table
3 select 1 from dual
4 union all
5 select 1 from dual;
1 row created.
Check out the MERGE statement. This should do what you want - it's the WHEN NOT MATCHED clause that will do this.
Due to Oracle's lack of support for a true VALUES() clause, the syntax for a single record with fixed values is pretty clumsy though:
MERGE INTO your_table yt
USING (
   SELECT 42 as the_pk_value,
          'some_value' as some_column
   FROM dual
) t ON (yt.pk = t.the_pk_value)
WHEN NOT MATCHED THEN
  INSERT (pk, the_column)
  VALUES (t.the_pk_value, t.some_column);
A different approach (if you are e.g. doing bulk loading from a different table) is to use the "Error logging" facility of Oracle. The statement would look like this:
INSERT INTO your_table (col1, col2, col3)
SELECT c1, c2, c3
FROM staging_table
LOG ERRORS INTO errlog ('some comment') REJECT LIMIT UNLIMITED;
Afterwards all rows that would have thrown an error are available in the table errlog. You need to create that errlog table (or whatever name you choose) manually before running the insert using DBMS_ERRLOG.CREATE_ERROR_LOG.
See the manual for details
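A minimal sketch of that preparation step (the table and log names here are only placeholders):
begin
  -- creates an error log table named ERRLOG shadowing YOUR_TABLE's columns
  dbms_errlog.create_error_log(dml_table_name     => 'YOUR_TABLE',
                               err_log_table_name => 'ERRLOG');
end;
/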
I don't think there is a direct equivalent, but to save time you can attempt the insert and ignore the inevitable error:
begin
insert into table_a( col1, col2, col3 )
values ( 1, 2, 3 );
exception when dup_val_on_index then
null;
end;
/
This will only ignore exceptions raised specifically by duplicate primary key or unique key constraints; everything else will be raised as normal.
If you don't want to do this then you have to select from the table first, which isn't really that efficient.
Another variant
Insert into my_table (student_id, group_id)
select distinct p.studentid, g.groupid
from person p, group g
where NOT EXISTS (select 1
from my_table a
where a.student_id = p.studentid
and a.group_id = g.groupid)
or you could do
Insert into my_table (student_id, group_id)
select distinct p.studentid, g.groupid
from person p, group g
MINUS
select student_id, group_id
from my_table
A simple solution:
insert into t1
select * from t2
where not exists
  (select 1 from t1 where t1.id = t2.id)
This one isn't mine, but it came in really handy when using SQL*Loader:
create a view that points to your table:
CREATE OR REPLACE VIEW test_view
AS SELECT * FROM test_tab
create the trigger:
CREATE OR REPLACE TRIGGER test_trig
INSTEAD OF INSERT ON test_view
FOR EACH ROW
BEGIN
INSERT INTO test_tab VALUES
(:NEW.id, :NEW.name);
EXCEPTION
WHEN DUP_VAL_ON_INDEX THEN NULL;
END test_trig;
and in the ctl file, insert into the view instead:
OPTIONS(ERRORS=0)
LOAD DATA
INFILE 'file_with_duplicates.csv'
INTO TABLE test_view
FIELDS TERMINATED BY ','
(id, field1)
How about simply adding a unique index (or constraint) on whatever fields you need to check for duplicates? It saves a read check.
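For example, a sketch with placeholder names; note that the unique index by itself makes the duplicate insert fail with ORA-00001 rather than silently skip it, so you would still combine it with one of the approaches above (the hint, the DUP_VAL_ON_INDEX handler, or error logging):
create unique index your_table_uq on your_table (col1, col2);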
Yet another "where not exists" variant, using dual...
insert into t1(id, unique_name)
select t1_seq.nextval, 'Franz-Xaver' from dual
where not exists (select 1 from t1 where unique_name = 'Franz-Xaver');
