Adding data to one table through triggers on multiple tables - Oracle

How do I add data to a table that contains attributes of two different tables which are not linked together?
Table 1 has col1,col2,col3 (assume all are numbers)
Table 2 has col4,col5,col6 (assume all are numbers)
Table 3 has A,B,C,D,E,F (assume all are numbers)
The requirement is that if an insert occurs on table 1 or table 2, its data should be loaded into table 3.
I used the normal approach of:
create or replace trigger trig_name
before insert on table1,table2 --> not allowed
for each row
begin
if inserting then
insert into table3 values (:new.col1,:new.col2,:new.col3,:new.col4,:new.col5,:new.col6);
end if;
end;
/
So if I have to make two different triggers for table 1 and table 2, wouldn't that create two rows of data leaving some columns null?

Although creating a view, as suggested by @EdStevens, is (IMO) a superior answer, what you want can be done. And yes, it requires two triggers. In Oracle, and every other DBMS I'm aware of, a trigger can only fire on one table.
However, the triggers can be reduced somewhat. There is no need for the "if inserting" test: the trigger is declared as "before insert", so the test will always be true. The actual issue is that you cannot refer to columns of table2 in the table1 trigger, and likewise the other way around. What you need to do is name the columns you are inserting into, which is always the better approach anyway. So:
create or replace trigger table1_bir
before insert on table1
for each row
begin
insert into table3(col1, col2, col3)
values (:new.col1,:new.col2,:new.col3);
end;
/
create or replace trigger table2_bir
before insert on table2
for each row
begin
insert into table3(col4, col5, col6)
values (:new.col4,:new.col5,:new.col6);
end;
/
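If you do go the view route instead, one possible shape of it (untested, and the view name table3_v is made up) would produce the same half-empty rows the two triggers do:
create or replace view table3_v as
select col1, col2, col3, cast(null as number) as col4, cast(null as number) as col5, cast(null as number) as col6
from table1
union all
select null, null, null, col4, col5, col6
from table2;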

Related

adding data from two different tables using a trigger

I created three tables: A (id, name, date, realnumber, integer), B (id, name, date, realnumber, integer), and C, which is identical to table A except for two additional columns, integerB and a sequence s. I want to create a trigger that fires after insert on table B for each row, so that it saves the referenced row of table A and puts the integer from the inserted row of table B into column integerB of table C. If the row already exists in table C, only integerB should be added. As for the sequence s, the next value is assigned with the first insert of a row from table A.
Simple explanation: table C is a copy of table A with two additional columns, integerB and sequence. The point of the trigger is to add new rows from table A without repetition, with integerB taken from table B (the integer column of table B), and the sequence should start at 1 and increment by 1. If the row from table A is repeated, then only integerB should be updated.
I have not worked with triggers much, so I am not sure how to solve the problem when I have to insert data from multiple tables. Here is my trigger:
CREATE OR REPLACE TRIGGER trig1
AFTER INSERT ON B
FOR EACH ROW
BEGIN
INSERT INTO C (integerB) VALUES (:NEW.integer);
INSERT INTO C (id, name, date, realnumber)
SELECT a.id, a.name, a.date, a.realnumber FROM A a;
END;
/
First off, you really need to use better column and table names, as a lot of these are reserved words... This makes everything far more complicated than it needs to be.
It isn't entirely clear what you want to do, but it seems that if someone were to insert a record with ID = 1 into B, then you want to get the values from A for ID = 1 and store them in C along with the integer value inserted into B.
In which case you want to use a MERGE statement (an upsert) in your trigger, something like:
CREATE OR REPLACE TRIGGER T1
AFTER INSERT ON B
FOR EACH ROW
BEGIN
MERGE INTO C C
USING (SELECT * FROM A WHERE ID = :NEW.ID) A
ON (C.ID = :NEW.ID)
WHEN MATCHED THEN UPDATE
SET C.INTEGERB = :NEW.INTEGER,
C.SEQUENCE = C.SEQUENCE + 1
WHEN NOT MATCHED THEN
INSERT (ID, NAME, DATE, REALNUMBER, INTEGER, INTEGERB, SEQUENCE)
VALUES (A.ID, A.NAME, A.DATE, A.REALNUMBER, A.INTEGER, :NEW.INTEGER, 0);
END;
/
The sequence column is set to 0 when a new record is inserted into C and then incremented each time integerB is updated. I am not sure if this is what you want or not.
You should be able to tweak this to match the exact joins and logic you need.
Tip
Get your SQL statement working with literal values first and then translate it into your trigger. It will be much easier if you can get something working manually before you attempt to make things more complicated.
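For example, the merge above could be tried stand-alone first, with a literal ID = 1 and a literal 42 in place of the :NEW references (both values are just placeholders, and the reserved-word caveat about the column names still applies):
MERGE INTO C C
USING (SELECT * FROM A WHERE ID = 1) A
ON (C.ID = 1)
WHEN MATCHED THEN UPDATE
SET C.INTEGERB = 42,
C.SEQUENCE = C.SEQUENCE + 1
WHEN NOT MATCHED THEN
INSERT (ID, NAME, DATE, REALNUMBER, INTEGER, INTEGERB, SEQUENCE)
VALUES (A.ID, A.NAME, A.DATE, A.REALNUMBER, A.INTEGER, 42, 0);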

Inserting Row Number based on existing value in the table

I have a requirement to insert a row number into a table based on the values already present in the table. For example, the max ROW_NBR record in the current table is something like this:
+----------+----------+------------+---------+
| FST_NAME | LST_NAME | STATE_CODE | ROW_NBR |
+----------+----------+------------+---------+
| John     | Doe      | 13         | 123     |
+----------+----------+------------+---------+
Now, I need to insert more records, with given FST_NAME and LST_NAME values. ROW_NBR needs to be generated while inserting the data into the table, with values auto-incrementing from 123.
I can't use a sequence, as my loading process is not the only process that inserts data into this table. And I can't use a cursor either, because with the high volume of data the TEMP space fills up quickly. I'm inserting data as shown below:
insert into final_table
( fst_name,lst_name,state_code)
(select * from staging_table
where state_code=13);
Any ideas how to implement this?
It sounds like other processes are finding the current maximum row_nbr value and incrementing it as they do single-row inserts in a cursor loop.
You could do something functionally similar, either finding the maximum in advance and incrementing it (if you're already running this in a PL/SQL block):
insert into final_table (fst_name, lst_name, state_code, row_nbr)
select st.*, variable_holding_maximum + rownum
from staging_table st
where st.state_code=13;
or by querying the table as part of the query, which doesn't need PL/SQL:
insert into final_table (fst_name, lst_name, state_code, row_nbr)
select st.*, (select max(row_nbr) from final_table) + rownum
from staging_table st
where st.state_code=13;
db<>fiddle
But this isn't a good solution because it doesn't prevent clashes between different processes and sessions trying to insert at the same time; neither would the cursor loop approach, unless it catches unique constraint errors and re-attempts with a new value.
It would be better to use a sequence, which would effectively give you an auto-increment column; but you said you can't change the table structure, and you need to let the other processes continue to work without modification. You can still do that with a sequence and trigger approach, having the trigger always set the row_nbr value from the sequence, regardless of whether the insert statement supplied a value.
If you create a sequence that starts from the current maximum, with something like:
create sequence final_seq start with <current max + 1>
or without manually finding it:
declare
start_with pls_integer;
begin
select nvl(max(row_nbr), 0) + 1 into start_with from final_table;
execute immediate 'create sequence final_seq start with ' || start_with;
end;
/
then your trigger could just be:
create trigger final_trig
before insert on final_table
for each row
begin
:new.row_nbr := final_seq.nextval;
end;
/
Then your insert ... select statement doesn't need to supply, or even think about, the row_nbr value, so you can leave it as you have it now (except that I'd avoid select * even in that construct and list the staging table columns explicitly); and any existing inserts that do supply row_nbr don't need to be modified, since the value they supply will just be overwritten from the sequence.
db<>fiddle showing inserts with and without row_nbr specified.
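For example, the load from the question, written with explicit columns (assuming the staging table uses the same column names as the target), would be:
insert into final_table (fst_name, lst_name, state_code)
select st.fst_name, st.lst_name, st.state_code
from staging_table st
where st.state_code = 13;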

How to use one SQL insert to put data into two tables?

I have two tables, and they are connected by one field: B_ID of table A and id of table B.
I want to use SQL to insert data into these two tables.
How do I write the insert SQL?
1. The id in table B is auto-incremented.
2. The clumsy way: I can insert data into table B first, then select the id from table B, and then add that id to table A as message_id.
You cannot insert data into multiple tables with one SQL statement. Just insert the data into table B first and then into table A. You can use the RETURNING clause to get the ID value and get rid of the additional select statement between the inserts.
See: https://oracle-base.com/articles/misc/dml-returning-into-clause
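A rough, untested sketch of that approach, assuming table B has an auto-incremented id plus a b_data column and table A stores the link in message_id (the data column names are made up):
declare
v_id b.id%type;
begin
insert into b (b_data) values ('some value')
returning id into v_id;
insert into a (a_data, message_id) values ('other value', v_id);
end;
/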
Have you heard about AFTER INSERT triggers? I think that is what you are looking for.
Something like this might do what you want:
CREATE OR REPLACE TRIGGER TableB_after_insert
AFTER INSERT
ON TableB
FOR EACH ROW
DECLARE
v_id int;
BEGIN
/*
* 1. The id of the row just inserted into TableB is available as :NEW.id
* 2. Insert the related data into TableA using that id
*/
v_id := :NEW.id; -- placeholder so the block compiles; replace with the actual insert into TableA
END;
/

Oracle finding last row inserted

Let's say I have a table, and my table has these values (they are varchar):
values
a
o
g
t
And I insert a new value called V:
values
V
a
o
g
t
Is there a way or a query that can tell which value was inserted into the column last? The desired query would be something like: select * from dual where row_num = count(*) -- just an example, and the result would be V.
Rows in a table have no inherent order. rownum is a pseudocolumn that's part of the select so it isn't useful here. There is no way to tell where in the storage a new row will physically be placed, so you can't rely on rowid, for example.
The only way to do this reliably is to have a timestamp column (maybe set by a trigger so you don't have to worry about it). That would let you order the rows by timestamp and find the row with the highest (most recent) timestamp.
You are still restricted by the precision of the timestamp, as I discovered creating a SQL Fiddle demo; without forcing a small gap between the inserts the timestamps were all the same, though that site only seems to support timestamp(3). That probably won't be a significant issue in the real world, unless you're doing bulk inserts, but then 'the last row inserted' is still a bit of an arbitrary concept.
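As a rough sketch, assuming a table t with a data column and an added created_at column (all names made up), the timestamp approach would look something like:
alter table t add (created_at timestamp);
create or replace trigger bi_t
before insert on t
for each row
begin
:new.created_at := systimestamp;
end;
/
select data from (
select data, row_number() over (order by created_at desc) as rn
from t
)
where rn = 1;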
As quite correctly pointed out in the comments, if the actual time doesn't need to be known, a numeric field populated by a sequence would be more reliable and performant; another SQL Fiddle demo is here, and this is the gist:
create table t42(data varchar2(10), id number);
create sequence seq_t42;
create trigger bi_t42
before insert on t42
for each row
begin
:new.id := seq_t42.nextval;
end;
/
insert into t42(data) values ('a');
insert into t42(data) values ('o');
insert into t42(data) values ('g');
insert into t42(data) values ('t');
insert into t42(data) values ('V');
select data from (
select data, row_number() over (order by id desc) as rn
from t42
)
where rn = 1;

Pattern to substitute for MERGE INTO Oracle syntax when not allowed

I have an application that uses the Oracle MERGE INTO... DML statement to update table A to correspond with some of the changes in another table B (table A is a summary of selected parts of table B along with some other info). In a typical merge operation, 5-6 rows (out of 10's of thousands) might be inserted in table B and 2-3 rows updated.
It turns out that the application is to be deployed in an environment that has a security policy on the target tables. The MERGE INTO... statement can't be used with these tables (ORA-28132: Merge into syntax does not support security policies).
So we have to change the MERGE INTO... logic to use regular inserts and updates instead. Is this a problem anyone else has run into? Is there a best-practice pattern for converting the WHEN MATCHED/WHEN NOT MATCHED logic in the merge statement into INSERT and UPDATE statements? The merge is within a stored procedure, so it's fine for the solution to use PL/SQL in addition to the DML if that is required.
Another way to do this (other than MERGE) is to use two SQL statements: one for the insert and one for the update. The "WHEN MATCHED" and "WHEN NOT MATCHED" logic can be handled using joins or an "IN" clause.
If you decide to take the approach below, it is better to run the update first (since it only runs for the matching records) and then insert the non-matching records. The resulting data set is the same either way; this order just updates fewer records.
Also, similar to the MERGE, this update statement updates the Name column even if the names in source and target already match. If you don't want that, add that condition to the WHERE clause as well.
create table src_table(
id number primary key,
name varchar2(20) not null
);
create table tgt_table(
id number primary key,
name varchar2(20) not null
);
insert into src_table values (1, 'abc');
insert into src_table values (2, 'def');
insert into src_table values (3, 'ghi');
insert into tgt_table values (1, 'abc');
insert into tgt_table values (2,'xyz');
SQL> select * from Src_Table;
ID NAME
---------- --------------------
1 abc
2 def
3 ghi
SQL> select * from Tgt_Table;
ID NAME
---------- --------------------
2 xyz
1 abc
Update tgt_Table tgt
set Tgt.Name =
(select Src.Name
from Src_Table Src
where Src.id = Tgt.id
);
2 rows updated. -- Notice that ID 1 is updated even though the value did not change
select * from Tgt_Table;
ID NAME
----- --------------------
2 def
1 abc
insert into tgt_Table
select src.*
from Src_Table src,
tgt_Table tgt
where src.id = tgt.id(+)
and tgt.id is null;
1 row created.
SQL> select * from tgt_Table;
ID NAME
---------- --------------------
2 def
1 abc
3 ghi
commit;
There could be better ways to do this, but this seems simple and SQL-oriented. If the data set is large, a PL/SQL solution won't be as performant.
There are at least two options I can think of aside from digging into the security policy, which I don't know much about.
Process the records to merge row by row. Attempt the update; if it fails to update any row, then insert, or vice versa, depending on whether you expect most records to need updating or inserting (i.e. optimize for the most common case to reduce the number of SQL statements fired), e.g.:
begin
for row in (select ... from source_table) loop
update table_to_be_merged
if sql%rowcount = 0 then -- no row matched, so need to insert
insert ...
end if;
end loop;
end;
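As a concrete sketch, reusing the src_table/tgt_table pair from the answer above (assuming the columns are just id and name):
begin
for r in (select id, name from src_table) loop
update tgt_table
set name = r.name
where id = r.id;
if sql%rowcount = 0 then -- no row matched, so insert instead
insert into tgt_table (id, name) values (r.id, r.name);
end if;
end loop;
end;
/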
Another option may be to bulk collect the records you want to merge into an array and then attempt to bulk insert them, catching all the primary key exceptions (I cannot recall the exact syntax right now, but you can get a bulk insert to place all the rows that fail to insert into another array and then process them).
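The bulk syntax being described is FORALL ... SAVE EXCEPTIONS; a minimal, untested sketch, again assuming the src_table/tgt_table example tables:
declare
type t_src_rows is table of src_table%rowtype;
l_rows t_src_rows;
dml_errors exception;
pragma exception_init(dml_errors, -24381); -- ORA-24381: error(s) in array DML
begin
select * bulk collect into l_rows from src_table;
begin
forall i in 1 .. l_rows.count save exceptions
insert into tgt_table values l_rows(i);
exception
when dml_errors then
for i in 1 .. sql%bulk_exceptions.count loop
-- each failed row (e.g. primary key clash) is updated instead
update tgt_table
set name = l_rows(sql%bulk_exceptions(i).error_index).name
where id = l_rows(sql%bulk_exceptions(i).error_index).id;
end loop;
end;
end;
/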
Logically a merge statement has to check for the presence of each record behind the scenes anyway, and I think it is processed quite similarly to the code posted above. However, MERGE will always be more efficient than coding it in PL/SQL, as it is only one SQL call instead of many.
