Best practice for updating column-specific triggers - Oracle

Welcome, Oracle pros!
In an Oracle 12 database (an upgrade is already scheduled ;-)) we have a setup where several tables update a common base table via "after update" triggers, like the following:
Search_Flat
  ID
  Field_A
  Field_B
  Field_C
Now table1 contains n columns, where, let's say, 2 out of n are relevant for the Search_Flat table. As an update of table1 may only affect columns not relevant for Search_Flat, we want to add checks to the trigger. Our first approach looks like the following:
CREATE OR REPLACE TRIGGER tr_tbl_1_au_search
AFTER UPDATE OF
  field_a,
  field_b
ON schemauser.table1
FOR EACH ROW
BEGIN
  IF :new.field_a <> :old.field_a THEN
    UPDATE schemauser.search_flat SET field_a = :new.field_a WHERE id = :new.ID;
  END IF;
  IF :new.field_b <> :old.field_b THEN
    UPDATE schemauser.search_flat SET field_b = :new.field_b WHERE id = :new.ID;
  END IF;
END;
Alternatively, we could set up the trigger like the following:
CREATE OR REPLACE TRIGGER tr_tbl_1_au_search
AFTER UPDATE OF
  field_a,
  field_b
ON schemauser.table1
FOR EACH ROW
BEGIN
  IF :new.field_a <> :old.field_a OR :new.field_b <> :old.field_b THEN
    UPDATE schemauser.search_flat
       SET field_a = :new.field_a,
           field_b = :new.field_b
     WHERE id = :new.ID;
  END IF;
END;
The question now is about the setup of the triggers themselves. Which approach is better with respect to:
locking time of search_flat rows
overall performance of the affected components (i.e., table1, the trigger and search_flat)
In production we are talking about 4 tables with 10 fields each considered in the triggers. And we have independent app servers accessing the shared database, updating the 4 tables simultaneously. From time to time we detect the following error, which is the reason we want to optimize the triggers:
ORA-02049: timeout: distributed transaction waiting for lock
Side note: this setup has been chosen instead of a view or materialized view for performance reasons, as the base table is used in a GUI with the requirement to be instantly updated, and the number of records in the 4 feeding tables is too high to refresh a materialized view on every update.
I'm looking forward to the discussion and your thoughts.

As I understand your post, you have 4 live tables (called "table1", "table2", etc.) that you want to search on, but querying from them is too slow, so you want to maintain a single, flattened table to search on instead and have triggers to keep that flattened table always up-to-date.
You want to know which of two trigger approaches is better.
I think the answer is "neither", since both are prone to deadlocks. Imagine this scenario:
User 1 -
UPDATE table1
SET field_a = 500
WHERE <condition affecting 200 distinct IDs>
User 2 at about the same time -
UPDATE table1
SET field_b = 700
WHERE <condition affecting 200 distinct IDs>
Triggers start processing. You cannot control the order in which the rows are updated. Maybe it goes like this:
User 1's trigger, time index 100 ->
UPDATE search_flat SET field_a = 500 WHERE id = 90;
User 2's trigger, time index 101 ->
UPDATE search_flat SET field_b = 700 WHERE id = 91;
User 1's trigger, time index 102 ->
UPDATE search_flat SET field_a = 500 WHERE id = 91; (waits on user 2's session)
User 2's trigger, time index 103 ->
UPDATE search_flat SET field_b = 700 WHERE id = 90; (deadlock error)
User 2's original update fails and rolls back.
You have multiple concurrent processes all updating the same set of rows in search_flat with no control over the processing order. That is a recipe for deadlocks.
If you want to do this safely, you should use neither of the FOR EACH ROW trigger approaches you outlined. Rather, make a compound trigger to do this.
Here's some sample code to illustrate the idea. Be sure to read the comments.
-- Aside: consider setting this at the system level if on 12.2 or later
-- alter system set temp_undo_enabled=false;
CREATE GLOBAL TEMPORARY TABLE table1_updates_gtt (
  id      NUMBER,
  field_a VARCHAR2(80),
  field_b VARCHAR2(80)
) ON COMMIT DELETE ROWS;

CREATE GLOBAL TEMPORARY TABLE table2_updates_gtt (
  id      NUMBER,
  field_a VARCHAR2(80)
) ON COMMIT DELETE ROWS;

-- ... and so on for table3 and table4.
CREATE OR REPLACE TRIGGER table1_search_maint_trg
FOR INSERT OR UPDATE OR DELETE ON table1 -- with similar compound triggers for table2, 3, 4.
COMPOUND TRIGGER

  AFTER EACH ROW IS
  BEGIN
    -- Update the table1-specific GTT with the changes.
    CASE WHEN INSERTING OR UPDATING THEN
      -- Assumes ID is an immutable primary key
      INSERT INTO table1_updates_gtt (id, field_a, field_b)
      VALUES (:new.id, :new.field_a, :new.field_b);
    WHEN DELETING THEN
      INSERT INTO table1_updates_gtt (id, field_a, field_b)
      VALUES (:old.id, null, null); -- or figure out what you want to do about deletes.
    END CASE;
  END AFTER EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    -- Write the data from the GTT to the search_flat table.
    -- NOTE: The ORDER BY in the next line is what saves us from deadlocks.
    FOR r IN ( SELECT id, field_a, field_b FROM table1_updates_gtt ORDER BY id ) LOOP
      -- TODO: replace with BULK processing for better performance, if DMLs can affect a lot of rows
      UPDATE search_flat sf
         SET sf.field_a = r.field_a,
             sf.field_b = r.field_b
       WHERE sf.id = r.id
         AND ( sf.field_a <> r.field_a
               OR (sf.field_a IS NULL AND r.field_a IS NOT NULL)
               OR (sf.field_a IS NOT NULL AND r.field_a IS NULL)
               OR sf.field_b <> r.field_b
               OR (sf.field_b IS NULL AND r.field_b IS NOT NULL)
               OR (sf.field_b IS NOT NULL AND r.field_b IS NULL) );
    END LOOP;
  END AFTER STATEMENT;

END table1_search_maint_trg;
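If a single DML statement can touch many rows, the row-by-row loop in the AFTER STATEMENT section can be replaced with bulk binds, as the TODO comment suggests. Here is a minimal sketch of what that section could look like, with the change-detection predicates omitted for brevity; FORALL executes the updates in collection order, so the deadlock-saving ORDER BY is preserved:

AFTER STATEMENT IS
  TYPE t_upd_tab IS TABLE OF table1_updates_gtt%ROWTYPE;
  l_rows t_upd_tab;
BEGIN
  SELECT id, field_a, field_b
    BULK COLLECT INTO l_rows
    FROM table1_updates_gtt
   ORDER BY id; -- still ordered: this is what avoids the deadlocks

  FORALL i IN 1 .. l_rows.COUNT
    UPDATE search_flat sf
       SET sf.field_a = l_rows(i).field_a,
           sf.field_b = l_rows(i).field_b
     WHERE sf.id = l_rows(i).id;
END AFTER STATEMENT;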
Also, as numerous commenters have pointed out, it's probably better to use a materialized view for this. If you are on 12.2 or later, real-time materialized views (aka "ENABLE ON QUERY COMPUTATION") offer a lot of promise for this sort of thing. No COMMIT overhead to your application and real-time search results. It's just that search time degrades slightly if there are a lot of recent updates to the underlying tables.
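For reference, here is a minimal sketch of that real-time MV variant, simplified to a single source table (the names are illustrative; the real definition would join your 4 feeding tables, and the usual fast-refresh restrictions would apply):

CREATE MATERIALIZED VIEW LOG ON table1
  WITH PRIMARY KEY, ROWID (field_a, field_b)
  INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW search_flat_mv
  REFRESH FAST ON DEMAND
  ENABLE ON QUERY COMPUTATION
AS
SELECT id, field_a, field_b
  FROM table1;

-- The FRESH_MV hint asks Oracle to apply any pending log changes on the fly:
SELECT /*+ FRESH_MV */ * FROM search_flat_mv WHERE id = 42;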

Related

Prevent record insert without mutating

I am trying to prevent inserts of records into a table for scheduling. If the start date of the class is between the start and end date of a previous record, and that record is at the same location as the new record, then it should not be allowed.
I wrote the following trigger, which compiles but of course mutates, and therefore has issues. I looked into compound triggers to handle this, but either it can't be done or my understanding is bad, because I couldn't get that to work either. I would have assumed that for a compound trigger I'd want to do these things in the before-statement section, but I only got errors.
I also considered after insert/update, but doesn't that apply after it's already inserted? It feels like that wouldn't be right... plus, same issue with mutation, I believe.
The trigger I wrote is:
CREATE OR REPLACE TRIGGER PREVENT_INSERTS
before insert or update on tbl_classes
for each row
DECLARE
  v_count number;
  v_start TBL_CLASS_SCHED.start_date%type;
  v_end   TBL_CLASS_SCHED.end_date%type;
  v_half  TBL_CLASS_SCHED.day_is_half%type;
BEGIN
  select start_date, end_date, day_is_half
    into v_start, v_end, v_half
    from tbl_classes
   where class_id = :NEW.CLASS_ID
     and location_id = :NEW.location_id;

  select count(*)
    into v_count
    from TBL_CLASS_SCHED
   where :NEW.START_DATE >= (select start_date
                               from TBL_CLASS_SCHED
                              where class_id = :NEW.CLASS_ID
                                and location_id = :NEW.location_id)
     and :NEW.START_DATE <= (select end_date
                               from TBL_CLASS_SCHED
                              where class_id = :NEW.CLASS_ID
                                and location_id = :NEW.location_id);

  if (v_count = 2) then
    RAISE_APPLICATION_ERROR(-20001, 'You cannot schedule more than 2 classes that are a half day at the same location');
  end if;

  if (v_count = 1 and :NEW.day_is_half = 1) then
    if (v_half != 1) then
      RAISE_APPLICATION_ERROR(-20001, 'You cannot schedule a class during another class''s time period of the same type at the same location');
    end if;
  end if;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    null;
END PREVENT_INSERTS;
Perhaps it can't be done with a trigger and I need to do it another way? For now I've applied the same logic in code before doing the insert or update directly, but I'd like to put it in a constraint/trigger so that it will always apply (and so I can learn about it).
There are two things you'll need to fix.
The mutating error occurs because you are trying to do a SELECT in the row-level part of a trigger. Check out COMPOUND triggers as a way to mitigate this: basically, you capture info at row level and then process that info at the after-statement level. There are some examples of that in my video here: https://youtu.be/EFj0wTfiJTw
Even with the mutating issue resolved, there is a fundamental flaw in the logic here (trigger or otherwise) due to concurrency. All you need is (say) three or four people all using this code at the same time. All of them will get "Yes, your count checks are OK", because none of them can see the others' uncommitted data. Thus they all get told they can proceed, and when they finally commit you'll have multiple rows stored, breaking the rule your trigger (or wherever your code runs) was trying to enforce. You'll need to lock an appropriate row so that you can control concurrent access to the table. For an UPDATE that is easy, because it means there are already some row(s) for the location/class pairing. For an INSERT, you'll need to ensure an appropriate unique constraint is in place on a parent table somewhere. Hard to say without seeing the entire model.
In principle, a compound trigger could look like this:
CREATE OR REPLACE TYPE CLASS_REC AS OBJECT(
  CLASS_ID    INTEGER,
  LOCATION_ID INTEGER,
  START_DATE  DATE,
  END_DATE    DATE,
  DAY_IS_HALF INTEGER
);

CREATE OR REPLACE TYPE CLASS_TYPE AS TABLE OF CLASS_REC;

CREATE OR REPLACE TRIGGER UIC_CLASSES
FOR INSERT OR UPDATE ON TBL_CLASSES
COMPOUND TRIGGER
  classes CLASS_TYPE;
  v_count NUMBER;
  v_half  TBL_CLASS_SCHED.DAY_IS_HALF%TYPE;

  BEFORE STATEMENT IS
  BEGIN
    classes := CLASS_TYPE();
  END BEFORE STATEMENT;

  BEFORE EACH ROW IS
  BEGIN
    classes.EXTEND;
    classes(classes.LAST) := CLASS_REC(:NEW.CLASS_ID, :NEW.LOCATION_ID, :NEW.START_DATE, :NEW.END_DATE, :NEW.DAY_IS_HALF);
  END BEFORE EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    FOR i IN 1 .. classes.COUNT LOOP
      SELECT COUNT(*), MAX(DAY_IS_HALF)
        INTO v_count, v_half
        FROM TBL_CLASSES
       WHERE CLASS_ID = classes(i).CLASS_ID
         AND LOCATION_ID = classes(i).LOCATION_ID
         AND classes(i).START_DATE BETWEEN START_DATE AND END_DATE;

      IF v_count = 2 THEN
        RAISE_APPLICATION_ERROR(-20001, 'You cannot schedule more than 2 classes that are a half day at the same location');
      END IF;

      IF v_count = 1 AND classes(i).DAY_IS_HALF = 1 THEN
        IF v_half != 1 THEN
          RAISE_APPLICATION_ERROR(-20001, 'You cannot schedule a class during another class''s time period of the same type at the same location');
        END IF;
      END IF;
    END LOOP;
  END AFTER STATEMENT;
END;
/
But as stated by @Connor McDonald, there are several design flaws, even in a single-user environment.
A user may update DAY_IS_HALF; I don't think the procedure covers all the variants. Or a user updates END_DATE so that the new time intersects with existing classes.
Better to avoid direct inserts into the table: create a PL/SQL stored procedure in which you perform all the validations you need and, if none of the validations fail, perform the insert. Grant execute on that procedure to the applications, and do not grant the applications insert on the table. That way all the data-related business rules live in the database, and no data that violates those rules can enter the tables, no matter by which client application, since every client application will call the stored procedure to insert or update rather than performing DML directly on the table.
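A sketch of such a procedure, assuming a parent table (here called TBL_LOCATIONS) whose row can be locked to serialize sessions working on the same location; the names and the exact validation rule are illustrative, not taken from the question:

CREATE OR REPLACE PROCEDURE add_class (
  p_class_id    IN tbl_classes.class_id%TYPE,
  p_location_id IN tbl_classes.location_id%TYPE,
  p_start_date  IN tbl_classes.start_date%TYPE,
  p_end_date    IN tbl_classes.end_date%TYPE,
  p_day_is_half IN tbl_classes.day_is_half%TYPE
) AS
  v_lock  NUMBER;
  v_count NUMBER;
BEGIN
  -- Serialize: concurrent sessions for the same location queue here, so the
  -- COUNT below cannot be invalidated by another session's uncommitted insert.
  SELECT 1 INTO v_lock
    FROM tbl_locations
   WHERE location_id = p_location_id
     FOR UPDATE;

  SELECT COUNT(*)
    INTO v_count
    FROM tbl_classes
   WHERE location_id = p_location_id
     AND p_start_date BETWEEN start_date AND end_date;

  IF v_count >= 2 OR (v_count = 1 AND p_day_is_half != 1) THEN
    RAISE_APPLICATION_ERROR(-20001, 'Schedule conflict at this location');
  END IF;

  INSERT INTO tbl_classes (class_id, location_id, start_date, end_date, day_is_half)
  VALUES (p_class_id, p_location_id, p_start_date, p_end_date, p_day_is_half);
END add_class;
/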
I think the main problem is the ambiguity of the role of the table TBL_CLASS_SCHED and the lack of a clear definition of the DAY_IS_HALF column (morning, afternoon?).
If the objective is to avoid 2 reservations of the same location in the same half day, the easiest solution is to use TBL_CLASS_SCHED to enforce the constraint, with (start_date, location_id) as the primary key, a morning reservation having start_date truncated to 00:00 and an afternoon reservation having start_date set to 12:00. You don't need end_date in that table, since each row represents a half-day reservation.
The complete solution will then need a BEFORE trigger on TBL_CLASSES for UPDATE and INSERT events to make sure start_date and end_date are clipped to match the 00:00/12:00 rule, and an AFTER trigger on INSERT, UPDATE and DELETE in which you calculate all the half-day rows to maintain in TBL_CLASS_SCHED (so 1 full-day reservation in TBL_CLASSES will generate 2 rows in TBL_CLASS_SCHED). Since you maintain TBL_CLASS_SCHED from triggers set on TBL_CLASSES without any need to SELECT from the latter, you don't have to worry about the mutating problem, and because start_date is constrained to be either 00:00 or 12:00, the primary key constraint will do the job for you. You may even add a unique index on (start_date, class_id) in TBL_CLASS_SCHED to prevent a class from being scheduled at 2 locations at the same time, if you need to. A sketch of that DDL follows.
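A sketch of that design (column types assumed):

-- Each row of TBL_CLASS_SCHED is one half-day slot:
-- start_date at 00:00 = morning, start_date at 12:00 = afternoon.
CREATE TABLE tbl_class_sched (
  start_date  DATE    NOT NULL,
  location_id INTEGER NOT NULL,
  class_id    INTEGER NOT NULL,
  CONSTRAINT pk_class_sched PRIMARY KEY (start_date, location_id)
);

-- Optional: a class cannot be at two locations during the same half day.
CREATE UNIQUE INDEX ux_class_sched_class ON tbl_class_sched (start_date, class_id);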

How to create a procedure which checks if there are any recently added records in the table and, if there are, moves them to an archive table

I have to create a procedure which finds any recently added records and, if there are any, moves them to the ARCHIVE table.
This is the statement that filters the recently added records:
SELECT
  CL_ID,
  CL_NAME,
  CL_SURNAME,
  CL_PHONE,
  VEH_ID,
  VEH_REG_NO,
  VEH_MODEL,
  VEH_MAKE_YEAR,
  WD_ID,
  WORK_DESC,
  INV_ID,
  INV_SERIES,
  INV_NUM,
  INV_DATE,
  INV_PRICE
FROM
  CLIENT,
  INVOICE,
  VEHICLE,
  WORKS,
  WORKS_DONE
WHERE
  Client.CL_ID = Invoice.INV_CL_ID and
  Client.CL_ID = Vehicle.VEH_CL_ID and
  Vehicle.VEH_ID = Works_Done.WD_VEH_ID and
  Works_Done.WD_INV_ID = Invoice.INV_ID and
  Works_Done.WD_WORK_ID = Works.WORK_ID and
  Works_Done.Timestamp >= sysdate - 1;
You may need something like this (pseudo-code):
create or replace procedure moveRecords is
  vLimitDate timestamp := systimestamp - 1;
begin
  insert into table2
  select *
    from table1
   where your_date >= vLimitDate;
  --
  delete from table1
   where your_date >= vLimitDate;
end;
Here are the steps I've used for this sort of task in the past.
Create a global temporary table (GTT) to hold a set of ROWIDs
Perform a multitable direct path insert, which selects the rows to be archived from the source table and inserts their ROWIDs into the GTT and the rest of the data into the archive table.
Perform a delete from the source table, where the source table ROWID is in the GTT of rowids
Issue a commit.
The business with the GTT and the ROWIDs ensures that you have 100% guaranteed stability in the set of rows that you are selecting and then deleting from the source table, regardless of any changes that might occur between the start of your select and the start of your delete (other than someone causing a partitioned table row migration or shrinking the table).
You could alternatively achieve that through changing the transaction isolation level.
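A minimal sketch of those steps, with illustrative table and column names (the insert is shown as a conventional multitable insert rather than a direct-path one, because a direct-path load would stop the same transaction from reading the GTT back before the commit):

CREATE GLOBAL TEMPORARY TABLE archived_rowids (
  rid ROWID
) ON COMMIT DELETE ROWS;

-- One pass over the source: the data goes to the archive table,
-- the ROWIDs of the archived rows go to the GTT.
INSERT ALL
  INTO archive_table   (col1, col2, col3) VALUES (col1, col2, col3)
  INTO archived_rowids (rid)              VALUES (rid)
SELECT t.col1, t.col2, t.col3, t.ROWID AS rid
  FROM source_table t
 WHERE t.your_date >= systimestamp - 1;

-- Delete exactly the rows whose ROWIDs were captured, then commit.
DELETE FROM source_table
 WHERE ROWID IN (SELECT rid FROM archived_rowids);

COMMIT;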
OK, maybe something like this.
The downside is that it can be slow for large tables.
The upside is that there is no dependence on date and time, so you can run it anytime to synchronize your archives with the live data:
create or replace procedure archive is
begin
  insert into archive_table
  ( select * from main_table
    minus
    select * from archive_table );
end;

Trigger to find next available inventory location

I am trying to implement inventory tracking and am running into problems. As this is my first foray into database triggers (& PL/SQL in general) I think I need an adjustment to my thinking/understanding of how to solve this issue.
My situation is as follows: Each time a new item is added to my inventory, I need to auto-assign it the first available physical storage location. When items are consumed, they are removed from the inventory thus freeing up a physical location (i.e. we are recycling these physical locations). I have two tables: one inventory table and one table containing all legal location names/Ids.
Table: ALL_LOCATIONS
Location_ID
SP.1.1.1.a
SP.1.1.1.b
SP.1.1.1.c
SP.1.1.2.a
SP.1.1.2.b
SP.1.1.2.c
SP.1.1.3.a
SP.1.1.3.b
SP.1.1.3.c
...
SP.25.5.6.c
Table: ITEM_INVENTORY
Item_ID | Location_ID
      1 | SP.1.1.1.a
      2 | SP.1.1.1.b
      4 | SP.1.1.2.a
      5 | SP.1.1.2.b
      6 | SP.1.1.2.c
     21 | SP.1.1.4.a
    ... | ...
Note: The first available Location_ID should be SP.1.1.1.c
I need to create a trigger that will assign the next available Location_ID to the inserted row(s). Searching this site I see several similar questions along these lines; however, they are geared towards the logic of determining the next available location. In my case, I think I have that down, but I don't know how to implement it as a trigger. Let's just focus on the insert trigger. The "MINUS" strategy (shown below) works well in picking the next available location, but Oracle doesn't like it inside a trigger since I am reading from the same table that I am editing (it throws a mutating table error).
I've done some reading on mutating table errors and some workarounds are suggested (autonomous transactions etc.); however, the key message from my reading is "you're going about it the wrong way." So my question is: what's another way of approaching this problem so that I can implement a clean & simple solution without having to hack my way around mutating tables?
Note: I am certain you can find all manner of things not quite right with my trigger code, and I will certainly learn something if you point them out; however, my goal here is to learn new ways to approach/think about the fundamental problem with my design.
create or replace TRIGGER Assign_Plate_Location
BEFORE INSERT ON ITEM_INVENTORY
FOR EACH ROW
DECLARE
  loc VARCHAR2(100) := NULL;
BEGIN
  IF (:new.LOCATION_ID IS NULL) THEN
    BEGIN
      SELECT LOCATION_ID INTO loc
        FROM (SELECT DISTINCT LOCATION_ID FROM ALL_LOCATIONS
              MINUS
              SELECT DISTINCT LOCATION_ID FROM ITEM_INVENTORY)
       WHERE ROWNUM = 1;
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        loc := NULL;
    END;
    IF (loc IS NOT NULL) THEN
      :new.LOCATION_ID := loc;
    END IF;
  END IF;
END;
There are several ways to do it. You could add a column AVAILABLE or OCCUPIED to the first table and select data only from this table with WHERE available = 'Y'. In this case you also need triggers for delete and for update of location_id on the second table.
A second option: when inserting data, use MERGE or some procedure retrieving data from ALL_LOCATIONS when item_inventory.location_id is null.
A third option: Oracle 11g introduced compound triggers, which allow better handling of mutating tables. In this case the trigger would look something like this:
create or replace trigger assign_plate_location
for insert on item_inventory compound trigger

  type t_locs is table of item_inventory.location_id%type;
  v_locs t_locs;
  i number := 1;

  before statement is
  begin
    select location_id
      bulk collect into v_locs
      from all_locations al
     where not exists (
             select location_id from item_inventory ii
              where ii.location_id = al.location_id );
  end before statement;

  before each row is
  begin
    if :new.location_id is null then
      if i <= v_locs.count() then
        :new.location_id := v_locs(i);
        i := i + 1;
      end if;
    end if;
  end before each row;

end assign_plate_location;
I tested it on data from your example and the inserts (including INSERT ... SELECT) looked OK. Give it a try and check whether it's efficient; maybe this will suit you.
Two last notes: in your select you do not need DISTINCT, as MINUS already makes the values distinct.
Also think about ordering: as written, both your select and mine may take a random row from ALL_LOCATIONS.
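For example, the BEFORE STATEMENT query could be given a deterministic order; a sketch (note that a plain ORDER BY sorts these IDs as strings, so 'SP.1.1.10.a' would come before 'SP.1.1.2.a'):

select location_id
  bulk collect into v_locs
  from all_locations al
 where not exists (
         select 1 from item_inventory ii
          where ii.location_id = al.location_id )
 order by location_id;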

Writing Procedure to enforce constraints + Testing

I need to set a constraint such that the user is unable to enter any records after he/she has entered 5 records in a single month. Would it be advisable to write a trigger or a procedure for that? Or is there any other way I can set up the constraint?
Instead of writing a trigger I have opted to write a procedure for the constraint, but how do I check whether the procedure is working?
Below is the procedure:
CREATE OR REPLACE PROCEDURE InsertReadingCheck
(
  newReadNo    IN INT,
  newReadValue IN INT,
  newReaderID  IN INT,
  newMeterID   IN INT
)
AS
  varRowCount INT;
BEGIN
  SELECT COUNT(*)
    INTO varRowCount
    FROM Reading
   WHERE ReaderID = newReaderID
     AND TRUNC(ReadDate,'mm') = TRUNC(SYSDATE,'mm');

  IF (varRowCount >= 5) THEN
    DBMS_OUTPUT.PUT_LINE('*************************************************');
    DBMS_OUTPUT.PUT_LINE('');
    DBMS_OUTPUT.PUT_LINE('  You are attempting to enter more than 5 Records ');
    DBMS_OUTPUT.PUT_LINE('');
    DBMS_OUTPUT.PUT_LINE('*************************************************');
    ROLLBACK;
    RETURN;
  ELSE
    INSERT INTO Reading
    VALUES (seqReadNo.NextVal, SYSDATE, newReadValue, newReaderID, newMeterID);
    COMMIT;
  END IF;
END;
Can anyone help me look through it?
This is the sort of thing that you should avoid putting in a trigger, especially the ROLLBACK and the COMMIT. That seems extremely dangerous (and I'm not even sure whether it's possible): you might roll back other transactions that you wished to commit, or vice versa.
Also, by putting this in a trigger you are going to get the following error:
ORA-04091: table XXXX is mutating, trigger/function may not see it
There are ways around this, but they're excessive and involve doing something funky in order to get around Oracle's insistence that you do the correct thing.
This is the perfect opportunity to use a stored procedure to insert the data into your table. You can check the number of current records prior to doing the insert, meaning that there is no need to do a ROLLBACK.
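As for checking that the procedure works: call it directly from SQL*Plus or SQL Developer with server output enabled and watch what happens on the sixth call in the same month (the parameter values here are made up):

SET SERVEROUTPUT ON
BEGIN
  -- Call this six times for the same reader within one month;
  -- the sixth call should print the warning banner instead of inserting.
  InsertReadingCheck(newReadNo    => 1,
                     newReadValue => 42,
                     newReaderID  => 7,
                     newMeterID   => 3);
END;
/

SELECT COUNT(*) FROM Reading WHERE ReaderID = 7; -- verify the row count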
It depends on your application: if inserts into this table already happen in many places in your application, then a trigger is the better option.
This is a behavior constraint. It's a matter of opinion, but I would err on the side of keeping this kind of business logic OUT of your database. I would instead keep track of who added which records, and on what day/times, in the records table. You can have a stored procedure to get this information, but your code-behind should then decide whether the user can see certain links (or functions) based on the data that's returned. Whether that means keeping the user from accessing the page(s) where they insert records, or giving them read-only views, is up to you.
One declarative way you could solve this problem that obeys all the concurrency rules is to use a separate table to keep track of the number of inserts per user per month:
create table inserts_check (
  ReaderID          integer not null,
  month             date    not null,
  number_of_inserts integer constraint max_number_of_inserts check (number_of_inserts <= 5),
  primary key (ReaderID, month)
);
Then create a trigger on the table (or all tables) for which inserts should be capped at 5:
create trigger inserts_check_trg
after insert on <table>
for each row
begin
  MERGE INTO inserts_check t
  USING (select :new.ReaderID as ReaderID,
                trunc(sysdate, 'MM') as month,
                1 as number_of_inserts
           from dual) s
  ON (t.ReaderID = s.ReaderID and t.month = s.month)
  WHEN MATCHED THEN UPDATE SET t.number_of_inserts = t.number_of_inserts + 1
  WHEN NOT MATCHED THEN INSERT (ReaderID, month, number_of_inserts)
       VALUES (s.ReaderID, s.month, s.number_of_inserts);
end;
Once the user has made 5 inserts in a month, the max_number_of_inserts check constraint will be violated by the sixth insert, and that insert will fail.
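A quick way to see it in action, assuming a hypothetical READINGS table wired to the trigger above: the sixth insert in the same calendar month raises ORA-02290, and that insert (together with its trigger's MERGE) is rolled back atomically.

BEGIN
  FOR i IN 1 .. 6 LOOP
    -- The sixth iteration fails with something like:
    -- ORA-02290: check constraint (SCHEMA.MAX_NUMBER_OF_INSERTS) violated
    INSERT INTO readings (ReaderID, read_date, read_value)
    VALUES (5, SYSDATE, 100 + i);
  END LOOP;
END;
/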

Pattern to substitute for MERGE INTO Oracle syntax when not allowed

I have an application that uses the Oracle MERGE INTO... DML statement to update table A to correspond with some of the changes in another table B (table A is a summary of selected parts of table B, along with some other info). In a typical merge operation, 5-6 rows (out of tens of thousands) might be inserted into table B and 2-3 rows updated.
It turns out that the application is to be deployed in an environment that has a security policy on the target tables, and the MERGE INTO... statement can't be used with these tables (ORA-28132: Merge into syntax does not support security policies).
So we have to change the MERGE INTO... logic to use regular inserts and updates instead. Is this a problem anyone else has run into? Is there a best-practice pattern for converting the WHEN MATCHED/WHEN NOT MATCHED logic in the merge statement into INSERT and UPDATE statements? The merge is within a stored procedure, so it's fine for the solution to use PL/SQL in addition to the DML if that is required.
Another way to do this (other than MERGE) is to use two SQL statements, one for the insert and one for the update. The WHEN MATCHED and WHEN NOT MATCHED logic can be handled using joins or an IN clause.
If you decide to take the approach below, it is better to run the update first (since it only runs for the matching records) and then insert the non-matching records. The data sets end up the same either way; running the update first just updates fewer records.
Also, similar to the MERGE, this update statement updates the Name column even if the names in source and target already match. If you don't want that, add that condition to the WHERE clause as well.
create table src_table (
  id   number primary key,
  name varchar2(20) not null
);

create table tgt_table (
  id   number primary key,
  name varchar2(20) not null
);

insert into src_table values (1, 'abc');
insert into src_table values (2, 'def');
insert into src_table values (3, 'ghi');

insert into tgt_table values (1, 'abc');
insert into tgt_table values (2, 'xyz');
SQL> select * from Src_Table;
ID NAME
---------- --------------------
1 abc
2 def
3 ghi
SQL> select * from Tgt_Table;
ID NAME
---------- --------------------
2 xyz
1 abc
Update tgt_table tgt
   set tgt.name = (select src.name
                     from src_table src
                    where src.id = tgt.id)
 where exists (select 1
                 from src_table src
                where src.id = tgt.id); -- only touch rows that have a match
2 rows updated. --Notice that ID 1 is updated even though value did not change
select * from Tgt_Table;
ID NAME
----- --------------------
2 def
1 abc
insert into tgt_Table
select src.*
from Src_Table src,
tgt_Table tgt
where src.id = tgt.id(+)
and tgt.id is null;
1 row created.
SQL> select * from tgt_Table;
ID NAME
---------- --------------------
2 def
1 abc
3 ghi
commit;
There could be better ways to do this, but this one seems simple and SQL-oriented. If the data set is large, a PL/SQL solution won't be as performant.
There are at least two options I can think of, aside from digging into the security policy, which I don't know much about.
Process the records to merge row by row. Attempt the update; if it updates no rows, do the insert instead (or vice versa, depending on whether you expect most records to need updating or inserting, i.e. optimize for the most common case to reduce the number of SQL statements fired), e.g.:
begin
  for row in (select ... from source_table) loop
    update table_to_be_merged
       set ...
     where ...;
    if sql%rowcount = 0 then -- no row matched, so need to insert
      insert ...;
    end if;
  end loop;
end;
Another option may be to bulk collect the records you want to merge into an array and then attempt to bulk insert them, catching all the primary key exceptions (I cannot recall the exact syntax right now, but you can get a bulk insert to place all the rows that fail to insert into another array and then process them; see the sketch below).
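For reference, the syntax being alluded to is FORALL ... SAVE EXCEPTIONS together with the SQL%BULK_EXCEPTIONS pseudo-collection. A sketch, reusing the src_table/tgt_table names from the earlier answer:

DECLARE
  TYPE t_rows IS TABLE OF src_table%ROWTYPE;
  l_rows     t_rows;
  dml_errors EXCEPTION;
  PRAGMA EXCEPTION_INIT(dml_errors, -24381); -- ORA-24381: error(s) in array DML
BEGIN
  SELECT * BULK COLLECT INTO l_rows FROM src_table;

  BEGIN
    FORALL i IN 1 .. l_rows.COUNT SAVE EXCEPTIONS
      INSERT INTO tgt_table VALUES l_rows(i);
  EXCEPTION
    WHEN dml_errors THEN
      -- Rows that failed (e.g. ORA-00001, unique constraint violated) are
      -- reported here; update those instead of inserting them.
      FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
        UPDATE tgt_table tgt
           SET tgt.name = l_rows(SQL%BULK_EXCEPTIONS(j).ERROR_INDEX).name
         WHERE tgt.id = l_rows(SQL%BULK_EXCEPTIONS(j).ERROR_INDEX).id;
      END LOOP;
  END;
END;
/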
Logically, a merge statement has to check for the presence of each record behind the scenes anyway, and I think it is processed quite similarly to the code I posted above. However, merge will always be more efficient than coding it in PL/SQL, as it is only one SQL call instead of many.
