Insert in Merge not working in Oracle

I am new to Oracle. I have a table in Oracle which has 4 columns: Period, Open_Flag, Creation_Dt and Updated_By.
The Period column is the primary key of the table. I have created a procedure which checks whether the period from the input parameter exists in the table; if it does, the value of Open_Flag has to be updated, otherwise a new record shall be inserted.
create or replace PROCEDURE PROC_REF_SAP_PERIOD(
  V_PERIOD      IN NUMBER,
  V_OPEN_FLAG   IN VARCHAR2,
  V_CREATION_DT IN DATE,
  V_UPDATED_BY  IN VARCHAR2)
AS
BEGIN
  MERGE INTO REF_SAP_PERIOD T
  USING (SELECT * FROM REF_SAP_PERIOD WHERE PERIOD = V_PERIOD) S
  ON (T.PERIOD = S.PERIOD)
  WHEN MATCHED THEN UPDATE SET OPEN_FLAG = V_OPEN_FLAG --WHERE PERIOD=V_PERIOD AND CREATION_DT=V_CREATION_DT AND UPDATED_BY=V_UPDATED_BY
  WHEN NOT MATCHED THEN INSERT (PERIOD, OPEN_FLAG, CREATION_DT, UPDATED_BY)
    VALUES (V_PERIOD, V_OPEN_FLAG, V_CREATION_DT, V_UPDATED_BY);
END;
The issue is that the update works fine in this case, but the insert does not. Please help.

You are merging the table with itself, filtered by period. Obviously, it will never find a period in itself that doesn't exist yet, so the NOT MATCHED branch never fires.
Try this line instead of your USING line:
USING (SELECT V_PERIOD AS PERIOD FROM dual) S
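Putting it together, a sketch of the whole procedure with the corrected USING clause (table and column names taken from the question):
CREATE OR REPLACE PROCEDURE PROC_REF_SAP_PERIOD(
  V_PERIOD      IN NUMBER,
  V_OPEN_FLAG   IN VARCHAR2,
  V_CREATION_DT IN DATE,
  V_UPDATED_BY  IN VARCHAR2)
AS
BEGIN
  MERGE INTO REF_SAP_PERIOD T
  USING (SELECT V_PERIOD AS PERIOD FROM dual) S  -- the source row now always exists
  ON (T.PERIOD = S.PERIOD)
  WHEN MATCHED THEN
    UPDATE SET OPEN_FLAG = V_OPEN_FLAG
  WHEN NOT MATCHED THEN
    INSERT (PERIOD, OPEN_FLAG, CREATION_DT, UPDATED_BY)
    VALUES (V_PERIOD, V_OPEN_FLAG, V_CREATION_DT, V_UPDATED_BY);
END;
/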

Related

Inserting Row Number based on existing value in the table

I have a requirement that I need to insert row number in a table based on value already present in the table. For example, the max row_nbr record in the current table is something like this:
+----------+----------+------------+---------+
| FST_NAME | LST_NAME | STATE_CODE | ROW_NBR |
+----------+----------+------------+---------+
| John     | Doe      | 13         | 123     |
+----------+----------+------------+---------+
Now, I need to insert more records with given FST_NAME and LST_NAME values. ROW_NBR needs to be generated while inserting the data, with values auto-incrementing from 123.
I can't use a sequence, as my loading process is not the only process that inserts data into this table. I can't use a cursor either, because with the high volume of data the TEMP space fills up quickly. And I'm inserting data as given below:
insert into final_table
( fst_name,lst_name,state_code)
(select * from staging_table
where state_code=13);
Any ideas how to implement this?
It sounds like other processes are finding the current maximum row_nbr value and incrementing it as they do single-row inserts in a cursor loop.
You could do something functionally similar, either finding the maximum in advance and incrementing it (if you're already running this in a PL/SQL block):
insert into final_table (fst_name, lst_name, state_code, row_nbr)
select st.*, variable_holding_maximum + rownum
from staging_table st
where st.state_code=13;
or by querying the table as part of the query, which doesn't need PL/SQL:
insert into final_table (fst_name, lst_name, state_code, row_nbr)
select st.*, (select max(row_nbr) from final_table) + rownum
from staging_table st
where st.state_code=13;
db<>fiddle
But this isn't a good solution because it doesn't prevent clashes between different processes and sessions trying to insert at the same time; neither would the cursor loop approach, unless it catches unique constraint errors and re-attempts with a new value.
It would be better to use a sequence, which is how you would normally get an auto-increment column, but you said you can't change the table structure, and you need to let the other processes continue to work without modification. You can still do that with a sequence-and-trigger approach, having the trigger always set the row_nbr value from the sequence, regardless of whether the insert statement supplied a value.
If you create a sequence that starts from the current maximum, with something like:
create sequence final_seq start with <current max + 1>
or without manually finding it:
declare
start_with pls_integer;
begin
select nvl(max(row_nbr), 0) + 1 into start_with from final_table;
execute immediate 'create sequence final_seq start with ' || start_with;
end;
/
then your trigger could just be:
create trigger final_trig
before insert on final_table
for each row
begin
:new.row_nbr := final_seq.nextval;
end;
/
Then your insert ... select statement doesn't need to supply or even think about the row_nbr value, so you can leave it as you have it now (except I'd avoid select * even in that construct, and list the staging table columns explicitly); and any existing inserts that do supply the row_nbr don't need to be modified and the value they supply will just be overwritten from the sequence.
db<>fiddle showing inserts with and without row_nbr specified.
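For illustration, a rough sketch of both kinds of insert once the sequence and trigger exist (the names and values here are made up):
-- no row_nbr supplied: the trigger fills it from final_seq
insert into final_table (fst_name, lst_name, state_code)
values ('Jane', 'Smith', 13);

-- row_nbr supplied: the trigger overwrites it with the next sequence value anyway
insert into final_table (fst_name, lst_name, state_code, row_nbr)
values ('Amit', 'Rao', 13, 999);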

Merge function error - On Clause cannot be updated - PK issue

The procedure is to check the eid and do a merge. While updating an existing row, it needs to set eid to eid_SEQ.nextval.
I have created the sequence and am calling it in the procedure.
CREATE OR REPLACE PROCEDURE Temp(
  v_eid  NUMBER,
  v_dept VARCHAR2
) AS
BEGIN
  MERGE INTO Src1 e
  USING (SELECT v_eid  AS eid,
                v_dept AS dept
         FROM dual) d
  ON (e.eid = d.eid)
  WHEN NOT MATCHED THEN
    INSERT (e.eid, e.dept)
    VALUES (d.eid, d.dept)
  WHEN MATCHED THEN
    UPDATE SET e.eid = eid_SEQ.nextval, e.dept = d.dept;
END;
/
Error:
ORA-38104: Columns referenced in the ON Clause cannot be updated.
If I remove the ON clause condition, I get a "primary key cannot be null" error instead.
Also, what is the best way to call seq.nextval inside the procedure?
Any help is greatly appreciated!
What you want to do is impossible with a merge statement. Think about it: you're trying to update the value of the very column that is used to decide whether the row should be updated. This is not safe, especially if the merge statement updates or inserts more than one row.
Apart from that, it is close to impossible to suggest an alternative, since eid seems to be a primary key and updating primary keys is normally a bad idea.
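For what it's worth, a hedged sketch (reusing the question's names) showing that ORA-38104 goes away as soon as the matched branch stops touching the ON column; of course this no longer changes eid, which is exactly the point made above:
MERGE INTO Src1 e
USING (SELECT v_eid AS eid, v_dept AS dept FROM dual) d
ON (e.eid = d.eid)
WHEN NOT MATCHED THEN
  INSERT (e.eid, e.dept)
  VALUES (d.eid, d.dept)
WHEN MATCHED THEN
  UPDATE SET e.dept = d.dept;  -- eid, the ON column, is no longer in the SET list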

TRIGGER Oracle to prevent updating or inserting

I am having problems with this code below, which is a trigger used in Oracle SQL:
CREATE OR REPLACE TRIGGER TRG_TUTOR_BLOCK
BEFORE INSERT OR UPDATE ON tutors
FOR EACH ROW
DECLARE
BEGIN
IF :new.tutorName = :old.tutorName
THEN
RAISE_APPLICATION_ERROR(-20101, 'A tutor with the same name currently exists.');
ROLLBACK;
END IF;
END;
/
This trigger is used to prevent users from entering the same tutor name in different records.
When I insert two records with the same tutorName, the trigger does not block the second insert. Can anyone tell me what is wrong with this code? Here are the table's column order and a sample insert:
INSERT INTO tutors VALUES (tutorID, tutorName, tutorPhone, tutorAddress, tutorRoom, loginID);
INSERT INTO tutors VALUES ('13SAS01273', 'Tian Wei Hao', '019-8611123','No91, Jalan Wangsa Mega 2, 53100 KL', 'A302', 'TianWH');
The trigger in Kamil's example will throw ORA-04091 (table is mutating); you can see this with your own eyes here. A ROLLBACK in a trigger is unnecessary: the statement is rolled back implicitly when a trigger makes it fail.
You can prohibit any DML on the table by altering it with the read only clause:
alter table tutors read only;
Finally, integrity should be declared with integrity constraints, not with triggers.
Good luck!
You don't need a trigger for this in Oracle.
You can do it with a unique index on the tutorName column (see http://docs.oracle.com/cd/B28359_01/server.111/b28310/indexes003.htm#i1106547).
Note about your trigger: it fails to detect another record with the same tutorName because it's not scanning the tutors table at all; it's only comparing the old and new tutorName values of the row you are creating (and on an insert, :old.tutorName is just NULL, because the row doesn't exist yet).
Check the condition in your trigger body:
IF :new.tutorName = :old.tutorName
It returns TRUE only if the tutorName value is the same in the new and the old version of the row. When you update a row without changing the name, you get
IF 'someTutorName' = 'someTutorName'
which evaluates to TRUE.
Inserting a row can never fire this rule, because you end up comparing something like
'someTutorName' = NULL
and that comparison never evaluates to TRUE (its result is NULL), so the error is never raised on insert.
Try something like this instead:
CREATE OR REPLACE TRIGGER TRG_TUTOR_BLOCK
BEFORE INSERT OR UPDATE ON tutors
FOR EACH ROW
DECLARE
rowsCount INTEGER;
BEGIN
SELECT COUNT(*) INTO rowsCount FROM tutors WHERE tutorName = :new.tutorName;
IF rowsCount > 0
THEN
RAISE_APPLICATION_ERROR(-20101, 'A tutor with the same name currently exists.');
ROLLBACK;
END IF;
END;
/
But the best solution is the one mentioned by friol: use a unique constraint (which is enforced by a unique index), by executing SQL like this:
ALTER TABLE tutors
ADD CONSTRAINT UNIQUE_TUTOR_NAME UNIQUE (tutorName);
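For illustration, with that constraint in place the second of these inserts is rejected (the first row is from the question; the second row's other values are made up):
INSERT INTO tutors VALUES ('13SAS01273', 'Tian Wei Hao', '019-8611123', 'No91, Jalan Wangsa Mega 2, 53100 KL', 'A302', 'TianWH');
INSERT INTO tutors VALUES ('13SAS01274', 'Tian Wei Hao', '019-8611124', 'No92, Jalan Wangsa Mega 2, 53100 KL', 'A303', 'TianWH2');
-- ORA-00001: unique constraint (<owner>.UNIQUE_TUTOR_NAME) violated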
If you want to silently ignore inserts into a table altogether, you can follow these steps:
rename the table to something else, create a view with the same name, and create an INSTEAD OF trigger on the view that does nothing.
create table usermessages (id number(10) not null);

alter table usermessages rename to xusermessages;

create or replace view usermessages as select * from xusermessages;

create or replace trigger usermessages_instead_of_trg
instead of insert or update on usermessages
for each row
begin
  null;
end;
/

insert into usermessages values (123);
A live test is available here:
http://sqlfiddle.com/#!4/ad6bc/2

Alter column data type in production database

I'm looking for the best way to change the data type of a column in a populated table. Oracle only allows changing a column's data type when the column contains only NULL values.
My solution so far is a PL/SQL block which stores the data of the column to be modified in a collection, alters the table, and then iterates over the collection, restoring the original data with the data type converted.
-- Before: my_table (id NUMBER, my_value VARCHAR2(255))
-- After:  my_table (id NUMBER, my_value NUMBER)
DECLARE
  TYPE record_type IS RECORD (id NUMBER, my_value VARCHAR2(255));
  TYPE nested_type IS TABLE OF record_type;
  foo nested_type;
BEGIN
  SELECT id, my_value BULK COLLECT INTO foo FROM my_table;
  UPDATE my_table SET my_value = NULL;
  EXECUTE IMMEDIATE 'ALTER TABLE my_table MODIFY my_value NUMBER';
  FOR i IN foo.FIRST .. foo.LAST
  LOOP
    UPDATE my_table
    SET my_value = TO_NUMBER(foo(i).my_value)
    WHERE my_table.id = foo(i).id;
  END LOOP;
END;
/
Is there a more robust, well-established way to do this?
The solution is wrong. The alter table statement does an implicit commit, so the solution has the following problems:
You cannot roll back after the alter table statement, and if the database crashes after the alter table statement you will lose data.
Between the select and the update, users can make changes to the data.
Instead you should have a look at Oracle online redefinition (the DBMS_REDEFINITION package).
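For reference, a hedged sketch of what online redefinition could look like for the my_table example (the DBMS_REDEFINITION calls are real, but the SCOTT schema name is an assumption, and copying dependents/constraints and error handling are omitted):
CREATE TABLE my_table_interim (id NUMBER, my_value NUMBER);  -- interim table with the target data type

BEGIN
  -- check the table can be redefined (by primary key, the default)
  DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'MY_TABLE');

  -- start redefinition, converting the column via the column mapping
  DBMS_REDEFINITION.START_REDEF_TABLE(
    uname       => 'SCOTT',
    orig_table  => 'MY_TABLE',
    int_table   => 'MY_TABLE_INTERIM',
    col_mapping => 'id id, TO_NUMBER(my_value) my_value');

  -- copy dependents / sync the interim table as needed (omitted here)

  -- swap: my_table keeps its name but gets the interim table's structure
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'MY_TABLE', 'MY_TABLE_INTERIM');
END;
/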
Your solution looks a bit dangerous to me. Loading the values into a collection and then blanking them out in the table means that those values exist only in memory. If something goes wrong, they are lost.
The proper procedure is:
Add a column of the correct type to the table.
Copy the values to the new column.
Drop the old column.
Rename the new column to the old columns name.
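A minimal sketch of those four steps, reusing the my_table example from the question:
ALTER TABLE my_table ADD (my_value_new NUMBER);               -- 1. add a column of the correct type
UPDATE my_table SET my_value_new = TO_NUMBER(my_value);       -- 2. copy the values across
ALTER TABLE my_table DROP COLUMN my_value;                     -- 3. drop the old column
ALTER TABLE my_table RENAME COLUMN my_value_new TO my_value;   -- 4. rename to the old column name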

Pattern to substitute for MERGE INTO Oracle syntax when not allowed

I have an application that uses the Oracle MERGE INTO... DML statement to update table A to correspond with some of the changes in another table B (table A is a summary of selected parts of table B along with some other info). In a typical merge operation, 5-6 rows (out of tens of thousands) might be inserted in table B and 2-3 rows updated.
It turns out that the application is to be deployed in an environment that has a security policy on the target tables, so the MERGE INTO... statement can't be used with these tables (ORA-28132: Merge into syntax does not support security policies).
So we have to change the MERGE INTO... logic to use regular inserts and updates instead. Is this a problem anyone else has run into? Is there a best-practice pattern for converting the WHEN MATCHED/WHEN NOT MATCHED logic in the merge statement into INSERT and UPDATE statements? The merge is within a stored procedure, so it's fine for the solution to use PL/SQL in addition to the DML if that is required.
Another way to do this (other than MERGE) is to use two SQL statements, one for the insert and one for the update. The WHEN MATCHED and WHEN NOT MATCHED logic can be handled with joins or an IN clause.
If you take the approach below, it is better to run the update first (since it only touches the matching records) and then insert the non-matching records. The resulting data is the same either way; running them in this order just means the update touches fewer rows.
Also, like the MERGE, this update statement sets the name column even when the names in source and target already match. If you don't want that, add that condition to the WHERE clause as well.
create table src_table(
id number primary key,
name varchar2(20) not null
);
create table tgt_table(
id number primary key,
name varchar2(20) not null
);
insert into src_table values (1, 'abc');
insert into src_table values (2, 'def');
insert into src_table values (3, 'ghi');
insert into tgt_table values (1, 'abc');
insert into tgt_table values (2,'xyz');
SQL> select * from Src_Table;
ID NAME
---------- --------------------
1 abc
2 def
3 ghi
SQL> select * from Tgt_Table;
ID NAME
---------- --------------------
2 xyz
1 abc
Update tgt_Table tgt
set tgt.Name =
      (select Src.Name
       from Src_Table Src
       where Src.id = Tgt.id)
where exists
      (select 1
       from Src_Table Src
       where Src.id = Tgt.id);  -- without this, target rows with no source match would be set to NULL
2 rows updated. --Notice that ID 1 is updated even though value did not change
select * from Tgt_Table;
ID NAME
----- --------------------
2 def
1 abc
insert into tgt_Table
select src.*
from Src_Table src,
tgt_Table tgt
where src.id = tgt.id(+)
and tgt.id is null;
1 row created.
SQL> select * from tgt_Table;
ID NAME
---------- --------------------
2 def
1 abc
3 ghi
commit;
There may be better ways to do this, but this approach is simple and SQL-oriented. If the data set is large, a row-by-row PL/SQL solution won't perform as well.
There are at least two options I can think of aside from digging into the security policy, which I don't know much about.
Process the records to merge row by row. Attempt the update; if it updates no rows, then insert (or vice versa, depending on whether you expect most records to need updating or inserting, i.e. optimise for the most common case to reduce the number of SQL statements fired), e.g.:
begin
  for rec in (select ... from source_table) loop
    update table_to_be_merged ... ;
    if sql%rowcount = 0 then -- no row matched, so need to insert
      insert ... ;
    end if;
  end loop;
end;
Another option may be to bulk collect the records you want to merge into an array, and then attempt to bulk insert them, catching all the primary key exceptions (you can get a bulk insert to place all the rows that fail to insert into another array and then process them).
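That bulk-exception syntax is FORALL ... SAVE EXCEPTIONS. A hedged sketch, reusing the src_table/tgt_table names from the earlier answer (duplicate-key failures are collected and turned into updates):
DECLARE
  TYPE t_rows IS TABLE OF src_table%ROWTYPE;
  l_rows     t_rows;
  dml_errors EXCEPTION;
  PRAGMA EXCEPTION_INIT(dml_errors, -24381);  -- raised by SAVE EXCEPTIONS when any row fails
BEGIN
  SELECT * BULK COLLECT INTO l_rows FROM src_table;
  BEGIN
    FORALL i IN 1 .. l_rows.COUNT SAVE EXCEPTIONS
      INSERT INTO tgt_table VALUES l_rows(i);
  EXCEPTION
    WHEN dml_errors THEN
      -- each failed insert (e.g. ORA-00001 on the primary key) becomes an update
      FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
        UPDATE tgt_table
           SET name = l_rows(SQL%BULK_EXCEPTIONS(j).ERROR_INDEX).name
         WHERE id  = l_rows(SQL%BULK_EXCEPTIONS(j).ERROR_INDEX).id;
      END LOOP;
  END;
END;
/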
Logically a merge statement has to check for the presence of each record behind the scenes anyway, and I think it is processed quite similarly to the code I posted above. However, merge will always be more efficient than coding it in PL/SQL, as it is only one SQL call instead of many.
