Merge with select with multiple rows - oracle

I have a query which, every time it runs, selects the rows of user_triggers that are related to a table (p_table_name_in). I want to run this procedure every day and insert only the new rows, not all rows again. But when I compile this package, I get this error:
ORA-00932 (130:21): PL/SQL: ORA-00932: inconsistent datatypes: expected CLOB, got LONG (line 31)
and when I try to change TRIGGER_BODY AS BODY_TRIGGER to TO_LOB(TRIGGER_BODY) AS BODY_TRIGGER, I get this error:
ORA-00932 (111:29): PL/SQL: ORA-00932: inconsistent datatypes: expected -, got LONG (line 12)
procedure:
PROCEDURE save_trigger_definitions ( p_table_name_in in VARCHAR2 ) IS
BEGIN
  MERGE INTO hot_utils_reload_triggers t1
  USING
  (
    SELECT TRIGGER_NAME,
           TABLE_NAME,
           STATUS,
           DESCRIPTION,
           TRIGGER_BODY AS BODY_TRIGGER,
           WHEN_CLAUSE
    FROM user_triggers
  ) t2
  ON (t2.TABLE_NAME like upper(p_table_name_in))
  WHEN MATCHED THEN UPDATE SET
    t1.DESCRIPTION = t2.DESCRIPTION,
    t1.WHEN_CLAUSE = t2.WHEN_CLAUSE
  WHEN NOT MATCHED THEN
    INSERT (TRIGGER_NAME,
            TABLE_NAME,
            STATUS,
            DESCRIPTION,
            BODY_TRIGGER,
            WHEN_CLAUSE)
    VALUES (t2.TRIGGER_NAME,
            t2.TABLE_NAME,
            t2.STATUS,
            t2.DESCRIPTION,
            t2.BODY_TRIGGER,
            t2.WHEN_CLAUSE);
  commit;
END save_trigger_definitions;

It's also interesting to me that Oracle does not allow TO_LOB within a standalone SELECT or a MERGE statement, while it does allow it in an INSERT ... SELECT. So you can split the MERGE: use a separate INSERT for the NOT MATCHED part and an UPDATE for the MATCHED part, such as
CREATE OR REPLACE PROCEDURE save_trigger_definitions ( p_table_name_in IN VARCHAR2 ) IS
BEGIN
  INSERT INTO hot_utils_reload_triggers
         (trigger_name,
          table_name,
          status,
          description,
          body_trigger,
          when_clause)
  SELECT u.trigger_name,
         u.table_name,
         u.status,
         u.description,
         TO_LOB(u.trigger_body),
         u.when_clause
  FROM user_triggers u
  WHERE u.table_name LIKE UPPER(p_table_name_in)
  AND NOT EXISTS ( SELECT 1
                   FROM hot_utils_reload_triggers h
                   WHERE h.trigger_name = u.trigger_name
                   AND h.table_name = u.table_name
                   AND h.status = u.status );

  UPDATE hot_utils_reload_triggers h
  SET (h.description, h.when_clause) =
      ( SELECT u.description, u.when_clause
        FROM user_triggers u
        WHERE u.trigger_name = h.trigger_name
        AND u.table_name = h.table_name )
  WHERE h.table_name LIKE UPPER(p_table_name_in);

  COMMIT;
END;
/
Assuming that you don't want duplicate rows for some columns, such as trigger_name, table_name, and status, I have added a NOT EXISTS subquery on them.
Using DBMS_REDEFINITION.START_REDEF_TABLE() might be another alternative for LONG to LOB conversion cases.
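For reference, here is a minimal sketch of that route. The objects (SRC_T with a LONG column BODY, and a pre-created interim table SRC_T_INTERIM with a CLOB column) are hypothetical, and a real run would normally also copy dependents (DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS) before finishing:
-- Hypothetical objects: SRC_T(id NUMBER PRIMARY KEY, body LONG),
-- SRC_T_INTERIM(id NUMBER, body CLOB) created beforehand.
BEGIN
  -- check the table can be redefined (by primary key, the default)
  DBMS_REDEFINITION.can_redef_table(USER, 'SRC_T');

  -- start redefinition, converting the LONG column through TO_LOB
  DBMS_REDEFINITION.start_redef_table(
    uname       => USER,
    orig_table  => 'SRC_T',
    int_table   => 'SRC_T_INTERIM',
    col_mapping => 'id id, TO_LOB(body) body');

  -- swap the tables; SRC_T now exposes BODY as a CLOB
  DBMS_REDEFINITION.finish_redef_table(USER, 'SRC_T', 'SRC_T_INTERIM');
END;
/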

Related

Compare differences before insert into oracle table

Could you please tell me how to compare the differences between a table and my select query, and insert those results into a separate table? My plan is to create one base table (named RESULT) using a select statement and populate it with the current result set. Then, the next day, I would like to create a procedure that compares the same select with the RESULT table and inserts the differences into another table called DIFFERENCES.
Any ideas?
Thanks!
You can create the RESULT_TABLE using CTAS as follows:
CREATE TABLE RESULT_TABLE
AS SELECT ... -- YOUR QUERY
Then you can use the following procedure which calculates the difference between your query and data from RESULT_TABLE:
CREATE OR REPLACE PROCEDURE FIND_DIFF
AS
BEGIN
INSERT INTO DIFFERENCES
--data present in the query but not in RESULT_TABLE
(SELECT ... -- YOUR QUERY
MINUS
SELECT * FROM RESULT_TABLE)
UNION
--data present in the RESULT_TABLE but not in the query
(SELECT * FROM RESULT_TABLE
MINUS
SELECT ... );-- YOUR QUERY
END;
/
I have used a UNION of the two MINUS queries, taking the difference in both directions, so that deleted data is also inserted into the DIFFERENCES table. If that is not required, remove the query before or after the UNION according to your requirement.
-- Create a table with results from the query, and ID as primary key
create table result_t as
select id, col_1, col_2, col_3
from <some-query>;
-- Create a table with new rows, deleted rows or updated rows
create table differences_t as
select id
-- Old values
,b.col_1 as old_col_1
,b.col_2 as old_col_2
,b.col_3 as old_col_3
-- New values
,a.col_1 as new_col_1
,a.col_2 as new_col_2
,a.col_3 as new_col_3
-- Execute the query once again
from <some-query> a
-- Outer join to also detect new/deleted rows
full join result_t b using(id)
-- Null aware comparison
where decode(a.col_1, b.col_1, 1, 0) = 0
or decode(a.col_2, b.col_2, 1, 0) = 0
or decode(a.col_3, b.col_3, 1, 0) = 0;

Oracle CLOB column and LAG

I'm facing a problem when I try to use the LAG function on a CLOB column.
So let's assume we have a table
create table test (
id number primary key,
not_clob varchar2(255),
this_is_clob clob
);
insert into test values (1, 'test1', to_clob('clob1'));
insert into test values (2, 'test2', to_clob('clob2'));
DECLARE
x CLOB := 'C';
BEGIN
FOR i in 1..32767
LOOP
x := x||'C';
END LOOP;
INSERT INTO test(id,not_clob,this_is_clob) values(3,'test3',x);
END;
/
commit;
Now let's do a select using non-clob columns
select id, lag(not_clob) over (order by id) from test;
It works fine as expected, but when I try the same with clob column
select id, lag(this_is_clob) over (order by id) from test;
I get
ORA-00932: inconsistent datatypes: expected - got CLOB
00932. 00000 - "inconsistent datatypes: expected %s got %s"
*Cause:
*Action:
Error at Line: 1 Column: 16
Can you tell me what the solution to this problem is? I couldn't find anything on it.
The documentation says the argument to any analytic function can be of any datatype, but it seems an unrestricted CLOB is not supported.
However, there is a workaround:
select id, lag(dbms_lob.substr(this_is_clob, 4000, 1)) over (order by id)
from test;
This is not the whole CLOB but 4k should be good enough in many cases.
I'm still wondering what the proper way to overcome the problem is.
Is upgrading to 12c an option? The problem has nothing to do with CLOBs as such; it's the fact that Oracle has a hard limit of 4000 characters for strings in SQL. In 12c we have the option to use extended data types (provided we can persuade our DBAs to turn it on!).
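As a rough sketch of what that buys you (an assumption on my part: with MAX_STRING_SIZE = EXTENDED enabled, a one-time change that needs UPGRADE mode and utl32k.sql, SQL VARCHAR2 values can hold up to 32767 bytes), a much bigger slice of the CLOB could then be passed to LAG:
-- Sketch only: assumes 12c with MAX_STRING_SIZE = EXTENDED.
select id,
       lag(dbms_lob.substr(this_is_clob, 32767, 1)) over (order by id) as prev_chunk
from test;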
Some features may not work properly in SQL when using CLOBs (like DISTINCT, ORDER BY, GROUP BY, etc.). It looks like LAG is also one of them, but I couldn't find this anywhere in the docs.
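For instance, using the test table from the question, the same restriction shows up outside of LAG as well; both of the following should raise ORA-00932, because SQL cannot compare CLOB values:
-- Both fail with ORA-00932: inconsistent datatypes
select distinct this_is_clob from test;
select this_is_clob from test group by this_is_clob;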
If your values in the CLOB columns are always less than 4000 characters, you may use TO_CHAR
select id, lag( TO_CHAR(this_is_clob)) over (order by id) from test;
OR
convert it into an equivalent correlated subquery that fetches the closest preceding row (which may not be as efficient as LAG):
SELECT a.id,
       (SELECT b.this_is_clob
        FROM test b
        WHERE b.id = (SELECT MAX(c.id) FROM test c WHERE c.id < a.id)) AS lagging
FROM test a;
I know this is an old question, but I think I found an answer that eliminates the need to restrict the CLOB length and wanted to share it. Using a CTE and a recursive subquery, we can replicate the LAG functionality with CLOB columns.
First, let's take a look at my "original" query:
WITH TEST_TABLE AS
(
SELECT LEVEL ORDER_BY_COL,
TO_CLOB(LEVEL) AS CLOB_COL
FROM DUAL
CONNECT BY LEVEL <= 10
)
SELECT tt.order_by_col,
tt.clob_col,
LAG(tt.clob_col) OVER (ORDER BY tt.order_by_col)
FROM test_table tt;
As expected, I get the following error:
ORA-00932: inconsistent datatypes: expected - got CLOB
Now, let's look at the modified query:
WITH TEST_TABLE AS
(
SELECT LEVEL ORDER_BY_COL,
TO_CLOB(LEVEL) AS CLOB_COL
FROM DUAL
CONNECT BY LEVEL <= 10
),
initial_pull AS
(
SELECT tt.order_by_col,
LAG(tt.order_by_col) OVER (ORDER BY tt.order_by_col) AS PREV_ROW,
tt.clob_col
FROM test_table tt
),
recursive_subquery (order_by_col, prev_row, clob_col, prev_clob_col) AS
(
SELECT ip.order_by_col, ip.prev_row, ip.clob_col, NULL
FROM initial_pull ip
WHERE ip.prev_row IS NULL
UNION ALL
SELECT ip.order_by_col, ip.prev_row, ip.clob_col, rs.clob_col
FROM initial_pull ip
INNER JOIN recursive_subquery rs ON ip.prev_row = rs.order_by_col
)
SELECT rs.order_by_col, rs.clob_col, rs.prev_clob_col
FROM recursive_subquery rs;
So here is how it works:
I create TEST_TABLE; this is really only for the example, as you should already have this table somewhere in your schema.
I create a CTE of the data I want to pull, plus a LAG over the primary key (or a unique column) of the table, partitioned and ordered the same way as in my original query.
Then I create a recursive subquery using the initial row as the root and descending row by row, joining on the lagged column, and returning both the CLOB column from the current row and the CLOB column from its parent row.

Creating Oracle procedure for selecting recently added data to CLIENT table and if found then add those records to Archive table

I have a table CLIENT and I need to filter it so that it shows only recently added records.
SELECT *
FROM Client
WHERE TIMESTAMP >= sysdate -1;
I have to create a PL/SQL procedure that inserts those records into the Archive table with Newest='y' (Newest is a column in the archive table), and removes Newest='y' from the old records which are already in the archive table.
I am stuck here:
CREATE OR REPLACE PROCEDURE add_to_arch(
arch_ archive%rowtype )
as
begin
SELECT *
FROM Client
WHERE TIMESTAMP >= sysdate -1;
loop
INSERT
INTO archive
(
CLIENT_ID,
NAME,
SURNAME,
PHONE,
VEH_ID,
REG_NO,
MADE_MODEL,
MAKE_YEAR,
WD_ID,
WORK_DESC,
INV_ID,
INV_SERIES,
INV_NUM,
INV_DATE,
INV_PRICE
)
SELECT CL_ID,
CL_NAME,
CL_SURNAME,
CL_PHONE,
VEH_ID,
VEH_REG_NO,
VEH_MODEL,
VEH_MAKE_YEAR,
WD_ID,
WORK_DESC,
INV_ID,
INV_SERIES,
INV_NUM,
INV_DATE,
INV_PRICE
FROM CLIENT, INVOICE, VEHICLE, WORKS, WORKS_DONE
WHERE CL_ID = arch_.Client_ID;
end loop;
END;
put the "SELECT * FROM Client WHERE TIMESTAMP >= sysdate -1;" into a cursor, like:
CURSOR cr_c IS
SELECT *
FROM Client
WHERE TIMESTAMP >= sysdate -1;
Then iterate through cr_c in a cursor FOR loop, as sketched below
(http://www.techonthenet.com/oracle/loops/cursor_for.php)
OR
use a trigger; maybe that would be better
(http://www.techonthenet.com/oracle/triggers/after_insert.php)
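A minimal sketch of that cursor FOR loop, using the column names from the question; the ARCHIVE column list is abbreviated here, so treat it as a template rather than a drop-in procedure:
CREATE OR REPLACE PROCEDURE add_to_arch AS
  CURSOR cr_c IS
    SELECT *
    FROM client
    WHERE timestamp >= SYSDATE - 1;
BEGIN
  FOR rec IN cr_c LOOP
    -- older archive rows for this client are no longer the newest
    UPDATE archive
    SET newest = 'n'
    WHERE client_id = rec.cl_id
      AND newest = 'y';

    -- insert the freshly added client row, flagged as newest
    INSERT INTO archive (client_id, name, surname, phone, newest)
    VALUES (rec.cl_id, rec.cl_name, rec.cl_surname, rec.cl_phone, 'y');
  END LOOP;
END add_to_arch;
/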
How about just a MERGE statement? Merge into archive using the client table and match on the IDs. If you find a record in the archive table, just update the newest indicator to 'N'; if you don't find a record in the archive table, insert the record with 'Y' as the newest value. I don't have your data, so you will probably have to play with the statement below, but something like this:
MERGE INTO archive a USING
(SELECT client_id AS id, col_a, col_b, col_c FROM Client
) c ON (a.client_id = c.id)
WHEN MATCHED THEN
UPDATE SET a.newest= 'N' WHEN NOT MATCHED THEN
INSERT
(
a.client_id,
a.newest,
col_a,
col_b,
col_c
)
VALUES
(
c.id,
'Y',
c.col_a,
c.col_b,
c.col_c
);

PL/SQL (INSERT/UPDATE) unique constraint violated error in trigger due to sequence.nextval

I want to insert/update records into another table (MICL_SUPERVISORS) using a trigger (PL/SQL, Oracle 10g).
When the trigger fires, it gives this error:
ORA-00001: unique constraint violated.
I know it happens because I take SUPID from a sequence:
Select micl_sup_id_seq.nextval into nSUPID from dual
And this happens inside a loop.
The SUPID column is the primary key in my table (MICL_SUPERVISORS), so I can't drop that constraint.
I once tried auto-incrementing manually, but it took a long time, didn't work well, and was slow; I have thousands of records in this table. I did it as
SELECT MAX(SUP_ID)+1 FROM micl_sup_id_seq
Because of this error I did some research and found out that we cannot use seq.nextval inside a trigger. So my question is: is there any easy, accurate way to achieve this?
Here is the code (it all happens inside the IF clause; the ELSE part works fine. Please note that I use a cursor, and all of this happens inside the opened cursor):
CREATE OR REPLACE TRIGGER "c"."INSERT_MICL_SUP_DETAILS"
AFTER INSERT OR UPDATE OF "ID","SUP_EMP_NO","EMP_NO" ON "MIMAX"."EMP"
REFERENCING OLD AS "OLD" NEW AS "NEW" FOR EACH ROW
DECLARE
miclaim_supervisor_count NUMBER;
employee_company_code VARCHAR2(10);
employee_businessunit NUMBER;
projmgr NUMBER;
nsupid NUMBER;
CURSOR projmgrscursor IS
SELECT b.bu_member_id
FROM bu_member b, emp_sub_div s
WHERE s.emp_no = :NEW.emp_no
AND s.sub_division_code = '0345' AND s.div_code = '1010'
AND b.bu_id IN (SELECT bu_id FROM bu_member WHERE bu_member_id = :NEW.emp_no);
BEGIN
DELETE
FROM micl_supervisors
WHERE emp_no = :NEW.emp_no
AND is_ovverridden = 0;
SELECT count(*)
INTO miclaim_supervisor_count
FROM micl_supervisors
WHERE emp_no = :NEW.emp_no
AND is_ovverridden = 1;
SELECT company_code
INTO employee_company_code
FROM employee_master
WHERE emp_no = :NEW.emp_no;
projmgr := 0;
IF (employee_company_code ='SOFT')THEN
OPEN projmgrscursor;
LOOP
FETCH projmgrscursor INTO projmgr;
EXIT WHEN projmgrscursor%notfound;
SELECT micl_sup_id_seq.nextval INTO nsupid FROM dual;
INSERT INTO micl_supervisors (sup_id,assigned_date
, assigned_by_emp_no
, amount_limit
, is_ovverridden
, sup_emp_no
, rtd_emp
, emp_no)
VALUES ( nsupid
, SYSDATE
, :NEW.entryaddedby_emp_no
, 3000
, 0
, projmgr
, NULL
, :NEW.emp_no);
END LOOP;
CLOSE projmgrscursor;
ELSE
IF(miclaim_supervisor_count IS NULL OR miclaim_supervisor_count<1) THEN
INSERT INTO micl_supervisors VALUES (:NEW.ID
, SYSDATE
, :NEW.entryaddedby_emp_no
, 3000
, 0
, :NEW.sup_emp_no
, NULL
, :NEW.emp_no);
END IF;
END IF;
END;
/
If anything unclear ask me I'll explain furthermore about this scenario , I hope anyone will help to solve this problem
What other constraints are present on the table? It's more likely that you're running into an error on a constraint other than the one you are fixated upon (the sequence-populated primary key).
Because of this error I did some research and found out that we cannot use seq.nextval inside a trigger.
I don't know where you read that, but that's absolutely false. I've used seq.nextval for many of my audit triggers/tables, and it works fine.
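For example, a sketch of one of those audit triggers (the AUDIT_LOG table and AUDIT_LOG_SEQ sequence here are hypothetical, and EMP stands in for your base table) compiles and fires without any problem:
-- Hypothetical audit objects, for illustration only.
CREATE SEQUENCE audit_log_seq;

CREATE TABLE audit_log (
  log_id     NUMBER PRIMARY KEY,
  table_name VARCHAR2(30),
  changed_at DATE
);

CREATE OR REPLACE TRIGGER emp_audit_trg
AFTER INSERT OR UPDATE ON emp
FOR EACH ROW
DECLARE
  v_log_id NUMBER;
BEGIN
  -- Selecting NEXTVAL inside a trigger is perfectly legal
  -- (from 11g onwards you can also assign it directly:
  --  v_log_id := audit_log_seq.NEXTVAL;).
  SELECT audit_log_seq.NEXTVAL INTO v_log_id FROM dual;

  INSERT INTO audit_log (log_id, table_name, changed_at)
  VALUES (v_log_id, 'EMP', SYSDATE);
END;
/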
Query all_constraints (or user_constraints) for the table MICL_SUPERVISORS, like so:
SELECT *
FROM user_constraints
WHERE table_name = 'MICL_SUPERVISORS'
and update the question, or check it against the data you're trying to insert.

Update or insert based on if employee exist in table

I want to create a stored procedure which updates or inserts into a table, depending on whether the current row already exists in the table.
This is what I have come up with so far:
PROCEDURE SP_UPDATE_EMPLOYEE
(
SSN VARCHAR2,
NAME VARCHAR2
)
AS
BEGIN
IF EXISTS(SELECT * FROM tblEMPLOYEE a where a.ssn = SSN)
--what ? just carry on to else
ELSE
INSERT INTO pb_mifid (ssn, NAME)
VALUES (SSN, NAME);
END;
Is this the way to achieve this?
This is quite a common pattern. Depending on what version of Oracle you are running, you could use the MERGE statement (I believe it appeared in Oracle 9i).
create table test_merge (id integer, c2 varchar2(255));
create unique index test_merge_idx1 on test_merge(id);
merge into test_merge t
using (select 1 id, 'foobar' c2 from dual) s
on (t.id = s.id)
when matched then update set c2 = s.c2
when not matched then insert (id, c2)
values (s.id, s.c2);
Merge is intended to merge data from a source table, but you can fake it for individual rows by selecting the data from dual.
If you cannot use merge, then optimize for the most common case. Will the proc usually not find a record and need to insert it, or will it usually need to update an existing record?
If inserting will be most common, code such as the following is probably best:
begin
  insert into t (columns)
  values (...);
exception
  when dup_val_on_index then
    update t set cols = values;
end;
If update is the most common, then turn the procedure around:
begin
  update t set cols = values;
  if sql%rowcount = 0 then
    -- nothing was updated, so the record doesn't exist, insert it.
    insert into t (columns)
    values (...);
  end if;
end;
You should not issue a SELECT to check for the row and make the decision based on the result; that means you will always run two SQL statements, when you can get away with one most of the time (or always, if you use MERGE). The fewer SQL statements you use, the better your code will perform.
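To tie this back to the question, here is a hedged sketch of the insert-first variant against the question's pb_mifid table, assuming a unique index exists on pb_mifid(ssn); the parameters are prefixed p_ so they are not shadowed by the column names (see also the note in the answer below):
CREATE OR REPLACE PROCEDURE sp_update_employee (
  p_ssn  IN pb_mifid.ssn%TYPE,
  p_name IN pb_mifid.name%TYPE
) AS
BEGIN
  -- try the insert first; fall back to an update if the row already exists
  INSERT INTO pb_mifid (ssn, name)
  VALUES (p_ssn, p_name);
EXCEPTION
  WHEN DUP_VAL_ON_INDEX THEN
    UPDATE pb_mifid
    SET name = p_name
    WHERE ssn = p_ssn;
END sp_update_employee;
/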
BEGIN
INSERT INTO pb_mifid (ssn, NAME)
select SSN, NAME from dual
where not exists(SELECT * FROM tblEMPLOYEE a where a.ssn = SSN);
END;
UPDATE:
Attention: you should name your parameter p_ssn (to distinguish it from the column SSN), and the query becomes:
INSERT INTO pb_mifid (ssn, NAME)
select P_SSN, NAME from dual
where not exists(SELECT * FROM tblEMPLOYEE a where a.ssn = P_SSN);
because this always exists:
SELECT * FROM tblEMPLOYEE a where a.ssn = SSN
