How to deal with sequence in insert from XMLTable? - oracle

I have written a PL/SQL function that takes input in XML format for the
following table:
TABLE: TBL_MEDICAL_CENTER_BILLS
Name Null Type
------------- -------- -------------
MED_RECORDNO NOT NULL NUMBER
MED_EMPID NVARCHAR2(10)
MED_BILL_HEAD NVARCHAR2(20)
MED_DATE DATE
MED_AMOUNT FLOAT(126)
Here is the function code:
FUNCTION save_medical_center_bills(medical_bill_data NVARCHAR2) RETURN clob IS
ret clob;
xmlData XMLType;
v_code NUMBER;
v_errm VARCHAR2(100);
BEGIN
xmlData:=XMLType(medical_bill_data);
INSERT INTO TBL_MEDICAL_CENTER_BILLS SELECT x.* FROM XMLTABLE('/medical_center_bill'
PASSING xmlData
COLUMNS MED_RECORDNO NUMBER PATH 'MED_RECORDNO' default null,
MED_EMPID NVARCHAR2(11) PATH 'employee_id',
MED_BILL_HEAD NVARCHAR2(20) PATH 'bill_head' ,
MED_DATE DATE PATH 'effective_date',
MED_AMOUNT FLOAT PATH 'bill_amount'
) x;
ret:=to_char(sql%rowcount);
COMMIT;
RETURN '<result><status affectedRow="'||ret||'">success</status></result>';
EXCEPTION
WHEN OTHERS THEN
v_code := SQLCODE;
v_errm := SUBSTR(SQLERRM, 1, 100);
DBMS_OUTPUT.PUT_LINE (v_code || ' ' || v_errm);
-- '<result><status>Error</status> <error_message>'|| 'Error Code:' || v_code || ' ' || 'Error Message:' || v_errm ||'</error_message> </result>';
RETURN '<result><status>Error</status> <error_message>'|| 'Error Message:' || v_errm ||'</error_message> </result>';
END save_medical_center_bills;
However, I want the table's first column MED_RECORDNO to be populated from an incrementing sequence (at the moment I am keeping it null, since I don't know how to put the sequence in the XMLTable clause), while the rest of the
inputs [MED_EMPID, MED_BILL_HEAD, MED_DATE, MED_AMOUNT] will be taken from the XML passed to the function.
I created a sequence and a trigger to keep this sequence incremented for that table column MED_RECORDNO:
CREATE SEQUENCE MED_RECORDNO_SEQ;
create or replace TRIGGER MED_RECORDNO_TRIGGER
BEFORE INSERT ON TBL_MEDICAL_CENTER_BILLS FOR EACH ROW
WHEN (new.MED_RECORDNO is null)
DECLARE
v_id TBL_MEDICAL_CENTER_BILLS.MED_RECORDNO%TYPE;
BEGIN
SELECT MED_RECORDNO_seq.nextval INTO v_id FROM DUAL;
:new.MED_RECORDNO := v_id;
END;
As you can see, my XMLTable is inserting 4 column values into a 5 column table, because column MED_RECORDNO will take its value from sequence MED_RECORDNO_SEQ via trigger MED_RECORDNO_TRIGGER.
I don't know anything about doing this. If you have ever experienced something like this, please share your ideas.

I sort of hinted at this in an earlier answer. You should specify the names of the columns in the table you are inserting into; this is good practice even if you are populating all of them, as it will avoid surprises if the table structure changes (or differs between environments), and makes it much easier to spot mistakes like having columns or values in the wrong order.
INSERT INTO TBL_MEDICAL_CENTER_BILLS (MED_EMPID, MED_BILL_HEAD, MED_DATE, MED_AMOUNT)
SELECT x.MED_EMPID, x.MED_BILL_HEAD, x.MED_DATE, x.MED_AMOUNT
FROM XMLTABLE('/medical_center_bill'
PASSING xmlData
COLUMNS MED_EMPID NVARCHAR2(11) PATH 'employee_id',
MED_BILL_HEAD NVARCHAR2(20) PATH 'bill_head' ,
MED_DATE DATE PATH 'effective_date',
MED_AMOUNT FLOAT PATH 'bill_amount'
) x;
The insert you have should actually work (if the column order in the table matches); the trigger will still replace the null value you get from the XMLTable with the sequence value. At least, until you make the MED_RECORDNO column not-null, and you probably want to if it's the primary key.
Incidentally, if you're on 11g or higher your trigger can assign the sequence straight to the NEW pseudorecord:
create or replace TRIGGER MED_RECORDNO_TRIGGER
BEFORE INSERT ON TBL_MEDICAL_CENTER_BILLS
FOR EACH ROW
BEGIN
:new.MED_RECORDNO := MED_RECORDNO_seq.nextval;
END;
The WHEN (new.MED_RECORDNO is null) check implies you sometimes want to allow a value to be specified; that is a bad idea, as manually inserted values can clash with sequence values, giving you either duplicates or a unique/primary key exception.
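If you would rather not depend on a trigger at all, here is a minimal sketch of an alternative (reusing the MED_RECORDNO_SEQ sequence above): NEXTVAL is allowed in the top-level select list of an INSERT ... SELECT, so the sequence can be referenced directly:
INSERT INTO TBL_MEDICAL_CENTER_BILLS (MED_RECORDNO, MED_EMPID, MED_BILL_HEAD, MED_DATE, MED_AMOUNT)
SELECT MED_RECORDNO_SEQ.nextval, x.MED_EMPID, x.MED_BILL_HEAD, x.MED_DATE, x.MED_AMOUNT
FROM XMLTABLE('/medical_center_bill'
PASSING xmlData
COLUMNS MED_EMPID NVARCHAR2(11) PATH 'employee_id',
MED_BILL_HEAD NVARCHAR2(20) PATH 'bill_head',
MED_DATE DATE PATH 'effective_date',
MED_AMOUNT FLOAT PATH 'bill_amount'
) x;
With this, the trigger (and the WHEN clause question) disappears entirely; on 12c or later an identity column would achieve the same thing declaratively.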

Related

Update of column value within a trigger

Before insert or update of any column, I want to set one system column to the standard MD5 hash of all columns of the table the trigger is attached to. My intention is not to tailor this trigger by enumerating all the columns for each trigger, but to have a function that returns the concatenated list of columns per table.
Table DDL:
create table TEST (
id int,
test varchar(100),
"_HASH" varchar(32)
);
Here is my trigger DDL that I would love to work :
CREATE TRIGGER TEST_SYS_HASH_BEFORE_INSERT_OR_UPDATE
BEFORE INSERT OR UPDATE
ON TEST
FOR EACH ROW
DECLARE
var_columns VARCHAR2(10000);
BEGIN
var_columns := FUNC_LISTAGG_EXT('TEST');
EXECUTE IMMEDIATE 'SELECT STANDARD_HASH(' || var_columns || ', ''MD5'') from dual'
INTO :new."_HASH";
END;
However, this simply takes the column headers and sets the same hash for every row. If I write it manually the trigger looks like this, which works as I want, but creating it for several tens of tables would be overwhelming:
CREATE OR REPLACE TRIGGER TEST_SYS_HASH_BEFORE_INSERT_OR_UPDATE
BEFORE INSERT OR UPDATE
ON TEST
FOR EACH ROW
DECLARE
var_columns VARCHAR(10000);
BEGIN
var_columns := FUNC_LISTAGG_EXT('TEST');
SELECT STANDARD_HASH( :new."ID" || :new."TEST" , 'MD5' )
INTO :new."_HASH"
FROM DUAL;
END;
So my question is whether this solution is achievable.
Note:
The FUNC_LISTAGG_EXT function returns a concatenated list of columns from a system view.
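A hedged sketch of one possible approach: :new cannot be resolved through EXECUTE IMMEDIATE at run time (the dynamic SELECT only ever sees the literal column-name string, which is why every row gets the same hash). One workaround is to generate each trigger's source from the data dictionary once per table, so that the generated trigger references its columns statically; the trigger name and the exclusion of "_HASH" below are illustrative assumptions:
DECLARE
v_cols VARCHAR2(10000);
BEGIN
-- build ':new."ID" || :new."TEST" || ...' for every column except "_HASH"
SELECT LISTAGG(':new."' || column_name || '"', ' || ')
WITHIN GROUP (ORDER BY column_id)
INTO v_cols
FROM user_tab_columns
WHERE table_name = 'TEST'
AND column_name <> '_HASH';
-- bind placeholders are not recognized in DDL, so the :new references
-- simply become part of the generated trigger source
EXECUTE IMMEDIATE
'CREATE OR REPLACE TRIGGER TEST_SYS_HASH_BEFORE_INSERT_OR_UPDATE '
|| 'BEFORE INSERT OR UPDATE ON TEST FOR EACH ROW '
|| 'BEGIN '
|| 'SELECT STANDARD_HASH(' || v_cols || ', ''MD5'') '
|| 'INTO :new."_HASH" FROM DUAL; '
|| 'END;';
END;
/
Looping this block over the tables (driving it from user_tables) would avoid hand-writing each trigger.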

Create a procedure to copy some records from a table to another

I'm trying to create a simple procedure to copy some records from Table1 to Table2.
Table1:
id number PK
operation varchar2(50)
position varchar2(50)
code_operation varchar2(50) FK
Table2:
code_operation varchar2(50) PK
operation varchar2(50)
position varchar2(50)
client_number varchar2(50)
Starting from the client_number I have to copy the associated operation and position from Table1 and insert into operation and position of Table2.
I've tried this code but it doesn't work:
CREATE OR REPLACE PROCEDURE COPY_DATA(
BEGIN
DECLARE P_CLIENT_NUMBER VARCHAR2(50);
DECLARE P_CODE_OPERATION VARCHAR2(50);
DECLARE P_DIVISION VARCHAR2(50);
DECLARE P_POSITION VARCHAR2(50)
SELECT CODE_OPERATION INTO P_CODE_OPERATION FROM TABLE2;
SELECT CLIENT_NUMBER INTO P_CLIENT_NUMBER FROM TABLE2;
SELECT DIVISION INTO P_DIVISION FROM TABLE1;
SELECT POSITION INTO P_POSITION FROM TABLE1;
INSERT INTO TABLE2(DIVISION,POSITION)
WHERE CODE_OPERATION=P_COD_OPERATION;
END
);
I've got this error: PLS-00103: Encountered the symbol "DECLARE". I don't understand why, and with this error I don't know whether the rest of my code is correct or not.
The error you got is easy to fix. A PL/SQL block has a declare-begin-exception-end structure, which means that you first declare variables (in a stored procedure you don't use the declare keyword; strangely enough, for triggers - which are a kind of stored procedure as well - you do use it). For example:
create or replace procedure copy_data as
p_client_number table2.client_number%type;
begin
select client_number
into p_client_number
from table2;
exception
when too_many_rows then
raise;
end;
However, without a WHERE clause (or an aggregate, such as MIN or MAX), this will raise the too_many_rows exception, because it will try to fetch all client numbers into a scalar variable, and that won't work. I included the handler to show how to deal with it. What should you really do? Restrict the number of rows to 1, as shown below. Similarly, you'd handle no_data_found or other exceptions.
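For example, a minimal way to restrict the fetch to a single row (ROWNUM works on any version; on 12c and later FETCH FIRST 1 ROW ONLY would do the same):
select client_number
into p_client_number
from table2
where rownum = 1;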
What you described looks strange to me.
you want to insert into table2 - which is a master table
values you'd insert are in table1 - which is a detail table
Looking at tables, I'd say that it should be vice versa.
insert into table1 (id, operation, position, code_operation)
select seq.nextval,
b.operation,
b.position,
b.code_operation
from table2 b
where b.client_number = 'ABC';
(I presumed that primary key is populated via a sequence.)
It also means that none of the variables you declared are necessary; the procedure would simply accept the client number as a parameter.
The whole procedure would then be:
create or replace procedure copy_data
(par_client_number in table2.client_number%type)
is
begin
insert into table1 (id, operation, position, code_operation)
select seq.nextval,
b.operation,
b.position,
b.code_operation
from table2 b
where b.client_number = par_client_number;
end;
You'd call it as
begin
copy_data('ABC');
end;
/
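For completeness: the insert above presumes a sequence named seq already exists; an illustrative definition (the name is a placeholder) would be:
CREATE SEQUENCE seq;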

PL/SQL : Need to compare data for every field in a table in plsql

I need to create a procedure which will take collection as an input and compare the data with staging table data row by row for every field (approx 50 columns).
Business logic :
Whenever a staging table column value mismatches the corresponding collection variable value, I need to update the staging table's STATUS column to 'FAIL' and put the reason into the REASON column for that row.
If they match, I need to update the STATUS column to 'SUCCESS'.
Payload will be approx 500 rows in each call.
I have created below sample script:
PKG Specification :
CREATE OR REPLACE
PACKAGE process_data
IS
TYPE pass_data_rec
IS
record
(
p_eid employee.eid%type,
p_ename employee.ename%type,
p_salary employee.salary%type,
p_dept employee.dept%type
);
type p_data_tab IS TABLE OF pass_data_rec INDEX BY binary_integer;
PROCEDURE comp_data(inpt_data IN p_data_tab);
END;
PKG Body:
CREATE OR REPLACE
PACKAGE body process_data
IS
PROCEDURE comp_data (inpt_data IN p_data_tab)
IS
status VARCHAR2(10);
reason VARCHAR2(1000);
cnt1 NUMBER;
v_eid employee_copy.eid%type;
v_ename employee_copy.ename%type;
BEGIN
FOR i IN 1..inpt_data.count
LOOP
SELECT ec1.eid,ec1.ename,COUNT(*) over () INTO v_eid,v_ename,cnt1
FROM employee_copy ec1
WHERE ec1.eid = inpt_data(i).p_eid;
IF cnt1 > 0 THEN
IF (v_eid=inpt_data(i).p_eid AND v_ename = inpt_data(i).p_ename) THEN
UPDATE employee_copy SET status = 'SUCCESS' WHERE eid = inpt_data(i).p_eid;
ELSE
UPDATE employee_copy SET status = 'FAIL' WHERE eid = inpt_data(i).p_eid;
END IF;
ELSE
NULL;
END IF;
END LOOP;
COMMIT;
status :='success';
EXCEPTION
WHEN OTHERS THEN
status:= 'fail';
--reason:=sqlerrm;
END;
END;
But this approach has the issues mentioned below.
I need to declare local variables for each column value.
I need to compare all the variable data using the AND operator. I am not sure whether that is the correct way, because if there are 50 columns the IF condition becomes very heavy:
IF (v_eid=inpt_data(i).p_eid AND v_ename = inpt_data(i).p_ename) THEN
I need to update the REASON column with the name of the first mismatched column for that row; with this approach I am not able to achieve that.
Please suggest any other good way to achieve this requirement.
Edit :
There is only one table at my end, i.e. the target table. Input will come from some other source as a collection object.
REVISED Answer
You could load the records into a temp table, but unless you want additional processing it's not necessary. AFAIK there is no way to identify the offending column (first one only) without slogging through column by column. However, your other concern, having to declare a variable per column, is unnecessary: you can declare a single variable defined as %ROWTYPE, which gives you access to each column by name.
Looping through an array of data to find the occasional error is just bad (imho) when SQL is available to eliminate the good ones in one fell swoop. And it's available here. Even though your input is an array, we can use it as a table by means of the TABLE operator, which allows a collection to be queried as though it were a database table, so the MINUS operator can still be employed. The following routine sets the appropriate status and identifies the first mismatched column for each entry in the input array. It keeps your original definition in the package spec, but replaces the comp_data procedure.
create or replace package body process_data
is
procedure comp_data (inpt_data in p_data_tab)
is
-- define local array to hold status and reason for each input row.
type status_reason_r is record
( eid employee_copy.eid%type
, status employee_copy.status%type
, reason employee_copy.reason%type
);
type status_reason_t is
table of status_reason_r
index by pls_integer;
status_reason status_reason_t; -- an index-by table needs no constructor
-- define error array to contain the eid of each row that has a mismatched column
type error_eids_t is table of employee_copy.eid%type ;
error_eids error_eids_t;
current_matched_indx pls_integer;
/*
Helper function to identify 1st mismatched column in error row.
Here is where we slog our way through each column to find the first column
value mismatch. Note: this does not actually validate the column sequence;
for our purpose here we proceed in the order of the input data type definition.
*/
function identify_mismatch_column(matched_indx_in pls_integer)
return varchar2
is
employee_copy_row employee_copy%rowtype;
mismatched_column employee_copy.reason%type;
begin
select *
into employee_copy_row
from employee_copy
where employee_copy.eid = inpt_data(matched_indx_in).p_eid;
-- now begins the task of finding the mismatched column.
if employee_copy_row.ename != inpt_data(matched_indx_in).p_ename
then
mismatched_column := 'employee_copy.ename';
elsif employee_copy_row.salary != inpt_data(matched_indx_in).p_salary
then
mismatched_column := 'employee_copy.salary';
elsif employee_copy_row.dept != inpt_data(matched_indx_in).p_dept
then
mismatched_column := 'employee_copy.dept';
-- elsif continue until ALL columns tested
end if;
return mismatched_column;
exception
-- NO_DATA_FOUND is the one error that cannot actually be reported in the employee_copy table.
-- It occurs when an eid exists in the input data but does not exist in employee_copy.
when NO_DATA_FOUND
then
dbms_output.put_line( 'Employee (eid)='
|| inpt_data(matched_indx_in).p_eid
|| ' does not exist in employee_copy table.'
);
return 'employee_copy.eid ID is NOT in table';
end identify_mismatch_column;
/*
Helper function to find specified eid in the initial inpt_data array
Since the resulting array of mismatched eids derives from a select without a sort,
there is no guarantee the index values actually match. Nor can we sort to build
the error array, as there is no way to know the order of eid in the initial array.
The following helper identifies the index value in the input array for the specified
eid in error.
*/
function match_indx(eid_in employee_copy.eid%type)
return pls_integer
is
l_at pls_integer := 1;
l_searching boolean := true;
begin
while l_at <= inpt_data.count
loop
exit when eid_in = inpt_data(l_at).p_eid;
l_at := l_at + 1;
end loop;
if l_at > inpt_data.count
then
raise_application_error( -20199, 'Internal error: Find index for ' || eid_in ||' not found');
end if;
return l_at;
end match_indx;
-- Main
begin
-- initialize the status entry for each input row;
-- additionally this results in a status_reason table in a 1:1 relation with the input array.
for i in 1..inpt_data.count
loop
status_reason(i).eid := inpt_data(i).p_eid;
status_reason(i).status :='SUCCESS';
end loop;
/*
We can assume the majority of data in the input array is valid, meaning the columns match.
We'll eliminate all valid rows by selecting each and then MINUSing those that match on
every column. To accomplish this, cast the input with the TABLE function, allowing its use in SQL.
The following produces an array of eids that have at least 1 column mismatch.
*/
select p_eid
bulk collect into error_eids
from (select p_eid, p_ename, p_salary, p_dept from TABLE(inpt_data)
minus
select eid, ename, salary, dept from employee_copy
) exs;
/*
The error_eids array now contains the eid of each mismatched data item.
Mark the status as failed, then begin the long hard process of identifying
the first column causing the mismatch.
The following loop uses the nested functions to slog through it.
This keeps the main line logic clear.
*/
for i in 1 .. error_eids.count -- if all inpt_data rows match then count is 0 and we bypass the entire loop
loop
current_matched_indx := match_indx(error_eids(i));
status_reason(current_matched_indx).status := 'FAIL';
status_reason(current_matched_indx).reason := identify_mismatch_column(current_matched_indx);
end loop;
-- update employee_copy with appropriate status for each row in the input data.
-- Except for any eid that is in the error array but doesn't exist in the employee_copy table.
forall i in inpt_data.first .. inpt_data.last
update employee_copy
set status = status_reason(i).status
, reason = status_reason(i).reason
where eid = inpt_data(i).p_eid;
end comp_data;
end process_data;
There are a couple of other techniques used here that you may want to look into if you are not familiar with them:
Nested Functions. There are 2 functions defined and used in the procedure.
Bulk Processing. That is Bulk Collect and Forall.
Good Luck.
ORIGINAL Answer
It is NOT necessary to compare each column, nor to build a string by concatenating. As you indicated, comparing 50 columns becomes pretty heavy, so let the DBMS do most of the lifting. The MINUS operator does exactly what you need:
... the MINUS operator, which returns only unique rows returned by the
first query but not by the second.
Using that this task needs only 2 Updates: 1 to mark "fail", and 1 to mark "success". So try:
create table e( e_id integer
, col1 varchar2(20)
, col2 varchar2(20)
);
create table stage ( e_id integer
, col1 varchar2(20)
, col2 varchar2(20)
, status varchar2(20)
, reason varchar2(20)
);
-- create package spec and body
create or replace package process_data
is
procedure comp_data;
end process_data;
create or replace package body process_data
is
procedure comp_data
is
begin
update stage
set status='failed'
, reason='No matching e row'
where e_id in ( select e_id
from (select e_id, col1, col2 from stage
minus
select e_id, col1, col2 from e
) exs
);
update stage
set status='success'
where status is null;
end comp_data;
end process_data;
-- test
-- populate tables
insert into e(e_id, col1, col2)
select 1,'ABC','def' from dual union all
select 2,'No','Not any' from dual union all
select 3,'ok', 'best ever' from dual union all
select 4,'xx','zzzzzz' from dual;
insert into stage(e_id, col1, col2)
select 1,'ABC','def' from dual union all
select 2,'No','Not any more' from dual union all
select 4,'yy', 'zzzzzz' from dual union all
select 5,'no e','nnnnn' from dual;
-- run procedure
begin
process_data.comp_data;
end;
-- check results
select * from stage;
Don't ask. Yes, you must list every column you wish compared in each of the queries involved in the MINUS operation.
I know the documentation link is old (10gR2), but actually finding Oracle documentation is a royal pain. The MINUS operator still functions the same in 19c.

Performance issues with BEFORE INSERT trigger: it takes a lot of time compared to calling the procedure directly after insert

I have a BEFORE INSERT trigger calling my procedure to process the XML in the XMLTYPE field of STAGE_TBL and insert the data into PROCESSED_DATA_TBL.
I have to go for a BEFORE INSERT trigger (a compound trigger might work as well, but I haven't tried it yet) in order to update the status on the STAGE_TBL row based on the outcome of processing the XML.
The issue I am having is that my XML can be huge: it can have about 100-2000 rp_sendRow chunks, and when it does, the trigger takes a very long time. I tried with 100 rp_sendRow elements and it took about 4 minutes through the trigger.
But if I disable the trigger, insert into STAGE_TBL, and then call XML_PROCESS for the newly inserted record using its ID, it completes (processes the XML and inserts into PROCESSED_DATA_TBL) in less than a second from SQL Developer.
I cannot insert huge XML with a regular SQL INSERT from SQL Developer, as there is a 4000-character limit; and since the database is not local, I cannot even use the XMLType(bfilename('XMLDIR', 'MY.xml')) option, so I am using JDBC code to insert the huge XML.
I have called XML_PROCESS directly from JDBC for the same XML and it took less than a second to process and insert into PROCESSED_DATA_TBL.
Please let me know why the trigger is taking so much time.
I am using Oracle 11g, SQL Developer 4.1.0.19
--Trigger Code
create or replace TRIGGER STAGE_TRIGGER
BEFORE INSERT ON STAGE_TBL
FOR EACH ROW
DECLARE
ROW_COUNT NUMBER;
PROCESS_STATUS VARCHAR2(1);
STATUS_DESCRIPTION VARCHAR2(300);
BEGIN
XML_PROCESS(:NEW.ID, :NEW.XML_DOCUMENT, PROCESS_STATUS, STATUS_DESCRIPTION, ROW_COUNT);
IF(ROW_COUNT > 0) THEN
:NEW.STATUS := PROCESS_STATUS;
:NEW.STATUS_DATE := SYSDATE;
:NEW.STATUS_DESCRIPTION := STATUS_DESCRIPTION;
:NEW.SHRED_TS := SYSTIMESTAMP;
ELSE -- this handles the 0-records-inserted and exception scenarios
:NEW.STATUS := STATUS.ERROR;
:NEW.STATUS_DATE := SYSDATE;
:NEW.STATUS_DESCRIPTION := STATUS_DESCRIPTION;
END IF;
EXCEPTION
WHEN OTHERS THEN
:NEW.STATUS := PROCESS_STATUS;
:NEW.STATUS_DESCRIPTION := STATUS_DESCRIPTION;
NULL;
END STAGE_TRIGGER;
--Stored Procedure
create or replace PROCEDURE XML_PROCESS (ID IN RAW, xData IN XMLTYPE, PROCESS_STATUS OUT VARCHAR2, STATUS_DESCRIPTION OUT VARCHAR2, ROW_COUNT OUT NUMBER) AS
BEGIN
INSERT ALL INTO PROCESSED_DATA_TBL
(ID,
STORE,
SALES_NBR,
UNIT_COST,
ST_FLAG,
ST_DATE,
ST,
START_QTY,
START_VALUE,
START_ON_ORDER,
HAND,
ORDER,
COMMITED,
SALES,
RECEIVE,
VALUE,
COST,
ID_1,
ID_2,
ID_3,
UNIT_PRICE,
EFFECTIVE_DATE,
STATUS,
STATUS_DATE,
STATUS_REASON)
VALUES (ID
,storenbr
,SalesNo
,UnitCost
,StWac
,StDt
,St
,StartQty
,StartValue
,StartOnOrder
,Hand
,Order
,Commit
,Sales
,Rec
,Value
,Id1
,Id2
,Id3
,UnitPrice
,to_Date(EffectiveDate||' '||EffectiveTime, 'YYYY-MM-DD HH24:MI:SS')
,'N'
,SYSDATE
,'XML PROCESS INSERT')
SELECT E.* FROM XMLTABLE('rp_send/rp_sendRow' PASSING xData COLUMNS
store VARCHAR(20) PATH 'store'
,SalesNo VARCHAR(20) PATH 'sales'
,UnitCost NUMBER PATH 'cost'
,StWac VARCHAR(20) PATH 'flag'
,StDt DATE PATH 'st-dt'
,St NUMBER PATH 'st'
,StartQty NUMBER PATH 'qty'
,StartValue NUMBER PATH 'value'
,StartOnOrder NUMBER PATH 'order'
,Hand NUMBER PATH 'hand'
,Order NUMBER PATH 'order'
,Commit NUMBER PATH 'commit'
,Sales NUMBER PATH 'sales'
,Rec NUMBER PATH 'rec'
,Value NUMBER PATH 'val'
,Id1 VARCHAR(30) PATH 'id-1'
,Id2 VARCHAR(30) PATH 'id-2'
,Id3 VARCHAR(30) PATH 'id-3'
,UnitPrice NUMBER PATH 'unit-pr'
,EffectiveDate VARCHAR(30) PATH 'eff-dt'
,EffectiveTime VARCHAR(30) PATH 'eff-tm'
) E;
ROW_COUNT := SQL%ROWCOUNT;
PROCESS_STATUS := STATUS.PROCESSED;
STATUS_DESCRIPTION := ROW_COUNT || ' Rows Successfully Inserted ';
EXCEPTION
WHEN DUP_VAL_ON_INDEX THEN
BEGIN
ROW_COUNT := 0;
PROCESS_STATUS := STATUS.ERROR;
STATUS_DESCRIPTION := SUBSTR(SQLERRM, 1, 250);
END;
WHEN OTHERS THEN
BEGIN
ROW_COUNT := 0;
PROCESS_STATUS := STATUS.ERROR;
STATUS_DESCRIPTION := SUBSTR(SQLERRM, 1, 250);
END;
END XML_PROCESS;
--Standalone Procedure calling XML_PROCESS
SET DEFINE OFF
DECLARE
ROW_COUNT NUMBER;
PROCESS_STATUS VARCHAR2(1);
V_STATUS_DESCRIPTION VARCHAR2(300);
V_ID NUMBER;
V_XML XMLTYPE;
BEGIN
SELECT ID, XML_DOCUMENT INTO V_ID, V_XML FROM STAGE_TBL WHERE ID = '7954';
XML_PROCESS(V_ID, V_XML, PROCESS_STATUS, V_STATUS_DESCRIPTION, ROW_COUNT);
update STAGE_TBL SET STATUS = PROCESS_STATUS,
STATUS_DATE = SYSDATE,
STATUS_DESCRIPTION = V_STATUS_DESCRIPTION
WHERE ID = V_ID;
END;
XML
<?xml version="1.0" encoding="UTF-8"?>
<rp_send xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<rp_sendRow>
<store>0123</store>
<sales>022399190</sales>
<cost>0.01</cost>
<flag>true</flag>
<st-dt>2013-04-19</st-dt>
<st>146.51</st>
<qty>13.0</qty>
<value>0.0</value>
<order>0.0</order>
<hand>0.0</hand>
<order>0.0</order>
<commit>0.0</commit>
<sales>0.0</sales>
<rec>0.0</rec>
<val>0.0</val>
<id-1/>
<id-2/>
<id-3/>
<unit-pr>13.0</unit-pr>
<eff-dt>2015-06-16</eff-dt>
<eff-tm>09:12:21</eff-tm>
</rp_sendRow>
</rp_send>
There are too many unknown variables to determine the problem, but with this information I see four (edited to include more answers) possible answers:
1) If you are inserting many rows in only one statement (INSERT ... SELECT) the trigger will slow performance.
But your standalone procedure call example operates on only one row (ID = '7954'), so I assume the problem persists even with a single-row insertion. In that case, 1) is not the problem.
2) You have some kind of index on STAGE_TBL(XML_DOCUMENT). When the BEFORE INSERT trigger is called the XMLType is not indexed and your trigger calls the procedure with a non-indexed version of XML_DOCUMENT. But in your standalone procedure example, XML_DOCUMENT is inserted and indexed, so the procedure uses the index.
Complex indexes on complex objects can be used by the Oracle optimizer not only when selecting data from a table, but also when processing the data itself. This means: if you have an index on particular data, it can be used by a procedure that uses that data. And Oracle's XMLType is a complex object that can be indexed in many, many ways (see: http://docs.oracle.com/cd/B28359_01/appdev.111/b28369/xdb_indexing.htm#CHDCGACG).
I think the XMLTABLE function is being optimized when XML_DOCUMENT is actually inserted in STAGE_TBL.
You can test it by calling your standalone procedure with an XML_DOCUMENT not extracted from STAGE_TBL (or from any table that could index the document). In this case both performances, trigger and standalone, should be similar.
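For example, a sketch of such a test (the XML literal is abbreviated; extend it with the remaining rp_sendRow elements, and NULL is passed for the ID purely for timing purposes):
SET SERVEROUTPUT ON
DECLARE
ROW_COUNT NUMBER;
PROCESS_STATUS VARCHAR2(1);
STATUS_DESCRIPTION VARCHAR2(300);
-- built from a literal, so no table (and therefore no index) is involved
V_XML XMLTYPE := XMLTYPE('<rp_send><rp_sendRow><store>0123</store><sales>022399190</sales><cost>0.01</cost></rp_sendRow></rp_send>');
BEGIN
XML_PROCESS(NULL, V_XML, PROCESS_STATUS, STATUS_DESCRIPTION, ROW_COUNT);
DBMS_OUTPUT.PUT_LINE(PROCESS_STATUS || ' - ' || STATUS_DESCRIPTION);
END;
/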
EDITED: You comment that you have tested the second answer and the performance problem persists. So I include a third option:
3) You have included a XML validation check constraint in STAGE_TBL. And this validation is the source of the performance difference. The standalone example does not validate the XML document, but the insert validates it.
You can check whether this is what is happening by disabling the trigger. If the insert without the trigger is still slow, then the problem is not the trigger but the XML validation.
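A minimal way to run that check, assuming you are allowed to disable the trigger in that environment:
ALTER TRIGGER STAGE_TRIGGER DISABLE;
-- repeat the same JDBC/SQL insert into STAGE_TBL and time it
ALTER TRIGGER STAGE_TRIGGER ENABLE;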
EDITED: You comment that you have tested the third answer and the performance problem persists. So I include a fourth option:
4) In (https://community.oracle.com/thread/2526907) a performance problem with XMLTable when working with big XML documents is described. They comment that the TABLE(XMLSequence()) approach is better in these cases, because XMLTable creates big intermediate results and TABLE(XMLSequence()) does not.
So in your INSERT statement change your SELECT from:
SELECT E.* FROM XMLTABLE('rp_send/rp_sendRow' PASSING xData COLUMNS
store VARCHAR(20) PATH 'store'
,SalesNo VARCHAR(20) PATH 'sales'
,UnitCost NUMBER PATH 'cost'
,StWac VARCHAR(20) PATH 'flag'
,StDt DATE PATH 'st-dt'
...
...
,EffectiveTime VARCHAR(30) PATH 'eff-tm'
) E;
To:
SELECT value(e).extract('//store/text()').getStringVal() store,
value(e).extract('//sales/text()').getStringVal() SalesNo,
value(e).extract('//cost/text()').getNumberVal() UnitCost,
value(e).extract('//flag/text()').getStringVal() StWac,
to_date(value(e).extract('//st-dt/text()').getStringVal(),'YYYY-MM-DD') StDt,
...
...
value(e).extract('//eff-tm/text()').getStringVal() EffectiveTime
FROM TABLE(XMLSEQUENCE(EXTRACT(xData, '/rp_send/rp_sendRow'))) e;

How do I convert row into CLOB in the applied trigger after update?

The idea is, I want to clone the record as a CLOB when it is updated.
Why do it in such a way?
There are two different applications A1 and A2, A1 is depended on by A2.
Based on A1 values, calculations are made for values for A2.
The A2 process runs just once per day to calculate the values, but for A1 every field in the TABLE_NAME in question can be altered several times a day and doesn't have a history.
The aim is to automatically create a history, which is a CLOB field in a table NEW_TABLE.
Sorry for my English; if something is not understandable I can rewrite the question.
My Code Here:
CREATE or REPLACE TRIGGER TRIGGER_NAME
AFTER UPDATE
ON TABLE_NAME
FOR EACH ROW
DECLARE
row_record NEW_TABLE%rowtype;
c_xml CLOB;
FUNCTION GetXML(a_tablela varchar2, a_key_1 varchar2, a_key_2 varchar2)
RETURN CLOB
is
x_xml CLOB;
BEGIN
select dbms_xmlgen.getxml('select * from '||a_tablela||' where key_1 = '''||a_key_1||''' and key_2 = '''||a_key_2||'''') into x_xml from dual;
return x_xml;
END;
BEGIN
--** TABLE_NAME Automatically fetches all columns and transforms them to CLOB
c_xml := GetXML('TABLE_NAME', :new.key_1, :new.key_2);
if c_xml is not null then
row_record.TABLE_NAME :=c_xml;
end if;
INSERT INTO NEW_TABLE VALUES row_record;
EXCEPTION
when others then
raise_application_error(-20000,'ERROR: '||to_char(sqlcode));
END;
Now I get error:
ORA-04091: table TABLE_NAME is mutating, trigger/function may not see it.
when I read this record with a SELECT statement.
How do I convert row into CLOB in the applied TRIGGER AFTER UPDATE ?
Thanks.
The reason you can't use a select statement is that you're in the trigger and the table is changing, or 'mutating', as the error says. The only way you can get the data from the row that's being updated here is by using :new and :old:
:old.column1
:new.column1
:old being the value of the column before the update, :new being the value after the update.
Example:
CREATE or REPLACE TRIGGER TRIGGER_NAME
AFTER UPDATE
ON TABLE_NAME
FOR EACH ROW
DECLARE
l_string VARCHAR2(4000);
BEGIN
l_string := 'This is the old value for column 1: ' || :old.column1 || '. This is the new value: ' || :new.column1;
dbms_output.put_line(l_string);
END;
You won't be able to use dbms_xmlgen because it runs a select statement against the table, which throws the mutating-table exception.
I'm not sure I perfectly understand what you're trying to do, but you should be able to build the CLOB yourself just by concatenating yourself with the column names. Like this:
CREATE or REPLACE TRIGGER TRIGGER_NAME
AFTER UPDATE
ON TABLE_NAME
FOR EACH ROW
DECLARE
l_clob CLOB;
BEGIN
l_clob := 'Column1 ' || :old.column1 || ', Column2 ' || :old.column2; --For as many columns as are in the table
--Now you have a clob with all the old values, insert it where you want it
END;
And then go from there. If you really want the XML format you can build that yourself as well, just by concatenating the strings together; a sketch of a variant that produces well-formed XML follows.
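If you want well-formed XML without DBMS_XMLGEN, a sketch of one option (column names are illustrative): build it with XMLELEMENT over the :old values. Selecting from DUAL does not read the mutating table, so it is safe inside the trigger:
CREATE or REPLACE TRIGGER TRIGGER_NAME
AFTER UPDATE
ON TABLE_NAME
FOR EACH ROW
DECLARE
l_clob CLOB;
BEGIN
-- DUAL is not the mutating table, so this SELECT does not raise ORA-04091
SELECT XMLELEMENT("row",
XMLELEMENT("column1", :old.column1),
XMLELEMENT("column2", :old.column2)).getClobVal()
INTO l_clob
FROM dual;
-- insert l_clob into the history table (NEW_TABLE) here
END;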
