I have a narrow table with the following columns:
<Customer ID> <Field ID> <Value>, all of them are numbers.
I want to reshape this table into the wide format:
<Customer ID> <Field1> <Field2> <Field3> ...
I have a separate dictionary table DIC_FIELDS that translates Field ID into Field Name.
I work on an Exadata server. The narrow table has 2.5 billion records, and we have about 200 fields.
The obvious simple solution below quickly fills up all the temporary space on our Exadata server.
create table WIDE_ADS as
select
CUSTOMERID
,max(case when FIELDID = 1 then VALUE end) as GENDER
,max(case when FIELDID = 2 then VALUE end) as AGE
,max(case when FIELDID = 3 then VALUE end) as EDUCATION
from NARROW_ADS
group by CUSTOMERID;
We also tried a cleverer, manual method:
create index index1
on WIDE_ADS(CUSTOMERID);
DECLARE
rowidWide rowid;
type tColNames is table of STRING(32000) index by pls_integer ;
arrColNames tColNames;
x_CustomerID number;
strColName varchar2(32);
strColvalue varchar2(32000);
strSQL varchar2(200);
lngCounter pls_integer;
lngFieldID pls_integer;
BEGIN
lngCounter := 0;
-- we pre-load the dictionary arrColNames to speedup lookup.
for DIC_EL in (select * from DIC_FIELDS order by FIELDID) LOOP
lngFieldID := to_number(DIC_EL.FIELDID);
arrColNames(lngFieldID) := DIC_EL.FIELDNAME;
END LOOP;
FOR NARROW_REC IN (SELECT * FROM NARROW_ADS where VALUE is not null ) LOOP
strColName := arrColNames(NARROW_REC.FIELDID);
strColvalue := NARROW_REC.VALUE;
x_CustomerID := NARROW_REC.CUSTOMERID;
BEGIN
select rowid into rowidWide from WIDE_ADS
where CUSTOMERID = NARROW_REC.CUSTOMERID;
-- note: identifiers (table and column names) cannot be bind variables; only values can
strSQL := 'update WIDE_ADS set '|| strColName ||' = :1 where rowid = :2';
execute immediate strSQL using to_number(strColvalue), rowidWide;
EXCEPTION
WHEN NO_DATA_FOUND THEN
strSQL :=
'insert into WIDE_ADS (CUSTOMERID, '|| strColName ||')
values
(:1, :2)';
execute immediate strSQL using x_CustomerID, to_number(strColvalue) ;
END;
IF lngCounter=10000 THEN
COMMIT;
lngCounter:=0;
dbms_output.put_line('Clik...');
ELSE
lngCounter:=lngCounter+1;
END IF;
END LOOP;
END;
Although it doesn't consume temp space, it fails miserably performance-wise: it processes 10,000 records in 50 seconds, which is about 1000 times slower than expected.
What can we do to speed up the process?
As Lalit comments, try to do it in chunks based on CUSTOMERID.
First, create an index on CUSTOMERID (if it does not already exist):
CREATE INDEX INDNARROWADS ON NARROW_ADS(CUSTOMERID);
Second, we are going to create an auxiliary table that computes buckets based on CUSTOMERID (in this example we create 1000 buckets; one bucket will correspond to one block INSERT statement):
CREATE TABLE BUCKETS(MINCUSTOMER, MAXCUSTOMER, BUCKETNUM) AS
SELECT MIN(CUSTOMERID), MAX(CUSTOMERID), BUCKET
FROM (SELECT CUSTOMERID,
WIDTH_BUCKET(CUSTOMERID,
(SELECT MIN(CUSTOMERID) FROM NARROW_ADS),
(SELECT MAX(CUSTOMERID) FROM NARROW_ADS),
1000) BUCKET
FROM NARROW_ADS)
GROUP BY BUCKET;
You can use more or fewer buckets by modifying the fourth argument of the WIDTH_BUCKET function.
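For reference, WIDTH_BUCKET maps a value into one of N equal-width intervals between a low and a high bound; a quick illustration (the numbers here are made up for the example):

```sql
-- WIDTH_BUCKET(value, low, high, num_buckets):
-- the bucket width is (100 - 1) / 10 = 9.9, so the value 15 falls into bucket 2
SELECT WIDTH_BUCKET(15, 1, 100, 10) AS bucket FROM dual;
```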
Third, create the WIDE_ADS table (the structure, with no data). You should do this manually (paying special attention to storage parameters), but you can also use your own query with a WHERE condition that is always false:
create table WIDE_ADS as select
CUSTOMERID
,max(case when FIELDID = 1 then VALUE end) as GENDER
,max(case when FIELDID = 2 then VALUE end) as AGE
,max(case when FIELDID = 3 then VALUE end) as EDUCATION
from NARROW_ADS
where 1=0
group by CUSTOMERID;
Fourth, execute your query over each bucket (one bucket means one INSERT statement):
BEGIN
FOR B IN (SELECT * FROM BUCKETS ORDER BY BUCKETNUM) LOOP
INSERT INTO WIDE_ADS
SELECT
CUSTOMERID
,max(case when FIELDID = 1 then VALUE end) as GENDER
,max(case when FIELDID = 2 then VALUE end) as AGE
,max(case when FIELDID = 3 then VALUE end) as EDUCATION
FROM NARROW_ADS
WHERE CUSTOMERID BETWEEN B.MINCUSTOMER AND B.MAXCUSTOMER
GROUP by CUSTOMERID;
COMMIT;
END LOOP;
END;
And finally, drop the auxiliary table (and the index, if it is no longer necessary).
The Oracle optimizer should use the index on CUSTOMERID to perform an index range scan over NARROW_ADS, so each INSERT should find its corresponding interval efficiently.
Note that WIDTH_BUCKET creates buckets based on uniform divisions of the specified interval on CUSTOMERID (from the minimum to the maximum value); it does not create buckets with a uniform number of rows. Also note that NARROW_ADS must not be modified while this process is executing.
Because the PL/SQL block commits on each iteration and the loop processes buckets in BUCKETNUM order, you can watch WIDE_ADS grow and see which bucket is being processed (retrieve the maximum CUSTOMERID from WIDE_ADS and find its corresponding bucket in the BUCKETS table).
If temporary space usage is too high, increase the number of buckets (each insert will be smaller).
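If the CUSTOMERID values are heavily skewed, equal-width buckets can end up with very uneven row counts. A sketch of an alternative (assuming the same table names) that builds buckets holding roughly equal numbers of distinct customers using NTILE:

```sql
-- Buckets with roughly equal numbers of distinct customers;
-- ranges are contiguous and non-overlapping because NTILE is
-- computed over the ordered distinct CUSTOMERID values.
CREATE TABLE BUCKETS(MINCUSTOMER, MAXCUSTOMER, BUCKETNUM) AS
SELECT MIN(CUSTOMERID), MAX(CUSTOMERID), BUCKET
FROM (SELECT CUSTOMERID,
             NTILE(1000) OVER (ORDER BY CUSTOMERID) AS BUCKET
      FROM (SELECT DISTINCT CUSTOMERID FROM NARROW_ADS))
GROUP BY BUCKET;
```

The rest of the procedure (the per-bucket INSERT loop) works unchanged with this table.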
To illustrate, the following table:
ID   Model     Series     Amount
001  productX  SeriesZ    1000
001  productX  SeriesABC  2000
001  productX  SeriesABC  8000
002  productY  SeriesABC  5000
should be transformed such that each record captures a unique ID and the total amount it has contributed to each possible model-series combination:
ID   productX_SeriesZ  productX_SeriesABC  productY_SeriesABC
001  1000              10000               0
002  0                 0                   5000
Can I use the pivot function to pivot on for each possible combination of values in two columns?
SELECT ID,
SUM( CASE WHEN model = 'productX' and series = 'SeriesZ' THEN amount ELSE 0 END) productX_SeriesZ,
SUM( CASE WHEN model = 'productX' and series = 'SeriesABC' THEN amount ELSE 0 END) productX_SeriesABC,
SUM( CASE WHEN model = 'productY' and series = 'SeriesABC' THEN amount ELSE 0 END) productY_SeriesABC
FROM TABLE
GROUP BY ID;
EDIT:
"Works for this particular case, but what if we have hundreds of models and series?"
You can try this.
DECLARE
TYPE model_tab IS TABLE OF mytable.model%type;
TYPE series_tab IS TABLE OF mytable.series%type;
mv model_tab;
sv series_tab;
query varchar2(32767) := 'SELECT ID';
BEGIN
SELECT DISTINCT model, series BULK COLLECT INTO mv, sv
FROM mytable WHERE model IS NOT NULL AND series IS NOT NULL;
FOR i IN 1..mv.COUNT
LOOP
query := query||', SUM( CASE WHEN model = '|| DBMS_ASSERT.ENQUOTE_LITERAL(mv(i))
|| ' and series = ' || DBMS_ASSERT.ENQUOTE_LITERAL(sv(i))||' THEN amount ELSE 0 END) ' || mv(i)||'_'||sv(i);
END LOOP;
query := query || ' FROM mytable GROUP BY id';
EXECUTE IMMEDIATE query;
END;
This may contain syntactic errors and could be optimized and refactored, but this is the basic idea. I was in a hurry, so I wrote it down without testing.
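Alternatively, the column list can be generated in plain SQL with LISTAGG and then spliced into the final statement; a sketch against the same assumed mytable (note that LISTAGG output is limited to 4000 bytes, so very wide pivots still need PL/SQL concatenation):

```sql
-- Generate the SUM(CASE ...) column list for every distinct model/series pair
SELECT LISTAGG(
         'SUM(CASE WHEN model = ''' || model || ''' AND series = ''' || series
         || ''' THEN amount ELSE 0 END) ' || model || '_' || series,
         ', ') WITHIN GROUP (ORDER BY model, series) AS col_list
FROM (SELECT DISTINCT model, series
      FROM mytable
      WHERE model IS NOT NULL AND series IS NOT NULL);
```

The result can be concatenated between 'SELECT ID, ' and ' FROM mytable GROUP BY id' to form the full pivot query.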
I think the query below should work for you:
SELECT *
FROM (SELECT ID
            ,Model
            ,Series
            ,Amount
      FROM YOUR_TABLE)
PIVOT(SUM(AMOUNT) FOR (Model, Series) IN (('productX', 'SeriesZ')   productX_SeriesZ,
                                          ('productX', 'SeriesABC') productX_SeriesABC,
                                          ('productY', 'SeriesABC') productY_SeriesABC))
I have a question about an AFTER INSERT trigger in Oracle 12c.
I have a Dimension table in which DML operations occur via a UI. Whenever there is an Insert or Update, I want to perform Insert or Update in another table, which is called as Rating table.
But the Dimension table is at a lower grain and the Rating table is at a higher grain, so I want to insert only unique records into the Rating table.
Is that possible? How?
Thanks for your help in advance.
The existing trigger code is below:
create or replace TRIGGER RFJVBASE.KPI_COCKPIT_ATTR_AFTER_INS2
AFTER INSERT ON RFJVBASE.DIM_KPI_COCKPIT_ATTRS FOR EACH ROW
DECLARE
v_fiscal_week_start varchar2(50);
v_fiscal_week_month_end varchar2(50);
v_fiscal_week_qtr_end varchar2(50);
v_fiscal_week_semi_end varchar2(50);
v_fiscal_week_year_end varchar2(50);
BEGIN
select rfjvbase.run_dates('1-Jan-1900') into v_fiscal_week_start from dual;
select fiscal_month_end_week_name into v_fiscal_week_month_end from rfjvstg.stg_cockpit_business_dt;
select fiscal_qtr_end_week_name into v_fiscal_week_qtr_end from rfjvstg.stg_cockpit_business_dt;
select fiscal_semi_end_week_name into v_fiscal_week_semi_end from rfjvstg.stg_cockpit_business_dt;
select fiscal_year_end_week_name into v_fiscal_week_year_end from rfjvstg.stg_cockpit_business_dt;
/*Insert record into FCT table*/
IF (:new.KPI_TYPE = 'Quantitative') THEN
INSERT INTO RFJVBASE.FCT_KPI_COCKPIT_RATING
(
PROCESS_KEY_FCT,CURRENT_PROCESS_RATING_KEY,PROCESS_RATING_KEY,CREATED_FISCAL_WEEK,VALID_THROUGH_FISCAL_WEEK,PROCESS_NAME,PROCESS_GROUP,PROCESS,PROCESS_INDICATOR_CLASS,PROCESS_INDICATOR_SEQUENCE,
PERFORMANCE_INDICATOR_NAME,PERF_IND_SUB_LEVEL,UNIT,KPI_TYPE,ORG_UNIT,TOLERANCE_DIRECTION, TOLERANCE,TARGET,TARGET_ENABLE_FLAG,CREATED_DT,CREATED_BY,LAST_UPDATE_DT,LAST_UPDATED_BY,AUDIT_KEY
)
VALUES
(
FCT_KPI_COCKPIT_RATING_SEQ.NEXTVAL,
:new.PROCESS_KEY,
:new.PROCESS_KEY,
v_fiscal_week_start,
case when :new.ANNUAL_FREQUENCY = 1 then v_fiscal_week_year_end
when :new.ANNUAL_FREQUENCY = 2 then v_fiscal_week_semi_end
when :new.ANNUAL_FREQUENCY = 4 then v_fiscal_week_qtr_end
when :new.ANNUAL_FREQUENCY = 12 then v_fiscal_week_month_end
end,
:new.PROCESS_NAME,
:new.PROCESS_GROUP,
:new.PROCESS,
:new.PROCESS_INDICATOR_CLASS,
:new.PROCESS_INDICATOR_SEQUENCE,
:new.PERFORMANCE_INDICATOR_NAME,
:new.PERF_IND_SUB_LEVEL,
:new.UNIT,
:new.KPI_TYPE,
:new.ORG_UNIT,
:new.TOLERANCE_DIRECTION,
:new.TOLERANCE,
:new.TARGET,
:new.TARGET_ENABLE_FLAG,
SYSDATE,
:new.USERNAME,
SYSDATE,
:new.USERNAME,
:new.AUDIT_KEY);
ELSE IF (:new.KPI_TYPE = 'Qualitative') THEN
INSERT INTO RFJVBASE.FCT_KPI_COCKPIT_RATING
(
PROCESS_KEY_FCT,CURRENT_PROCESS_RATING_KEY,PROCESS_RATING_KEY,CREATED_FISCAL_WEEK,VALID_THROUGH_FISCAL_WEEK,PROCESS_NAME,PROCESS_GROUP,PROCESS,PROCESS_INDICATOR_CLASS,PROCESS_INDICATOR_SEQUENCE,
PERFORMANCE_INDICATOR_NAME,PERF_IND_SUB_LEVEL,UNIT,KPI_TYPE,ORG_UNIT,TOLERANCE_DIRECTION, TOLERANCE,TARGET_ENABLE_FLAG,CREATED_DT,CREATED_BY,LAST_UPDATE_DT,LAST_UPDATED_BY,AUDIT_KEY
)
VALUES
(
FCT_KPI_COCKPIT_RATING_SEQ.NEXTVAL,
:new.PROCESS_KEY,
:new.PROCESS_KEY,
v_fiscal_week_start,
case when :new.ANNUAL_FREQUENCY = 1 then v_fiscal_week_year_end
when :new.ANNUAL_FREQUENCY = 2 then v_fiscal_week_semi_end
when :new.ANNUAL_FREQUENCY = 4 then v_fiscal_week_qtr_end
when :new.ANNUAL_FREQUENCY = 12 then v_fiscal_week_month_end
end,
:new.PROCESS_NAME,
:new.PROCESS_GROUP,
:new.PROCESS,
:new.PROCESS_INDICATOR_CLASS,
:new.PROCESS_INDICATOR_SEQUENCE,
:new.PERFORMANCE_INDICATOR_NAME,
:new.PERF_IND_SUB_LEVEL,
:new.UNIT,
:new.KPI_TYPE,
:new.ORG_UNIT,
:new.TOLERANCE_DIRECTION,
:new.TOLERANCE,
:new.TARGET_ENABLE_FLAG,
SYSDATE,
:new.USERNAME,
SYSDATE,
:new.USERNAME,
:new.AUDIT_KEY);
END IF;
END IF;
END;
/
I want to insert only unique records in the Rating table.
Create a unique index on the Rating table, which won't allow duplicates to be inserted. Let the database do the dirty job; you just sit back and relax.
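For example (the indexed column here is an assumption; use whatever defines uniqueness at the Rating table's grain):

```sql
-- Hypothetical uniqueness rule: one rating row per process rating key
CREATE UNIQUE INDEX UX_FCT_KPI_COCKPIT_RATING
  ON RFJVBASE.FCT_KPI_COCKPIT_RATING (PROCESS_RATING_KEY);
```

In the trigger you can then wrap each INSERT in its own BEGIN ... EXCEPTION WHEN DUP_VAL_ON_INDEX THEN NULL; END; block to silently skip duplicates instead of raising an error.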
I'm trying to get the total commission for each sales staff member and store it in a function so I can use it in a procedure. When I plug in a working SELECT statement (three tables involved), I get Error(9,20): PL/SQL: ORA-00947: not enough values. I think this is because the function only returns a number, but the query also selects varchar columns, which causes the problem.
I tried removing some of the varchar2 columns, but the result is not correct.
Below is my fictional code:
create or replace FUNCTION get_total_commission return number
IS
v_total_commission;
--
begin
select
sale_id, sale_acct,sale_name, sum(commission) as total_commission
into v_total from invoice_tbl invoice join commission_tbl commission
on invoice.id = commission.id join sale_tbl sale on sale.id = commssion.id
where invoice.refnr is null;
return to_char(v_total, 'FM99999.00');
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line('err: ' ||SQLERRM);
end get_total_commission;
It should be a function that shows the total amount of commission earned by each sales staff member.
You need local variables into which all four columns in the SELECT list are returned. And because of the conversion to a character type, the function needs to return a string value instead of a number.
SQL> SET SERVEROUTPUT ON;
SQL> CREATE OR REPLACE FUNCTION get_total_commission RETURN varchar2 IS
v_total_commission commission_tbl.commission%type;
v_sale_id sale_tbl.sale_id%type;
v_sale_acct sale_tbl.sale_acct%type;
v_sale_name sale_tbl.sale_name%type;
BEGIN
select sale_id, sale_acct, sale_name, sum(commission) as total_commission
into v_sale_id, v_sale_acct, v_sale_name, v_total_commission
from invoice_tbl invoice
join commission_tbl commission
on invoice.id = commission.id
join sale_tbl sale
on sale.id = commission.id
where invoice.refnr is null
group by sale_id, sale_acct, sale_name;
return to_char(v_total_commission, 'FM99999.00');
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line('err: ' || SQLERRM);
END;
the total amount of commission earned by each sales staff member
Sounds like returning one number shorn of identifying characteristics is not the solution you need. You need a result set. Personally, this seems better fitted to a view than a function but if you want to wrap the query in a function this is how to do it:
-- obviously correct these data types to fit your actual needs
create or replace type commission_t as object(
sale_id varchar2(30)
, acct_id varchar2(30)
, sale_name varchar2(48)
, total_commission number
);
/
create or replace type commission_nt as table of commission_t;
/
create or replace FUNCTION get_total_commission return commission_nt
IS
return_value commission_nt;
begin
select commission_t(
sale_id, sale_acct,sale_name, sum(commission) )
bulk collect into return_value
from invoice_tbl invoice
join commission_tbl commission on invoice.id = commission.id
join sale_tbl sale on sale.id = commission.id
where invoice.refnr is null
group by sale_id, sale_acct,sale_name
;
return return_value;
end get_total_commission;
And query it like this:
select * from table (get_total_commission);
There are various rough edges with this. For instance it won't work well if your result set is huge (which obviously depends, but say more than 5000-10000 rows).
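One way around that (a sketch reusing the commission_t and commission_nt types assumed above) is to make the function PIPELINED, so rows are streamed to the caller instead of being materialized as one large collection in memory:

```sql
create or replace FUNCTION get_total_commission_piped
return commission_nt PIPELINED
IS
begin
  for r in ( select sale_id, sale_acct, sale_name, sum(commission) as total_commission
             from invoice_tbl invoice
             join commission_tbl commission on invoice.id = commission.id
             join sale_tbl sale on sale.id = commission.id
             where invoice.refnr is null
             group by sale_id, sale_acct, sale_name )
  loop
    -- emit each row as soon as it is available instead of collecting them all
    pipe row (commission_t(r.sale_id, r.sale_acct, r.sale_name, r.total_commission));
  end loop;
  return;
end get_total_commission_piped;
```

It is queried the same way, with select * from table(get_total_commission_piped).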
If you really just want the total commission for a single sale then you need to restrict the query by SALE_ID - and pass it as a parameter:
create or replace FUNCTION get_total_commission
(p_sale_id in sale_tbl.id%type)
return number
IS
v_total_commission number;
--
begin
select
sum(commission) as total_commission
into v_total_commission
from invoice_tbl invoice
join commission_tbl commission on invoice.id = commission.id
join sale_tbl sale on sale.id = commission.id
where sale.id = p_sale_id
and invoice.refnr is null;
return v_total_commission ;
end get_total_commission;
I'm working in Oracle 12.2.
I've got a complex query the results of which I would like to receive as a CLOB in JSON format. I've looked into json_object, but this means completely rewriting the query.
Is there a way to simply pass the ref cursor or result set and receive a JSON array with each row being a JSON object inside?
My query:
SELECT
*
FROM
(
SELECT
LABEL_USERS.*,
ROWNUM AS RANK ,
14 AS TOTAL
FROM
(
SELECT DISTINCT
SEC_VS_USER_T.USR_ID,
SEC_VS_USER_T.USR_FIRST_NAME,
SEC_VS_USER_T.USR_LAST_NAME,
SEC_USER_ROLE_PRIV_T.ROLE_ID,
SEC_ROLE_DEF_INFO_T.ROLE_NAME,
1 AS IS_LABEL_MANAGER,
LOWER(SEC_VS_USER_T.USR_FIRST_NAME ||' '||SEC_VS_USER_T.USR_LAST_NAME) AS
SEARCH_STRING
FROM
SEC_VS_USER_T,
SEC_USER_ROLE_PRIV_T,
SEC_ROLE_DEF_INFO_T
WHERE
SEC_VS_USER_T.USR_ID = SEC_USER_ROLE_PRIV_T.USR_ID
AND SEC_VS_USER_T.USR_SITE_GRP_ID IS NULL
ORDER BY
UPPER(USR_FIRST_NAME),
UPPER(USR_LAST_NAME)) LABEL_USERS) LABEL_USER_LIST
WHERE
LABEL_USER_LIST.RANK >= 0
AND LABEL_USER_LIST.RANK < 30
I couldn't find a procedure which I could use to generate the JSON, but I was able to use the new 12.2 functions to create the JSON I needed.
SELECT JSON_ARRAYAGG( --Used to aggregate all rows into single scalar value
JSON_OBJECT( --Creating an object for each row
'USR_ID' VALUE USR_ID,
'USR_FIRST_NAME' VALUE USR_FIRST_NAME,
'USR_LAST_NAME' VALUE USR_LAST_NAME,
'IS_LABEL_MANAGER' VALUE IS_LABEL_MANAGER,
'SEARCH_STRING' VALUE SEARCH_STRING,
'USR_ROLES' VALUE USR_ROLES
)returning CLOB) AS JSON --Need to specify CLOB, otherwise the result is limited to VARCHAR2(4000)
FROM
(
SELECT * FROM (
SELECT LABEL_USERS.*, ROWNUM AS RANK, 14 AS TOTAL from
(SELECT
SEC_VS_USER_T.USR_ID,
SEC_VS_USER_T.USR_FIRST_NAME,
SEC_VS_USER_T.USR_LAST_NAME,
1 AS IS_LABEL_MANAGER,
LOWER(SEC_VS_USER_T.USR_FIRST_NAME ||' '||SEC_VS_USER_T.USR_LAST_NAME) AS SEARCH_STRING,
(
SELECT --It is much easier to create the JSON here and simply use this column in the outer JSON_OBJECT select
JSON_ARRAYAGG(JSON_OBJECT('ROLE_ID' VALUE ROLE_ID,
'ROLE_NAME' VALUE ROLE_NAME)) AS USR_ROLES
FROM
(
SELECT DISTINCT
prv.ROLE_ID,
def.ROLE_NAME
FROM
SEC_user_ROLE_PRIV_T prv
JOIN
SEC_ROLE_DEF_INFO_T def
ON
prv.ROLE_ID = def.ROLE_ID
ORDER BY
ROLE_ID DESC)) AS USR_ROLES
FROM
SEC_VS_USER_T,
SEC_USER_ROLE_PRIV_T,
SEC_ROLE_DEF_INFO_T
WHERE
SEC_VS_USER_T.USR_ID = SEC_USER_ROLE_PRIV_T.USR_ID
AND SEC_USER_ROLE_PRIV_T.ROLE_PRIV_ID = SEC_ROLE_DEF_INFO_T.ROLE_ID
AND SEC_VS_USER_T.USR_SITE_GRP_ID IS NULL
ORDER BY UPPER(USR_FIRST_NAME),
UPPER(USR_LAST_NAME))LABEL_USERS)) LABEL_USER_LIST
WHERE LABEL_USER_LIST.RANK >= 0--:bv_Min_Rows
AND LABEL_USER_LIST.RANK < 30--:bv_Max_Rows
I am receiving information from a CSV file from one department to compare with the same information in a different department to check for discrepancies (about three quarters of a million rows of data with 44 columns in each row). After I have the data in a table, I have a program that takes the data and sends reports based on an HQ. I feel like the way I am going about this is not the most efficient. I am using Oracle for this comparison.
Here is what I have:
I have a vb.net program that parses the data and inserts it into an extract table
I run a procedure to do a full outer join on the two tables into a new table, with the fields from one department suffixed with '_c'
I run another procedure to compare the old/new data and update 2 different tables with detail and summary information. Here is code from inside the procedure:
DECLARE
CURSOR Cur_Comp IS SELECT * FROM T.AEC_CIS_COMP;
BEGIN
FOR compRow in Cur_Comp LOOP
--If service pipe exists in CIS but not in FM and the service pipe has status of retired in CIS, ignore the variance
If(compRow.pipe_num = '' AND cis_status_c = 'R')
continue
END IF
--If there is not a summary record for this HQ in the table for this run, create one
INSERT INTO t.AEC_CIS_SUM (HQ, RUN_DATE)
SELECT compRow.HQ, to_date(sysdate, 'DD/MM/YYYY') from dual WHERE NOT EXISTS
(SELECT null FROM t.AEC_CIS_SUM WHERE HQ = compRow.HQ AND RUN_DATE = to_date(sysdate, 'DD/MM/YYYY'))
-- Check fields and update the tables accordingly
If (compRow.cis_loop <> compRow.cis_loop_c) Then
--Insert information into the details table
INSERT INTO T.AEC_CIS_DET( Fac_id, Pipe_Num, Hq, Address, AutoUpdatedFl,
DateTime, Changed_Field, CIS_Value, FM_Value)
VALUES(compRow.Fac_ID, compRow.Pipe_Num, compRow.Hq, compRow.Street_Num || ' ' || compRow.Street_Name,
'Y', sysdate, 'Cis_Loop', compRow.cis_loop, compRow.cis_loop_c);
-- Update information into the summary table
UPDATE AEC_CIS_SUM
SET cis_loop = cis_loop + 1
WHERE Hq = compRow.Hq
AND Run_Date = to_date(sysdate, 'DD/MM/YYYY')
End If;
END LOOP;
END;
Any suggestions for an easier way of doing this, rather than an if statement for each of the 44 columns of the table? (This runs once a week, if it matters.)
Update: Just to clarify, there are 88 columns of data (44 duplicated for comparison, each suffixed with _c). One table lists each differing field in its own row, so one source row can produce 30+ records in that table. The other table keeps a tally of the number of discrepancies for each week.
First of all, I believe your task can (and actually should) be implemented with straight SQL: no fancy cursors, no loops, just selects, inserts and updates. I would start with unpivoting your source data (it is not clear whether you have a primary key to join the two sets; I guess you do):
Col0_PK Col1 Col2 Col3 Col4
----------------------------------------
Row1_val A B C D
Row2_val E F G H
Above is your source data. Using the UNPIVOT clause we convert it to:
Col0_PK Col_Name Col_Value
------------------------------
Row1_val Col1 A
Row1_val Col2 B
Row1_val Col3 C
Row1_val Col4 D
Row2_val Col1 E
Row2_val Col2 F
Row2_val Col3 G
Row2_val Col4 H
I think you get the idea. Say we have table1 with one set of data and the identically structured table2 with the second set. It is a good idea to use index-organized tables here.
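In Oracle, that reshaping can be written with the UNPIVOT clause; a sketch using the illustrative names above (the unpivoted columns must share a datatype, so mixed columns may need TO_CHAR first):

```sql
-- Turn each of Col1..Col4 into its own (Col_Name, Col_Value) row;
-- INCLUDE NULLS keeps rows where the source column was NULL,
-- so missing values can still be compared between the two sets.
SELECT Col0_PK, Col_Name, Col_Value
FROM   table1
UNPIVOT INCLUDE NULLS
       (Col_Value FOR Col_Name IN (Col1, Col2, Col3, Col4));
```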
Next step is comparing rows to each other and storing difference details. Something like:
insert into diff_details(some_service_info_columns_here)
select some_service_info_columns_here_along_with_data_difference
from table1 t1 inner join table2 t2
on t1.Col0_PK = t2.Col0_PK
and t1.Col_name = t2.Col_name
and nvl(t1.Col_value, 'Dummy1') <> nvl(t2.Col_value, 'Dummy2');
And on the last step we update difference summary table:
insert into diff_summary(summary_columns_here)
select diff_row_id, count(*) as diff_count
from diff_details
group by diff_row_id;
This is just a rough draft to show my approach; I'm sure there are many more details that should be taken into account. To summarize, I suggest two things:
UNPIVOT data
Use SQL statements instead of cursors
You have several issues in your code:
If(compRow.pipe_num = '' AND cis_status_c = 'R')
continue
END IF
"cis_status_c" is not declared. Is it a variable or a column in AEC_CIS_COMP?
In case it is a column, just put the condition into the cursor, i.e. SELECT * FROM T.AEC_CIS_COMP WHERE NOT (pipe_num = '' AND cis_status_c = 'R'). (Note that in Oracle '' is NULL, so pipe_num = '' is never true; you probably want pipe_num IS NULL.)
to_date(sysdate, 'DD/MM/YYYY')
That's nonsense, you convert a date into a date, simply use TRUNC(SYSDATE)
Anyway, I think you can use three single statements instead of a cursor:
INSERT INTO t.AEC_CIS_SUM (HQ, RUN_DATE)
SELECT comp.HQ, trunc(sysdate)
from AEC_CIS_COMP comp
WHERE NOT EXISTS
(SELECT null FROM t.AEC_CIS_SUM WHERE HQ = comp.HQ AND RUN_DATE = trunc(sysdate));
INSERT INTO T.AEC_CIS_DET( Fac_id, Pipe_Num, Hq, Address, AutoUpdatedFl, DateTime, Changed_Field, CIS_Value, FM_Value)
select comp.Fac_ID, comp.Pipe_Num, comp.Hq, comp.Street_Num || ' ' || comp.Street_Name, 'Y', sysdate, 'Cis_Loop', comp.cis_loop, comp.cis_loop_c
from T.AEC_CIS_COMP comp
where comp.cis_loop <> comp.cis_loop_c;
UPDATE AEC_CIS_SUM
SET cis_loop = cis_loop + 1
WHERE Hq IN (Select Hq from T.AEC_CIS_COMP)
AND trunc(Run_Date) = trunc(sysdate);
They are not tested but they should give you a hint how to do it.