Value not updating in column in stored procedure - oracle

I have to perform the following operations:
Check for common data between two columns in table A, i.e. rows where abc_id = param;
Query another table B and get an acc_id based on the abc_id value from A;
Update the param column in table A with this acc_id.
The stored procedure is like this:
CREATE OR REPLACE PROCEDURE update_value
AS
    v_account_id varchar2(20);
    v_abc_id     A.abc_id%TYPE;
    v_cust_type  A.cust_type%TYPE;
    v_param      A.param%TYPE;
    cursor c1 is
        select abc_id, param, cust_type
        from A
        where abc_id = param;
begin
    open c1;
    LOOP
        fetch c1 into v_abc_id, v_cust_type, v_param;
        EXIT WHEN c1%NOTFOUND;
        if (v_cust_type = 1 or v_cust_type = 2) then
            select account_id into v_account_id from B where B.abc_id = v_abc_id;
            update A
            set A.param = acc_id
            where A.abc_id = v_abc_id;
        end if;
    END LOOP;
    close c1;
END;
The param column in the table is not getting updated.

Less likely: you created the procedure, but never executed it.
If you did execute it, there are a few possible reasons:
the cursor didn't return any rows because no abc_id is equal to param - which is what, exactly? It is not a parameter (the procedure doesn't accept any parameters), so it must be a table column, and there are no rows where abc_id = param
if the cursor did return some rows, v_cust_type isn't 1 and isn't 2 for any of those rows
if it is, you set param to acc_id which - by the way - isn't declared anywhere, so if the procedure compiles at all, then acc_id must actually be a column in the A table, and that's what gets written for rows where abc_id = v_abc_id
If the procedure completed successfully and you're checking the outcome in another session, you can't see anything because you didn't commit.
Also, why do you select account_id into v_account_id and then never use it (the v_account_id, I mean)?
In short, your code is full of logical errors; it would be a miracle if it actually did something.
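If the intent really is "for matching rows, copy account_id from B into A.param", a corrected version might look roughly like the sketch below. It is only a sketch: it assumes param really is a column of A, that B has at most one account_id per abc_id, and that the caller issues the commit.
create or replace procedure update_value as
begin
    update A a
    set    a.param = (select b.account_id
                      from   B b
                      where  b.abc_id = a.abc_id)   -- assumes at most one B row per abc_id
    where  a.abc_id = a.param                       -- the "common data" condition from the question
    and    a.cust_type in (1, 2);
end update_value;
/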

Related

Insert into not working on plsql in oracle

declare
vquery long;
cursor c1 is
select * from temp_name;
begin
for i in c1
loop
vquery :='INSERT INTO ot.temp_new(id)
select '''||i.id||''' from ot.customers';
dbms_output.put_line(i.id);
end loop;
end;
/
The output of select * from temp_name is:
ID
--------------------------------------------------------------------------------
customer_id
1 row selected.
I have a customers table which has a customer_id column. I want to insert all the customer_id values into the temp_new table, but nothing is being inserted. The PL/SQL block executes successfully but the temp_new table is empty.
The output of dbms_output.put_line(i.id); is
customer_id
What is wrong there?
The main problem is that you generate a dynamic statement that you never execute; at some point you need to do:
execute immediate vquery;
But there are other problems. If you output the generated vquery string you'll see it contains:
INSERT INTO ot.temp_new(id)
select 'customer_id' from ot.customers
which means that for every row in customers you'll get one row in temp_new with ID set to the same fixed literal 'customer_id'. It's unlikely that's what you want; if customer_id is a column name from customers then it shouldn't be in single quotes.
As @mathguy suggested, LONG is not a sensible data type to use; you could use a CLOB, but you only really need a varchar2 here. So something more like this, where I've also switched to an implicit cursor loop:
declare
    l_stmt varchar2(4000);
begin
    for i in (select id from temp_name)
    loop
        l_stmt := 'INSERT INTO temp_new(id) select '||i.id||' from customers';
        dbms_output.put_line(i.id);
        dbms_output.put_line(l_stmt);
        execute immediate l_stmt;
    end loop;
end;
/
db<>fiddle
The loop doesn't really make sense though; if your temp_name table had multiple rows with different column names, you'd try to insert the corresponding values from those columns in the customers table into multiple rows in temp_new, all in the same id column, as shown in this db<>fiddle.
I guess this is the starting point for something more complicated, but still seems a little odd.
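To illustrate that, suppose temp_name held two rows, customer_id and a hypothetical second column name called name; the loop would then build and execute two separate statements:
INSERT INTO temp_new(id) select customer_id from customers
INSERT INTO temp_new(id) select name from customers
so every row in customers would contribute two rows to temp_new, with values from two different source columns all landing in the same id column.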

PL/SQL : Need to compare data for every field in a table in plsql

I need to create a procedure which will take a collection as input and compare the data with staging table data row by row, for every field (approx 50 columns).
Business logic:
Whenever a staging table column value mismatches the corresponding collection variable value, I need to update 'FAIL' in the staging table STATUS column and the reason in the REASON column for that row.
If everything matches, I need to update 'SUCCESS' in the STATUS column.
The payload will be approx 500 rows in each call.
I have created the sample script below:
PKG Specification :
CREATE OR REPLACE
PACKAGE process_data
IS
TYPE pass_data_rec
IS
record
(
p_eid employee.eid%type,
p_ename employee.ename%type,
p_salary employee.salary%type,
p_dept employee.dept%type
);
type p_data_tab IS TABLE OF pass_data_rec INDEX BY binary_integer;
PROCEDURE comp_data(inpt_data IN p_data_tab);
END;
PKG Body:
CREATE OR REPLACE
PACKAGE body process_data
IS
PROCEDURE comp_data (inpt_data IN p_data_tab)
IS
status VARCHAR2(10);
reason VARCHAR2(1000);
cnt1 NUMBER;
v_eid employee_copy.eid%type;
v_ename employee_copy.ename%type;
BEGIN
FOR i IN 1..inpt_data.count
LOOP
SELECT ec1.eid,ec1.ename,COUNT(*) over () INTO v_eid,v_ename,cnt1
FROM employee_copy ec1
WHERE ec1.eid = inpt_data(i).p_eid;
IF cnt1 > 0 THEN
IF (v_eid=inpt_data(i).p_eid AND v_ename = inpt_data(i).p_ename) THEN
UPDATE employee_copy SET status = 'SUCCESS' WHERE eid = inpt_data(i).p_eid;
ELSE
UPDATE employee_copy SET status = 'FAIL' WHERE eid = inpt_data(i).p_eid;
END IF;
ELSE
NULL;
END IF;
END LOOP;
COMMIT;
status :='success';
EXCEPTION
WHEN OTHERS THEN
status:= 'fail';
--reason:=sqlerrm;
END;
END;
But with this approach I have the issues mentioned below.
I need to declare local variables for every column value.
I need to compare all the variables using the AND operator. I'm not sure whether that is the correct way, because with 50 columns the IF condition becomes very heavy:
IF (v_eid=inpt_data(i).p_eid AND v_ename = inpt_data(i).p_ename) THEN
I need to update the REASON column with the first mismatched column name whenever any column data mismatches for a row; with this approach I am not able to achieve that.
Please suggest any other good way to achieve this requirement.
Edit:
There is only one table at my end, i.e. the target table. The input will come from another source as a collection object.
REVISED Answer
You could load the records into a temp table, but unless you want additional processing it's not necessary. AFAIK there is no way to identify the offending column (first one only) without slogging through it column by column. However, your other concern, having to declare a variable for each column, is unnecessary: you can declare a single variable defined as %rowtype, which gives you access to each column by name.
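For example, a minimal sketch using the employee_copy table from the question (the eid value is hypothetical):
declare
    l_row employee_copy%rowtype;   -- one variable, one field per table column
begin
    select *
    into   l_row
    from   employee_copy
    where  eid = 100;              -- hypothetical eid
    dbms_output.put_line(l_row.ename || ' / ' || l_row.salary);
end;
/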
Looping through an array of data to find the occasional error is just bad (imho) when SQL is available to eliminate the good ones in one fell swoop. And it is available here. Even though your input is an array, we can use it as a table via the TABLE operator, which allows an array (collection) to be queried as though it were a database table, so the MINUS operator can still be employed. The following routine will set the appropriate status and identify the first mismatched column for each entry in the input array. It keeps your original definition in the package spec, but replaces the comp_data procedure.
create or replace package body process_data
is
procedure comp_data (inpt_data in p_data_tab)
is
-- define local array to hold status and reason for each input row.
type status_reason_r is record
( eid employee_copy.eid%type
, status employee_copy.status%type
, reason employee_copy.reason%type
);
type status_reason_t is
table of status_reason_r
index by pls_integer;
status_reason status_reason_t;
-- define error array to contain the eid for each that have a mismatched column
type error_eids_t is table of employee_copy.eid%type ;
error_eids error_eids_t;
current_matched_indx pls_integer;
/*
Helper function to identify 1st mismatched column in error row.
Here is where we slug our way through each column to find the first column
value mismatch. Note: there is no way to actually validate the column sequence, but
for our purpose here we'll proceed in the order of the input data type definition.
*/
function identify_mismatch_column(matched_indx_in pls_integer)
return varchar2
is
employee_copy_row employee_copy%rowtype;
mismatched_column employee_copy.reason%type;
begin
select *
into employee_copy_row
from employee_copy
where employee_copy.eid = inpt_data(matched_indx_in).p_eid;
-- now begins the task of finding the mismatched column.
if employee_copy_row.ename != inpt_data(matched_indx_in).p_ename
then
mismatched_column := 'employee_copy.ename';
elsif employee_copy_row.salary != inpt_data(matched_indx_in).p_salary
then
mismatched_column := 'employee_copy.salary';
elsif employee_copy_row.dept != inpt_data(matched_indx_in).p_dept
then
mismatched_column := 'employee_copy.dept';
-- elsif continue until ALL columns tested
end if;
return mismatched_column;
exception
-- NO_DATA_FOUND is the one error that cannot actually be reported in the employee_copy table.
-- It occurs when an eid exists in the input data but does not exist in employee_copy.
when NO_DATA_FOUND
then
dbms_output.put_line( 'Employee (eid)='
|| inpt_data(matched_indx_in).p_eid
|| ' does not exist in employee_copy table.'
);
return 'employee_copy.eid ID is NOT in table';
end identify_mismatch_column;
/*
Helper function to find specified eid in the initial inpt_data array
Since the resulting array of mismatched eids derives from a select without a sort,
there is no guarantee the index values actually match. Nor can we sort to build
the error array, as there is no way to know the order of eid in the initial array.
The following helper identifies the index value in the input array for the specified
eid in error.
*/
function match_indx(eid_in employee_copy.eid%type)
return pls_integer
is
l_at pls_integer := 1;
l_searching boolean := true;
begin
while l_at <= inpt_data.count
loop
exit when eid_in = inpt_data(l_at).p_eid;
l_at := l_at + 1;
end loop;
if l_at > inpt_data.count
then
raise_application_error( -20199, 'Internal error: Find index for ' || eid_in ||' not found');
end if;
return l_at;
end match_indx;
-- Main
begin
-- initialize a status entry for each input row.
-- additionally this results in a status_reason table in a 1:1 relationship with the input array.
for i in 1..inpt_data.count
loop
status_reason(i).eid := inpt_data(i).p_eid;
status_reason(i).status :='SUCCESS';
end loop;
/*
We can assume the majority of data in the input array is valid, meaning the columns match.
We'll eliminate all valid rows by selecting each and then MINUSing those that match on
every column. To accomplish this, cast the input with the TABLE function, allowing its use in SQL.
The following produces an array of eids that have at least 1 column mismatch.
*/
select p_eid
bulk collect into error_eids
from (select p_eid, p_ename, p_salary, p_dept from TABLE(inpt_data)
minus
select eid, ename, salary, dept from employee_copy
) exs;
/*
The error_eids array now contains the eid for each mismatched data item.
Mark the status as failed, then begin the long hard process of identifying
the first column causing the mismatch.
The following loop uses the nested functions to slog its way through.
This keeps the main line logic clear.
*/
for i in 1 .. error_eids.count -- if all inpt_data rows match then count is 0, and we bypass the entire loop
loop
current_matched_indx := match_indx(error_eids(i));
status_reason(current_matched_indx).status := 'FAIL';
status_reason(current_matched_indx).reason := identify_mismatch_column(current_matched_indx);
end loop;
-- update employee_copy with appropriate status for each row in the input data.
-- Except for any eid that is in the error eids array but doesn't exist in the employee_copy table.
forall i in inpt_data.first .. inpt_data.last
update employee_copy
set status = status_reason(i).status
, reason = status_reason(i).reason
where eid = inpt_data(i).p_eid;
end comp_data;
end process_data;
There are a couple of other techniques used here that you may want to look into if you are not familiar with them:
Nested Functions. There are 2 functions defined and used in the procedure.
Bulk Processing. That is Bulk Collect and Forall.
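If those are new, here is a minimal, self-contained sketch of the bulk pattern (it assumes a hypothetical table t with columns id and status):
declare
    type id_tab_t is table of t.id%type index by pls_integer;
    l_ids id_tab_t;
begin
    -- BULK COLLECT: fetch all matching ids in a single round trip
    select id
    bulk collect into l_ids
    from   t
    where  status is null;
    -- FORALL: send all the updates to the SQL engine in one batch
    forall i in 1 .. l_ids.count
        update t
        set    status = 'DONE'
        where  id = l_ids(i);
end;
/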
Good Luck.
ORIGINAL Answer
It is NOT necessary to compare each column, nor to build a string by concatenating. As you indicated, comparing 50 columns becomes pretty heavy, so let the DBMS do most of the lifting. The MINUS operator does exactly what you need.
... the MINUS operator, which returns only unique rows returned by the
first query but not by the second.
Using that, this task needs only 2 updates: 1 to mark "fail", and 1 to mark "success". So try:
create table e( e_id integer
, col1 varchar2(20)
, col2 varchar2(20)
);
create table stage ( e_id integer
, col1 varchar2(20)
, col2 varchar2(20)
, status varchar2(20)
, reason varchar2(20)
);
-- create package spec and body
create or replace package process_data
is
procedure comp_data;
end process_data;
create or replace package body process_data
is
procedure comp_data
is
begin
update stage
set status='failed'
, reason='No matching e row'
where e_id in ( select e_id
from (select e_id, col1, col2 from stage
minus
select e_id, col1, col2 from e
) exs
);
update stage
set status='success'
where status is null;
end comp_data;
end process_data;
-- test
-- populate tables
insert into e(e_id, col1, col2)
select 1, 'ABC', 'def'      from dual union all
select 2, 'No', 'Not any'   from dual union all
select 3, 'ok', 'best ever' from dual union all
select 4, 'xx', 'zzzzzz'    from dual;
insert into stage(e_id, col1, col2)
select 1, 'ABC', 'def'          from dual union all
select 2, 'No', 'Not any more'  from dual union all
select 4, 'yy', 'zzzzzz'        from dual union all
select 5, 'no e', 'nnnnn'       from dual;
-- run procedure
begin
process_data.comp_data;
end;
-- check results
select * from stage;
Don't ask. Yes, you do have to list every column you wish to compare in each of the queries involved in the MINUS operation.
I know the documentation link is old (10gR2), but actually finding Oracle documentation is a royal pain. The MINUS operator still functions the same in 19c.

Executing a procedure in Oracle

I'm trying to write a procedure that inserts paid invoices from the paid_invoices table into the invoice_archive table. Only those paid invoices that are older than or equal to 31-05-2014 should be transferred.
Here's my procedure:
SQL> create or replace procedure paid_invoice_transfer as
cursor paid is
select *
from paid_invoices
where invoice_total = credit_total + payment_total
and payment_date <= '2014-05-31';
invoice_archive_text paid%rowtype;
begin
for invoice_archive_text in paid loop
dbms_output.put_line(invoice_archive_text.invoice_id);
insert into invoice_archive values invoice_archive_text;
end loop;
end;
/
I'm not sure what to execute at this point:
SQL> set serveroutput on;
SQL> execute paid_invoice_transfer(???);
Now that you know how to execute a procedure without an argument, I would also like to point out the problems with your procedure.
It's not required to define a record variable for looping through the cursor.
invoice_archive_text paid%rowtype
PL/SQL automatically creates it when you use it in a FOR loop. I will go a step further and ask you to avoid loops for running INSERTs. Just use a plain INSERT INTO target_table SELECT * FROM source_table and explicitly specify the column names to be safe.
If you want to display the ids that are being inserted through DBMS_OUTPUT.PUT_LINE, store them in a collection and loop through it separately, just for display purposes (only if you think the display is needed).
I also wanted to show you how to pass a date argument for execution, so I will pass payment_date. In your procedure you are wrongly comparing it with a string literal rather than a date; that is bound to fail if the NLS date settings don't match the string format.
CREATE OR REPLACE PROCEDURE paid_invoice_transfer ( p_payment_date DATE ) AS
TYPE tab_inv IS TABLE OF paid_invoices.invoice_id%type;
t_tab tab_inv;
BEGIN
SELECT invoice_id BULK COLLECT INTO t_tab
FROM paid_invoices
WHERE invoice_total = credit_total + payment_total
AND payment_date <= p_payment_date;
FOR i IN 1..t_tab.COUNT LOOP
dbms_output.put_line(t_tab(i));
END LOOP;
INSERT INTO invoice_archive ( column1,column2,column3) select col1,col2,col3
FROM paid_invoices
WHERE invoice_total = credit_total + payment_total
AND payment_date <= p_payment_date;
END;
/
set serveroutput on
execute paid_invoice_transfer(DATE '2014-05-31' )
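execute is a SQL*Plus / SQL Developer shorthand; from any other client you can call the procedure with an anonymous block:
begin
    paid_invoice_transfer(DATE '2014-05-31');
end;
/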

How to have my Oracle PL/SQL block update all entries related to the row_ids referenced by my cursor

I currently have an Oracle stored procedure that is taking data from tables. The problem is that I am aggregating/grouping, and I don't want to grab the IDs, otherwise that will throw the grouping off. I want to update a column called 'correlated_flag_id' to '1' (done) in the value table after I've inserted the aggregated/grouped result set. I only want to grab IDs that are correlated to the
values that my first cursor grabbed to derive the results. Below is my attempt (which I don't think is correct):
Create or Replace PROCEDURE PROC is
CURSOR c1 is
select sum(v.value_tx) as sum_of_values
, max(v.create_dt) as latest_create_dt
, v.data_date
from value v
group by v.data_date, max(v.create_dt)
BEGIN
Open c1;
LOOP
Fetch c1 into l_var;
insert into value (value_id, value_tx, create_dt, data_date)
values (null, l_var.sum_of_values, l_var.latest_create_dt, l_var.data_Date);
END LOOP;
Close c1;
commit;
--- the bottom is not correct, but i've reached a roadblock
Update value
set correlated_flag_id = 777
where value_id in (select v.value_id from value where trunc(create_dt) <> trunc(sysdate)) (???));
commit;
END PROC;
Thanks in advance, and please let me know if there are any more details I need to provide.
the cursor's select is kind of wrong; why are you grouping by a MAX function? That isn't allowed here
switch to a cursor FOR loop, as it is easier to maintain. You don't have to open the cursor, fetch, exit the loop (which you did not do at all) and close the cursor
I'm not sure what the VALUE_930 table is doing here; you never mentioned it
your words say "update the correlated ID to 1", while the code says "update it to 777"
don't commit in a procedure; let the caller decide whether it should be done
I'd suggest you either use a tool which offers code formatting, or format the code yourself. Your procedure is not the result of a flood, so don't treat it that way
Finally, here's a suggestion which might (or might not) work, as we have neither your tables nor your data, but it at least looks decent.
create or replace procedure proc is
begin
for cur_r in (select v.data_date,
sum(v.value_tx) as sum_of_values,
max(v.create_dt) as latest_create_dt
from value v
group by v.data_date)
loop
insert into value (value_id, value_tx, create_dt, data_date)
values (null, cur_r.sum_of_values, cur_r.latest_create_dt, cur_r.data_date);
update value set
correlated_flag_id = 777
where data_date = cur_r.data_date;
end loop;
end proc;
/
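Following the note above about not committing inside the procedure, the decision is then left to the caller, for example:
begin
    proc;
    commit;   -- the caller decides whether (and when) to commit
end;
/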

oracle stored procedure - select, update and return a random set of rows

In Oracle, I wish to select a few rows at random from a table, update a column in those rows and return them using a stored procedure.
PROCEDURE getrows(box IN VARCHAR2, row_no IN NUMBER, work_dtls_out OUT dtls_cursor) AS
v_id VARCHAR2(20);
v_workname VARCHAR2(20);
v_status VARCHAR2(20);
v_work_dtls_cursor dtls_cursor;
BEGIN
OPEN v_work_dtls_cursor FOR
SELECT id, workname, status
FROM item
WHERE status IS NULL
AND rownum <= row_no
FOR UPDATE;
LOOP
FETCH v_work_dtls_cursor
INTO v_id ,v_workname,v_status;
UPDATE item
SET status = 'started'
WHERE id=v_id;
EXIT
WHEN v_work_dtls_cursor % NOTFOUND;
END LOOP;
close v_work_dtls_cursor ;
/* I HAVE TO RETURN THE SAME ROWS WHICH I UPDATED NOW.
SINCE CURSOR IS LOOPED THRU, I CANT DO IT. */
END getrows;
PLEASE HELP
Following up on Sjuul Janssen's excellent recommendation:
create type get_rows_row_type as object
(id [item.id%type],
workname [item.workname%type],
status [item.status%type]
)
/
create type get_rows_tab_type as table of get_rows_row_type
/
create function get_rows (box in varchar2, row_no in number)
return get_rows_tab_type pipelined
as
v_work_dtls_cursor dtls_cursor;
l_out_rec get_rows_row_type;
BEGIN
OPEN v_work_dtls_cursor FOR
SELECT id, workname, status
FROM item sample ([ROW SAMPLE PERCENTAGE])
WHERE status IS NULL
AND rownum <= row_no
FOR UPDATE;
LOOP
FETCH v_work_dtls_cursor
INTO l_out_rec.id, l_out_rec.workname, l_out_rec.status;
EXIT WHEN v_work_dtls_cursor%NOTFOUND;
UPDATE item
SET status = 'started'
WHERE id=l_out_rec.id;
l_out_rec.status := 'started';
PIPE ROW (l_out_rec);
END LOOP;
close v_work_dtls_cursor;
return;
END;
/
A few notes:
This is untested.
You'll need to replace the bracketed section in the type declarations with appropriate types for your schema.
You'll need to come up with an appropriate value in the SAMPLE clause of the SELECT statement; it might be possible to pass that in as an argument, but that may require using dynamic SQL. However, if your requirement is to get random rows from the table -- which just filtering by ROWNUM will not accomplish -- you'll want to do something like this.
Because you're SELECTing FOR UPDATE, one session can block another. If you're in 11g, you may wish to examine the SKIP LOCKED clause of the SELECT statement, which will enable multiple concurrent sessions to run code like this.
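For reference, the SKIP LOCKED variant of that cursor query would look something like this (a sketch against the same ITEM table; the batch size is hypothetical):
SELECT id, workname, status
FROM item
WHERE status IS NULL
AND rownum <= 10              -- hypothetical batch size
FOR UPDATE SKIP LOCKED;       -- rows locked by other sessions are simply skipped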
Not sure where you are doing your committing, but based on the code as it stands all you should need to do is SELECT ... FROM ITEM WHERE STATUS='started'
If it is small numbers, you could keep a collection of ROWIDs.
if it is larger, then I'd do an
INSERT into a global temporary table SELECT id FROM item .. AND ROWNUM < n;
UPDATE item SET status = .. WHERE id in (SELECT id FROM global_temp_table);
Then return a cursor of
SELECT ... FROM item WHERE id in (SELECT id FROM global_temp_table);
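A rough sketch of that approach inside the existing procedure (it assumes a global temporary table, here called item_gtt, created beforehand):
-- one-time DDL, outside the procedure:
-- create global temporary table item_gtt (id varchar2(20)) on commit delete rows;
INSERT INTO item_gtt (id)
SELECT id FROM item
WHERE status IS NULL
AND ROWNUM <= row_no;
UPDATE item
SET status = 'started'
WHERE id IN (SELECT id FROM item_gtt);
OPEN work_dtls_out FOR
    SELECT id, workname, status
    FROM item
    WHERE id IN (SELECT id FROM item_gtt);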
Maybe this can help you to do what you want?
http://it.toolbox.com/blogs/database-solutions/returning-rows-through-a-table-function-in-oracle-7802
A possible solution:
create type nt_number as table of number;
PROCEDURE getrows(box IN VARCHAR2,
row_no IN NUMBER,
work_dtls_out OUT dtls_cursor) AS
v_item_rows nt_number;
indx number;
cursor cur_work_dtls_cursor is
SELECT id
FROM item
WHERE status IS NULL
AND rownum <= row_no
FOR UPDATE;
BEGIN
open cur_work_dtls_cursor;
fetch cur_work_dtls_cursor bulk collect into v_item_rows;
for indx in 1 .. v_item_rows.count loop
UPDATE item
SET status = 'started'
WHERE id=v_item_rows(indx);
END LOOP;
close cur_work_dtls_cursor;
open work_dtls_out for select id, workname, status
from item i, table(v_item_rows) t
where i.id = t.column_value;
END getrows;
If the number of rows is particularly large, the global temporary solution may be better.
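Whichever variant you choose, the caller consumes the OUT ref cursor in the usual way; a sketch, assuming getrows and the dtls_cursor ref cursor type live in a package called work_pkg (a hypothetical name):
declare
    l_cur      work_pkg.dtls_cursor;
    l_id       item.id%type;
    l_workname item.workname%type;
    l_status   item.status%type;
begin
    work_pkg.getrows(box => 'SOME_BOX', row_no => 10, work_dtls_out => l_cur);
    loop
        fetch l_cur into l_id, l_workname, l_status;
        exit when l_cur%notfound;
        dbms_output.put_line(l_id || ': ' || l_status);
    end loop;
    close l_cur;
end;
/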
