The program below works only for the first two rows; for the other rows it shows incorrect values (PL/SQL, Oracle)

In the program below I need to carry amt_running_bal forward from the previous value, but it is not working. What is the error in this program, and what would be a solution?
DECLARE
   total             NUMBER := 1000000;
   c_cod_acct_no     CHAR;
   c_amt_txn         NUMBER;
   c_cod_drcr        CHAR;
   c_amt_running_bal NUMBER;
   amt_running_bal   NUMBER;
   CURSOR c_chnos1 IS
      SELECT cod_drcr, amt_txn, amt_running_bal FROM chnos1;
BEGIN
   OPEN c_chnos1;
   FOR k IN 1..2 LOOP
      FETCH c_chnos1 INTO c_cod_drcr, c_amt_txn, c_amt_running_bal;
      IF c_cod_drcr = 'C' THEN
         total := total + c_amt_txn;
         UPDATE chnos1 SET amt_running_bal = total WHERE cod_drcr = 'C';
      ELSIF c_cod_drcr = 'D' THEN
         total := total - c_amt_txn;
         UPDATE chnos1 SET amt_running_bal = total WHERE cod_drcr = 'D';
      ELSE
         total := total + c_amt_txn;
         UPDATE chnos1 SET amt_running_bal = total WHERE cod_drcr = 'C';
      END IF;
   END LOOP;
   CLOSE c_chnos1;
END;
/

Your query does not work because you limit the loop to k IN 1..2, so it only reads two rows from the cursor, and there is no correlation between the row you read from the cursor and the rows you update; in fact, you update all the rows WHERE cod_drcr = 'C' or WHERE cod_drcr = 'D', not just the current row. You could fix it by correlating each update to the current row using the ROWID pseudo-column, as sketched below, but a cursor loop is an inefficient solution: it is slow and generates redo/undo log entries for each iteration.
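For completeness, a minimal sketch of that cursor-based fix, correlating each UPDATE to the fetched row via ROWID and looping over every row rather than just two, might look like this (same chnos1 table and columns as above; it remains the slow option):
DECLARE
   total NUMBER := 1000000;
BEGIN
   FOR r IN (SELECT ROWID AS rid, cod_drcr, amt_txn FROM chnos1) LOOP
      IF r.cod_drcr = 'D' THEN
         total := total - r.amt_txn;   -- debits reduce the running balance
      ELSE
         total := total + r.amt_txn;   -- credits (and anything else, as in the original ELSE branch) increase it
      END IF;
      UPDATE chnos1 SET amt_running_bal = total WHERE ROWID = r.rid;  -- touch only the current row
   END LOOP;
END;
/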
Instead, do it all in a single MERGE statement using an analytic SUM and a CASE expression:
MERGE INTO chnos1 dst
USING (
   SELECT ROWID AS rid,
          1000000
          + SUM(
               CASE cod_drcr
                  WHEN 'C' THEN +amt_txn
                  WHEN 'D' THEN -amt_txn
                  ELSE 0
               END
            )
            OVER (
               -- Use something like this to update each account:
               --   PARTITION BY cod_acct_no ORDER BY payment_date
               -- However, you haven't said how to partition or order the rows, so use this:
               ORDER BY ROWNUM
            ) AS total
     FROM chnos1
) src
ON (dst.ROWID = src.rid)
WHEN MATCHED THEN
   UPDATE SET amt_running_bal = src.total;
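To sanity-check the totals before updating anything, roughly the same analytic expression can be run as a plain query first (a sketch against the same chnos1 table; the caveat about ORDER BY ROWNUM in the comment above applies here as well):
SELECT cod_drcr,
       amt_txn,
       1000000
       + SUM(CASE cod_drcr WHEN 'C' THEN amt_txn
                           WHEN 'D' THEN -amt_txn
                           ELSE 0 END)
         OVER (ORDER BY ROWNUM) AS new_running_bal
  FROM chnos1;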

Related

Oracle: Update Every Row in a Table based off an Array

I'm trying to create some seed data for a database that uses zip codes. I've created an array of 22 arbitrary zip code strings, and I'm trying to loop through the array and assign one of the zips to every row in a table. Based on what I read and tried (I'm a 1st year, so I'm probably missing something), this should work, and it does when I just output the array value based on the count of the table. The issue is in the ROWID subquery. When I run it in my console, it doesn't throw any errors, but it never completes and I think it's stuck in an infinite loop. How can I adjust this so that it updates the field and doesn't get stuck?
declare
   t_count NUMBER;
   TYPE zips IS VARRAY(22) OF CHAR(5);
   set_of_zips zips;
   i NUMBER;
   j NUMBER := 1;
BEGIN
   SELECT count(*) INTO t_count FROM T_DATA;
   set_of_zips := zips('72550', '71601', '85920', '85135', '95451', '90021', '99611', '99928', '35213', '60475', '80451', '80023', '59330', '62226', '27127', '28006', '66515', '27620', '66527', '15438', '32601', '00000');
   FOR i IN 1 .. t_count LOOP
      UPDATE T_DATA
         SET T_ZIP = set_of_zips(j)
      ---
       WHERE rowid IN (
               SELECT ri FROM (
                  SELECT rowid AS ri
                    FROM T_DATA
                   ORDER BY T_ZIP
               )
             ) = i;
      ---
      j := j + 1;
      IF j > 22 THEN
         j := 1;
      END IF;
   END LOOP;
   COMMIT;
end;
You don't need PL/SQL for this.
UPDATE t_data
SET t_zip = DECODE(MOD(ROWNUM,22)+1,
1,'72550',
2,'71601',
3,'85920',
4,'85135',
5,'95451',
6,'90021',
7,'99611',
8,'99928',
9,'35213',
10,'60475',
11,'80451',
12,'80023',
13,'59330',
14,'62226',
15,'27127',
16,'28006',
17,'66515',
18,'27620',
19,'66527',
20,'15438',
21,'32601',
22,'00000');
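If the exercise is specifically about the PL/SQL loop, a working row-by-row variant, correlating each UPDATE to one row via ROWID instead of the invalid IN (...) = i predicate, might look like the sketch below; it assumes the same T_DATA table and T_ZIP column from the question, and it will be slower than the single UPDATE above:
declare
   TYPE zips IS VARRAY(22) OF CHAR(5);
   set_of_zips zips := zips('72550', '71601', '85920', '85135', '95451', '90021', '99611', '99928', '35213', '60475', '80451',
                            '80023', '59330', '62226', '27127', '28006', '66515', '27620', '66527', '15438', '32601', '00000');
   j NUMBER := 1;
begin
   FOR r IN (SELECT rowid AS rid FROM T_DATA ORDER BY T_ZIP) LOOP
      UPDATE T_DATA SET T_ZIP = set_of_zips(j) WHERE rowid = r.rid;  -- exactly one row per iteration
      j := j + 1;
      IF j > set_of_zips.COUNT THEN   -- cycle back to the first zip after the 22nd
         j := 1;
      END IF;
   END LOOP;
   COMMIT;
end;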

PL/SQL cannot delete multiple rows

I have written simple PL/SQL code to delete multiple rows from a table, but the code below deletes only one row every time I run it.
DECLARE
   i number(2);
BEGIN
   FOR i IN 1..4 LOOP
      DELETE FROM table_name WHERE rownum = i;
      dbms_output.put_line('i is: ' || i);
   END LOOP;
END;
Can someone please suggest what is wrong with the code?
ROWNUM is the nth row read.
select * from table_name where rownum = 1;
gets you the first row.
select * from table_name where rownum <= 2;
gets you the first two rows.
select * from table_name where rownum = 2;
gets you no rows, because you cannot read a second row without having read a first one.
This said, you'd have to replace
DELETE FROM table_name WHERE rownum = i;
with
DELETE FROM table_name WHERE rownum = 1;
But why would you do this anyway? Why delete arbitrarily picked records? Why use PL/SQL at all, rather than a mere DELETE FROM table_name WHERE rownum <= 4;?
What you need to understand is how Oracle processes ROWNUM. When assigning ROWNUM to a row, Oracle starts at 1 and only increments the value when a row is selected, that is, when all conditions in the WHERE clause are met. Because the condition here requires ROWNUM to equal a value greater than 1 on the later iterations, no row is ever selected and ROWNUM never advances past 1.
If you really do want to achieve this using PL/SQL and not a plain SQL query, as my friend Throsten has stated, then please find a workaround below.
I created a dummy table test_c which holds one column (ID, of type NUMBER).
set serveroutput on;
DECLARE
   i number(2);
   j number(2);
   counter number(10) := 0;
BEGIN
   FOR i IN 5..11 LOOP
      if counter = 0 then
         j := i;
      end if;
      DELETE FROM test_c WHERE ID = (select id from (select id, rownum as ro from test_c order by id) where ro = j);
      dbms_output.put_line('i is: ' || i);
      counter := counter + 1;
   END LOOP;
END;
Please note that this is not the right way to do it, but it will work for your requirement.
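For comparison, deleting the 5th through 11th rows (ordered by ID) can also be done in one statement without a loop; this is a sketch against the same test_c table:
DELETE FROM test_c
 WHERE id IN (SELECT id
                FROM (SELECT id, ROW_NUMBER() OVER (ORDER BY id) AS rn
                        FROM test_c)
               WHERE rn BETWEEN 5 AND 11);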

Performance issue using while loop in insert statement

I am using a TRY...CATCH block to catch the rows with constraint errors.
For example, if a null is inserted into a NOT NULL column, a duplicate record is inserted, or a type mismatch occurs, all source records with errors should go to an error-log table and the rest of the records should go to the destination table.
Because of the TRY...CATCH I can't use a single bulk insert, so I am inserting row by row in a WHILE loop, which takes forever since I have to insert 3,000,000 records.
Is there any way to improve the performance of the loop so it can insert 3,000,000 records in minimal time? Currently it takes 2 hours or more :(
Try conducting your insert in batches. For instance, loop and attempt to insert 10,000/1,000/100 records at a time as a bulk insert. If there is an error in a batch, catch it and re-execute that batch as a row-by-row operation. You will have to play with the batch size and make it small enough that the majority of batches are processed as bulk inserts and only the occasional batch has to be handled row by row.
The following demonstrates handling a pile of sample data in batches with "binary search" on the batch size in the event of an error.
set nocount on;

-- Set the processing parameters.
declare @InitialBatchSize as Int = 1024;
declare @BatchSize as Int = @InitialBatchSize;

-- Create some sample data with somewhat random Divisor values.
declare @RowsToProcess as Int = 10000;
declare @SampleData as Table ( Number Int, Divisor Int );
with Digits as ( select Digit from ( values (0), (1), (2), (3), (4), (5), (6), (7), (8), (9) ) as Digits( Digit ) ),
  Numbers as (
    select ( ( ( Ten_4.Digit * 10 + Ten_3.Digit ) * 10 + Ten_2.Digit ) * 10 + Ten_1.Digit ) * 10 + Ten_0.Digit + 1 as Number
      from Digits as Ten_0 cross join Digits as Ten_1 cross join Digits as Ten_2 cross join
           Digits as Ten_3 cross join Digits as Ten_4 )
  insert into @SampleData
    select Number, Abs( Checksum( NewId() ) ) % 1000 as Divisor -- Adjust "1000" to vary the chances of a zero divisor.
      from Numbers
      where Number < @RowsToProcess;

-- Process the data.
declare @FailedRows as Table ( Number Int, Divisor Int, ErrorMessage NVarChar(2048) );
declare @BitBucket as Table ( Number Int, Divisor Int, Quotient Int );
declare @RowCount as Int = 1; -- Force at least one loop execution.
declare @LastProcessedNumber as Int = 0;
while @RowCount > 0
  begin
  begin try
    -- Subject-to-failure INSERT.
    insert into @BitBucket
      select top ( @BatchSize ) Number, Divisor, 1 / Divisor as Quotient
        from @SampleData
        where Number > @LastProcessedNumber
        order by Number;
    set @RowCount = @@RowCount;
    select @LastProcessedNumber = Max( Number ) from @BitBucket;
    print 'Processed ' + Cast( @RowCount as VarChar(10) ) + ' rows.';
  end try
  begin catch
    if @BatchSize > 1
      begin
      -- Try a smaller batch.
      set @BatchSize /= 2;
      end
    else
      begin
      -- This is a failing row. Log it with the error and reset the batch size.
      set @LastProcessedNumber += 1;
      print 'Row failed. Row number ' + Cast( @LastProcessedNumber as VarChar(10) ) + ', error: ' + Error_Message() + '.';
      insert into @FailedRows
        select Number, Divisor, Error_Message()
          from @SampleData
          where Number = @LastProcessedNumber;
      set @BatchSize = @InitialBatchSize;
      end
  end catch
  end;

-- Dump the results.
select * from @FailedRows order by Number;
select * from @SampleData order by Number;
select * from @BitBucket order by Number;

Is there any impact of an UPDATE statement on a FOR loop in Oracle?

I have a nested FOR loop that iterates over the same table. In the inner loop I update a column of that same table. The FOR loop condition checks that updated column, and I need the check to see the new value dynamically, not just the value at the start, so that the number of loop iterations may decrease greatly.
Am I doing this correctly, or will the FOR statement not see the updated column?
declare
   control number(1);
   dup number(10);
   res varchar2(5); -- TRUE or FALSE
BEGIN
   dup := 0;
   control := 0;
   FOR aRow IN (SELECT MI_PRINX, geoloc, durum, ROWID FROM ORAHAN where durum = 0)
   LOOP
      FOR bRow IN (SELECT MI_PRINX, geoloc, ROWID FROM ORAHAN WHERE ROWID > aRow.ROWID AND durum = 0)
      LOOP
         BEGIN
            --dbms_output.put_line('aRow' || aRow.Mi_Prinx || ' bRow' || bRow.Mi_Prinx);
            select SDO_GEOM.RELATE(aRow.geoloc, 'anyinteract', bRow.Geoloc, 0.02) into res from dual;
            if (res = 'TRUE') THEN
               Insert INTO ORAHANCROSSES values (aRow.MI_PRINX, bRow.MI_PRINX);
               UPDATE ORAHAN SET DURUM = 1 where rowid = bRow.Rowid;
               control := 1;
               --dbms_output.put_line(' added');
            END IF;
         EXCEPTION
            WHEN DUP_VAL_ON_INDEX THEN
               dup := dup + 1;
               --dbms_output.put_line('duplicate');
               --continue;
         END;
      END LOOP;
      IF (control = 1) THEN
         UPDATE ORAHAN SET DURUM = 1 WHERE rowid = aRow.Rowid;
      END IF;
      control := 0;
   END LOOP;
   dbms_output.put_line('duplicate: ' || dup);
END;
Note: I use Oracle 11g and PL/SQL Developer.
Sorry for my English.
Yes, the FOR statement will not see the updated DURUM column, because the FOR statement sees all data as it was when the query started! This is called read consistency, and Oracle accomplishes it by using the generated UNDO data. That means it has more and more work to do (i.e. runs slower) as your FOR loop advances and the base table is updated!
It also means that your implementation will eventually run into an ORA-01555: snapshot too old error when the UNDO tablespace is exhausted.
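As a small illustration of that read consistency (a sketch against a hypothetical table t with a numeric column n, not the poster's schema): even after the loop body has set every n to 1, the remaining fetches still return the values n had when the cursor was opened.
BEGIN
   FOR r IN (SELECT n FROM t) LOOP
      dbms_output.put_line('fetched n = ' || r.n);  -- value as of the moment the cursor query started
      UPDATE t SET n = 1;                           -- not visible to the remaining fetches of this cursor
   END LOOP;
END;
/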
You'll probably be better off using a SQL MERGE statement, which should also run much faster.
e.g.:
Merge Into ORAHANCROSSES C
Using (Select aROW.MI_PRINX aROW_MI_PRIX,
aROW.GEOLOC aROW_GEOLOC,
bROW.MI_PRINX bROW_MI_PRIX,
bROW.GEOLOC bROW_GEOLOC,
SDO_GEOM.RELATE(aRow.geoloc,'anyinteract', bRow.Geoloc,0.02) RES
From ORAHAN aROW,
ORAHAN bROW
Where aROW.ROWID < bROW.ROWID
) Q
On (C.MI_PRIX1 = Q.aROW_MI_PRIX
and C.MI_PRIX2 = Q.bROW_MI_PRIX)
When Matched Then
Delete Where Q.RES = 'FALSE'
When Not Matched Then
Insert Values (Q.aROW_MI_PRIX, Q.bROW_MI_PRIX)
Where Q.RES = 'TRUE'
;
I'm not sure what you're trying to accomplish with ROWID > aRow.ROWID, though.
To use a certain order (in this case MI_PRINX) use the following technique:
Merge Into ORAHANCROSSES C
Using (With D as (select T.*, ROWNUM RN from (select MI_PRINX, GEOLOC from ORAHAN order by MI_PRINX) T)
Select aROW.MI_PRINX aROW_MI_PRIX,
aROW.GEOLOC aROW_GEOLOC,
bROW.MI_PRINX bROW_MI_PRIX,
bROW.GEOLOC bROW_GEOLOC,
SDO_GEOM.RELATE(aRow.geoloc,'anyinteract', bRow.Geoloc,0.02) RES
From D aROW,
D bROW
Where aROW.RN < bROW.RN
) Q
On (C.MI_PRIX1 = Q.aROW_MI_PRIX
and C.MI_PRIX2 = Q.bROW_MI_PRIX)
When Matched Then
Delete Where Q.RES = 'FALSE'
When Not Matched Then
Insert Values (Q.aROW_MI_PRIX, Q.bROW_MI_PRIX)
Where Q.RES = 'TRUE'
;
In case the query takes too long, you can select * from v$session_longops where seconds_remaining > 0 to find out when it will finish.

Increment Multiple Timestamp Values with PL/SQL

I don't know Oracle at all, but I need to write something like this:
MySQL:
SET @serial := 1;
UPDATE table1 SET t = t + INTERVAL (@serial := @serial + 1) SECOND;
Update and increment a timestamp by one second for all records. How do I do this in Oracle?
Question Update:
My wording did not explain my problem well enough.
I want to have a TIMESTAMP variable, then go through all records, incrementing this variable by one second for each record updated.
As per my understanding, it should be like this:
DECLARE
   serial number := 1;
BEGIN
   update table1 set t = t + ((serial + rownum - 1) / 86400);
END;
This will increment as below (given that serial starts from 1):
1st row -> 1 sec
2nd row -> 2 sec
.
.
nth row -> n sec
Another way would be
update table1 set t = t + interval '1' second;
Read more about Interval literals
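If t is a TIMESTAMP column rather than a DATE, the rownum-based update above can use an interval instead of number arithmetic (adding a plain number implicitly converts the TIMESTAMP to a DATE and drops fractional seconds); a sketch assuming the same table1 and column t:
update table1 set t = t + numtodsinterval(rownum, 'SECOND');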
As per your update, it should be
DECLARE
   t_update_time date := sysdate;
BEGIN
   update table1 set t = t_update_time + interval '1' second;
END;
This snippet assigns the current datetime to the t_update_time variable and updates each record to that datetime plus 1 second. Change the t_update_time assignment accordingly.
Without the PL/SQL block it would be:
update table1 set t=to_date('21.01.2015 09:00:00','dd.mm.rrrr hh:mi:ss') + interval '1' second;
It's pretty simple actually. Just do:
update table1 set t = t + 1/86400;
After question update, you can do:
DECLARE
   t_serial number;
   cursor c is select * from table1 for update of t;
   cr table1%rowtype;
BEGIN
   t_serial := 1;
   for cr in c loop
      UPDATE table1 SET t = t + t_serial / 86400 WHERE CURRENT OF c;  -- add t_serial seconds to the current row
      t_serial := t_serial + 1;
   end loop;
END;
