Need suggestion on a job run in Oracle 10g

I have a job running in an Oracle 10g production DB which syncs two DB tables, A and B. The job fetches data from table A and inserts it into table B. The job runs on a daily basis, and for the past few months it has been failing in production with the error "Error in retrieving data -54" (ORA-00054: resource busy). On checking the stored procedure, I could see that the job fails due to a record-locking issue: other jobs lock records in table A and our job is not able to process them. So I started looking for possible solutions, which I have listed below.
1) Change the running time of the job so that it can process records. But this will not help, since table A is very critical, always used by production jobs, and also receives real-time updates from users.
2) Instead of NOWAIT, use SKIP LOCKED so that the job skips the locked records and runs fine. The problem here is that if locked records (always negligible compared to the huge production volume) are skipped, the data in tables A and B will not match for that day. The next day's run clears this up, since the job also picks up unprocessed records from previous days, but the slight mismatch on the days the job failed may cause small problems.
3) Let the job wait until all the records are unlocked and processed. But this again causes problems, since we cannot predict how long the job will stay in a waiting (long-running) state.
As of now, the one workable solution for me is to go with option 2 and accept the slight deviation between table A's and table B's data. Is there any other way in an Oracle 10g DB to run the job without failing or running long, while still processing all records? I would appreciate technical guidance on this.
Thanks
PB

I'd handle the exception (note that you'll have to either initialise your own EXCEPTION or handle OTHERS and inspect the SQLCODE) and track the IDs of the rows that were skipped. That way you can retry them once all the available records have been processed.
Something like this:
DECLARE
  row_is_locked EXCEPTION;
  PRAGMA EXCEPTION_INIT(row_is_locked, -54);

  -- A nested table rather than a VARRAY(1), which could hold only one id
  TYPE t_id_type IS TABLE OF test_table_a.id%TYPE;
  l_locked_ids t_id_type := t_id_type();
  l_row        test_table_a%ROWTYPE;
BEGIN
  FOR i IN (
    SELECT a.id
    FROM   test_table_a a
  )
  LOOP
    BEGIN
      -- Simulating your processing that requires locks
      SELECT *
      INTO   l_row
      FROM   test_table_a a
      WHERE  a.id = i.id
      FOR UPDATE NOWAIT;

      INSERT INTO test_table_b
      VALUES l_row;

      -- This is on the basis that you're committing
      -- to release the lock on each row after you've
      -- processed it; may not be necessary in your case
      COMMIT;
    EXCEPTION
      WHEN row_is_locked THEN
        -- Extend first, then write into the new last slot
        l_locked_ids.EXTEND;
        l_locked_ids(l_locked_ids.LAST) := i.id;
    END;
  END LOOP;

  IF l_locked_ids.COUNT > 0 THEN
    FOR i IN l_locked_ids.FIRST .. l_locked_ids.LAST LOOP
      -- Reconcile the remaining ids here
      NULL;
    END LOOP;
  END IF;
END;
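If you do go with option 2 instead, here is a minimal sketch of the SKIP LOCKED variant, using the same hypothetical test_table_a/test_table_b names as above; locked rows are simply not returned, and the next run picks them up:
DECLARE
  -- A cursor that silently skips rows other sessions have locked
  CURSOR c_unlocked IS
    SELECT a.id
    FROM   test_table_a a
    FOR UPDATE SKIP LOCKED;
  l_row test_table_a%ROWTYPE;
BEGIN
  FOR r IN c_unlocked LOOP
    -- The row is already locked by the cursor, so this read is safe
    SELECT *
    INTO   l_row
    FROM   test_table_a a
    WHERE  a.id = r.id;

    INSERT INTO test_table_b
    VALUES l_row;
  END LOOP;

  -- Commit once at the end: committing inside the loop would
  -- invalidate the FOR UPDATE cursor (fetch across commit)
  COMMIT;
END;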

Related

Submitted Oracle job using dbms_job.submit and it failed but I don't know where to look for an error message

We are initiating the rebuild of many materialized views by using dbms_job.submit to execute a stored procedure that performs the rebuild. However, I am having trouble figuring out how to determine whether a submitted job failed. The job is failing, but I cannot identify what the issue is. So I am starting with a simple example, which is probably failing on a permission issue, but I don't know where to look for the error message.
I have the following test procedure that I want to initiate using dbms_job.submit.
CREATE OR REPLACE PROCEDURE MYLANID.JUNKPROC
AS
  lv_msg varchar2(3000);
BEGIN
  INSERT INTO MYLANID.junk_log ( msg ) VALUES ('Hello World' );
  commit;
EXCEPTION
  WHEN OTHERS THEN
    lv_msg := SUBSTR(sqlerrm, 1, 3000);
    INSERT INTO MYLANID.junk_log ( msg ) VALUES (lv_msg);
END;
/
Note that this table is used above:
CREATE TABLE MYLANID.JUNK_LOG (
  EVENT_TIME TIMESTAMP(6) DEFAULT systimestamp,
  MSG        VARCHAR2(3000 BYTE))
To submit the above procedure as a job, I execute the following anonymous block.
declare
  l_jobid binary_integer;
BEGIN
  dbms_job.submit(job => l_jobid, what => 'BEGIN MYLANID.JUNKPROC; END;');
  DBMS_OUTPUT.PUT_LINE('l_jobid:' || l_jobid);
  commit;
END;
I then execute the following SQL...
select * from all_jobs;
...to see one record that represents my submitted job. When I re-query the all_jobs view, I see that this record quickly disappears from the view within a few seconds, presumably when the job completes. All is happy so far. I would like to use the presence of a record in the all_jobs view to determine whether a submitted job is running or has failed. I expect to be able to tell that it failed by the ALL_JOBS.FAILURES column having a non-null value > 0.
The problem, probably a permission issue, begins when I switch to another schema: I take all of the occurrences of "MYLANID" in the SQL above and replace them with "ANOTHERSCHEMA", a schema that I also have access to. For example, I create the following:
Table: ANOTHERSCHEMA.JUNK_LOG
Procedure: ANOTHERSCHEMA.JUNKPROC
I am even able to execute the stored procedure successfully in a query window while logged in as MYLANID:
EXEC ANOTHERSCHEMA.JUNKPROC
However, if I execute the following code to submit a job that runs the same ANOTHERSCHEMA procedure...
declare
  l_jobid binary_integer;
BEGIN
  dbms_job.submit(job => l_jobid, what => 'BEGIN ANOTHERSCHEMA.JUNKPROC; END;');
  DBMS_OUTPUT.PUT_LINE('l_jobid:' || l_jobid);
  commit;
END;
...then, when I query the ALL_JOBS view...
select * from all_jobs;
...I see that the job has a positive value in the FAILURES column, and I have no record of what the error was. This failure count continues to increment gradually over time as Oracle presumably retries (up to 16 times?) until the job is marked BROKEN in the ALL_JOBS view.
But this is just a simple example, and I don't know where to look for the error message that would tell me why the job using the ANOTHERSCHEMA references failed.
Where do I look for the error log of failed jobs? I'm wondering if this is somewhere only the DBA can see...
Update:
The above is just a simple test example. In my actual real-world situation, my log shows that the job was submitted, but I never see anything in USER_JOBS or even DBA_JOBS, which should show everything. I don't understand why the dbms_job.submit procedure would return the job number of the submitted job, indicating that it was submitted, while no job is visible in the DBA_JOBS view! The job that I submitted should have taken a long time to run, so I don't expect that it completed faster than I could notice.
First off, you probably shouldn't be using dbms_job. That package has been superseded for some time by the dbms_scheduler package which is significantly more powerful and flexible. If you are using Oracle 19c or later, Oracle automatically migrates dbms_job jobs to dbms_scheduler.
If you are using an Oracle version prior to 19c and a dbms_job job fails, the error information is written to the database alert log. That tends to be a bit of a pain to query from SQL, particularly if you're not a DBA. You can define an external table that reads the alert log to make it queryable. Assuming you're on 11g, there is a view, x$dbgalertext, that presents the alert log information in a way that you can query it, but DBAs generally aren't going to rush to give users permission on x$ tables.
If you use dbms_scheduler instead (or if you are on 19c or later and your dbms_job jobs get converted to dbms_scheduler jobs), errors are written to dba_scheduler_job_run_details. dbms_scheduler in general gives you a lot more logging information than dbms_job does so you can see things like the history of successful runs without needing to add a bunch of instrumentation code to your procedures.
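For illustration, here is a minimal sketch of a dbms_scheduler equivalent of the test job above, plus the query for its run history (the job name is made up; JUNKPROC is the procedure from the question):
BEGIN
  dbms_scheduler.create_job(
    job_name   => 'MYLANID.JUNK_JOB',              -- arbitrary job name
    job_type   => 'PLSQL_BLOCK',
    job_action => 'BEGIN MYLANID.JUNKPROC; END;',
    enabled    => TRUE);                           -- runs once, immediately
END;
/
-- Status and error details for each run, including the ORA- message:
SELECT job_name, status, error#, additional_info
FROM   user_scheduler_job_run_details
ORDER  BY log_date DESC;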

Is there any alternative of skip locked for update in oracle

I have 5 rows in a table, and some of the rows are locked by other sessions.
I don't want to raise any error; I just want to wait until a row becomes free for further processing.
I tried NOWAIT and SKIP LOCKED:
NOWAIT: the query is written in a cursor, and when I used NOWAIT in the cursor, the query returned nothing and control exited with an error saying "resource busy".
SKIP LOCKED with FOR UPDATE: if the table contains 5 rows and all 5 rows are locked, it raises an error.
CURSOR cur_name_test IS
  SELECT def.id, def.name
  FROM   def_map def
  WHERE  def.id = In_id
  FOR UPDATE skip locked;
Why not use SELECT ... FOR UPDATE only? The below was tested locally in PL/SQL Developer.
In the first session I do the following:
SELECT id, name
FROM   ex_employee
FOR UPDATE;
In the second session I run the following; however, it hangs:
SET serveroutput ON size 2000
/
begin
  declare
    cursor cur_name_test is
      select id, name
      from   ex_employee
      where  id = 1
      for update;
  begin
    for i in cur_name_test loop
      dbms_output.put_line('inside cursor');
    end loop;
  end;
end;
/
commit
/
When I commit in the first session, the lock is released and the second session does its work. I guess that is what you want: an infinite wait.
However, such a locking mechanism (pessimistic locking) can lead to deadlocks if it is not managed correctly and carefully (first session waiting on the second session, and second session waiting on the first).
As for NOWAIT, it is normal to get the "resource busy" error, because you are telling the query not to wait if there is a lock. You can instead use WAIT 30 (see the sketch below), which waits 30 seconds and then raises an error, but that is not what you want (I guess).
As for SKIP LOCKED, the select skips the locked data: for example, if you have 5 rows and one of them is locked, the select will not read that row. That is why, when all the rows are locked, it throws an error: nothing can be skipped. I guess that is also not what you want in your scenario.
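For reference, a sketch of that WAIT variant, reusing the cursor from the question (In_id stands for the bound value, as in the original; ORA-30006 is raised when the timeout expires):
DECLARE
  wait_timeout EXCEPTION;
  PRAGMA EXCEPTION_INIT(wait_timeout, -30006);  -- resource busy; WAIT timeout expired

  CURSOR cur_name_test IS
    SELECT def.id, def.name
    FROM   def_map def
    WHERE  def.id = In_id
    FOR UPDATE WAIT 30;  -- block for up to 30 seconds waiting for the locks
BEGIN
  FOR r IN cur_name_test LOOP
    NULL;  -- process the row here
  END LOOP;
EXCEPTION
  WHEN wait_timeout THEN
    NULL;  -- rows stayed locked for 30 seconds: log it and retry later
END;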
This sounds like you need to think about transaction control.
If you are doing work in a transaction, the implication is that that unit of work needs to complete in order to be valid.
What you are saying is that some of the work in your update transaction doesn't need to complete in order for the transaction to be committed.
Not only that, but you have two transactions running at the same time performing operations against the same object. In itself that may be valid, but if it is, then you really need to go back to the first sentence, think hard about transaction control and process flow, and see if there's a way to have the second transaction only attempt to update rows that aren't being updated in the first transaction.

Translation of a DB2 trigger to an Oracle trigger: ORA-04091 error

So I have two tables: JOBS and TASKS.
The TASKS table has a TASK_STATUS column that stores a progression of states (e.g. 'Submitting', 'Submitted', 'Authored', 'Completed').
The JOBS table has a JOB_STATUS column which is a roll-up (i.e. the minimum state) of the TASK_STATUS columns in the TASKS table. The JOBS table also has a TASK_COUNT value that contains the number of TASKS associated with a job.
Jobs can have one or more tasks: JOB_ID in each table links them.
In DB2, I have a series of simple triggers to roll-up this state; here's one in the middle:
create or replace trigger JOB_AUTHORED
  after update of TASK_STATUS on TASKS
  referencing NEW as N
  for each row
  when (TASK_STATUS = 'Authored')
  update JOBS
  set    JOB_STATUS = 'Authored'
  where  JOBS.JOB_ID = N.JOB_ID
  and    TASK_COUNT = (
           select count(0) from TASKS
           where TASKS.JOB_ID = N.JOB_ID
           and   TASKS.TASK_STATUS in ('Authored','Completed'))
This works fine in DB2 because the trigger runs in the same work unit as the triggering event and thus it can see the work unit's uncommitted changes and can count the TASK_STATUS change that just occurred without hitting a row-lock.
Here's the translated trigger in Oracle:
create or replace trigger JOB_AUTHORED
  after update of TASK_STATUS on TASKS
  for each row
  when (NEW.TASK_STATUS = 'Authored')
BEGIN
  update JOBS
  set    JOB_STATUS = 'Authored'
  where  JOBS.JOB_ID = :NEW.JOB_ID
  and    TASK_COUNT = (
           select count(0) from TASKS
           where TASKS.JOB_ID = :NEW.JOB_ID
           and   TASKS.TASK_STATUS in ('Authored','Completed'));
END;
In Oracle this fails with:
ORA-04091: table MYSCHEMA.TASKS is mutating, trigger/function may not see it
ORA-06512: at "MYSCHEMA.JOB_AUTHORED", line 1
ORA-04088: error during execution of trigger 'MYSCHEMA.JOB_AUTHORED'
[query: UPDATE TASKS SET TASK_STATUS=:1 where TASK_ID=:2]
Apparently Oracle's triggers do not run in the same context, cannot see the uncommitted triggering updates, and thus will never be able to count the number of tasks in specific states when those states include the triggering row.
I guess I could change the AFTER trigger to an INSTEAD OF trigger and update the TASK_STATUS (as well as the JOB_STATUS) within the trigger (so the job update can see the task update), but would I run into the same error? Maybe not on the first task update, but what if the triggering program updates a bunch of TASKS before committing: what happens when the second task is updated?
I've also considered removing the Trigger and have a program scan active Jobs for the status of its Tasks, but that seems inelegant.
What is the Best Practice with something like this in Oracle?
The best practice is to avoid triggers if you can.
See these links for an answer on why one should not use triggers:
http://www.oracle.com/technetwork/testcontent/o58asktom-101055.html
http://rwijk.blogspot.com/2007/09/database-triggers-are-evil.html
Use a procedure (an API) in place of triggers: you can create a package with a few procedures like add_new_job, add_new_task, change_task_status, etc., and place all the logic (checking, changing tasks' states, changing jobs' states, etc.) in one place. Easy to understand, easy to maintain, easy to debug, and easy to track errors.
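A minimal sketch of that API idea, assuming a TASK_ID primary key on TASKS (the package and procedure names are made up):
create or replace package job_api as
  procedure change_task_status(p_task_id in tasks.task_id%type,
                               p_status  in tasks.task_status%type);
end job_api;
/
create or replace package body job_api as
  procedure change_task_status(p_task_id in tasks.task_id%type,
                               p_status  in tasks.task_status%type)
  is
    l_job_id tasks.job_id%type;
  begin
    update tasks
    set    task_status = p_status
    where  task_id = p_task_id
    returning job_id into l_job_id;

    -- Same roll-up rule as the trigger, but in one place, inside the
    -- same transaction, with no mutating-table problem
    update jobs j
    set    j.job_status = 'Authored'
    where  j.job_id = l_job_id
    and    p_status = 'Authored'
    and    j.task_count = (select count(*)
                           from   tasks t
                           where  t.job_id = l_job_id
                           and    t.task_status in ('Authored','Completed'));
  end change_task_status;
end job_api;
/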
If you insist on using triggers, then you can create a compound trigger, as Tom Kyte mentioned in the first link above, as a workaround. For example:
create or replace TRIGGER JOB_AUTHORED
  FOR UPDATE OF TASK_STATUS on TASKS
  COMPOUND TRIGGER

  TYPE JOB_ID_TABLE_TYPE IS TABLE OF PLS_INTEGER INDEX BY PLS_INTEGER;
  JOB_ID_TABLE JOB_ID_TABLE_TYPE;
  dummy CHAR;

  AFTER EACH ROW IS
  BEGIN
    -- SELECT null INTO dummy FROM JOBS WHERE job_id = :NEW.JOB_ID FOR UPDATE;
    JOB_ID_TABLE( :NEW.JOB_ID ) := :NEW.JOB_ID;
  END AFTER EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    FORALL x IN INDICES OF JOB_ID_TABLE
      UPDATE jobs
      set    JOB_STATUS = 'Authored'
      WHERE  JOBS.JOB_ID = JOB_ID_TABLE( x )
      and    TASK_COUNT = (
               select count(0) from TASKS
               where TASKS.JOB_ID = JOBS.JOB_ID
               and   TASKS.TASK_STATUS in ('Authored','Completed')
             );
  END AFTER STATEMENT;
END JOB_AUTHORED;
But... I am not sure that there are no pitfalls in this example that we are not aware of at this time.
For example, one pitfall shows up in this scenario. Suppose a job has TASK_COUNT = 20 and 18 of its tasks already have status Authored:
At time X, user A runs UPDATE TASKS SET task_status = 'Authored' WHERE task_id = 2. The trigger is fired and sees 18+1 committed tasks with status Authored.
At time X+10ms, user B runs UPDATE TASKS SET task_status = 'Authored' WHERE task_id = 4. The trigger is fired and sees 18+1 committed tasks with status Authored.
At time X+20ms, user A commits.
At time X+30ms, user B commits.
At the end we have 20 tasks with status Authored, but the job's status has not been changed to Authored (though it should be, since the number of tasks is 20).
To avoid this trap you can use SELECT null INTO dummy FROM JOBS WHERE job_id = :NEW.JOB_ID FOR UPDATE; in the AFTER EACH ROW part of the trigger, in order to place a lock on the corresponding record in the JOBS table and serialize access (it is commented out in the example above).
But I'm still not sure this is a correct solution: it may in turn cause deadlocks in scenarios I cannot imagine or predict at this time.
In short: in Oracle, a trigger cannot select from the table on which it is defined; otherwise you may (or will) get a mutating-table error.
You have several options:
1) No triggers at all. In my humble opinion this is the best; I would do it this way (but the topic is probably wider and we don't know everything).
Create a view which replaces the need for triggers, something like:
create or replace view v_jobs_done as
select *
from   jobs
where  not exists (
         select 1 from tasks
         where  TASKS.JOB_ID = jobs.JOB_ID
         and    TASKS.TASK_STATUS not in ('Authored','Completed')
       )
2) Instead of an increasing value, use a decreasing one, so that when jobs.task_count reaches zero you know everything is done. In this case you have to build/rebuild your other triggers.
3) A proposition close to yours: you can use the modern compound trigger. I have doubts about the performance here, but it works:
create or replace trigger Job_Authored
  for update of task_status on tasks compound trigger

  type t_ids        is table of tasks.job_id%type;
  type t_cnts       is table of number;
  type t_job_counts is table of number index by varchar2(10);

  v_ids        t_ids;
  v_cnts       t_cnts;
  v_job_counts t_job_counts;

  before statement is
  begin
    select job_id, count(1)
    bulk collect into v_ids, v_cnts
    from   tasks
    where  tasks.task_status in ('Authored','Completed')
    group  by job_id;

    for i in 1..v_ids.count() loop
      v_job_counts(v_ids(i)) := v_cnts(i);
    end loop;
  end before statement;

  after each row is
  begin
    if :new.task_status = 'Authored' then
      update jobs
      set    job_status = 'Authored'
      where  job_id = :new.job_id
      and    task_count = v_job_counts(:new.job_id);
    end if;
  end after each row;
end Job_Authored;

How to wait until a query returns rows?

On an Oracle DB, how do I implement the logic below (that is, "wait until at least one row is returned and return a column value from it"), but without the polling (looping, wasting CPU and possibly I/O), using some wait/block mechanism instead?
So when calling the get_one() function, it should not return until it can fetch a row from the table matching some conditions.
function get_one
  return number
is
  c1 sys_refcursor;
  n  number;
begin
  loop
    open c1 for select number_column from t1 where some_conditions;
    fetch c1 into n;
    if c1%found then
      close c1; -- close before returning so the cursor is not leaked
      return n;
    end if;
    close c1;
    dbms_lock.sleep(1); -- this wait reduces load, but it is still polling and delays reaction time
  end loop;
end;
The solution should work for external applications (like application servers with J2EE, .NET and similar), so use of triggers would probably not fit.
There are two Oracle DB features that can meet these requirements:
Database Change Notification (DCN)
Java - Oracle Database Change Notification
Oracle Advanced Queuing (AQ)
Oracle Advanced Queue In Java
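As a sketch of the AQ approach (the queue names here are made up, and the setup runs once as a suitably privileged user): the consumer blocks inside DBMS_AQ.DEQUEUE until a producer enqueues a message, which removes the polling loop entirely.
-- One-time setup: a queue whose messages signal "a matching row exists"
BEGIN
  dbms_aqadm.create_queue_table(queue_table        => 't1_signal_qt',
                                queue_payload_type => 'RAW');
  dbms_aqadm.create_queue(queue_name  => 't1_signal_q',
                          queue_table => 't1_signal_qt');
  dbms_aqadm.start_queue(queue_name => 't1_signal_q');
END;
/
-- Consumer: blocks inside DBMS_AQ.DEQUEUE instead of polling
DECLARE
  l_opts    dbms_aq.dequeue_options_t;
  l_props   dbms_aq.message_properties_t;
  l_msgid   RAW(16);
  l_payload RAW(2000);
BEGIN
  l_opts.wait := dbms_aq.forever;  -- or a timeout in seconds
  dbms_aq.dequeue(queue_name         => 't1_signal_q',
                  dequeue_options    => l_opts,
                  message_properties => l_props,
                  payload            => l_payload,
                  msgid              => l_msgid);
  COMMIT;
  -- now select the row from t1, as in get_one()
END;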
function get_one
  return number
is
  n number;
begin
  loop
    select (select number_column from t1 where some_conditions)
    into   n
    from   dual;
    if n is null then
      dbms_lock.sleep(1); -- wait 1 second
    else
      return n;
    end if;
  end loop;
end;
The DBMS_LOCK package provides an interface to Oracle Lock Management
services. You can request a lock of a specific mode, give it a unique
name recognizable in another procedure in the same or another
instance, change the lock mode, and release it.
You may need some grants to execute this Oracle package.
I am not in favour of implementing code that keeps waiting or polling directly on Oracle. That might skew Oracle statistics like DB time and wait times.
In order to implement server code that needs to act upon a certain set of rows being created or modified, you can resort to:
A scheduled job that wakes up at a predetermined interval and queries for the rows (as sketched after this list). If the rows are present, it calls the procedure that acts on the new rows.
Triggers
Depending on what it is that is being inserted, you can have a trigger that is called upon the creation of the rows. Beware of the mutating table errors that might arise if you try to modify the original row that has the trigger.
If it is a client application that calls get_one, you might as well have the client application poll every few seconds based on a timer (no client or DB CPU is wasted between calls).
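For the scheduled-job option above, a minimal sketch (the job name and the act_on_new_rows procedure are invented) that wakes up every 10 seconds and queries for new rows:
BEGIN
  dbms_scheduler.create_job(
    job_name        => 'POLL_T1_JOB',                  -- invented name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN act_on_new_rows; END;',  -- hypothetical procedure
    repeat_interval => 'FREQ=SECONDLY; INTERVAL=10',
    enabled         => TRUE);
END;
/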

bulk collect using "for update"

I ran into an interesting and unexpected issue when processing records in Oracle (11g) using BULK COLLECT.
The following code was running great, processing all million-plus records without an issue:
-- Define cursor
cursor My_Data_Cur Is
  Select col1
        ,col2
  from My_Table_1;
…
-- Open the cursor
open My_Data_Cur;

-- Loop through all the records in the cursor
loop
  -- Read the next group of records
  fetch My_Data_Cur
    bulk collect into My_Data_Rec
    limit 100;

  -- Exit when there are no more records to process
  Exit when My_Data_Rec.count = 0;

  -- Loop through the records in the group
  for idx in 1 .. My_Data_Rec.count
  loop
    … do work here to populate the records to be inserted into My_Table_2 …
  end loop;

  -- Insert the records into the second table
  forall idx in 1 .. My_Data_Rec.count
    insert into My_Table_2 …;

  -- Delete the records just processed from the source table
  forall idx in 1 .. My_Data_Rec.count
    delete from My_Table_1 …;

  commit;
end loop;
Since at the end of processing each group of 100 records (limit 100) we delete the records just read and processed, I thought it would be a good idea to add the "for update" syntax to the cursor definition so that another process couldn't update any of the records between the time the data was read and the time the record was deleted.
So, the only thing in the code I changed was…
cursor My_Data_Cur
is
  select col1
        ,col2
  from My_Table_1
  for update;
When I ran the PL/SQL package after this change, the job only processed 100 records and then terminated. I confirmed this change was causing the issue by removing the "for update" from the cursor, and once again the package processed all of the records from the source table.
Any ideas why adding the "for update" clause would cause this change in behavior? Any suggestions on how to get around this issue? I'm going to try starting an exclusive transaction on the table at the beginning of the process, but this isn't an ideal solution because I really don't want to lock the entire table while processing the data.
Thanks in advance for your help,
Grant
The problem is that you're trying to do a fetch across a commit.
When you open My_Data_Cur with the for update clause, Oracle has to lock every row in the My_Table_1 table before it can return any rows. When you commit, Oracle has to release all those locks (the locks Oracle creates do not span transactions). Since the cursor no longer has the locks that you requested, Oracle has to close the cursor since it can no longer satisfy the for update clause. The second fetch, therefore, must return 0 rows.
The most logical approach would almost always be to remove the commit and do the entire thing in a single transaction. If you really, really, really need separate transactions, you would need to open and close the cursor for every iteration of the loop. Most likely, you'd want to do something to restrict the cursor to only return 100 rows every time it is opened (i.e. a rownum <= 100 clause) so that you wouldn't incur the expense of visiting every row to place the lock and then every row other than the 100 that you processed and deleted to release the lock every time through the loop.
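A sketch of that open/close-per-batch variant, keeping the question's My_Table_1/My_Table_2 names and assuming col1/col2 are the only columns involved (a rownum predicate limits each batch, and ROWID drives the delete):
declare
  cursor My_Data_Cur is
    select col1, col2, rowid as rid
    from   My_Table_1
    where  rownum <= 100
    for update;
  type t_col1_tab is table of My_Table_1.col1%type;
  type t_col2_tab is table of My_Table_1.col2%type;
  type t_rid_tab  is table of rowid;
  l_col1 t_col1_tab;
  l_col2 t_col2_tab;
  l_rids t_rid_tab;
begin
  loop
    open My_Data_Cur;   -- locks only this batch of up to 100 rows
    fetch My_Data_Cur bulk collect into l_col1, l_col2, l_rids;
    close My_Data_Cur;  -- closed before the commit: no fetch across a commit
    exit when l_rids.count = 0;

    -- … do work here on l_col1/l_col2 …

    forall idx in 1 .. l_rids.count
      insert into My_Table_2 (col1, col2) values (l_col1(idx), l_col2(idx));
    forall idx in 1 .. l_rids.count
      delete from My_Table_1 where rowid = l_rids(idx);

    commit;  -- releases this batch's locks
  end loop;
end;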
Adding to Justin's explanation:
You should have seen the error message below; I'm not sure if your exception handler suppressed it.
The message itself explains a lot!
For this kind of update, it is better to create a shadow copy of the main table and let the public synonym point to it, while the batch user creates a private synonym to the main table and performs the batch operations against that; this keeps maintenance simpler.
Error report -
ORA-01002: fetch out of sequence
ORA-06512: at line 7
01002. 00000 - "fetch out of sequence"
*Cause: This error means that a fetch has been attempted from a cursor
which is no longer valid. Note that a PL/SQL cursor loop
implicitly does fetches, and thus may also cause this error.
There are a number of possible causes for this error, including:
1) Fetching from a cursor after the last row has been retrieved
and the ORA-1403 error returned.
2) If the cursor has been opened with the FOR UPDATE clause,
fetching after a COMMIT has been issued will return the error.
3) Rebinding any placeholders in the SQL statement, then issuing
a fetch before reexecuting the statement.
*Action: 1) Do not issue a fetch statement after the last row has been
retrieved - there are no more rows to fetch.
2) Do not issue a COMMIT inside a fetch loop for a cursor
that has been opened FOR UPDATE.
3) Reexecute the statement after rebinding, then attempt to
fetch again.
Also, you can change your logic by using ROWID.
An example from the docs:
DECLARE
  -- if "FOR UPDATE OF salary" is included on the following line, an error is raised
  CURSOR c1 IS SELECT e.*, rowid FROM employees e;
  emp_rec c1%ROWTYPE;  -- c1%ROWTYPE, so that emp_rec.rowid exists
BEGIN
  OPEN c1;
  LOOP
    FETCH c1 INTO emp_rec; -- FETCH fails on the second iteration with FOR UPDATE
    EXIT WHEN c1%NOTFOUND;
    IF emp_rec.employee_id = 105 THEN
      UPDATE employees SET salary = salary * 1.05 WHERE rowid = emp_rec.rowid;
      -- this mimics WHERE CURRENT OF c1
    END IF;
    COMMIT; -- releases locks
  END LOOP;
END;
/
You have to fetch the records row by row, update each one using its ROWID, commit immediately, and then proceed to the next row.
But with this approach, you have to give up the bulk binding option.
