ORA-01002 Fetch Out of Sequence with Temporary Table and .Net

I have a stored proc in Oracle that I am trying to call from a .Net Core app.
The proc loops through a cursor to populate a Global Temporary table, and attempts to send the result back as a ref cursor.
Type ssp_rec_refcur Is Ref Cursor; -- Return ssp_rec;

Procedure temp_table_sel(p_ssp_rec_refcur Out ssp_rec_refcur) Is
  Cursor cur_main Is
    Select item1
          ,item2 etc..
      From regular_table;
Begin
  For c_rec In cur_main Loop
    -- execute some functions to get supplementary data based on the cursor row
    -- store the values in the temp table for the ref cursor
    Insert Into global_temp_table
    Values
      (c_rec.item1, c_rec.item2, c_rec.item3 etc...);
  End Loop;

  Open p_ssp_rec_refcur For
    Select * From global_temp_table;
Exception
  When Others Then
    log_error($$plsql_Unit, 'temp_table_sel');
End temp_table_sel;
This works fine when testing on the DB itself, but when I try to execute it from .Net, I get the error: ORA-01002 fetch out of sequence.
If I put a Commit right before the select statement, the error goes away, but the table is then empty because it deletes its rows on commit.
If I put the Commit after the Select statement, the error comes back.
How can I read the temporary table rows into a ref cursor without triggering a Fetch Out of Sequence error?

So here's what I think was happening.
The problem is that the temp table was defined with the attribute "ON COMMIT DELETE ROWS;", so when it ran on the DB side it was fine because there is no commit.
However, when calling it from .Net, Oracle.ManagedDataAccess.dll does an autocommit at the end of every transaction, which caused the temp table to delete its rows, thus invalidating the cursor before I could read it.
My workaround was to set the temp table to "ON COMMIT PRESERVE ROWS;" so that the autocommit didn't delete them, and it now reads the cursor as expected.
I also changed the proc to get the user's session ID and store that in the temp table, then delete from the table by session ID if the user queries again in the same session, to avoid duplicate data since the rows will no longer be deleted automatically after the transaction.
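For reference, a rough sketch of what those two changes might look like together. The real column list is elided in the question, so item1/item2 and their types here are placeholders:

-- Recreate the GTT so rows survive the driver's autocommit
Create Global Temporary Table global_temp_table
( session_id Number
, item1      Varchar2(100)
, item2      Varchar2(100)
) On Commit Preserve Rows;

-- Inside temp_table_sel: clear any rows left over from an earlier call in this session
Delete From global_temp_table
 Where session_id = To_Number(Sys_Context('USERENV', 'SID'));

-- ...and tag each inserted row with the session ID
Insert Into global_temp_table (session_id, item1, item2)
Values (To_Number(Sys_Context('USERENV', 'SID')), c_rec.item1, c_rec.item2);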

Related

How to make a cursor pick table data change?

I have the following cursor in a procedure :
procedure Run is
  Cursor Cur is select * from table where condition;
  R Cur%rowtype;
begin
  Open Cur;
  loop
    fetch Cur into R;
    exit when Cur%notfound;
    -- Run some time consuming operations here
    something...
  end loop;
  Close Cur;
end;
This cursor is run by a scheduled job.
Assume that when this cursor runs there are 100 rows that satisfy the where condition.
If, while the procedure is running, new rows are inserted into the table that satisfy the same where condition, is there any way that the cursor also picks up these new rows, please?
Thanks.
Cheers,
No.
The set of rows the cursor will return is determined at the time the cursor is opened. At that point, Oracle knows the current SCN (system change number) and will return the data as it existed at that point in time.
Depending on the nature of the problem, you could write a loop that just keeps asking for a single row that meets the criteria (assuming your time-consuming operation updates some data so that you know what needs to be processed). Something like
loop
  begin
    select some_id
      into l_some_id
      from your_table
     where needs_processing = 'Y'
     order by some_id
     fetch first row only;
  exception
    when no_data_found then
      l_some_id := null;
  end;
  exit when l_some_id is null;
  some_slow_operation( l_some_id );
end loop;
assuming that some_slow_operation changes the needs_processing flag to N. And assuming that you are using the default read committed transaction isolation level.
You can have a commit inside the loop so that the select query fetches the latest records from the table in every iteration.
No, a cursor can't do that. The transactions are consistent and your cursor is a snapshot of the data you've extracted.
If you want consistent results you could either:
Lock the table so that there will be no changes,
Use another mechanism, e.g. move the logic to a trigger that executes for each new row satisfying your conditions (this brings overhead too, so it is very situational); a sketch follows below.
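A minimal sketch of the trigger idea, under the assumption that new rows carry the same needs_processing flag used in the polling example above; process_one_row is a made-up name for whatever handles a single row:

Create Or Replace Trigger trg_process_new_rows
After Insert On your_table
For Each Row
When (new.needs_processing = 'Y')
Begin
  -- runs inside the inserting transaction, so keep this cheap;
  -- heavy work is usually better left to the scheduled job
  process_one_row(:new.some_id);
End;
/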

Object No longer exists

I have a procedure from which I am returning a cursor.
create or replace procedure pkg_test(cur out sys_refcursor) is
begin
insert into tb_test values(1);
insert into tb_test values(2);
insert into tb_test values(3);
open cur for
select * from tb_test;
delete from tb_test;
commit;
end pkg_test;
This is working fine.
Now I have created a global temporary table for a performance issue, like below.
create global temporary table tb_test_GTT (deal_id int)
on commit delete rows;
create or replace procedure pkg_test(cur out sys_refcursor) is
begin
insert into tb_test_GTT values(1);
insert into tb_test_GTT values(2);
insert into tb_test_GTT values(3);
open cur for
select * from tb_test_GTT;
delete from tb_test_GTT;
commit;
end pkg_test;
Now when I am trying to fetch the data from the cursor, I am getting the below error:
ORA-08103: object no longer exists.
I can correct this error by adding on commit preserve rows, but I want to know the reason.
After the commit, your data no longer exists. This is how temporary tables defined with on commit delete rows work in Oracle.
A cursor is basically a reference to a result set. You cannot return a non-existent object; the error occurs because the referenced data is no longer there.
You may consider returning a table type (collection) object instead, since that approach stores the data in memory (a sketch follows after the documentation excerpt below).
Reference from the Official Documentation says:
A REF CURSOR is a PL/SQL data type whose value is the memory address of a query work area on the database. In essence, a REF CURSOR is a pointer or a handle to a result set on the database.
REF CURSORs have the following characteristics:
A REF CURSOR refers to a memory address on the database. Therefore, the client must be connected to the database during the lifetime of the REF CURSOR in order to access it.
A REF CURSOR involves an additional database round-trip. While the REF CURSOR is returned to the client, the actual data is not returned until the client opens the REF CURSOR and requests the data. Note that data is not retrieved until the user attempts to read it.
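If you do go the collection route, a minimal sketch might look like the following; num_tab and pkg_test_coll are made-up names for illustration:

create type num_tab as table of number;
/
create or replace procedure pkg_test_coll(p_ids out num_tab) is
begin
  insert into tb_test_GTT values(1);
  insert into tb_test_GTT values(2);
  insert into tb_test_GTT values(3);
  -- copy the rows into an in-memory collection before the commit wipes the GTT
  select deal_id bulk collect into p_ids from tb_test_GTT;
  delete from tb_test_GTT;
  commit; -- the collection already holds the data, so this is safe
end pkg_test_coll;
/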

Stored data in Pl-SQL cursor is dynamic or static?

When a cursor is declared, is it a static data set? Or, if new data is entered after it is declared but before the loop starts, will that data be picked up by the loop or not?
Declaring the cursor defines the cursor with a name and the associated SELECT statement. After declaring the cursor, you need to open the cursor to allocate memory for it and make it ready for fetching the rows returned by the SQL statement. For example:
Declaring a Cursor
CURSOR c_customers IS
SELECT id, name, address FROM customers;
Opening a Cursor
OPEN c_customers;
After opening you can access one row at a time by fetching the cursor:
FETCH c_customers INTO c_id, c_name, c_addr;
After fetching the cursor, just close the cursor:
Close c_customers;
So it will not be picked up by the loop.
I have tried this in MySQL, and in MySQL it does fetch the data.
I created a new table and wrote a procedure. The procedure inserts two records into the newly created empty table, opens a cursor, and selects FOUND_ROWS() into the record_cnt variable.
FOUND_ROWS() gives the count of rows fetched by the cursor; in Oracle it is cursor_name%ROWCOUNT.
In Oracle there will definitely be other syntactic differences, but I think the behavior would be the same: values will be visible to the cursor if they are inserted and committed before the cursor is opened.
CREATE TABLE my_tab(id int);
DELIMITER $$
CREATE PROCEDURE cursor_test(OUT record_cnt INT)
BEGIN
DECLARE done INT DEFAULT FALSE;
DECLARE cur1 CURSOR FOR SELECT * FROM my_tab;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
INSERT INTO my_tab VALUES(1),(2);
COMMIT;
OPEN cur1;
SELECT FOUND_ROWS() INTO record_cnt;
CLOSE cur1;
END$$
DELIMITER ;
CALL cursor_test(@rec);
select @rec;
+------+
| @rec |
+------+
|    2 |
+------+
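For comparison, roughly the same test in Oracle might look like the sketch below; %ROWCOUNT is read after the last fetch, since the count is only known once the rows have actually been fetched (this assumes the same my_tab table has been created in the Oracle schema):

-- assumes: create table my_tab (id int);
DECLARE
  CURSOR cur1 IS SELECT id FROM my_tab;
  l_id       my_tab.id%TYPE;
  record_cnt PLS_INTEGER := 0;
BEGIN
  INSERT INTO my_tab VALUES (1);
  INSERT INTO my_tab VALUES (2);
  COMMIT;
  OPEN cur1;                      -- rows committed before this point are visible
  LOOP
    FETCH cur1 INTO l_id;
    EXIT WHEN cur1%NOTFOUND;
  END LOOP;
  record_cnt := cur1%ROWCOUNT;    -- 2
  CLOSE cur1;
  DBMS_OUTPUT.PUT_LINE('Rows seen by the cursor: ' || record_cnt);
END;
/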
Oracle provides Statement-Level Read Consistency, which guarantees that data returned by a single query is committed and consistent as at the start of the query.
There are some details to do with transaction isolation levels, flashback query and user-defined functions that perform queries, but in general once a query has started (in procedural terms, when a cursor is opened) its results will be true as at that time, regardless of any data changes (committed or otherwise).
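A quick way to see this in action, reusing the my_tab scratch table from the previous answer: the row inserted after the cursor is opened is not returned, even though it is committed before the first fetch.

DECLARE
  CURSOR c IS SELECT id FROM my_tab;
  l_id  my_tab.id%TYPE;
  l_cnt PLS_INTEGER := 0;
BEGIN
  OPEN c;                           -- the result set is fixed as of this point
  INSERT INTO my_tab VALUES (99);   -- inserted (and committed) after the open
  COMMIT;
  LOOP
    FETCH c INTO l_id;
    EXIT WHEN c%NOTFOUND;
    l_cnt := l_cnt + 1;
  END LOOP;
  CLOSE c;
  DBMS_OUTPUT.PUT_LINE(l_cnt);      -- does not include the row inserted above
END;
/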

bulk collect using "for update"

I ran into an interesting and unexpected issue when processing records in Oracle (11g) using BULK COLLECT.
The following code was running great, processing all million-plus records without an issue:
-- Define cursor
cursor My_Data_Cur Is
  Select col1
        ,col2
    from My_Table_1;
…

-- Open the cursor
open My_Data_Cur;

-- Loop through all the records in the cursor
loop
  -- Read the first group of records
  fetch My_Data_Cur
    bulk collect into My_Data_Rec
    limit 100;

  -- Exit when there are no more records to process
  Exit when My_Data_Rec.count = 0;

  -- Loop through the records in the group
  for idx in 1 .. My_Data_Rec.count
  loop
    … do work here to populate a records to be inserted into My_Table_2 …
  end loop;

  -- Insert the records into the second table
  forall idx in 1 .. My_Data_Rec.count
    insert into My_Table_2…;

  -- Delete the records just processed from the source table
  forall idx in 1 .. My_Data_Rec.count
    delete from My_Table_1 …;

  commit;
end loop;
Since at the end of processing each group of 100 records (limit 100) we are deleting the records just read and processed, I thought it would be a good idea to add the “for update” syntax to the cursor definition so that another process couldn’t update any of the records between the time the data was read and the time the record is deleted.
So, the only thing in the code I changed was…
cursor My_Data_Cur
is
select col1
,col2
from My_Table_1
for update;
When I ran the PL/SQL package after this change, the job only processes 100 records and then terminates. I confirmed this change was causing the issue by removing the “for update” from the cursor and once again the package processed all of the records from the source table.
Any ideas why adding the “for update” clause would cause this change in behavior? Any suggestions on how to get around this issue? I’m going to try starting an exclusive transaction on the table at the beginning of the process, but this isn’t an ideal solution because I really don’t want to lock the entire table while processing the data.
Thanks in advance for your help,
Grant
The problem is that you're trying to do a fetch across a commit.
When you open My_Data_Cur with the for update clause, Oracle has to lock every row in the My_Table_1 table before it can return any rows. When you commit, Oracle has to release all those locks (the locks Oracle creates do not span transactions). Since the cursor no longer has the locks that you requested, Oracle has to close the cursor since it can no longer satisfy the for update clause. The second fetch, therefore, must return 0 rows.
The most logical approach would almost always be to remove the commit and do the entire thing in a single transaction. If you really, really, really need separate transactions, you would need to open and close the cursor for every iteration of the loop. Most likely, you'd want to do something to restrict the cursor to only return 100 rows every time it is opened (i.e. a rownum <= 100 clause) so that you wouldn't incur the expense of visiting every row to place the lock and then every row other than the 100 that you processed and deleted to release the lock every time through the loop.
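If separate transactions really are required, a rough sketch of that reopen-per-batch idea might look like the block below. Column names are taken from the question and the per-row processing is left as a placeholder; separate collections are used instead of a collection of records so the FORALL statements work on older releases as well:

declare
  cursor my_data_cur is
    select rowid, col1, col2
      from my_table_1
     where rownum <= 100
       for update;

  type rowid_tab is table of rowid;
  type col1_tab  is table of my_table_1.col1%type;
  type col2_tab  is table of my_table_1.col2%type;

  l_rowids rowid_tab;
  l_col1s  col1_tab;
  l_col2s  col2_tab;
begin
  loop
    open my_data_cur;                -- locks only the (up to) 100 rows it returns
    fetch my_data_cur bulk collect into l_rowids, l_col1s, l_col2s;
    close my_data_cur;               -- closed before the commit, so no fetch spans it
    exit when l_rowids.count = 0;

    for idx in 1 .. l_rowids.count loop
      null; -- do work here to build the rows destined for My_Table_2
    end loop;

    forall idx in 1 .. l_rowids.count
      insert into my_table_2 (col1, col2)
      values (l_col1s(idx), l_col2s(idx));

    forall idx in 1 .. l_rowids.count
      delete from my_table_1
       where rowid = l_rowids(idx);

    commit;                          -- releases this iteration's locks
  end loop;
end;
/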
Adding to Justin's explanation.
You should have seen the below error message; not sure if your exception handler suppressed it.
And the message itself explains a lot!
For this kind of update, it is better to create a shadow copy of the main table and let the public synonym point to it, while the batch job creates a private synonym to the main table and performs the batch operations against that; this keeps maintenance simpler.
Error report -
ORA-01002: fetch out of sequence
ORA-06512: at line 7
01002. 00000 - "fetch out of sequence"
*Cause: This error means that a fetch has been attempted from a cursor
which is no longer valid. Note that a PL/SQL cursor loop
implicitly does fetches, and thus may also cause this error.
There are a number of possible causes for this error, including:
1) Fetching from a cursor after the last row has been retrieved
and the ORA-1403 error returned.
2) If the cursor has been opened with the FOR UPDATE clause,
fetching after a COMMIT has been issued will return the error.
3) Rebinding any placeholders in the SQL statement, then issuing
a fetch before reexecuting the statement.
*Action: 1) Do not issue a fetch statement after the last row has been
retrieved - there are no more rows to fetch.
2) Do not issue a COMMIT inside a fetch loop for a cursor
that has been opened FOR UPDATE.
3) Reexecute the statement after rebinding, then attempt to
fetch again.
Also, you can change your logic by using ROWID.
An example adapted from the docs:
DECLARE
  -- if "FOR UPDATE OF salary" is included on the following line, an error is raised
  CURSOR c1 IS SELECT e.*, e.rowid AS rid FROM employees e;
  emp_rec c1%ROWTYPE;
BEGIN
  OPEN c1;
  LOOP
    FETCH c1 INTO emp_rec; -- FETCH fails on the second iteration with FOR UPDATE
    EXIT WHEN c1%NOTFOUND;
    IF emp_rec.employee_id = 105 THEN
      UPDATE employees SET salary = salary * 1.05 WHERE rowid = emp_rec.rid;
      -- this mimics WHERE CURRENT OF c1
    END IF;
    COMMIT; -- releases locks
  END LOOP;
  CLOSE c1;
END;
/
You have to fetch the records row by row, update each one using the ROWID, and COMMIT immediately, then proceed to the next row.
But by doing this, you have to give up the bulk binding option.

Creating table before creating a cursor in Oracle

I have a PL/SQL procedure which creates a temporary table and then extracts the data from this temporary table using cursors, processes the data and then drops the temporary table. However, Oracle doesn't allow a static cursor to reference a table that doesn't yet exist in the database at compile time.
Please help me handle this.
Your statement is not quite correct. You can use a cursor for pretty much arbitrary queries. See below:
create or replace procedure fooproc
IS
  type acursor is ref cursor;
  mycur  acursor;
  mydate date;
BEGIN
  execute immediate 'create global temporary table footmp (bar date) on commit delete rows';
  execute immediate 'insert into footmp values (SYSDATE)';
  open mycur for 'select * from footmp';
  loop
    fetch mycur into mydate;
    exit when mycur%notfound;
    dbms_output.put_line(mydate);
  end loop;
  close mycur;
  execute immediate 'drop table footmp';
END fooproc;
/
(More details here - especially this short proc is not safe at all since the table name is fixed and not session-dependent).
It is (quite) a bit ugly, and I'm not suggesting you use that - rather, you should be thinking whether you need that procedure-specific temporary table at all.
See this other article:
DO NOT dynamically create them [temp tables], DO NOT dynamically create them, please -- do NOT dynamically create them.
Couldn't you use a global temporary table? Do you actually need a temporary table at all? (i.e. doesn't using a cursor on the select statement you'd use to fill that table work?)
Or, if you wish to avoid differences between global temporary tables and "regular" permanent tables you may be used to (see Oracle docs on temp table data availability, lifetime etc), simply create the table first (nologging). Assuming nobody else is using this table, your procedure could truncate before/after your processing.
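For example, a sketch of that approach with a global temporary table, reusing the footmp/fooproc names from the answer above: the table is created once up front, so the procedure can use an ordinary static cursor and never needs any DDL.

-- one-time DDL, run outside the procedure
create global temporary table footmp (bar date) on commit delete rows;

create or replace procedure fooproc is
begin
  insert into footmp values (SYSDATE);
  -- static cursor: the table exists at compile time, so no dynamic SQL is needed
  for r in (select bar from footmp) loop
    dbms_output.put_line(r.bar);
  end loop;
  -- no drop needed: the rows vanish at commit / end of session
end fooproc;
/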
