Object no longer exists - Oracle

I have a procedure from which I am returning a cursor.
create or replace procedure pkg_test(cur out sys_refcursor) is
begin
  insert into tb_test values (1);
  insert into tb_test values (2);
  insert into tb_test values (3);
  open cur for
    select * from tb_test;
  delete from tb_test;
  commit;
end pkg_test;
This is working fine.
Now I have created a global temporary table to address a performance issue, as below.
create global temporary table tb_test_GTT (deal_id int)
on commit delete rows;
create or replace procedure pkg_test(cur out sys_refcursor) is
begin
  insert into tb_test_GTT values (1);
  insert into tb_test_GTT values (2);
  insert into tb_test_GTT values (3);
  open cur for
    select * from tb_test_GTT;
  delete from tb_test_GTT;
  commit;
end pkg_test;
Now when I try to fetch the data from the cursor, I get the error below:
ORA-08103: object no longer exists.
I can correct this error by adding ON COMMIT PRESERVE ROWS, but I want to know the reason.
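That is, the only change needed is in the ON COMMIT clause:
create global temporary table tb_test_GTT (deal_id int)
on commit preserve rows;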

After the commit your data no longer exists: this is how temporary tables with ON COMMIT DELETE ROWS work in Oracle. Committing effectively truncates the temporary table, deallocating the segment that held the rows.
A cursor is basically a reference, not a copy of the data: a ref cursor points to a query work area that still depends on that segment. When the client fetches, the object the cursor references is no longer there, hence ORA-08103.
You may consider returning a table type object instead, since that approach copies the data into memory.
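For example, a minimal sketch of that approach, assuming a hypothetical schema-level collection type t_deal_ids and the same single-column table:

create or replace type t_deal_ids is table of number;
/
create or replace procedure pkg_test(ids out t_deal_ids) is
begin
  insert into tb_test_GTT values (1);
  insert into tb_test_GTT values (2);
  insert into tb_test_GTT values (3);
  -- BULK COLLECT copies the rows into the collection, so the data
  -- survives even after the commit wipes the temporary table
  select deal_id bulk collect into ids from tb_test_GTT;
  delete from tb_test_GTT;
  commit;
end pkg_test;
/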
The official documentation says:
A REF CURSOR is a PL/SQL data type whose value is the memory address of a query work area on the database. In essence, a REF CURSOR is a pointer or a handle to a result set on the database.
REF CURSORs have the following characteristics:
A REF CURSOR refers to a memory address on the database. Therefore, the client must be connected to the database during the lifetime of the REF CURSOR in order to access it.
A REF CURSOR involves an additional database round-trip. While the REF CURSOR is returned to the client, the actual data is not returned until the client opens the REF CURSOR and requests the data. Note that data is not retrieved until the user attempts to read it.

Related

ORA-01002 Fetch Out of Sequence with Temporary Table and .Net

I have a stored proc in Oracle that I am trying to call from a .Net Core app.
The proc loops through a cursor to populate a Global Temporary table, and attempts to send the result back as a ref cursor.
Type ssp_rec_refcur Is Ref Cursor; -- Return ssp_rec;

Procedure temp_table_sel(p_ssp_rec_refcur Out ssp_rec_refcur) Is
  Cursor cur_main Is
    Select item1
          ,item2 etc..
      From regular_table;
Begin
  For c_rec In cur_main Loop
    -- execute some functions to get supplementary data based on cursor row
    -- store values in temp table for ref cursor
    Insert Into global_temp_table
    Values
      (c_rec.item1, c_rec.item2, c_rec.item3 etc...);
  End Loop;
  Open p_ssp_rec_refcur For
    Select * From global_temp_table;
Exception
  When Others Then
    log_error($$plsql_Unit, 'temp_table_sel');
End temp_table_sel;
This works fine when testing on the DB itself, but when I try to execute it from .Net, I get the error: ORA-01002 fetch out of sequence.
If I put a Commit; right before the select statement, it gets rid of the error, but the table is then empty, since it deletes rows on commit.
If I put a Commit after the Select statement, the error comes back.
How can I read the temporary table rows into a ref cursor without triggering a Fetch Out of Sequence error?
So here's what I think was happening.
The problem is that the temp table was defined with "ON COMMIT DELETE ROWS", so when it ran on the DB side it was fine because there was no commit.
However, when calling it from .Net, Oracle.ManagedDataAccess.dll does an autocommit at the end of every transaction, which caused the temp table to delete its rows, invalidating the cursor before I could read it.
My workaround was to set the temp table to "ON COMMIT PRESERVE ROWS" so that the autocommit didn't delete them, and it now reads the cursor as expected.
I also changed the proc to get the user's session ID and store it in the temp table, then delete from the table by session ID if the user queries again in the same session, to avoid duplicate data since the rows will no longer be deleted automatically after the transaction.
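A rough sketch of that workaround (the column names are made up, and SYS_CONTEXT('USERENV', 'SID') is one way to pick up the session ID). Note that rows in a global temporary table are already private to each session, so the session_id column is belt-and-braces; the essential parts are PRESERVE ROWS and the cleanup delete:

create global temporary table global_temp_table (
  session_id number,
  item1      varchar2(100),
  item2      varchar2(100)
) on commit preserve rows;

Procedure temp_table_sel(p_ssp_rec_refcur Out ssp_rec_refcur) Is
  v_sid Number := Sys_Context('userenv', 'sid');
Begin
  -- clear rows left over from an earlier query in this session
  Delete From global_temp_table Where session_id = v_sid;
  For c_rec In (Select item1, item2 From regular_table) Loop
    Insert Into global_temp_table Values (v_sid, c_rec.item1, c_rec.item2);
  End Loop;
  Open p_ssp_rec_refcur For
    Select item1, item2 From global_temp_table Where session_id = v_sid;
End temp_table_sel;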

Is data stored in a PL/SQL cursor dynamic or static?

When a cursor is declared, is its data set static? If new data is inserted after the cursor is declared but before the loop starts, will the loop pick it up or not?
Declaring the cursor defines it with a name and the associated SELECT statement. After declaring the cursor, you need to open it, which allocates memory for the cursor and makes it ready for fetching the rows returned by the SQL statement. For example:
Declaring a Cursor
CURSOR c_customers IS
SELECT id, name, address FROM customers;
Opening a Cursor
OPEN c_customers;
After opening, you can fetch one row at a time from the cursor:
FETCH c_customers INTO c_id, c_name, c_addr;
When you are done fetching, close the cursor:
CLOSE c_customers;
So rows inserted after the cursor is opened will not be picked up by the loop.
I have tried this in MySQL, and there the cursor does fetch the data.
I created a new table and wrote a procedure. The procedure inserts two records into the newly created empty table, opens a cursor, and selects FOUND_ROWS() into the record_cnt variable.
FOUND_ROWS() gives the count of rows fetched by the cursor; in Oracle the equivalent is cursor_name%ROWCOUNT.
In Oracle there will certainly be syntactical differences, but I think the behavior would be the same: values will be visible to the cursor if they are inserted and committed before the cursor is opened.
CREATE TABLE my_tab(id int);
DELIMITER $$
CREATE PROCEDURE cursor_test(OUT record_cnt INT)
BEGIN
DECLARE done INT DEFAULT FALSE;
DECLARE cur1 CURSOR FOR SELECT * FROM my_tab;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
INSERT INTO my_tab VALUES(1),(2);
COMMIT;
OPEN cur1;
SELECT FOUND_ROWS() INTO record_cnt;
CLOSE cur1;
END$$
DELIMITER ;
CALL cursor_test(@rec);
SELECT @rec;
+------+
| @rec |
+------+
|    2 |
+------+
Oracle provides Statement-Level Read Consistency, which guarantees that data returned by a single query is committed and consistent as at the start of the query.
There are some details to do with transaction isolation levels, flashback query and user-defined functions that perform queries, but in general once a query has started (in procedural terms, when a cursor is opened) its results will be true as at that time, regardless of any data changes (committed or otherwise).
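A small sketch of that behavior in Oracle (the table name t is made up):

create table t (n number);
insert into t values (1);
commit;

set serveroutput on
declare
  cursor c is select n from t;
  v number;
begin
  open c;                      -- the result set is fixed as of this point
  insert into t values (2);    -- added after OPEN, even in the same session
  loop
    fetch c into v;
    exit when c%notfound;
    dbms_output.put_line(v);   -- prints only 1
  end loop;
  close c;
end;
/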

Is there any data dictionary object in Oracle to record the transaction details of triggers?

I have created trigger TEST_TRIG as below:
CREATE TRIGGER TEST_TRIG
AFTER INSERT ON TEST_TABLE
FOR EACH ROW
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  TEST_PROC();
END;
Procedure TEST_PROC code:
create or replace PROCEDURE TEST_PROC
AS
BEGIN
  EXECUTE IMMEDIATE 'truncate table TEST_FINAL';
  INSERT INTO TEST_FINAL select * from TEST_TABLE;
  commit;
END;
Initially, I disabled trigger TEST_TRIG, inserted a record into TEST_TABLE, and executed procedure TEST_PROC manually.
Output: I was able to fetch from TEST_FINAL the same record I had inserted into TEST_TABLE.
I flushed the records from both tables and enabled the trigger TEST_TRIG.
Now when I insert and commit a record in TEST_TABLE, I don't find the record in TEST_FINAL, and I don't receive any error message either.
So I want to know whether the trigger fired or not.
I don't think you have fully grasped the implications of AUTONOMOUS_TRANSACTION. Effectively it means the code bounded by the pragma runs in a separate, independent transaction. So, because of Oracle's read-consistent isolation level, the autonomous transaction cannot see any of the uncommitted data changes generated by the main transaction.
Thus, if TEST_TABLE is empty when you start, the trigger will insert no rows into TEST_FINAL, regardless of how many rows you're inserting right now.
So: don't flush both tables. Insert some rows into TEST_TABLE and commit. TEST_FINAL will still be empty. Insert some more rows into TEST_TABLE and, lo! the first set of rows will appear in TEST_FINAL.
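A sketch of that experiment, assuming both tables have a single numeric column:

insert into test_table values (1);
-- the trigger fired, but its autonomous transaction saw only committed
-- data (an empty TEST_TABLE), so it copied nothing
commit;
select count(*) from test_final;   -- 0

insert into test_table values (2);
-- this time the autonomous transaction sees the committed first row
commit;
select count(*) from test_final;   -- 1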
Obviously this is not the result you want. So you need to revisit your logic. It really doesn't make sense to truncate TEST_FINAL every time, and definitely not FOR EACH ROW. That is Teh Suck! as far as performance goes. Likewise, and for the same reason, it doesn't make sense to populate the target table with INSERT ... SELECT.
Discarding the TRUNCATE means you don't need the pragma, and everything becomes much simpler.
If you want to keep a history of the affected rows use something like this instead:
CREATE TRIGGER TEST_TRIG
AFTER INSERT ON TEST_TABLE
FOR EACH ROW
BEGIN
insert into test_final (col1, col2)
values (:new.col1, :new.col2);
END;
You'll need to change the exact code to fit your exact requirements.

Creating table before creating a cursor in Oracle

I have a PL/SQL procedure which creates a temporary table, extracts the data from this temporary table using cursors, processes the data, and then drops the temporary table. However, Oracle doesn't allow the use of a cursor if the table doesn't exist in the database.
Please help me handle this.
Your statement is not quite correct. You can use a cursor for pretty much arbitrary queries. See below:
create or replace procedure fooproc
IS
  type acursor is ref cursor;
  mycur acursor;
  mydate date;
BEGIN
  execute immediate 'create global temporary table footmp (bar date) on commit delete rows';
  execute immediate 'insert into footmp values (SYSDATE)';
  open mycur for 'select * from footmp';
  loop
    fetch mycur into mydate;
    exit when mycur%notfound;
    dbms_output.put_line(mydate);
  end loop;
  close mycur;
  execute immediate 'drop table footmp';
END fooproc;
/
(More details here - especially note that this short proc is not safe at all, since the table name is fixed and not session-dependent.)
It is (quite) a bit ugly, and I'm not suggesting you use that; rather, you should be thinking about whether you need that procedure-specific temporary table at all.
See this other article:
DO NOT dynamically create them [temp tables], DO NOT dynamically create them, please -- do NOT dynamically create them.
Couldn't you use a global temporary table? Do you actually need a temporary table at all? (That is, wouldn't using a cursor on the SELECT statement you'd use to fill that table work?)
Or, if you wish to avoid differences between global temporary tables and the "regular" permanent tables you may be used to (see the Oracle docs on temp table data availability, lifetime, etc.), simply create the table first (NOLOGGING). Assuming nobody else is using this table, your procedure could truncate it before/after your processing.
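A sketch of the global-temporary-table route, with a made-up table foo_gtt created once at install time rather than inside the procedure:

-- one-time DDL, not run inside the procedure
create global temporary table foo_gtt (bar date) on commit delete rows;

create or replace procedure fooproc2
IS
BEGIN
  insert into foo_gtt values (SYSDATE);
  -- the table exists at compile time, so plain static SQL works
  for r in (select bar from foo_gtt) loop
    dbms_output.put_line(r.bar);
  end loop;
  -- no DROP needed: rows vanish at commit, and each session sees only its own rows
END fooproc2;
/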

Moving XML over a DBLink

I am trying to move some data over a dblink and one of the columns is an XMLType column. The code looks like this:
begin
  delete from some_schema.some_remote_tab@src_2_trg_dblink;
  INSERT INTO some_schema.some_remote_tab@src_2_trg_dblink(id, code, gen_date, xml_data)
  SELECT id, code, gen_date, xml_data
  FROM local_table;
end;
Oracle returns these errors:
ORA-02055: distributed update operation failed; rollback required
ORA-22804: remote operations not permitted on object tables or user-defined type columns
Some research on ORA-22804 shows that I am probably getting this error because of the XMLType column, but I am not sure how to resolve this.
(Oracle 10g)
We get ORA-22804 because every instance of a Type in our Oracle database has an OID, which is unique within the database. We cannot transfer that OID to another database; this has caused me grief before when trying to import schemas which have User-Defined Types. I hadn't realised that it also affected XMLType, but it is an Object so it is not surprising.
The solution is icky: you will have to unload the XML into text on your local database and then convert it back into XML in the remote database.
I don't have a distributed DB set-up to test this right now, but if you're lucky it may work:
INSERT INTO some_schema.some_remote_tab@src_2_trg_dblink(id, code, gen_date, xml_data)
SELECT id, code, gen_date, xmltype ( xml_data.getClobVal() )
FROM local_table;
If the getClobVal() method doesn't work, you may need to use the SQL function XMLSERIALIZE() instead.
XMLSerialize(DOCUMENT xml_data AS CLOB)
If you're really unlucky you won't be able to do this in a single SQL statement, and you'll have to solve it using PL/SQL. To a certain extent this will depend on which version of the database you are using; the more recent the version, the more likely you'll be able to do it in SQL rather than PL/SQL.
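For reference, the XMLSERIALIZE() variant slots into the same statement like this (equally untested over a dblink):

INSERT INTO some_schema.some_remote_tab@src_2_trg_dblink(id, code, gen_date, xml_data)
SELECT id, code, gen_date,
       xmltype ( XMLSERIALIZE ( DOCUMENT xml_data AS CLOB ) )
FROM local_table;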
Try to do this the other way around: that is, log in to the remote db, create a dblink back to the local db, and do an insert like this:
INSERT INTO remote_schema.some_remote_tab(id, code, gen_date, xml_data)
SELECT id, code, gen_date, xml_data
FROM local_table@dblink_to_local_db;
Instead, perform a data PULL:
Create the data-pull procedure at the remote database B.
Create synonyms and provide grants to the dblink user.
Call the remote procedure from database A (the source), then perform a commit at database A.
(Meanwhile, wait for Oracle to find some way to perform a PUSH of XML over a dblink in the future.)
Create a procedure at the remote site, database B:
CREATE OR REPLACE PROCEDURE PR_REMOTE(OP_TOTAL_COUNT OUT NUMBER) IS
BEGIN
  INSERT /*+ DRIVING_SITE(src) */
  INTO REMOTE_TABLE TGT_B
    (XMLDATA_COL)
  SELECT SRC.XMLDATA FROM LOCAL_TABLE@TGT2SRC_DBLINK SRC;
  OP_TOTAL_COUNT := SQL%ROWCOUNT;
END;
Call the procedure from database A:
DECLARE
  V_COUNT NUMBER := 0;
BEGIN
  PR_REMOTE(V_COUNT);
  COMMIT;
END;
I was facing the same issue with a heterogeneous DB link to SQL Server.
I ended up using xmltype.getStringVal() to insert into a VARCHAR column on the SQL Server side, as the data was under 4000 characters.
There is also xmltype.getClobVal() for over 4000 characters, but I haven't tested it.
The "xml->text->xml" chain might be complicated, but could help in some cases (for example when inserting is not on option but updating only).
You can try with "n" peaces of varchar columns (in the destination table or in a differnet one, perheaps in different schema on the remote DB), where "n" is:
ceil(max(dbms_lob.getlength(MyXmlColumn)) / 4000)
Then you can transfer these fragments to the remote temporary fields:
insert into RemoteSchema.MyTable (Id, XmlPart1, XmlPart2, ...)
select 1 /*some Id*/,
       dbms_lob.substr(MyXmlColumn.getclobval(), 4000, 1),
       dbms_lob.substr(MyXmlColumn.getclobval(), 4000, 4001),
       ...
from LocalSchema.MyTable;
XmlType can be re-composed from fragments like this:
create or replace function concat_to_xml(p_id number)
  return xmltype
is
  xml_lob clob;
  xml     xmltype;
begin
  dbms_lob.createtemporary(xml_lob, true);
  for r in (select XmlPart1, XmlPart2, ... from RemoteSchema.MyTable where Id = p_id)
  loop
    if r.XmlPart1 is not null then
      dbms_lob.writeappend(xml_lob, length(r.XmlPart1), r.XmlPart1);
    end if;
    if r.XmlPart2 is not null then
      dbms_lob.writeappend(xml_lob, length(r.XmlPart2), r.XmlPart2);
    end if;
    ...
  end loop;
  xml := xmltype(xml_lob);
  dbms_lob.freetemporary(xml_lob);
  return xml;
end;
Finally, use the result to update any other table in the remote schema, for example:
update RemoteSchema.MyTable2 t2 set t2.MyXmlColumn = concat_to_xml(1 /*some Id*/);
