Global Temporary Table with "On commit delete rows" is not holding any data - oracle

I have a global temporary table (GTT) defined in a creation script using the option to delete rows on commit. I wanted different users to see their own data in the GTT and not the data from other people's sessions. This worked perfectly in our test environment.
But then I deployed the GTT to a client's database as part of a functionality update. The client called me up, upset and worried, because the GTT wasn't holding any data any more, and they didn't know why.
Specifically, if someone did:
insert into my_GTT (description) values ('Happy happy joy joy')
the database would respond:
1 row inserted.
However, if the same end user tried:
select * from my_GTT
The database would respond:
0 rows returned.
This issue is happening on the client site, and we can't reproduce it in house. What could be causing this behavior?

ON COMMIT DELETE ROWS = data in one transaction
ON COMMIT PRESERVE ROWS = data in one database session (one user with 2 sessions = 2 sessions = different content)
If a GTT is defined with ON COMMIT DELETE ROWS, it will be empty after any explicit or implicit commit (an implicit commit happens after any DDL command, including, for example, truncate table, alter index, add partition, modify column, exchange partition):
CREATE GLOBAL TEMPORARY TABLE GTT__TEST (A NUMBER) ON COMMIT DELETE ROWS;
INSERT INTO GTT__TEST VALUES (1);
SELECT * FROM GTT__TEST; -- 1 ROW;
COMMIT; -- commit = delete rows
SELECT * FROM GTT__TEST; -- 0 ROWS;
INSERT INTO GTT__TEST VALUES (1);
SELECT * FROM GTT__TEST; -- 1 ROW;
ALTER TABLE GTT__TEST MODIFY A NOT NULL; -- DDL = commit = delete rows
SELECT * FROM GTT__TEST; -- 0 ROWS
If a GTT is defined with ON COMMIT PRESERVE ROWS, it will hold data until the end of the session:
DROP TABLE GTT__TEST;
CREATE GLOBAL TEMPORARY TABLE GTT__TEST (A NUMBER) ON COMMIT PRESERVE ROWS;
INSERT INTO GTT__TEST VALUES (1);
SELECT * FROM GTT__TEST; -- 1 ROW
COMMIT;
SELECT * FROM GTT__TEST; -- 1 ROW

Do you have some setting turned on in your target environment where each statement is auto-committing?
(My experience is in SQL Server, where that is the default, but I understand that in Oracle the default is to keep the transaction open until an explicit commit. Mind, I haven't touched Oracle since ~2000.)
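If the client connects through SQL*Plus, this is easy to verify from their session (SHOW and SET AUTOCOMMIT are standard SQL*Plus commands); note also that Oracle's JDBC driver defaults to auto-commit on, so a Java client would show exactly this symptom unless it calls setAutoCommit(false):
SQL> SHOW AUTOCOMMIT
autocommit OFF
SQL> -- If it reports ON, every statement commits immediately, emptying
SQL> -- an ON COMMIT DELETE ROWS table right after each insert.
SQL> SET AUTOCOMMIT OFF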

I think Damien is right and there is an autocommit. The only other option I can come up with is some sort of connection pooling issue (i.e. the SELECT is being executed in a different session from the INSERT).
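One quick way to test the connection pooling theory (a diagnostic sketch using the standard USERENV context) is to tag each statement with its session id and check whether the INSERT and the SELECT report the same values:
select sys_context('userenv', 'sid')       as sid,
       sys_context('userenv', 'sessionid') as audsid
from   dual;
If the two statements come back with different SIDs, the pool is handing out different sessions, and the GTT will always look empty to the reader.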

Related

Oracle. Select data from one session but commit it to another. Is it possible?

I'm probably asking for the impossible, but I'll ask anyway.
Is there an easy way to select from one Oracle session and then insert/commit into another?
(I guess it could technically be done with PL/SQL procedure calls and PRAGMA AUTONOMOUS_TRANSACTION, but it would be a hassle.)
I have the following scenario:
I run some heavy calculations and update/insert into some tables.
After the process is completed I would like to 'back up' the results
(create table as select, or insert into another temp table) and then roll back my current session without losing the backups.
Here is desired/expected behavior:
Oracle 11g
insert into TableA (A,B,C) values (1,2,3);
select * from TableA;
Result: 1,2,3
create table [in another session] TempA
as select * from TableA [in this session];
rollback;
select * from TableA;
Result: no rows
select * from TempA;
Result: 1,2,3
Is this possible?
Is there an easy way to select from one Oracle session and then insert/commit into another?
Create a program in a third-party language (C++, Java, PHP, etc.) that opens two connections to the database; they will have different sessions regardless of whether you connect as different users or both as the same user. Read from one connection and write to the other.
You can insert your "heavy calculation" results into an Oracle temporary table:
CREATE GLOBAL TEMPORARY TABLE HeavyCalc (
id NUMBER,
description VARCHAR2(20)
)
ON COMMIT DELETE ROWS;
The trick is that when you commit the transaction, all rows are deleted from the temporary table.
So you first insert the data into the temp table, copy the result to your backup table, and then commit the transaction.
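Continuing with the HeavyCalc table above, the whole flow looks roughly like this (calc_backup is a hypothetical permanent table standing in for your real backup target):
create table calc_backup (
  id          NUMBER,
  description VARCHAR2(20)
);
insert into HeavyCalc values (1, 'result one');  -- heavy calculation output
insert into calc_backup select * from HeavyCalc; -- copy to the permanent backup
commit;                                          -- persists calc_backup, empties HeavyCalc
select * from HeavyCalc;   -- 0 rows
select * from calc_backup; -- 1 row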

The different 'on commit' settings for Oracle Global Temp Tables

Would anyone please advise on the mechanism behind the two different settings for Oracle GTTs?
1) on commit preserve rows
2) on commit delete rows
For now I know these 'facts':
a) the records inserted into these 2 types of GTT have different lifecycles.
b) the definition of both types of GTT remains until we drop the GTT.
However, what I would like to know is whether there is any difference between the 2 types of GTT in terms of fact b).
I was told that, for the 'preserve' type of GTT, the table's definition will not only remain but will accumulate with usage (i.e. if there are 10 sessions using the GTT, 10 copies of the table's definition will be created and won't disappear until we drop the GTT), and that if we don't drop the 'preserve' GTT on a regular basis, SQL statement performance will become slower and slower.
Could anyone please demystify this?
[Update 2018-08-21]
Thanks all for answering the question. Please allow me to refine it: it is not the table definition of the GTT that is being duplicated, but rather the tablespace storage allocated by every session using the same GTT, which won't be released at the end of the session but only by an explicit drop of the GTT. Would that be the truth?
You can check the temporary segments associated with the GTTs in v$tempseg_usage:
create global temporary table demo_gtt_preserve (id int) on commit preserve rows;
create global temporary table demo_gtt_delete (id int) on commit delete rows;
insert into demo_gtt_preserve values (1);
insert into demo_gtt_delete values (1);
select s.sql_text, tu.tablespace, tu.contents, tu.segtype, tu.segfile#, tu.segblk#
from v$tempseg_usage tu
join v$sql s on s.sql_id = tu.sql_id_tempseg
where tu.username = user
and tu.segtype = 'DATA'
and tu.session_num = dbms_debug_jdwp.current_session_serial;
Result:
SQL_TEXT                                  TABLESPACE CONTENTS  SEGTYPE SEGFILE#  SEGBLK#
----------------------------------------  ---------- --------- ------- --------  -------
insert into demo_gtt_delete values (1)    TEMP       TEMPORARY DATA         401   438528
insert into demo_gtt_preserve values (1)  TEMP       TEMPORARY DATA         401   438400
Now if you commit and rerun the query, you only get one row:
SQL_TEXT                                  TABLESPACE CONTENTS  SEGTYPE SEGFILE#  SEGBLK#
----------------------------------------  ---------- --------- ------- --------  -------
insert into demo_gtt_preserve values (1)  TEMP       TEMPORARY DATA         401   438400
(Somewhat unhelpfully, v$tempseg_usage identifies the session by session_addr and session_num, which correspond to saddr and serial# in v$session, neither of which are exposed via sys_context. You could extend the query above by joining to v$session and filtering on sid = sys_context('userenv','sid') or audsid = sys_context('userenv','sessionid') if you want to limit it to your own session.)
The only way to clear the remaining entry is to disconnect the session, or drop or truncate the table.
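For example, truncating the preserve-rows table (a small follow-on to the demo above) releases this session's segment without disconnecting:
truncate table demo_gtt_preserve;
-- rerunning the v$tempseg_usage query above now returns no rows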
Regarding the performance question, note the way this works: when your session uses a GTT, a completely new temporary segment is created just for that session. If other sessions do the same thing, they each get their own separate temporary segment. As those sessions commit or disconnect, the corresponding temporary segments are dropped. Nothing is shared between sessions, because each session has its own separate instance of the temporary table. Therefore, the rumour that "if we don't drop the 'preserve' GTT on a regular basis, SQL statement performance will become slower and slower" doesn't make sense.

Dealing with Oracle ARCHIVELOG during huge updates

We need to update a metadata column in every row of every table in the database. Say every table has a column MY_META, and the aim is roughly
update ANOTHER_TABLE set MY_META = 'HELLO'
for each table.
The estimated total row count is 2e9 (two billion rows).
Suppose ARCHIVELOG mode is on, so those updates will consume a lot of extra space.
The update process is planned to run in the production DB simultaneously with business transactions, which must not be lost.
The simplest thing to start with is temporarily installing plenty of extra storage for the ARCHIVELOG files.
Is there an elegant way to achieve the same goal programmatically or by tuning "secret options"?
From your clarifications, I understand that:
for each table, you have a value of MY_META that you want to assign to each row as an initial value.
you don't mind dropping MY_META and recreating it.
In this case, I would suggest that you drop MY_META and recreate it with a default value. If you do this, Oracle will not physically update every record with your initial value, and the amount of redo generated will be minimal (from 11g onwards, adding a NOT NULL column with a default is a metadata-only operation).
The downside is that the ALTER TABLE commands, though very fast, will briefly lock each table as you process it. It will also invalidate packages that depend on the MY_META column.
Here is a walkthrough of the approach, with comments:
-- Set up your current state
DROP TABLE my_big_table;
CREATE TABLE my_big_table (a number, my_meta varchar2(30) not null);
INSERT INTO my_big_table (a, my_meta) SELECT rownum, 'GARBAGE' FROM dual CONNECT BY rownum <= 100000;
-- Drop the my_meta column and replace it with one having a default value
ALTER TABLE my_big_table DROP COLUMN my_meta;
ALTER TABLE my_big_table ADD my_meta varchar2(30) default 'INITIAL_VAL_FOR_TABLE' not null;
-- Look at my table -- you will see every row has your initial value
SELECT * FROM my_big_table;
-- Update some data
update my_big_table set my_meta = 'UPDATED_VALUE' WHERE a <= 15;
-- Look at my table -- you will see every row has your initial value except the first 15.
SELECT * FROM my_big_table order by a;
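If the brief lock worries you on a busy system, one mitigation (assuming 11g or later, where the DDL_LOCK_TIMEOUT parameter is available) is to let the DDL wait for the lock instead of failing immediately with ORA-00054:
ALTER SESSION SET ddl_lock_timeout = 30;  -- wait up to 30 seconds for each lock
ALTER TABLE my_big_table DROP COLUMN my_meta;
ALTER TABLE my_big_table ADD my_meta varchar2(30) default 'INITIAL_VAL_FOR_TABLE' not null;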

"table definition changed" despite restore point creation after table create/alter

FLASHBACK TABLE to a restore point fails when that restore point was created immediately after a table change. The code below only works if there is a sleep between certain steps.
SQL> DROP TABLE TEST_TABLE;
Table dropped.
SQL> CREATE TABLE TEST_TABLE AS SELECT 1 A FROM DUAL;
Table created.
SQL> ALTER TABLE TEST_TABLE ENABLE ROW MOVEMENT;
Table altered.
SQL> --Sleep required here to prevent error on flashback.
SQL> DROP RESTORE POINT TEST_RESTORE_POINT;
Restore point dropped.
SQL> CREATE RESTORE POINT TEST_RESTORE_POINT;
Restore point created.
SQL> FLASHBACK TABLE TEST_TABLE TO RESTORE POINT TEST_RESTORE_POINT;
FLASHBACK TABLE TEST_TABLE TO RESTORE POINT TEST_RESTORE_POINT
*
ERROR at line 1:
ORA-01466: unable to read data - table definition has changed
Why is a delay required and is there a way to eliminate it?
This oddity is likely caused by the SMON process, which is responsible for maintaining the mapping between SCNs and timestamps that flashback relies upon. There is a mapping table, SYS.SMON_SCN_TIME, into which SMON inserts a new record every 5 minutes.
Internally, FLASHBACK TABLE executes a command like INSERT /*+ APPEND */ INTO SYS_TEMP_FBT SELECT /*+ FBTSCAN FULL(S) PARALLEL(S, DEFAULT) */ :1, :2, :3, rowid, SYS_FBT_INSDEL FROM "<schema>"."TEST_TABLE" AS OF SCN :4 S (notice that a table SYS_TEMP_FBT is created in the same schema), which uses this mapping.
Up to Oracle 10.2 you needed to wait up to a whole 5 minutes for a flashback query on a new/altered object to succeed. In 11.1 the TIM_SCN_MAP column was introduced to make the mapping more fine-grained: a maximum of 100 mappings is stored in one value, which gives roughly 3-second precision in the timestamp-to-SCN mapping.
I tried many things, but I don't think you can do anything about it other than wait around 3 seconds to avoid the error, because the mapping is maintained asynchronously by a background process without any user control.
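So the pragmatic workaround is simply to pause between the DDL and the restore point creation; a sketch (DBMS_LOCK.SLEEP needs EXECUTE on DBMS_LOCK; from 18c onwards DBMS_SESSION.SLEEP works without extra grants):
SQL> ALTER TABLE TEST_TABLE ENABLE ROW MOVEMENT;
SQL> EXEC DBMS_LOCK.SLEEP(3)
SQL> CREATE RESTORE POINT TEST_RESTORE_POINT;
SQL> FLASHBACK TABLE TEST_TABLE TO RESTORE POINT TEST_RESTORE_POINT;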

Savepoints in Oracle global temporary tables

I read that savepoints in Oracle global temporary tables delete all the data, but when I tested on Oracle 11g they worked like heap tables. Can anybody explain?
insert into table_1 values('one');
insert into table_1 values('two');
savepoint f1;
insert into table_1 values('three');
insert into table_1 values('four');
rollback to f1;
-- the table now contains 2 records, just like a heap table, but I read that
-- savepoints in GTTs truncate all the data
Where did you read this? I suspect not in the Oracle SQL Reference. So the explanation is simple: the author of that assertion hadn't tested the behaviour of global temporary tables. Either that or you were reading a description of some other SQL implementation, such as DerbyDB.
For the sake of completeness, let's rule out the role of transaction or session scope. Here are two global temporary tables:
create global temporary table gtt1
( col1 varchar2(30) )
ON COMMIT PRESERVE ROWS
/
create global temporary table gtt2
( col1 varchar2(30) )
ON COMMIT DELETE ROWS
/
Let's run your experiment for the one with session scope:
SQL> insert into gtt1 values('one');
1 row created.
SQL> insert into gtt1 values('two');
1 row created.
SQL> savepoint f1;
Savepoint created.
SQL> insert into gtt1 values('three');
1 row created.
SQL> insert into gtt1 values('four');
1 row created.
SQL> rollback to f1;
Rollback complete.
SQL> select * from gtt1;
COL1
------------------------------
one
two
SQL>
Same result for the table with transaction scope:
SQL> insert into gtt2 values('five');
1 row created.
SQL> insert into gtt2 values('six');
1 row created.
SQL> savepoint f2;
Savepoint created.
SQL> insert into gtt2 values('seven');
1 row created.
SQL> insert into gtt2 values('eight');
1 row created.
SQL> rollback to f2;
Rollback complete.
SQL> select * from gtt2;
COL1
------------------------------
five
six
SQL>
Actually this is not surprising. The official Oracle documentation states:
"The temporary table definition persists in the same way as the definitions of regular tables"
Basically they are heap tables. The differences are:
the scope (visibility) of the data
the tablespace used to persist data (global temporary tables write to a temporary tablespace).
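You can see that persistence in the data dictionary too; USER_TABLES records the scope of each GTT (TEMPORARY and DURATION are standard dictionary columns):
select table_name, temporary, duration
from   user_tables
where  table_name in ('GTT1', 'GTT2');
-- GTT1  Y  SYS$SESSION      (on commit preserve rows)
-- GTT2  Y  SYS$TRANSACTION  (on commit delete rows)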
I think you misunderstand: if you roll back to a savepoint, Oracle should undo all the work done after the savepoint (while still keeping any uncommitted work done before it).
For a temporary table, Oracle lazily allocates storage (a temporary segment for your session) when you put data in, and when the data is done with (either at the end of the session, or at the end of the transaction, depending on the type) it can simply deallocate the storage rather than deleting the rows individually, rather like what happens when you TRUNCATE a normal table.
I was interested to find out what happened if you had a savepoint before any data was put in, and rolled back to that savepoint - would Oracle deallocate the storage or would it keep the storage and delete the rows from within it?
It turns out the former - it behaves like a truncate.
SAVEPOINT f0;
SELECT * FROM v$tempseg_usage; -- Should show nothing for your session
insert into table_1 values('one');
insert into table_1 values('two');
SELECT * FROM v$tempseg_usage; -- Should show a DATA row for your session
savepoint f1;
insert into table_1 values('three');
insert into table_1 values('four');
rollback to f1; -- Undo three and four but preserve one and two
SELECT * FROM v$tempseg_usage; -- Still shows a DATA row for your session
rollback to f0; -- Undo all the inserts
SELECT * FROM v$tempseg_usage; -- row for your session has gone
The reason this matters is that after a normal delete, rather than a truncate, any full scan of the table still has to sift through all the data blocks below the high-water mark to see whether they contain any data. DML against an "empty" table can therefore incur a lot of I/O if the table held a lot of data at some point before!
I am trying to speed up some code that is doing exactly that: it is shoving some stuff into a temporary table as a scratchpad, partly so it can join to a permanent table, and returning a result to its caller. The temporary table is only for the benefit of this routine, so it's safe to clear it down at the end of the routine; but the routine might be called many times within a parent transaction, so I can't truncate (TRUNCATE is DDL and so commits the transaction), yet I can't skip the clear-down either, or invocations within the same transaction will pick up one another's rows. Clearing down with a DELETE causes quite a bit of overhead, especially as there is no index on the table, so selects against it always full scan.
The option I am exploring is to have a SAVEPOINT at the start of the routine, do my temporary work, and then roll back to the savepoint just before it returns the result. Another option might be to put the routine inside an autonomous transaction, but it would mean porting C code to a PL/SQL stored procedure, and wouldn't work anyway if the temporary table needs to be joined to uncommitted data inserted by the caller.
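A minimal sketch of that savepoint pattern in PL/SQL (scratch_gtt is a hypothetical stand-in for the routine's scratchpad table):
create global temporary table scratch_gtt (id number)
on commit preserve rows;

declare
  v_count pls_integer;
begin
  savepoint scratch_start;                           -- mark the empty state
  insert into scratch_gtt (id)
    select level from dual connect by level <= 100;  -- scratch work
  select count(*) into v_count from scratch_gtt;     -- use the scratchpad
  rollback to scratch_start;                         -- clear-down: behaves like a truncate
  dbms_output.put_line('rows used: ' || v_count);
end;
/
The caller's uncommitted work done before the savepoint survives, which is exactly why this beats TRUNCATE or an autonomous transaction here.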
Note that I did my research in 12c - there have been some improvements to temporary tables in this release (see https://oracle-base.com/articles/misc/temporary-tables) but I don't think that affects the behaviour wrt savepoints.
