We have a stored procedure in Oracle that uses global temporary tables. In most of our other stored procedures, the first thing we do is delete data from the global temporary tables. However, in a few of the stored procedures we do not have the deletes.
Are there any other options besides adding the delete statements? Can something be done on the server side to forcefully delete data from those temporary tables when that SP is run?
The GTTs are defined with ON COMMIT PRESERVE ROWS.
I think your title is misleading: the problem is not "getting data from a different session", it is re-using the same session. Terminating a session always flushes a temporary table:
SQL> conn apc
Enter password:
Connected.
SQL> create global temporary table tmp_23 (username varchar2(30))
2 on commit preserve rows
3 /
Table created.
SQL> insert into tmp_23 values (user)
2 /
1 row created.
SQL> commit
2 /
Commit complete.
SQL> select * from tmp_23
2 /
USERNAME
------------------------------
APC
SQL> conn apc
Enter password:
Connected.
SQL> select * from tmp_23
2 /
no rows selected
SQL>
From within a session there is no way to flush a temporary table defined with PRESERVE ROWS except by deletion or truncation. There is no way to annotate a stored procedure in the manner you suggest. So I'm afraid that if you are experiencing the problem as you describe it, you will have to bite the bullet and add the DELETE (or TRUNCATE) calls to your procedures. Or define the tables with ON COMMIT DELETE ROWS, but that probably won't suit your processing.
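A minimal sketch of adding such a call at the top of a procedure, assuming a GTT called tmp_report_data and a procedure called load_report (both names are illustrative):

CREATE OR REPLACE PROCEDURE load_report IS
BEGIN
    -- TRUNCATE is DDL, so inside PL/SQL it needs dynamic SQL; note that it also
    -- issues an implicit commit. A plain DELETE FROM tmp_report_data avoids that,
    -- at the cost of generating undo.
    EXECUTE IMMEDIATE 'TRUNCATE TABLE tmp_report_data';

    -- ... the rest of the procedure populates and reads tmp_report_data ...
    NULL;
END load_report;
/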
Incidentally, it seems like you are using temporary tables quite heavily. This is unusual in Oracle systems, because temporary tables are relatively expensive objects (all those writes to disk) and there is normally a more performant way of approaching things: e.g. caching data in PL/SQL collections or just using SQL. It is common for developers coming from a non-Oracle background - especially SQL Server - to overuse temporary tables because they are used to that way of working.
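For example, a hedged sketch of caching a working set in a PL/SQL collection instead of staging it in a GTT (the query and names are purely illustrative):

DECLARE
    TYPE t_usernames IS TABLE OF all_users.username%TYPE;
    l_usernames t_usernames;
BEGIN
    -- pull the working set into memory once ...
    SELECT username BULK COLLECT INTO l_usernames FROM all_users;

    -- ... then iterate over it in PL/SQL instead of re-reading a temporary table
    FOR i IN 1 .. l_usernames.COUNT LOOP
        DBMS_OUTPUT.put_line(l_usernames(i));
    END LOOP;
END;
/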
Related
Probably I ask for the impossible, but I'll ask anyway.
Is there an easy way to select from one Oracle session and then insert/commit into another?
(I guess, technically it could be done with PL/SQL procedure calls and PRAGMA AUTONOMOUS_TRANSACTION, but it would be a hassle)
I have the following scenario:
I run some heavy calculations and update / insert into some tables.
After the process is completed I would like to 'backup' the results
(create table as select or insert into another temp table) and then roll back my current session without losing the backups.
Here is desired/expected behavior:
Oracle 11g
insert into TableA (A,B,C) values (1,2,3);
select * from TableA
Result: 1,2,3
create table [in another session] TempA
as select * from TableA [in this session];
rollback;
select * from TableA;
Result: no rows
select * from TempA;
Result 1,2,3
Is this possible?
Is there an easy way to select from one Oracle session and then insert/commit into another?
Create a program in a third-party language (C++, Java, PHP, etc.) that opens two connections to the database; they will have different sessions regardless of whether you connect as different users or as the same user twice. Read from one connection and write to the other connection.
You can insert your "heavy calculation" results into an Oracle temporary table.
CREATE GLOBAL TEMPORARY TABLE HeavyCalc (
id NUMBER,
description VARCHAR2(20)
)
ON COMMIT DELETE ROWS;
The trick is that when you commit the transaction, all rows are deleted from the temporary table.
So you first insert the data into the temp table, copy the result to your backup table, and then commit the transaction.
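A minimal sketch of that flow, assuming a permanent backup table named backup_results and a source called some_heavy_query (both illustrative):

-- stage the heavy results in the GTT
INSERT INTO HeavyCalc (id, description)
    SELECT id, description FROM some_heavy_query;

-- copy them to a permanent backup table
INSERT INTO backup_results (id, description)
    SELECT id, description FROM HeavyCalc;

-- the commit persists backup_results and empties HeavyCalc (ON COMMIT DELETE ROWS)
COMMIT;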
Forgive me for asking a silly question.
Would a temporary table be dropped automatically in Oracle (12c)?
Yesterday I have executed the following DDL to create a temporary table:
Create global temporary table my_1st_t_table on commit preserve rows as
select
*
from
other_table
where
selected_col = 'T';
After that I have executed following statements:
commit;
select count(*) from my_1st_t_table;
Yesterday, the last select statement returned a count of 2000.
After that I disconnected my VPN and also switched off my client laptop.
Today I reran the last select statement after restarting my computer and reconnecting to the VPN.
It returned a count of 0. So the table was still there, but all the rows were deleted after my session ended.
However, may I ask when will my temporary table be dropped?
Thanks in advance!
A temporary table in Oracle is quite different from a temp table in other database platforms such as MS SQL Server, and the "temporary" nomenclature invariably leads to confusion.
In Oracle, a temporary table is just like any other table, and does not get "dropped". However, the rows in the table only exist within the context of the session that inserted them. Once the session is terminated, assuming the session did not delete the rows itself, Oracle deletes the rows in the table for that session.
So, bottom line: the data is temporary; the table structure is permanent until the table is explicitly dropped.
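A small illustration of that behaviour, assuming a throwaway GTT named demo_gtt:

CREATE GLOBAL TEMPORARY TABLE demo_gtt (id NUMBER) ON COMMIT PRESERVE ROWS;

INSERT INTO demo_gtt VALUES (1);
COMMIT;

-- reconnect, then in the new session:
SELECT COUNT(*) FROM demo_gtt;      -- 0: the rows were bound to the old session

SELECT table_name, temporary
FROM   user_tables
WHERE  table_name = 'DEMO_GTT';     -- the definition is still there (TEMPORARY = 'Y')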
I'm using a procedure which creates a table using a heavy query (running ~1 hr). Query is something like 'select * from table', and the columns in the table can change.
Oftentimes it turns out that there is no free space in the schema to create the table, so I get an exception, the time is spent in vain, and I need to do the same calculations all over again.
The error I get:
ORA-01536: space quota exceeded for tablespace
ORA-06512: at "UPDATE_REPORT", line 37
What I would want to do:
- Store query's results in temporary segment in a cursor;
- Try to create table using cursor. In case of exception (not enough space), drop a special space-holding table to free table space in schema;
- Try to create the table again from cursor.
I tried to solve this using dynamic SQL, but it leads to overcomplications while the problem seems to have a simple solution. And the main problem I faced is that there is no evident way to create a table using cursor.
Is there any simple solution I somehow missed? Maybe cursors are the wrong way to go about this?
Two things I can think of:
let the database do the dirty job
a.k.a. enjoy your DBA job, simply by looking at Oracle administering itself
how? let the tablespace autoextend
grant unlimited quota on that tablespace to the user
Here's how (connected as a privileged user):
SQL> select file_name, tablespace_name From dba_data_files;
FILE_NAME TABLESPACE_NAME
------------------------------------------------ -----------------------------
C:\ORACLEXE\APP\ORACLE\ORADATA\XE\USERS.DBF USERS
C:\ORACLEXE\APP\ORACLE\ORADATA\XE\SYSAUX.DBF SYSAUX
C:\ORACLEXE\APP\ORACLE\ORADATA\XE\UNDOTBS1.DBF UNDOTBS1
C:\ORACLEXE\APP\ORACLE\ORADATA\XE\SYSTEM.DBF SYSTEM
SQL> alter database datafile 'C:\ORACLEXE\APP\ORACLE\ORADATA\XE\USERS.DBF'
2 autoextend on
3 maxsize unlimited;
Database altered.
SQL> alter user scott quota unlimited on users;
User altered.
SQL>
What I would want to do: - Store query's results in temporary segment in a cursor; - Try to create table using cursor. In case of exception (not enough space), drop a special space-holding table to free table space in schema; - Try to create the table again from cursor.
Don't go through the trouble. Just tell Oracle not to die because of space issues.
You can make your session "resumable" so that, rather than giving you an error when you run out of space, Oracle will suspend your session until the problem is corrected (and then continue on automatically).
Assuming you have all the permissions you need (notably, GRANT RESUMABLE TO yourschema), you enable it like this:
alter session enable resumable timeout 1800 name 'your process name, can be anything';
The 1800 figure is in seconds, giving your DBAs 30 minutes to fix the problem before your session times out. The name you supply will show up in V$RESUMABLE for use in queries and automated alerts.
Your DBAs can monitor V$RESUMABLE and/or you can create a schema-level database trigger on the AFTER SUSPEND event to fire off an e-mail to them when they need to jump in.
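A hedged sketch of such a trigger; the body is a placeholder, since the actual alerting mechanism (UTL_MAIL, an alert table, etc.) depends on your environment:

CREATE OR REPLACE TRIGGER notify_on_suspend
AFTER SUSPEND ON SCHEMA
BEGIN
    -- placeholder: send an e-mail, write to an alert table, etc.
    NULL;
END;
/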
I agree with #OldProgrammer - this is a design problem. Dropping and re-creating tables is an anti-pattern.
It's not clear what exact problem you're trying to solve, but a more sensible approach might be:
Create the table once. Use the INITIAL extent storage parameter to ensure that the table has all the space it needs (you may have to run some queries to establish what that figure should be); see the sketch after this list.
Check with your DBA that the tablespace is set to AUTOEXTEND, in case there are rogue result sets which exceed your calculations. Maybe negotiate more tablespace quota for the schema owner.
Populate the table once.
Before subsequent runs, TRUNCATE the table using the REUSE STORAGE clause to hold on to the assigned space.
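A hedged sketch of those two pieces (table name, columns and sizes are illustrative):

-- create once, pre-sizing the first extent to cover the expected volume
CREATE TABLE report_results (
    id       NUMBER,
    payload  VARCHAR2(4000)
) STORAGE (INITIAL 500M);

-- before each subsequent run, empty it while keeping the allocated space
TRUNCATE TABLE report_results REUSE STORAGE;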
Would anyone please advise on the mechanism behind the two different settings for an Oracle GTT?
1) on commit preserve rows
2) on commit delete rows
For now I know these 'facts':
a) the records inserted into these 2 types of GTT have different lifecycles.
b) the definition of both types of GTT remains until we drop the GTT.
However, what I would like to know is whether there is any difference between the 2 types of GTT in terms of fact b).
I was told that, for the 'preserve' type of GTT, the table's definition will not only remain but will accumulate with usage (i.e. if there are 10 sessions using the GTT, 10 copies of the table's definition will be created and won't disappear until we drop the GTT), and that if we don't drop the 'preserve' GTT on a regular basis, SQL statement performance will become slower and slower.
Could anyone please demystify this?
[Edit 2018-08-21]
Thanks all for answering the question. Please allow me to refine it: the claim is not that the table definition of the GTT is duplicated, but that the temporary tablespace allocated by every session using the same GTT won't be released at the end of the session, only by a dedicated drop of the GTT. Would that be true?
You can check the temporary segments associated with the GTTs in v$tempseg_usage:
create global temporary table demo_gtt_preserve (id int) on commit preserve rows;
create global temporary table demo_gtt_delete (id int) on commit delete rows;
insert into demo_gtt_preserve values (1);
insert into demo_gtt_delete values (1);
select s.sql_text, tu.tablespace, tu.contents, tu.segtype, tu.segfile#, tu.segblk#
from v$tempseg_usage tu
join v$sql s on s.sql_id = tu.sql_id_tempseg
where tu.username = user
and tu.segtype = 'DATA'
and tu.session_num = dbms_debug_jdwp.current_session_serial;
Result:
SQL_TEXT TABLESPACE CONTENTS SEGTYP SEGFILE# SEGBLK#
---------------------------------------- ---------- --------- ------ -------- ----------
insert into demo_gtt_delete values (1) TEMP TEMPORARY DATA 401 438528
insert into demo_gtt_preserve values (1) TEMP TEMPORARY DATA 401 438400
Now if you commit and rerun the query, you only get one row:
SQL_TEXT TABLESPACE CONTENTS SEGTYPE SEGFILE# SEGBLK#
---------------------------------------- ---------- --------- ------- -------- ---------
insert into demo_gtt_preserve values (1) TEMP TEMPORARY DATA 401 438400
(Somewhat unhelpfully, v$tempseg_usage identifies the session by session_addr and session_num, which correspond to saddr and serial# in v$session, neither of which are exposed via sys_context. You could extend the query above by joining to v$session and filtering on sid = sys_context('userenv','sid') or audsid = sys_context('userenv','sessionid') if you want to limit it to your own session.)
The only way to clear the remaining entry is to disconnect the session, or drop or truncate the table.
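For example, continuing the demo above, a truncate releases the segment for the current session without disconnecting (the effect should be visible by rerunning the v$tempseg_usage query):

truncate table demo_gtt_preserve;

-- the v$tempseg_usage query above should now return no DATA rows for this session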
Regarding the performance question, note the way this works: when your session uses a GTT, a completely new temporary segment is created just for you. If other sessions do the same thing, they each get their own separate temporary segments. As those sessions commit or disconnect, the corresponding temporary segments are dropped. Nothing is shared between sessions, because each session has its own separate instance of the temporary table. Therefore, the rumour that "if we don't drop the 'preserve' GTT on a regular basis, SQL statement performance will become slower and slower" doesn't make sense.
I have read that one should not analyze a temp table, as it screws up the table statistics for others. What about an index? If I put an index on the table for the duration of my program, can other programs using the table be affected by that index?
Does an index affect my process, and all other processes using the table?
Or does it affect my process alone?
None of the responses have been authoritative, so I am offering said bribe.
Does an index affect my process and all other processes using the table? Or does it affect my process alone?
I'm assuming we are talking of GLOBAL TEMPORARY tables.
Think of a temporary table as multiple tables that are created and dropped by each process on the fly, from a template stored in the system dictionary.
In Oracle, the DDL of a temporary table affects all processes, while the data contained in the table affects only the one process that uses it.
Data in a temporary table is visible only inside the session scope. It uses the TEMPORARY TABLESPACE to store both data and any indexes.
The DDL of a temporary table (i.e. its layout, including column names and indexes) is visible to everybody with sufficient privileges.
This means that existence of the index will affect your process as well as other processes using the table in sense that any process that modifies data in the temporary table will also have to modify the index.
Data contained in the table (and in the index too), on the contrary, will affect only the process that created them, and will not even be visible to other processes.
If you want one process to use the index and another one not to use it, do the following (see the sketch after this list):
Create two temporary tables with the same column layout
Create an index on one of them
Use the indexed or non-indexed table depending on the process
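A minimal sketch of that approach (column layout and names are illustrative):

CREATE GLOBAL TEMPORARY TABLE work_indexed   (id NUMBER, val VARCHAR2(30)) ON COMMIT PRESERVE ROWS;
CREATE GLOBAL TEMPORARY TABLE work_unindexed (id NUMBER, val VARCHAR2(30)) ON COMMIT PRESERVE ROWS;

-- the index lives only on the first copy
CREATE INDEX work_indexed_ix ON work_indexed (id);

-- processes that benefit from indexed lookups use work_indexed;
-- bulk-load style processes use work_unindexed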
I assume you're referring to true Oracle temporary tables and not just a regular table created temporarily and then dropped. Yes, it is safe to create indexes on temp tables, and they will be used according to the same rules as regular tables and indexes.
[Edit]
I see you've refined your question, and here's a somewhat refined answer:
From:
Oracle® Database Administrator's Guide
10g Release 2 (10.2)
Part Number B14231-02
"Indexes can be created on temporary tables. They are also temporary and the data in the index has the same session or transaction scope as the data in the underlying table."
If you need the index for efficient processing during the scope of the transaction then I would imagine you'll have to explicitly hint it in the query because the statistics will show no rows for the table.
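For instance, a hedged sketch of hinting the index explicitly (table and index names are illustrative):

SELECT /*+ INDEX(t my_gtt_ix) */ *
FROM   my_gtt t
WHERE  t.id = :some_id;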
You're asking about two different things, indexes and statistics.
For indexes, yes, you can create indexes on the temp tables, they will be maintained as per usual.
For statistics, I recommend that you explicitly set the stats of the table to represent the average size of the table when queried. If you just let Oracle gather stats by itself, the stats-gathering process isn't going to find anything in the tables (since, by definition, the data in the table is local to your transaction), so it will produce inaccurate results.
e.g. you can do:
exec dbms_stats.set_table_stats(user, 'my_temp_table', numrows=>10, numblks=>4)
Another tip is that if the size of the temporary table varies greatly, and within your transaction, you know how many rows are in the temp table, you can help out the optimizer by giving it that information. I find this helps out a lot if you are joining from the temp table to regular tables.
e.g., if you know the temp table has about 100 rows in it, you can:
SELECT /*+ CARDINALITY(my_temp_table 100) */ * FROM my_temp_table
Well, I tried it out and the index was visible and used by the second session. Creating a new global temporary table for your data would be safer if you really need an index.
You are also unable to create an index while any other session is accessing the table.
Here's the test case I ran:
--first session
create global temporary table index_test (val number(15))
on commit preserve rows;
create unique index idx_val on index_test(val);
--second session
insert into index_test select rownum from all_tables;
select * from index_test where val=1;
You can also use the dynamic sampling hint (10g):
select /*+ DYNAMIC_SAMPLING (3) */ val
from index_test
where val = 1;
See Ask Tom
You cannot create an index on a temporary table while it is in use by another session, so the answer is: no, it cannot affect any other process, because it is not possible.
An existing index affects only your current session, because for any other session the temporary table appears empty, so it cannot access any index values.
Session 1:
SQL> create global temporary table index_test (val number(15)) on commit preserve rows;
Table created.
SQL> insert into index_test values (1);
1 row created.
SQL> commit;
Commit complete.
SQL>
Session 2 (while session 1 is still connected):
SQL> create unique index idx_val on index_test(val);
create unique index idx_val on index_test(val)
*
ERROR at line 1:
ORA-14452: attempt to create, alter or drop an index on temporary table already in use
SQL>
Back to session 1:
SQL> delete from index_test;
1 row deleted.
SQL> commit;
Commit complete.
SQL>
Session 2:
SQL> create unique index idx_val on index_test(val);
create unique index idx_val on index_test(val)
*
ERROR at line 1:
ORA-14452: attempt to create, alter or drop an index on temporary table already in use
SQL>
Still failing; you first have to disconnect session 1, or the table has to be truncated.
Session 1:
SQL> truncate table index_test;
Table truncated.
SQL>
Now you can create the index in Session 2:
SQL> create unique index idx_val on index_test(val);
Index created.
SQL>
This index of course will be used by any session.