"table definition changed" despite restore point creation after table create/alter - oracle

FLASHBACK TABLE to a restore point fails when that restore point was created immediately after a change to the table. The code below only works if there is a sleep between certain steps.
SQL> DROP TABLE TEST_TABLE;
Table dropped.
SQL> CREATE TABLE TEST_TABLE AS SELECT 1 A FROM DUAL;
Table created.
SQL> ALTER TABLE TEST_TABLE ENABLE ROW MOVEMENT;
Table altered.
SQL> --Sleep required here to prevent error on flashback.
SQL> DROP RESTORE POINT TEST_RESTORE_POINT;
Restore point dropped.
SQL> CREATE RESTORE POINT TEST_RESTORE_POINT;
Restore point created.
SQL> FLASHBACK TABLE TEST_TABLE TO RESTORE POINT TEST_RESTORE_POINT;
FLASHBACK TABLE TEST_TABLE TO RESTORE POINT TEST_RESTORE_POINT
*
ERROR at line 1:
ORA-01466: unable to read data - table definition has changed
Why is a delay required and is there a way to eliminate it?

This oddity is likely caused by the SMON background process, which maintains the mapping between SCNs and timestamps that flashback operations rely on. The mapping table is SYS.SMON_SCN_TIME, into which SMON inserts a new record every 5 minutes.
Internally, FLASHBACK TABLE executes a command like INSERT /*+ APPEND */ INTO SYS_TEMP_FBT SELECT /*+ FBTSCAN FULL(S) PARALLEL(S, DEFAULT) */ :1, :2, :3, rowid, SYS_FBT_INSDEL FROM "<schema>"."TEST_TABLE" AS OF SCN :4 S (notice that the table SYS_TEMP_FBT is created in the same schema), and that statement uses this mapping.
Up to Oracle 10.2 you could have to wait up to the full 5 minutes for a flashback query on a new or altered object to succeed. In 11.1 the TIM_SCN_MAP column was introduced to make the mapping more fine-grained: up to 100 mappings are stored in one value, which gives roughly 3-second precision in the timestamp-to-SCN mapping.
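For illustration, a hedged query against that mapping table (it is SYS-owned, so this assumes you can query SYS objects; column names are as in the 11g dictionary):
-- Peek at the SCN/timestamp mapping that SMON maintains.
SELECT time_dp, scn, num_mappings
FROM   sys.smon_scn_time
ORDER  BY time_dp DESC;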
I tried many things, but I don't think you can do anything about it other than wait around 3 seconds before the flashback, because the mapping is maintained asynchronously by a background process with no user control over it.
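A minimal workaround sketch along those lines (drops of any pre-existing objects omitted), assuming you have EXECUTE on DBMS_LOCK; on 18c and later DBMS_SESSION.SLEEP would do the same job:
CREATE TABLE TEST_TABLE AS SELECT 1 A FROM DUAL;
ALTER TABLE TEST_TABLE ENABLE ROW MOVEMENT;
EXEC DBMS_LOCK.SLEEP(5)   -- ~3 seconds is the mapping granularity; 5 adds some margin
CREATE RESTORE POINT TEST_RESTORE_POINT;
FLASHBACK TABLE TEST_TABLE TO RESTORE POINT TEST_RESTORE_POINT;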

Related

Does dropping a table drop its dependent trigger? [duplicate]

I have a table backup with a trigger upd_trig on it. I dropped the table and then checked whether the associated triggers/indexes were also dropped or still remained.
In some discussions I found, people said that triggers and indexes are all dropped once we drop the table. But the trigger still seems to exist. Can anyone explain what exactly happens when we drop the table?
SQL> drop table backup;
Table dropped.
SQL> select text from user_source;
TEXT
----------------------------------------------------------------------------------------------------
TRIGGER
"BIN$Dg5j/bf6Rq6ugyN5ELwQkw==$0" BEFORE UPDATE ON backup FOR EACH ROW
BEGIN
INSERT INTO BACKUP VALUES(USER,:OLD.ENAME,SYSDATE);
END;
9 rows selected.
SQL> select count(*) from user_triggers;
COUNT(*)
----------
1
SQL> select trigger_name from user_triggers;
TRIGGER_NAME
------------------------------
BIN$Dg5j/bf6Rq6ugyN5ELwQkw==$0
The table has been dropped, but it is in the recycle bin, from which it can be recovered using the flashback command (FLASHBACK TABLE ... TO BEFORE DROP). The name showing as BIN$... is a bit of a giveaway. The trigger is also showing with a BIN$... name, indicating that it is in the recycle bin too, and any indexes will be as well.
You can empty the recycle bin to permanently remove the objects in it. To drop a table immediately, without it going to the recycle bin, you can add the keyword PURGE to the DROP command, as explained in the documentation. That will also drop any indexes and triggers immediately.
If it wasn't dropped automatically, the trigger would be irrelevant anyway, since you couldn't perform any DML on the dropped table, so it could never fire. That's if the table the trigger is defined on is dropped. Your trigger is unusual in that it inserts into the same table it fires on. Normally you'd have a trigger on one table insert into your backup table (well, for one use of triggers). In that case, dropping the backup table would invalidate the trigger on the live table, but would not drop it; only dropping the live table would drop its trigger.
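A small sketch of the recycle-bin behaviour described above; names are illustrative, and note that after FLASHBACK TABLE ... TO BEFORE DROP the restored triggers and indexes keep their BIN$ names until you rename them:
SELECT object_name, original_name, type FROM user_recyclebin;   -- lists the BIN$... objects
FLASHBACK TABLE backup TO BEFORE DROP;                           -- restores the table and its recycled dependent objects
DROP TABLE backup PURGE;                                         -- drops immediately, bypassing the recycle bin
PURGE RECYCLEBIN;                                                -- permanently removes everything in your recycle bin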

(Oracle): Columns in partially dropped state

I use Oracle 10g. For months I have been getting the following error on a table:
ORA-12986: columns in partially dropped state. Submit ALTER TABLE DROP COLUMNS CONTINUE
The statement ALTER TABLE DROP COLUMNS CONTINUE fails because it runs too long and times out.
I have no DBA privileges on this database.
What could I do? Drop & recreate the table?
It's a massive table with millions of records.
What I tried:
Some time ago, I ran the following command to mark some columns as unused:
ALTER TABLE hr.admin_emp SET UNUSED (hiredate, mgr);
Then I ran the following command:
ALTER TABLE hr.admin DROP UNUSED columns;
The system hung: the operation ran too long and failed.
Now the table hr.admin has two columns in a partially dropped state, and I can't go forward or backward.
I don't understand why this happened.
I performed the following steps; the system hangs at STAGE TWO:
STAGE ONE
============
SQL> select * from user_unused_col_tabs;
TABLE_NAME COUNT
----------- ----------
TEMP 1
STAGE TWO
============
SQL> alter table temp drop unused columns;
Table altered.
STAGE THREE
=============
SQL> select * from user_unused_col_tabs;
no rows selected
Checkpoint 500 option
I am trying again with the following statement:
ALTER TABLE MYUSER.MYTABLE DROP COLUMNS CONTINUE CHECKPOINT 500;
Could the CHECKPOINT 500 option help me?
We ran the following command about twelve consecutive times:
ALTER TABLE MYUSER.MYTABLE DROP COLUMNS CONTINUE CHECKPOINT 250;
The statement was automatically killed every 48 hours, which is why we had to launch it several times.
About 500 hours of processing to finally drop the columns in the partially dropped state...!!
It is confirmed that CHECKPOINT 250 performs a commit at each checkpoint, so the next run of the same command resumes from the point where the previous one stopped.
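For reference, a sketch of that restart cycle: after each kill, check what is left and re-issue the CONTINUE, repeating until the view comes back empty.
SELECT * FROM user_unused_col_tabs;                                -- columns still pending physical removal
ALTER TABLE MYUSER.MYTABLE DROP COLUMNS CONTINUE CHECKPOINT 250;   -- re-issue after each kill; resumes from the last checkpoint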

Improve DELETE query Oracle

I have a query to delete some records from a table, but it takes too much time.
The table is used in a stored procedure to match rows against another table.
Every time the SP is executed, the table is truncated and filled with 2 or 3 million records, depending on the received parameters.
The table doesn't have any FKs or constraints.
The query to delete the records that I am using is:
DELETE FROM TABLE1
WHERE (fecha,hora_ini,origen,destino,tipo,valor,rowsm1) IN (
SELECT fecha_t,hora_t,origen_t,destino_t,tipo,valor,id_t
FROM TABLE2)
To reduce the execution time, I created an index on the same columns used in the query:
CREATE INDEX smb1 ON table1 (fecha,hora_ini,origen,destino,tipo,valor,rowsm1);
And the query takes even more time to execute.
How can I improve the performance of this DELETE query?
UPDATE
EXPLAIN PLAN OUTPUT
DELETE TABLE1
TABLE ACCESS TABLE1
TABLE ACCESS FULL TABLE1
TABLE ACCESS FULL TABLE2
TABLE ACCESS FULL TABLE2
The index you created looks like quite a big index:
CREATE INDEX smb1
ON table1 (fecha,hora_ini,origen,destino,tipo,valor,rowsm1);
Sure, this depends on the amount of data, but generally I would rather look for one or two selective columns, if possible.
Don't forget that the index data has to be read as well, and if it doesn't help to speed up the query, you even lose performance.
This might happen, for instance, if the table is very small, because the database reads data block by block (I think the block size was about 8K). A small table can be read in one step; there is no need for an index here.
Or if more or less all records are selected. In that case the table has to be read anyway.
If you want to speed up the query, you should create the same index (with good selectivity) on table2. This way the EXPLAIN PLAN will look somewhat like this:
DELETE STATEMENT
DELETE
NESTED LOOPS SEMI
INDEX FULL SCAN
INDEX RANGE SCAN
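A sketch of that suggested index on table2, using the column names from the subquery in the question (the index name is illustrative; pick whichever columns are actually selective):
CREATE INDEX smb2
ON table2 (fecha_t, hora_t, origen_t, destino_t, tipo, valor, id_t);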
You can switch off logging and then delete the rows.
Here is an example; you can do it in two ways:
1.) Physically changing the table to NOLOGGING
2.) Using a NOLOGGING hint in the DELETE statement
1.) First approach
Both testemp and testemp2 are identical tables with the same data; testemp takes over a minute, while testemp2 takes only 1 second.
SQL> delete from testemp;
14336 rows deleted.
Elapsed: 00:01:04.12
SQL>
SQL>
SQL> alter table testemp2 nologging;
Table altered.
Elapsed: 00:00:02.86
SQL>
SQL> delete from testemp2;
14336 rows deleted.
Elapsed: 00:00:01.26
SQL>
The table needs to be put back to LOGGING only when we physically changed the table with the ALTER command; if you are using the hint, this is not required (see the example below).
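So for the first approach, the clean-up afterwards would be something like:
ALTER TABLE testemp2 LOGGING;   -- restore the table's original logging mode after the ALTER-based approach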
2.) Second approach
SQL> set timing on;
SQL> delete from testemp2;
14336 rows deleted.
Elapsed: 00:00:01.51
Deleting the data again, after reinserting the same data into the table, now with the NOLOGGING hint:
SQL> delete /*+NOLOGGING*/ from testemp2;
14336 rows deleted.
Elapsed: 00:00:00.28
SQL> select logging from user_Tables where table_name='TESTEMP2';
LOG
---
YES

Savepoints in Oracle global temporary tables

I read that savepoints in Oracle global temporary tables delete all the data, but when I tested on Oracle 11g they behaved like heap tables. Can anybody explain?
insert into table_1 values('one');
insert into table_1 values('two');
savepoint f1;
insert into table_1 values('three');
insert into table_1 values('four');
rollback to f1;
-- the table contains 2 records, just like a heap table, but I read that
-- savepoints in a GTT truncate all the data
Where did you read this? I suspect not in the Oracle SQL Reference. So the explanation is simple: the author of that assertion hadn't tested the behaviour of global temporary tables. Either that or you were reading a description of some other SQL implementation, such as DerbyDB.
For the sake of completeness, let's rule out the role of transaction or session scope. Here are two global temporary tables:
create global temporary table gtt1
( col1 varchar2(30) )
ON COMMIT PRESERVE ROWS
/
create global temporary table gtt2
( col1 varchar2(30) )
ON COMMIT DELETE ROWS
/
Let's run your experiment for the one with session scope:
SQL> insert into gtt1 values('one');
1 row created.
SQL> insert into gtt1 values('two');
1 row created.
SQL> savepoint f1;
Savepoint created.
SQL> insert into gtt1 values('three');
1 row created.
SQL> insert into gtt1 values('four');
1 row created.
SQL> rollback to f1;
Rollback complete.
SQL> select * from gtt1;
COL1
------------------------------
one
two
SQL>
Same result for the table with transaction scope:
SQL> insert into gtt2 values('five');
1 row created.
SQL> insert into gtt2 values('six');
1 row created.
SQL> savepoint f2;
Savepoint created.
SQL> insert into gtt2 values('seven');
1 row created.
SQL> insert into gtt2 values('eight');
1 row created.
SQL> rollback to f2;
Rollback complete.
SQL> select * from gtt2;
COL1
------------------------------
five
six
SQL>
Actually this is not surprising. The official Oracle documentation states:
"The temporary table definition persists in the same way as the definitions of regular tables"
Basically they are heap tables. The differences are:
the scope (visibility) of the data
the tablespace used to persist data (global temporary tables write to a temporary tablespace).
I think you misunderstand - if you roll back to a savepoint then Oracle should undo all the work done after the savepoint (while still keeping any uncommitted work done before the savepoint).
For a temporary table, Oracle lazily allocates storage (a temporary segment for your session) when you put data in, and when the data is done with (either at the end of the session or at the end of the transaction, depending on the type) it can simply deallocate the storage rather than deleting the rows individually, rather like what happens when you TRUNCATE a normal table.
I was interested to find out what happened if you had a savepoint before any data was put in, and rolled back to that savepoint - would Oracle deallocate the storage or would it keep the storage and delete the rows from within it?
It turns out the former - it behaves like a truncate.
SAVEPOINT f0;
SELECT * FROM v$tempseg_usage; -- Should show nothing for your session
insert into table_1 values('one');
insert into table_1 values('two');
SELECT * FROM v$tempseg_usage; -- Should show a DATA row for your session
savepoint f1;
insert into table_1 values('three');
insert into table_1 values('four');
rollback to f1; -- Undo three and four but preserve one and two
SELECT * FROM v$tempseg_usage; -- Still shows a DATA row for your session
rollback to f0; -- Undo all the inserts
SELECT * FROM v$tempseg_usage; -- row for your session has gone
The reason this matters is that when you do a normal delete, rather than a truncate, any full scan of the table still has to sift through all the data blocks to see whether they contain any data. DML against an empty table can potentially incur a lot of I/O if the table held a lot of data at some point in the past!
I am trying to speed up some code that is doing exactly that - it is shoving some stuff into a temporary table as a scratchpad, partly so it can join to a permanent table, and returning a result to its caller. The temporary table is only for the benefit of this routine, so it's safe to clear it down at the end of the routine, but the routine might be called many times within a parent transaction, so I can't truncate (TRUNCATE is DDL and so commits the transaction), but I can't skip clearing it down either, or invocations within the same transaction would pick up one another's rows. Clearing down with a DELETE causes quite a bit of overhead, especially as there is no index on the table, so selects against it always full scan.
The option I am exploring is to take a SAVEPOINT at the start of the routine, do my temporary work, and then roll back to the savepoint just before returning the result. Another option might be to put the routine inside an autonomous transaction, but that would mean porting C code to a PL/SQL stored procedure, and it wouldn't work anyway if the temporary table needs to be joined to uncommitted data inserted by the caller.
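A rough sketch of that savepoint-based clear-down, with illustrative table and column names rather than the ones in my actual code:
SAVEPOINT before_scratchpad;
INSERT INTO scratch_gtt (id, val)
  SELECT id, val FROM source_rows WHERE batch_id = 42;          -- scratchpad work
SELECT s.id, p.description
  FROM scratch_gtt s JOIN permanent_table p ON p.id = s.id;     -- produce the result for the caller
ROLLBACK TO SAVEPOINT before_scratchpad;                        -- empties the GTT without committing the parent transaction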
Note that I did my research on 12c - there have been some improvements to temporary tables in this release (see https://oracle-base.com/articles/misc/temporary-tables) but I don't think that affects the behaviour with respect to savepoints.

Global Temporary Table with "On commit delete rows" is not holding any data

I have a global temporary table (GTT) defined in a creation script using the option to delete rows on commit. I wanted different users to be able to see their own data in the GTT and not the data of other people's sessions. This worked perfectly in our test environment.
But then I deployed the GTT as part of a functionality update to a client's database. The client called me up, upset and worried, because the GTT wasn't holding any data any more, and they didn't know why.
Specifically, if someone did:
insert into my_GTT (description) values ('Happy happy joy joy')
the database would respond:
1 row inserted.
However, if the same end user tried:
select * from my_GTT
The database would respond:
0 rows returned.
This issue is happening on the client site, and we can't reproduce it in house. What could be causing this behavior?
ON COMMIT DELETE ROWS = data is visible within one transaction
ON COMMIT PRESERVE ROWS = data is visible within one database session (one user with 2 sessions = 2 sessions = different content)
If the GTT is defined with ON COMMIT DELETE ROWS, it is emptied after any explicit or implicit commit (an implicit commit happens after any DDL command, including for example TRUNCATE TABLE, ALTER INDEX, ADD PARTITION, MODIFY COLUMN, EXCHANGE PARTITION):
CREATE GLOBAL TEMPORARY TABLE GTT__TEST (A NUMBER) ON COMMIT DELETE ROWS;
INSERT INTO GTT__TEST VALUES (1);
SELECT * FROM GTT__TEST; -- 1 ROW;
COMMIT; -- commit = delete rows
SELECT * FROM GTT__TEST; -- 0 ROWS;
INSERT INTO GTT__TEST VALUES (1);
SELECT * FROM GTT__TEST; -- 1 ROW;
ALTER TABLE GTT__TEST MODIFY A NOT NULL; -- DDL = commit = delete rows
SELECT * FROM GTT__TEST; -- 0 ROWS
If the GTT is defined with ON COMMIT PRESERVE ROWS, it holds data until the end of the session:
DROP TABLE GTT__TEST;
CREATE GLOBAL TEMPORARY TABLE GTT__TEST (A NUMBER) ON COMMIT PRESERVE ROWS;
INSERT INTO GTT__TEST VALUES (1);
SELECT * FROM GTT__TEST; -- 1 ROW
COMMIT;
SELECT * FROM GTT__TEST; -- 1 ROW
Do you have some setting turned on in your target environment where each statement auto-commits?
(My experience is in SQL Server, where that is the default, but I understand that in Oracle the default is to keep the transaction open until an explicit commit. Mind, I haven't touched Oracle since ~2000.)
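If the client tool at that site happens to be SQL*Plus, checking for this is straightforward; other clients (JDBC, ODBC, connection pools) have their own autocommit switches:
SHOW AUTOCOMMIT      -- reports whether autocommit is ON, OFF, or set to a statement count
SET AUTOCOMMIT OFF   -- SQL*Plus setting; statements then stay in the open transaction until an explicit COMMIT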
I think Damien is right and there is an autocommit. The only other option I can come up with is some sort of connection pool issue (i.e. the select is being done from a different session than the insert).
