Suppose a table has 1000 records and some of them have been randomly deleted in Oracle. How do I find the deleted records? Please tell me the query to find them.
Thanks
Look into Flashback Query. You can see the state of the table at a certain date/time, as long as the old row versions are still in UNDO.
You can use flashback.
If you remember a date/time when the rows were still in the table, you can do something like this:
FLASHBACK TABLE table_name TO TIMESTAMP
TO_TIMESTAMP('2017-06-13 22:30:00', 'YYYY-MM-DD HH24:MI:SS');
Or if you know the SCN number:
FLASHBACK TABLE Table_name TO SCN 123456;
Or you can check the recycle bin.
I hope I helped you.
There is no query to find deleted records. If there is an identifying column populated by a monotonically incrementing sequence, it might be possible to find the missing values. But that would be a best guess, not a guaranteed set.
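For illustration, assuming a hypothetical table t whose id column was filled by an increment-by-one sequence, a gap-finding query might look like this:
select id + 1 as gap_start, next_id - 1 as gap_end
from (select id, lead(id) over (order by id) as next_id from t)
where next_id > id + 1;
Gaps it reports could just as easily come from sequence cache losses or rolled-back inserts, which is why this remains a best guess.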
The correct solution is to put auditing or journalling in place, so that a shadow history of the table is maintained elsewhere.
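As a minimal journaling sketch, assuming a hypothetical table t(id NUMBER, val VARCHAR2(50)); a real implementation would capture all columns and probably the deleting user:
CREATE TABLE t_journal (deleted_at DATE, id NUMBER, val VARCHAR2(50));

CREATE OR REPLACE TRIGGER t_delete_journal
  BEFORE DELETE ON t
  FOR EACH ROW
BEGIN
  -- keep a shadow copy of every row that is deleted
  INSERT INTO t_journal VALUES (SYSDATE, :OLD.id, :OLD.val);
END;
/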
Related
I have queries that take existing large tables and build tables off of them for reporting. The problem is that the source tables are 60-80MM+ records and they take a long time to recreate. I'd like to be able to identify which records are new so I can just add the new records to the reporting tables.
To me, the best way to identify this is to have an identity column. Is there any significant cost to creating this and adding it to the table?
Separately, is it possible to create a materialized view that takes data from one of these tables but adds a sequence as part of the materialized view? That is, something like
create materialized view some_materialized_view as
select somesequence.nextval, source_table.*
from source_table?
You can add a sequence-based column to your table, but as Gary suggests, I wouldn't do that.
The task you are about to solve is so common that other solutions have been already implemented.
The first built-in option that comes to mind is the system change number (SCN), a kind of Oracle-internal clock. By default, tables record the SCN per (usually 8K) block, which usually contains many rows, but with ROWDEPENDENCIES you can set up a table to keep a record of the SCN per row. Then you can track the rows that are new or changed and have not yet been copied to your reporting tables.
CREATE TABLE t (c1 NUMBER) ROWDEPENDENCIES;
INSERT INTO t VALUES (1);
COMMIT;
SELECT c1, ora_rowscn FROM t;
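Assuming you note the current SCN each time you copy (for example via DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER), the incremental pull could look like this:
-- remember the SCN at copy time
SELECT dbms_flashback.get_system_change_number FROM dual;

-- later: fetch only rows changed since the last copy
SELECT c1 FROM t WHERE ora_rowscn > :last_copied_scn;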
Secondly, I would think of adding a date column. With 60-80 million rows I wouldn't do this with ALTER TABLE xxx ADD (d DATE DEFAULT SYSDATE), but with rename, create as select, drop:
CREATE TABLE t AS SELECT * FROM all_objects;
RENAME t TO told;
CREATE TABLE t AS SELECT sysdate AS d, told.* FROM told;
ALTER TABLE t MODIFY d DATE DEFAULT SYSDATE;
DROP TABLE told;
Thirdly, I would read up on materialized views. I never had the chance to use this at work, but in theory, you should be able to set up a materialized view log on your 80M table that records changes and updates dependent materialized views.
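A rough sketch of that setup, reusing the table t from above and assuming it has a primary key column pk (untested; treat it as a starting point):
CREATE MATERIALIZED VIEW LOG ON t WITH PRIMARY KEY;

CREATE MATERIALIZED VIEW t_report
  REFRESH FAST ON DEMAND
  AS SELECT pk, c1 FROM t;

-- pull over only the changes recorded in the log
EXEC DBMS_MVIEW.REFRESH('T_REPORT', 'F');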
And fourthly, I'd look into partitioning your large table on the (newly introduced) date column, so that identifying the new rows becomes faster. That sadly depends on your version and Oracle license, though.
I didn't design the DB so don't judge me on this.
I have a log table that is receiving A LOT of entries. I only need to keep a day or so on this log table. My initial thought was:
In a single transaction:
1. rename the log table
2. create the original log table from the renamed log table
3. commit the trx and life goes on
The second time this happens I drop the renamed table and do it all over again. This will run as an Oracle job once a day.
The original question:
Would anyone know if I specify a tablespace name in table #1 like so:
create table "my_user"."first_table" (pkid number, full_name varchar2(50)) nologging tablespace "my_custom_tablespace";
Then I do something like:
create table second_table as select * from first_table where 1=2 -- because I only want the structure
Will my second_table be in the same tablespace?
Thanks in advance for your help.
If you are on Enterprise Edition with partitioning, then a simpler solution is to go with an interval partitioned table, with one partition per day. Then truncate the partitions when you don't need them.
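A sketch of what that could look like, assuming the log table has a created DATE column (names and dates are placeholders):
CREATE TABLE app_log (
  created  DATE,
  msg      VARCHAR2(4000)
)
PARTITION BY RANGE (created)
INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
(PARTITION p0 VALUES LESS THAN (DATE '2017-01-01'));

-- nightly job: throw away a whole old day at once
ALTER TABLE app_log TRUNCATE PARTITION FOR (DATE '2017-06-12');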
If not, then go with two tables, a synonym to point to the 'current' one that is being inserted into, and a view that selects from a union of the two tables. The nightly job would truncate the 'old' table and switch the synonym to make it the 'new' one.
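A sketch of the two-table variant (all object names are illustrative):
CREATE TABLE log_a (pkid NUMBER, full_name VARCHAR2(50));
CREATE TABLE log_b (pkid NUMBER, full_name VARCHAR2(50));

CREATE SYNONYM log_current FOR log_a;   -- the application inserts via the synonym

CREATE VIEW log_all AS
  SELECT * FROM log_a
  UNION ALL
  SELECT * FROM log_b;

-- nightly job: empty the old table and repoint the synonym at it
TRUNCATE TABLE log_b;
CREATE OR REPLACE SYNONYM log_current FOR log_b;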
I'm working with an application that has a large amount of outdated data clogging up a table in my database. Ideally, I'd want to delete all entries in the table whose reference date is too old:
delete outdatedTable where referenceDate < :deletionCutoffDate
If this statement were to be run, it would take ages to complete, so I'd rather break it up into chunks with the following:
delete outdatedTable where referenceDate < :deletionCutoffDate and rownum <= 10000
In testing, this works surprisingly slowly. The following query, however, runs dramatically faster:
delete outdatedTable where rownum <= 10000
I've been reading through multiple blogs and similar questions on StackOverflow, but I haven't yet found a straightforward description of how/whether using rownum affects the Oracle optimizer when there are other Where clauses in the query. In my case, it seems to me as if Oracle checks
referenceDate < :deletionCutoffDate
on every single row, executes a massive Select on all matching rows, and only then filters out the top 10000 rows to return. Is this in fact the case? If so, is there any clever way to make Oracle stop checking the Where clause as soon as it's found enough matching rows?
How about a different approach without so much DML on the table? As a permanent solution for the future, you could go for table partitioning:
1. Create a new table with the required partition(s).
2. Move ONLY the required rows from your existing table to the new partitioned table.
3. Once the new table is populated, add the required constraints and indexes.
4. Drop the old table.
In future, you would just need to DROP the old partitions.
CTAS (create table as select) is another way; however, if you want the new table to be partitioned, you can either include the partition clause in the CTAS, as sketched below, or go for the exchange partition concept.
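For instance, the CTAS route can create the partitioned table and copy only the rows you want to keep in one pass (a sketch; partition bounds and the cutoff date are placeholders):
CREATE TABLE outdated_new
PARTITION BY RANGE (referenceDate)
( PARTITION p_2013 VALUES LESS THAN (DATE '2014-01-01'),
  PARTITION p_max  VALUES LESS THAN (MAXVALUE) )
AS SELECT * FROM outdatedTable
   WHERE referenceDate >= DATE '2013-01-01';

-- then swap names and rebuild indexes/constraints
DROP TABLE outdatedTable;
RENAME outdated_new TO outdatedTable;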
First of all, you should read about SQL statement execution plans and learn how to use EXPLAIN PLAN. It will help you find answers to questions like this.
Generally, one single delete is more effective than several chunked ones. Its main disadvantage is heavy use of the undo tablespace.
If you wish to delete most rows of a table, a much faster way is usually this trick:
create table new_table as select * from old_table where referenceDate >= DATE '2014-01-01'; -- literal cutoff; bind variables are not allowed in DDL
drop table old_table;
rename new_table to old_table;
... recreate indexes and other stuff ...
If you wish to do it more than once, partitioning is a much better way. If the table is partitioned by date, you can select the current data quickly, and you can drop a partition with outdated data in milliseconds.
Lastly, partitioning is a way to dismiss 'deleting outdated records' altogether. Sometimes we need old data, and it's sad if we deleted it with our own hands. With partitioning you can archive outdated partitions outside of the database and reconnect them when you need to access the old data.
This is an old request, but I'd like to show another approach (also using partitions).
Depending on what you consider old, you could create corresponding partitions (optimally exactly two: one current, one old; but you could just as well make more). List partitioning cannot use an expression directly, so a virtual column can carry the odd/even year, e.g.:
CREATE TABLE mytable (
  referenceDate DATE,
  year_parity   AS (MOD(EXTRACT(YEAR FROM referenceDate), 2)) VIRTUAL
)
PARTITION BY LIST (year_parity)
(
  PARTITION year_odd  VALUES (1),
  PARTITION year_even VALUES (0)
);
This could as well be months (Jan, Feb, ... Dec), decades (XX0X, XX1X, ... XX9X), half years (first_half, second_half), etc. Anything circular.
Then whenever you want to get rid of old data, truncate:
ALTER TABLE mytable TRUNCATE PARTITION year_even;
delete from your_table
where PK not in
  (select PK from your_table where rownum <= ...); -- these are the records you want to keep
Is it possible to recover deleted rows from an Oracle table? My data is stored in a table MANUAL_TRANSACTIONS. The schema name is CCO. I have accidentally deleted some 500 thousand rows in the table and committed too. Now I want to recover them. I am using Oracle 11g R2. Thanks
You can recover the details using Oracle Flashback Query.
You could query the contents of the table as of a time before the deletion to find out what data had been lost, and, if appropriate, re-insert the lost data in the database.
Here's the sample query:
select * from MANUAL_TRANSACTIONS as of timestamp to_timestamp('28-APR-2014 12:30:00', 'DD-MON-YYYY HH:MI:SS') where <clause based on your deleted data>;
Source: http://docs.oracle.com/cd/B19306_01/backup.102/b14192/flashptr002.htm
The answers are already given; this is just what I learned from the above.
FLASHBACK TABLE may need DBA privileges (I guess), but we can use the query below:
Insert into MANUAL_TRANSACTIONS
(SELECT * FROM MANUAL_TRANSACTIONS AS OF
TIMESTAMP TO_TIMESTAMP('2018-07-23 06:41:59', 'YYYY-MM-DD HH:MI:SS'));
Or you can use this query for a whole day's records:
Insert into MANUAL_TRANSACTIONS
(SELECT * FROM MANUAL_TRANSACTIONS AS OF
TIMESTAMP TO_TIMESTAMP('2018-07-23', 'YYYY-MM-DD'));
select * from MY_TABLE as of timestamp to_timestamp('04-MAY-2017 12:30:00', 'DD-MON-YYYY HH:MI:SS') where ID=1822904; --- 12Hr Clock
The above query works for me. You can even use a 24-hour timeframe with the query below:
select * from MY_TABLE as of timestamp to_timestamp('04-MAY-2017 13:30:00', 'DD-MON-YYYY HH24:MI:SS') where ID=1822904;
Yes, you can: use Flashback Query.
Using Oracle Flashback Query (SELECT AS OF)
This assumes that the undo tablespace was big enough, with enough undo retention. If the undo has already been freed, you might need to perform a restore and recovery in a clone database and copy the data back to the original database. Also check TSPITR (TableSpace Point In Time Recovery). This is only possible if your database runs in archivelog mode and has a backup available.
If you have a backup and Oracle 12c, you could use Table Point In Time Recovery (PITR):
RECOVER TABLE 'SCHEMA'.'TAB_NAME'
UNTIL TIME xxxxyyy
AUXILIARY DESTINATION '/u01/aux'
REMAP TABLE 'SCHEMA'.'TAB_NAME':'TAB_NAME_PREV';
Your data at that point in time will be available:
SELECT * FROM SCHEMA.TAB_NAME_PREV;
INSERT INTO TABLE_NAME (SELECT * FROM TABLE_NAME AS OF TIMESTAMP (SYSDATE - 4/24));
I know this is too late for an answer, but after a long search for how to recover and restore tables in Oracle, I finally found a good way: restore points. According to the Pro Oracle Database 12c Administration book, before any action on your table you can create a restore point using the following line:
CREATE RESTORE POINT <your_key_point_name>;
To recover the table to the restore point you can use:
FLASHBACK TABLE <[your_schema.]your_table_name> TO RESTORE POINT <your_key_point_name>;
Besides this, all of the above answers about recovering using FLASHBACK forgot to consider two key points:
To use FLASHBACK, the recycle bin must be enabled.
Before any row recovery using FLASHBACK, row movement must be enabled on your table (with ALTER TABLE <[your_schema.]your_table_name> ENABLE ROW MOVEMENT). According to the Oracle documentation:
Before you can use Flashback Table, you must ensure that row movement is enabled on the table to be flashed back, or returned to a previous state.
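Putting those pieces together for the table from the question (the timestamp is a placeholder):
ALTER TABLE cco.manual_transactions ENABLE ROW MOVEMENT;
FLASHBACK TABLE cco.manual_transactions
  TO TIMESTAMP TO_TIMESTAMP('2014-04-27 23:59:59', 'YYYY-MM-DD HH24:MI:SS');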
FLASHBACK TABLE <TABLE_NAME> TO TIMESTAMP(TO_DATE('27-APR-2014 23:59:59','DD-MON-YYYY HH24:MI:SS'));
This restores the data in the table to the given time (provided the table was not truncated).
In your case:
FLASHBACK TABLE MANUAL_TRANSACTIONS TO TIMESTAMP(TO_DATE('27-APR-2014 23:59:59','DD-MON-YYYY HH24:MI:SS'));
Use this query,
Insert into MANUAL_TRANSACTIONS
(SELECT * FROM MANUAL_TRANSACTIONS AS OF
TIMESTAMP TO_TIMESTAMP('2014-04-27 11:59:59 PM', 'YYYY-MM-DD HH:MI:SS PM'))
There are some options:
Flashback Query, e.g.:
create table before_delete as select * from your_table as of timestamp <xx>;
LogMiner: if Oracle supplemental logging is enabled, you can get the undo SQL for your delete statement:
-- switch the logfile so that the redo containing the delete is archived
alter system switch logfile;
-- mine the last written archived log
exec dbms_logmnr.add_logfile('archivelog/redologfile', options => dbms_logmnr.new);
exec dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);
select operation, sql_redo from v$logmnr_contents where seg_name = 'EMP';
Oracle PRM-DUL would be the last option. A deleted row piece in an Oracle block is just marked with a deleted flag, so the row piece can still be read by scanning the Oracle data blocks. PRM-DUL can scan the whole table, find every record/row piece marked deleted, and write it out to a flat file.
What you may try:
Flashback Query, available from Oracle 10g; it may fail with ORA-01555 (snapshot too old)
LogMiner: mine the redo, and you may find the undo SQL
PRM-DUL (a commercial recovery tool for Oracle), which can scan Oracle blocks and find even deleted row pieces
I am trying to execute the following statement on a table containing 10,000 rows but the query is executing forever.
delete from Table_A where col1 in ('A','B','C') and col2 in ('K','L','M') and col3 in ('H','R','D')
Please can anyone assist!
Thanks
A
It looks as if another session has locked one of the rows you'd like to delete.
Is somebody else working on the same table (with transactions that last more than a few seconds)? Or do you have another tool or session open where you haven't committed your changes?
Update:
Another problem is foreign keys that aren't properly indexed: if other tables have a foreign key to the table from which you want to delete rows, and the foreign key columns in those tables aren't indexed, then Oracle will lock those whole child tables. This could be the cause. If this is the case, index those columns.
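A rough dictionary query to spot such unindexed foreign keys (a simplified sketch: it only matches single columns and ignores index column order):
SELECT c.table_name, c.constraint_name, cc.column_name
FROM   user_constraints  c
JOIN   user_cons_columns cc ON cc.constraint_name = c.constraint_name
WHERE  c.constraint_type = 'R'
AND    NOT EXISTS (SELECT 1
                   FROM   user_ind_columns ic
                   WHERE  ic.table_name      = cc.table_name
                   AND    ic.column_name     = cc.column_name
                   AND    ic.column_position = cc.position);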
Another possible reason for a database to hang is if the archive log destination is full.
Query the V$SESSION_WAIT and V$SESSION_EVENT views to see what your session is waiting for.
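For example, something along these lines (a sketch; :your_sid is the SID of the session running the delete) shows the current wait event and any blocking session:
SELECT sid, event, blocking_session, seconds_in_wait
FROM   v$session
WHERE  sid = :your_sid;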