After TRUNCATE TABLE, how can I recover the data? - oracle

I need to recover data truncated from an Oracle table.
Is there any folder in Linux where the truncated data is stored?
Is there any table which stores the information of a table after truncating?
I am not a DBA.

If you have not backed up the table (for example, by using RMAN, EXPDP or EXP) or created a RESTORE POINT, then your data is lost.
From the Oracle documentation:
Caution:
You cannot roll back a TRUNCATE TABLE statement, nor can you use a FLASHBACK TABLE statement to retrieve the contents of a table that has been truncated.
You can check if you have an RMAN backup by logging into RMAN (rather than into the database) and using the LIST command.
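For example, from the server's shell (a sketch; your connection details will vary):

```sql
$ rman target /

RMAN> LIST BACKUP SUMMARY;
RMAN> LIST BACKUP OF DATABASE;
```

If those commands return no backups, there is nothing RMAN can restore from.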
You can check if you have a restore point (from a database user with the appropriate permissions) using:
SELECT name,
       guarantee_flashback_database,
       pdb_restore_point,
       clean_pdb_restore_point,
       pdb_incarnation#,
       storage_size
FROM   v$restore_point;
You are looking for a restore point where guarantee_flashback_database is YES.
(Assuming that the RESTORE POINT was created after the table was created and populated.)
Note:
If you restore from a backup or to a restore point then all changes made since that backup or creating the restore point will be lost.
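If a suitable guaranteed restore point does exist, the flashback itself would look something like this (a sketch, assuming SYSDBA access and a restore point named before_truncate; the whole database, not just the one table, is rolled back):

```sql
-- Run as SYSDBA; the database must be mounted, not open.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO RESTORE POINT before_truncate;
ALTER DATABASE OPEN RESETLOGS;
```

Again: every change made in the database after the restore point was created is lost, which is why this is normally a DBA decision.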
To answer your additional questions:
Is there any folder in Linux where the truncated data is stored?
No
Is there any table which stores the information of a table after truncating?
No

Related

SQL to limit the types of tables returned when connecting to Oracle Database in Power BI

I have got a connection to an Oracle Database set up in Power BI.
This is working fine, apart from the fact it brings back 9500+ tables that start with "BIN".
Is there some SQL code I can put in the SQL statement section when connecting to the Oracle Database that limits the tables that it returns to ignore any table that begins with 'BIN'?
Tables starting with BIN$ are tables that have been dropped but not purged and are in Oracle's "recycle bin".
The simplest method of not showing them is, if they are no longer required, to PURGE (delete) the tables from the recycle bin; then you will not see them because they will no longer exist.
You can use (documentation link):
PURGE TABLE BIN$0+xyzabcdefghi123; to get rid of an individual table with that name (or you can use the original name of the table).
PURGE TABLESPACE tablespace_name USER username; to get rid of all recycled tables in a tablespace belonging to a single user.
PURGE TABLESPACE tablespace_name; to get rid of all recycled tables in a tablespace.
PURGE RECYCLEBIN; to get rid of all of the current user's recycled tables.
PURGE DBA_RECYCLEBIN; to get rid of everything in the recycle bin (assuming you have SYSDBA privileges).
Before purging tables you should make sure that they are really not required as you would need to restore from backups to bring them back.
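If you cannot (or do not want to) purge, you can instead filter the recycle-bin entries out in the SQL statement you give Power BI. A minimal sketch, listing only the connecting user's own tables (recycle-bin objects have names starting with BIN$, and user_tables also flags them in its dropped column):

```sql
-- List only real (not dropped) tables owned by the connecting user.
SELECT table_name
FROM   user_tables
WHERE  dropped = 'NO'
       AND table_name NOT LIKE 'BIN$%'
ORDER  BY table_name;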

Oracle DB Create Table as Copy Vs. Merging Data into Empty Table

My question is related to Oracle DB performance: ideally, which is the better of the two methods when creating a backup table?
Create a new table as a copy of an existing one
Merge data into an existing empty table (the two tables are identical)
If it is a small table, it doesn't matter - both will be fast. Though CTAS (CREATE TABLE AS SELECT) is probably the most common way to create a "copy" of an existing table.
If the table is very large, I don't know how CTAS compares to MERGE; you should test it.
However, a backup table? Are you sure that's the right way to back up a table? I'd rather think of a proper (RMAN) database backup, or at least an export (using Data Pump Export) into a file that resides in a filesystem and can be stored elsewhere, e.g. on an external hard disk drive, DVD and similar (does anyone use tapes any more? We do).
Because, if the database breaks down, that "backup" table will be lost along with your "original" table.
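For reference, the two approaches being compared look like this (table names are illustrative):

```sql
-- Option 1: CTAS - create the copy in a single statement.
CREATE TABLE employees_backup AS
SELECT * FROM employees;

-- Option 2: the empty, identical table already exists; bulk-load it.
-- The APPEND hint requests a direct-path insert, which is usually
-- faster than a conventional insert for large tables.
INSERT /*+ APPEND */ INTO employees_backup
SELECT * FROM employees;
COMMIT;
```

A plain INSERT ... SELECT (rather than MERGE) is the natural choice when the target is known to be empty, since there is nothing to match against.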

DDL sync informatica

I have a question: I have a table (say tableA) in a database (say dbA) and I need to mirror tableA as another table (say tableB) in another database (say dbB).
I know this can be done via a (materialized) view or via Informatica. But my problem is that I need to sync DDL as well: for example, if a column is added to tableA, the column should automatically be reflected in tableB.
Can this be done directly via Oracle or Informatica?
(Or will I have to write a procedure to sync the tables on the basis of all_tab_cols?)
Yes, you could:
create another database as a logical standby database with Data Guard
use Oracle Streams
I would use (2) if you just need a single table in the other database or (1) if you need an entire schema (or more).

benefits of temporary table in oracle over an ordinary table

I came across creating a temporary table in Oracle, but could not understand the best use of it.
Can someone help me understand the features and benefits of using a temporary table in Oracle (create temporary table temp_table) over an ordinary table (create table temp_table)?
From the concepts guide:
A temporary table definition persists in the same way as a permanent
table definition, but the data exists only for the duration of a
transaction or session. Temporary tables are useful in applications
where a result set must be held temporarily, perhaps because the
result is constructed by running multiple operations.
And:
Data in a temporary table is private to the session, which means that
each session can only see and modify its own data.
So one aspect is that the data is private to your session. Which is also true of uncommitted data in a permanent table, but with a temporary table the data can persist and yet stay private across a commit (based on the on commit clause on creation).
Another aspect is that they use temporary segments, which means you generate much less redo and undo overhead using a temporary table than you would if you put the same data temporarily into a permanent table, optionally updated it, and then removed it when you'd finished with it. You also avoid contention and locking issues if more than one session needs its own version of the temporary data.
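A minimal sketch of what this looks like in practice (table and column names are illustrative):

```sql
-- Global temporary table: the DEFINITION is shared, but each
-- session sees only its own rows.
CREATE GLOBAL TEMPORARY TABLE temp_results (
  id     NUMBER,
  result VARCHAR2(100)
) ON COMMIT PRESERVE ROWS;
```

With ON COMMIT PRESERVE ROWS the session's data survives a COMMIT and disappears when the session ends; with ON COMMIT DELETE ROWS (the default) it is cleared at every COMMIT.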
Given below are some points on why and when we should use a temporary table:
1) Temporary tables are created for storing data in tabular form, for easy retrieval when needed, within that particular session.
2) They also add a security benefit by keeping the data available only to that particular session.
3) When code is long and a lot of cursors are opened, it is better to put the data in a temporary table so that it can be easily fetched when needed.

Attempting to use SQL-Developer to analyze a system table dump created with 'exp'

I'm attempting to recover the data from a specific table that exists in a system table dump I performed earlier. I would like to append the rows existing in the dump to any rows that may exist in the active table. The problem is, it's likely that the name of the table in the dump is not the same as what exists in the database currently (they're dynamically created with a prefix of ARC_TREND_). In addition, I don't know the name of the table as it exists in the dump; I was hoping to use SQL Developer to analyze the dump file, as I can recognize the correct table by its columns and its existing rows.
While I'm going on blind faith that SQL Developer can work with my dump file, when attempting to open it I'm getting a Java heap OutOfMemory exception. I've adjusted the maximum heap size from 640m to 1024m in both sqldeveloper.bat and sqldeveloper.conf, but to no avail.
Can someone recommend a course of action for me to take to recover the data from a table which exists in an exp-created dump file? A graphical tool would be nice, but I'm no stranger to the command line. I need to analyze the tables that exist in the dump in order to pick the correct one out. Then I assume I can use imp TABLE= to bring it back into the active instance. It likely won't match the existing table name, so I will use SQL Developer to copy the rows from the imported table to the table where I need them to be.
The dump was taken from a Linux server running 10g, and will be imported to (the same server & database instance, upgraded) an 11g instance of the same database.
Thanks
Since you're referring to imp rather than impdp, I assume this wasn't exported with data pump. Either way, I doubt you'll get anything useful through SQL Developer.
Fortunately most of what you're trying to do is quite easy from the command line; just run imp with the INDEXFILE parameter, which will give you a text file containing all the table (commented out with REM) and index creation commands. From that you should be able to spot the table from its column names.
You can't really see any row data though, so if there's more than one possible match you might need to import several tables and inspect the data in them in the database to see which one you really want.
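A sketch of that first step (credentials and file names are illustrative):

```sql
REM Shell: write the DDL out without importing any rows.
REM   imp scott/tiger FILE=system_dump.dmp FULL=Y INDEXFILE=tables_ddl.sql
REM Then search the generated file for a distinctive column name, e.g.:
REM   grep -i distinctive_column tables_ddl.sql
```

In the generated file the CREATE TABLE statements are commented out with REM, so searching it for a column name you recognize is the quickest way to identify the table, and the surrounding statement gives you its name in the dump.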
