Truncate and Insert - insert

I am connecting to Oracle using an ETL tool. The operation I am doing is truncating an existing table and inserting records into it from a different table. This works fine for 15 to 20 cycles of the job run; after that the job gets stuck in the portion where it is inserting records. Is there anything wrong with what I am doing here? The query I am using is below. Could someone help with this, from previous experience?
truncate table TABLE1;
insert into TABLE1 select * from TABLE_SRC where TYPE in('MP','DA')
and ID in(select ID from TABLE_SRC where TYPE in('MP','DA') and FLAG='Y');
commit;

I believe the table is getting into a lock situation.
Check with the DBAs.
select * from dba_lock;
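For example, the DBA (or anyone with access to the v$ views) could run something along these lines to see whether TABLE1 is locked and which session is blocking; this is just a sketch using the standard dictionary views, not part of the original answer:
-- sessions that are waiting on a lock, together with the session blocking them
select waiter.sid  as waiting_sid,
       blocker.sid as blocking_sid,
       waiter.event
from   v$session waiter
       join v$session blocker on blocker.sid = waiter.blocking_session
where  waiter.blocking_session is not null;

-- objects currently locked, to confirm TABLE1 is among them
select o.object_name, l.session_id, l.locked_mode
from   v$locked_object l
       join dba_objects o on o.object_id = l.object_id;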

Related

Oracle. Select data from one session but commit it to another. Is it possible?

Probably I ask for the impossible, but I'll ask anyway.
Is there an easy way to select from one Oracle session and then insert/commit into another?
(I guess technically it could be done with PL/SQL procedure calls and PRAGMA AUTONOMOUS_TRANSACTION, but it would be a hassle.)
I have the following scenario:
I run some heavy calculations and update / insert into some tables.
After the process is completed I would like to 'back up' the results
(create table as select, or insert into another temp table) and then roll back my current session without losing the backups.
Here is desired/expected behavior:
Oracle 11g
insert into TableA (A,B,C) values (1,2,3);
select * from TableA
Result: 1,2,3
create table [in another session] TempA
as select * from TableA [in this session];
rollback;
select * from TableA;
Result null
select * from TempA;
Result 1,2,3
Is this possible?
Create a program in a third-party language (C++, Java, PHP, etc.) that opens two connections to the database; they will have different sessions regardless of whether you connect as different users or both the same user. Read from one connection and write to the other connection.
You can insert your "heavy calculation" results into an Oracle temporary table.
CREATE GLOBAL TEMPORARY TABLE HeavyCalc (
id NUMBER,
description VARCHAR2(20)
)
ON COMMIT DELETE ROWS;
The trick is that when you commit the transaction, all rows are deleted from the temporary table.
So you first insert data into the temp table, copy the result to your backup table, and then commit the transaction.
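A minimal sketch of that sequence, assuming a permanent backup table already exists (HeavyCalcBackup is an invented name for illustration):
-- permanent backup table, created once up front:
-- create table HeavyCalcBackup (id number, description varchar2(20));

-- 1. the heavy calculation writes its output to the temporary table
insert into HeavyCalc (id, description)
select level, 'result ' || level from dual connect by level <= 10;

-- 2. copy the result into the permanent table within the same transaction
insert into HeavyCalcBackup select * from HeavyCalc;

-- 3. the commit empties HeavyCalc (ON COMMIT DELETE ROWS) but HeavyCalcBackup keeps the rows
commit;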

Inserted Data into a table is not showing up when selected in oracle

I've inserted a row in a table in Oracle using SQL Developer and committed too. But when I select the data it is not showing up. Did I go wrong somewhere?
Below is how I inserted and selected.
insert into table1 (seqn,pname,city,country,irank,icode,idate,imatch)
values('1234','ABCD','NY','USA','1','XYZ',sysdate,'9999');
The output I get is 1 rows inserted, and then I commit;.
Now when I select the data, nothing shows up.
select count(*) from table1; is fetching 0 rows.
Please help me out with where I am going wrong.
Note: All are of datatype varchar2() and the date is of DATE datatype. I've other columns in the table which can accept NULL data.
I've figured it out by myself and am wondering how I missed such a silly thing. There is a trigger enabled on the table such that whenever I insert data into TABLE1, the trigger moves it to another table and deletes it from TABLE1.
Note: I figured this out long back but am posting it now in case someone else is missing the same thing and looking for an answer.
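If you suspect the same cause, a quick way to check for triggers on the table is the standard USER_TRIGGERS view (adjust the table name to yours):
select trigger_name, triggering_event, status
from   user_triggers
where  table_name = 'TABLE1';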
Take care to insert into the CORRECT SCHEMA!
In my case I was inserting into a table in another schema:
insert into table1 -- this inserts into the default schema
insert into correct_scheme.table1 -- this inserts into the correct schema
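To check which schema your session resolves unqualified table names against, a query like this (using the standard SYS_CONTEXT function) can help:
select sys_context('USERENV', 'CURRENT_SCHEMA') as cur_schema,
       sys_context('USERENV', 'SESSION_USER')   as sess_user
from   dual;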

Oracle Create Table as Select * from Another_Table same table space

I didn't design the DB so don't judge me on this.
I have a log table that is receiving A LOT of entries. I only need to keep a day or so of data in this log table. My initial thought was:
In a single transaction:
1. rename the log table
2. create the original log table from the renamed log table
3. commit the trx and life goes on
The second time this happens I drop the renamed table and do it all over again. This will run as an Oracle job once a day.
The original question:
Would anyone know the answer to this: if I specify a tablespace name for the first table like so:
create table "my_user"."first_table" (pkid number, full_name varchar2(50)) nologging tablespace "my_custom_tablespace";
Then I do something like:
create table second_table as select * from first_table where 1=2 -- because I only want the structure
Will my second_table be in the same tablespace?
Thanks in advance for your help.
If you are on Enterprise Edition with partitioning, then a simpler solution is to go with an interval partitioned table, with one partition per day. Then truncate the partitions when you don't need them.
If not, then go with two tables, a synonym to point to the 'current' one that is being inserted into, and a view that selects from a union of the two tables. The nightly job would truncate the 'old' table and switch the synonym to make it the 'new' one.
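For the interval-partitioned route, a rough sketch might look like this (table and column names are invented, and Enterprise Edition with the partitioning option is assumed):
create table app_log (
  log_date date not null,
  message  varchar2(4000)
)
partition by range (log_date)
interval (numtodsinterval(1, 'DAY'))
( partition p0 values less than (date '2014-01-01') );

-- nightly job: truncate the partition holding a day you no longer need
alter table app_log truncate partition for (date '2014-05-01');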

Recovering deleted rows from oracle table

Is it possible to recover deleted rows from an Oracle table? My data is stored in a table MANUAL_TRANSACTIONS. The schema name is CCO. I have accidentally deleted some 500 thousand rows in the table and committed too. Now I want to recover them. I am using Oracle 11g R2. Thanks.
You can recover the details using Oracle Flashback Query.
You could query the contents of the table as of a time before the deletion to find out what data had been lost, and, if appropriate, re-insert the lost data in the database.
Here's the sample query:
select * from MANUAL_TRANSACTION as of timestamp to_timestamp('28-APR-2014 12:30:00', 'DD-MON-YYYY HH:MI:SS') where ' clause based on your deleted data';
Source: http://docs.oracle.com/cd/B19306_01/backup.102/b14192/flashptr002.htm
Answers are already given; this is just what I learned from the above.
FLASHBACK can only be done by a DBA (I guess), but we can use the query below:
Insert into MANUAL_TRANSACTIONS
(SELECT * FROM MANUAL_TRANSACTIONS AS OF
TIMESTAMP TO_TIMESTAMP('2018-07-23 06:41:59', 'YYYY-MM-DD HH:MI:SS'));
or you can go for this query for one day records
Insert into MANUAL_TRANSACTIONS
(SELECT * FROM MANUAL_TRANSACTIONS AS OF
TIMESTAMP TO_TIMESTAMP('2018-07-23', 'YYYY-MM-DD'));
select * from MY_TABLE as of timestamp to_timestamp('04-MAY-2017 12:30:00', 'DD-MON-YYYY HH:MI:SS') where ID=1822904; --- 12Hr Clock
The above query works for me. You can also use a 24-hour time format with the query below:
select * from MY_TABLE as of timestamp to_timestamp('04-MAY-2017 13:30:00', 'DD-MON-YYYY HH24:MI:SS') where ID=1822904;
Yes, you can: use Flashback Query.
Using Oracle Flashback Query (SELECT AS OF)
This assumes that the undo tablespace was big enough, with enough undo retention. If the undo is already freed, you might need to perform a restore and recovery, in a clone database and copy the data to the original database. Also check TSPITR, TableSpace Point In Time Recovery. This is only possible if your database runs in archivelog mode and has a backup available.
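To get an idea of how far back a flashback query can reach, you can check the configured undo retention and the undo statistics; a sketch using the standard parameter and v$ views:
-- configured undo retention, in seconds
select value as undo_retention_seconds
from   v$parameter
where  name = 'undo_retention';

-- oldest undo interval still recorded (approximate lower bound for AS OF queries)
select min(begin_time) as oldest_undo_time
from   v$undostat;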
If you have a backup and Oracle 12c, you could use Table Point In Time Recovery (PITR):
RECOVER TABLE 'SCHEMA'.'TAB_NAME'
UNTIL TIME xxxxyyy
AUXILIARY DESTINATION '/u01/aux'
REMAP TABLE 'SCHEMA'.'TAB_NAME':'TAB_NAME_PREV';
Your data at that point in time will be available:
SELECT * FROM SCHEMA.TAB_NAME_PREV;
INSERT INTO TABLE_NAME (SELECT * FROM TABLE_NAME AS OF TIMESTAMP (SYSDATE - 4/24));
I know this is too late for an answer, but after a long search about how to recover and restore tables in Oracle I finally found a good way to restore using a restore point. According to the Pro Oracle Database 12C Administration book, before any action on your table you can create a restore point with the following line:
CREATE RESTORE POINT <your_key_point_name>;
To recover the table with the restore point you can use:
FLASHBACK TABLE <[your_schema.]your_table_name> TO RESTORE POINT <your_key_point_name>;
Besides this, all of the above answers about recovering with FLASHBACK forgot to consider two key points (a combined sketch follows below):
to use FLASHBACK TABLE ... TO BEFORE DROP, the recycle bin must be enabled
before any row recovery using FLASHBACK, row movement must be enabled on the table (with ALTER TABLE <[your_schema.]your_table_name> ENABLE ROW MOVEMENT). According to the Oracle documentation:
Before you can use Flashback Table, you must ensure that row movement is enabled on the table to be flashed back, or returned to a previous state.
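Putting those prerequisites together, the restore-point route might look roughly like this (names are placeholders, not the poster's original code):
-- row movement must be enabled before the table can be flashed back
alter table MANUAL_TRANSACTIONS enable row movement;

-- taken BEFORE the risky operation
create restore point before_cleanup;

-- ... the accidental delete and commit happen here ...

flashback table MANUAL_TRANSACTIONS to restore point before_cleanup;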
FLASHBACK TABLE <TABLE_NAME> TO TIMESTAMP(TO_DATE('27-APR-2014 23:59:59','DD-MON-YYYY HH24:MI:SS'));
This restores the data in the table to the given time (provided the table was not truncated).
In your case:
FLASHBACK TABLE MANUAL_TRANSACTIONS TO TIMESTAMP(TO_DATE('27-APR-2014 23:59:59','DD-MON-YYYY HH24:MI:SS'));
Use this query,
Insert into MANUAL_TRANSACTIONS
(SELECT * FROM MANUAL_TRANSACTIONS AS OF
TIMESTAMP TO_TIMESTAMP('2014-04-27 11:59:59 PM', 'YYYY-MM-DD HH:MI:SS PM'))
There are some options:
Flashback Query, e.g.
create table before_delete as select * from MANUAL_TRANSACTIONS as of timestamp XX;
LogMiner: if Oracle supplemental logging is enabled, you can get the undo SQL for your delete statement:
-- switch the logfile to get minimal redo activity
alter system switch logfile;
-- mine the last written archived log
exec dbms_logmnr.add_logfile('archivelog/redologfile', options => dbms_logmnr.new);
exec dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);
select operation, sql_redo from v$logmnr_contents where seg_name = 'EMP';
Oracle PRM-DUL would be the last option. Even though a deleted row piece in an Oracle block is just marked with a deleted flag, the row piece can still be read by scanning the Oracle data block. PRM-DUL can scan the whole table, find every row piece marked as deleted, and write it out to a flat file.
What you may try is:
flashback query, available from Oracle 10g; it may fail with ORA-01555 snapshot too old
redo LogMiner: mine the redo and you may find the undo SQL
the PRM-DUL tool (a commercial recovery tool for Oracle), which can scan Oracle blocks and find even deleted row pieces

Data loading in Oracle

I am facing a problem loading data. I have to copy 800,000 rows from one table to another in an Oracle database.
I tried with 10,000 rows first, but the time it took was not satisfactory. I tried using BULK COLLECT and an INSERT INTO ... SELECT clause, but in both cases the response time is around 35 minutes. This is not the response time I'm looking for.
Does anyone have any suggestions?
Anirban,
Using an "INSERT INTO SELECT" is the fastest way to populate your table. You may want to extend it with one or two of these hints:
APPEND: to use direct path loading, circumventing the buffer cache
PARALLEL: to use parallel processing if your system has multiple CPUs and this is a one-time operation, or an operation that takes place at a time when it doesn't matter that one "selfish" process consumes more resources.
Just using the append hint, my laptop copies 800,000 very small rows in under 5 seconds:
SQL> create table one_table (id,name)
2 as
3 select level, 'name' || to_char(level)
4 from dual
5 connect by level <= 800000
6 /
Table created.
SQL> create table another_table as select * from one_table where 1=0
2 /
Table created.
SQL> select count(*) from another_table
2 /
COUNT(*)
----------
0
1 row selected.
SQL> set timing on
SQL> insert /*+ append */ into another_table select * from one_table
2 /
800000 rows created.
Elapsed: 00:00:04.76
You mention that this operation takes 35 minutes in your case. Can you post some more details, so we can see what exactly is taking 35 minutes?
Regards,
Rob.
I would agree with Rob. INSERT INTO ... SELECT is the fastest way to do this.
What exactly do you need to do? If you're trying to do a table rename by copying to a new table and then deleting the old, you might be better off doing a table rename:
alter table my_table rename to someothertable;
INSERT INTO SELECT is the fastest way to do it.
If possible/necessary, disable all indexes on the target table first.
If you have no existing data in the target table, you can also try CREATE AS SELECT.
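As an illustration of that route (index and table names here are invented, and unique indexes backing constraints need separate handling), the flow could look like:
-- make the target's non-unique indexes unusable so the load doesn't maintain them
alter index target_table_ix1 unusable;
alter session set skip_unusable_indexes = true;

-- direct-path load into the target
insert /*+ append */ into target_table select * from source_table;
commit;

-- rebuild the indexes afterwards
alter index target_table_ix1 rebuild;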
As with the above, I would recommend INSERT INTO ... SELECT ... or CREATE TABLE ... AS SELECT ... as the fastest way to copy a large volume of data between two tables.
You want to look up direct-load insert in your Oracle documentation. This adds two items to your statements: PARALLEL and NOLOGGING. Repeat the tests but do the following:
CREATE TABLE Table2 AS SELECT * FROM Table1 where 1=2;
ALTER TABLE Table2 NOLOGGING;
ALTER TABLE TABLE2 PARALLEL (10);
ALTER TABLE TABLE1 PARALLEL (10);
ALTER SESSION ENABLE PARALLEL DML;
INSERT INTO TABLE2 SELECT * FROM TABLE1;
COMMIT;
ALTER TABLE TABLE2 LOGGING;
This turns off redo logging for the inserts into the table. If the system crashes, there's no recovery and you can't roll back the transaction. The PARALLEL clause uses N worker threads to copy the data in blocks. You'll have to experiment with the number of parallel worker threads to get the best results on your system.
Is the table you are copying to the same structure as the other table? Does it have data, or are you creating a new one? Can you use exp/imp? Exp can be given a query to limit what it exports, and the result can then be imported into the DB. What is the total size of the table you are copying from? If you are copying most of the data from one table to a second, you could instead copy the full table using exp/imp and then remove the unwanted rows, which would be less work than copying.
Try dropping all indexes/constraints on your destination table and then re-creating them after the data load.
Set the table to NOLOGGING and use a direct-path insert (/*+ APPEND */) if you run in NOARCHIVELOG mode, or consider taking a backup right after the operation.
