Archiving data in Oracle

I am trying to archive data from one table to another. Please find my requirement below.
I have a table A and another table B.
I need to find all the records in A that are older than a particular date.
After identifying the records, I need to move them to table B.
Once the data is moved to table B, I need to delete those records from table A.
I am planning to use a stored procedure with the number of days to archive as a parameter.
Now I need to check for errors while inserting into table B: if an insert fails, I should not delete those records from table A; and if the records are successfully inserted into table B but the delete from table A fails, I need to roll back the records inserted into table B.
I need to archive on a daily basis and there will be at least a million records to archive.
I started coding this with FORALL and SAVE EXCEPTIONS but got stuck on the logic.
Can anyone help me with this logic?

First of all, I doubt whether such 'archiving' is a good idea. It is like transferring soup from one plate to another with a teaspoon. Better solutions exist for almost every such task, for example partitioning and perhaps partition exchange.
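For reference, a hedged sketch of the exchange-partition idea; it assumes table A is range-partitioned on the date column and B_STAGE is an empty table with the same structure (both names are hypothetical):

-- swap the oldest partition of A with the empty staging table:
-- the old rows end up in B_STAGE (effectively archived) and the
-- partition in A becomes empty, as a dictionary operation rather
-- than row-by-row DML
alter table a
  exchange partition p_2013_01 with table b_stage
  including indexes without validation;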
But if you absolutely insist on doing this, you could write something like this:
procedure Move_Many_Records is
begin
  savepoint MMR;

  -- copy the qualifying rows into the archive table
  -- (no APPEND hint here: a direct-path insert would make TARGET
  --  unreadable later in the same transaction, so the delete below
  --  would fail with ORA-12838)
  insert into TARGET (fields)
  select fields from SOURCE where {condition};

  -- remove the copied rows from the source table
  delete from SOURCE
   where id in (select id from TARGET);
exception
  when others then
    rollback to savepoint MMR;
    My_Alerts.Shit_Happens('Failed to move records!');
    raise;
end;
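If you still want the FORALL route from the question, here is a hedged sketch of a batched move, with hypothetical names (table_a, table_b with the same structure, a created_date column and an id primary key). Because the requirement is that the insert into B and the delete from A succeed or fail together, each batch is committed or rolled back as a whole, which is simpler than SAVE EXCEPTIONS:

create or replace procedure archive_rows (p_days in number) is
  cursor c_src is
    select *
      from table_a
     where created_date < trunc(sysdate) - p_days;

  type t_rows is table of table_a%rowtype index by pls_integer;
  type t_ids  is table of table_a.id%type index by pls_integer;

  l_rows t_rows;
  l_ids  t_ids;
begin
  open c_src;
  loop
    fetch c_src bulk collect into l_rows limit 10000;
    exit when l_rows.count = 0;

    -- collect the keys separately (FORALL cannot reference record fields directly)
    for i in 1 .. l_rows.count loop
      l_ids(i) := l_rows(i).id;
    end loop;

    -- copy the batch into the archive table
    forall i in 1 .. l_rows.count
      insert into table_b values l_rows(i);

    -- remove the same rows from the source table
    forall i in 1 .. l_ids.count
      delete from table_a where id = l_ids(i);

    commit;  -- each batch commits (or, on error, rolls back) as a whole
  end loop;
  close c_src;
exception
  when others then
    rollback;  -- undo the insert and delete of the failing batch together
    if c_src%isopen then
      close c_src;
    end if;
    raise;
end archive_rows;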

Related

Get values from another table on a trigger

So, I am a beginner with Oracle and would really appreciate your help.
I have 3 tables: EMPLOEES, PERSONAL_DATA and RECORDS. I want to create an UPDATE trigger that, when it fires, takes the OLD values from EMPLOEES, finds the personal data of the employee being updated in the PERSONAL_DATA table using the OLD id, and inserts all of that data (the OLD values from EMPLOEES and the row fetched from PERSONAL_DATA) into the RECORDS table. I have been trying to use a SELECT statement to fetch the information from PERSONAL_DATA, but the compiler throws an error.
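Since no answer is captured here, a hedged sketch of such a trigger with entirely hypothetical column names (EMPLOEES.id/name/salary, PERSONAL_DATA.emp_id/address/phone, and matching columns in RECORDS); the compile error described is often caused by writing a bare SELECT instead of SELECT ... INTO:

create or replace trigger emploees_upd_audit
before update on emploees
for each row
declare
  v_pd personal_data%rowtype;
begin
  -- fetch the personal data for the employee being updated, using the OLD id
  select *
    into v_pd
    from personal_data
   where emp_id = :old.id;

  -- store the OLD employee values together with the fetched personal data
  insert into records (emp_id, emp_name, emp_salary, address, phone)
  values (:old.id, :old.name, :old.salary, v_pd.address, v_pd.phone);
end;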

Oracle Create Table as Select * from Another_Table same table space

I didn't design the DB so don't judge me on this.
I have a log table that is receiving A LOT of entries. I only need to keep a day or so of data in this log table. My initial thought was:
In a single transaction:
1. rename the log table
2. create the original log table from the renamed log table
3. commit the trx and life goes on
The second time this happens I drop the renamed table and do it all over again. This will run as an Oracle job once a day.
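A hedged sketch of that nightly job as an anonymous block, with hypothetical names log_table and log_table_old; note that each DDL statement commits implicitly, so the steps cannot literally share one transaction:

begin
  -- drop last night's copy if it is still around
  begin
    execute immediate 'drop table log_table_old purge';
  exception
    when others then
      null;  -- most likely ORA-00942: the copy from the previous run is not there yet
  end;

  -- 1. rename the live log table out of the way
  execute immediate 'alter table log_table rename to log_table_old';

  -- 2. recreate an empty log table with the same structure
  execute immediate 'create table log_table as select * from log_table_old where 1 = 2';
end;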
The original question:
Would anyone know if I specify a table space name in table #1 like so:
create table "my_user"."first_table" (pkid number, full_name varchar2(50)) nologging tablespace "my_custom_tablespace";
Then I do something like:
create table second_table as select * from first_table where 1=2 -- because I only want the structure
Will my second_table be in the same table_space?
Thanks in advance for your help.
If you are on Enterprise Edition with partitioning, then a simpler solution is to go with an interval partitioned table, with one partition per day. Then truncate the partitions when you don't need them.
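A hedged sketch of such an interval-partitioned log table (hypothetical names and dates):

create table log_table (
  pkid      number,
  log_date  date,
  full_name varchar2(50)
)
partition by range (log_date)
interval (numtodsinterval(1, 'DAY'))
(partition p_initial values less than (date '2013-01-01'));

-- nightly job: truncate partitions that are older than the retention window
-- alter table log_table truncate partition for (date '2013-01-01');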
If not, then go with two tables, a synonym to point to the 'current' one that is being inserted into, and a view that selects from a union of the two tables. The nightly job would truncate the 'old' table and switch the synonym to make it the 'new' one.
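And a hedged sketch of that two-table arrangement, with hypothetical names log_a, log_b, the view log_v and the synonym log_current:

create table log_a as select * from log_table where 1 = 2;
create table log_b as select * from log_table where 1 = 2;

-- all reads go against the union of the two tables
create or replace view log_v as
  select * from log_a
  union all
  select * from log_b;

-- inserts go through the synonym, which points at the 'current' table
create or replace synonym log_current for log_a;

-- nightly job: truncate the 'old' table and repoint the synonym
-- truncate table log_b;
-- create or replace synonym log_current for log_b;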

Oracle 11g Deleting large amount of data without generating archive logs

I need to delete a large amount of data from my database on a regular basis. The process generates a huge volume of archive logs. We had a database crash at one point because there was no storage space available on the archive destination. How can I avoid generating logs while I delete data?
The data to be deleted is already marked as inactive in the database. Application code ignores inactive data. I do not need the ability to rollback the operation.
I cannot partition the data in such a way that inactive data falls in one partition that can be dropped. I have to delete the data with delete statements.
I can ask DBAs to set certain configuration at table level/schema level/tablespace level/server level if needed.
I am using Oracle 11g.
What proportion of the data on the table would be deleted, what volume? Are there any referential integrity constraints to manage or is this table childless?
Depending on the answers, you might consider:
"CREATE TABLE keep_data UNRECOVERABLE AS SELECT * FROM ... WHERE
[keep condition]"
Then drop the original table
Then rename keep_table to original table
Rebuild the indexes (again with unrecoverable to prevent redo), constraints, etc.
The problem with this approach is that it's a multi-step DDL process, which you will have a job making fault-tolerant and reversible.
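A hedged sketch of that sequence, with hypothetical names big_table and keep_data (NOLOGGING is the modern spelling of UNRECOVERABLE):

create table keep_data nologging as
  select * from big_table
   where status != 'INACTIVE';   -- the [keep condition]

drop table big_table purge;

alter table keep_data rename to big_table;

-- recreate indexes, constraints, grants, triggers, etc.
-- (NOLOGGING again to minimise redo)
create index big_table_idx on big_table (status) nologging;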
A safer option might be to use data-pump to:
Data-pump expdp to extract the "Keep" data
TRUNCATE the table
Data-pump impdp import of data from step 1, with direct-path
At this point I suggest you read the Oracle manual on Data Pump, particularly the section on Direct Path Loads to be sure this will work for you.
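For illustration, a hedged sketch of those three Data Pump steps from the command line (hypothetical directory object, dump file and filter; the QUERY quoting rules depend on your shell, so check the manual):

# 1. export the rows to keep
expdp my_user directory=DATA_PUMP_DIR dumpfile=keep_data.dmp tables=BIG_TABLE query='BIG_TABLE:"WHERE status != ''INACTIVE''"'

# 2. truncate the table (no redo for the removed rows)
#    SQL> truncate table big_table;

# 3. re-import the kept rows into the now-empty table
impdp my_user directory=DATA_PUMP_DIR dumpfile=keep_data.dmp tables=BIG_TABLE table_exists_action=append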
My preferred option would be partitioning.
Of course, the best way would be TenG's solution (CTAS, drop and rename table), but it seems that's impossible for you.
Your only problems are the volume of archive logs and the risk of another database crash. In that case, maybe you could break your delete statement into batches (for example, 10,000 rows at a time).
Something like:
declare
  e number;
  f number;
begin
  select count(*) into e from myTable where [delete condition];
  f := trunc(e / 10000) + 1;
  for i in 1 .. f
  loop
    delete from myTable where [delete condition] and rownum <= 10000;
    commit;
    dbms_lock.sleep(600); -- give the DBA a chance to purge old archive logs if possible
  end loop;
end;
After this operation, you should reorganize your table which is surely fragmented.
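A hedged sketch of that reorganisation (hypothetical index name):

alter table myTable move;        -- rebuilds the table segment and reclaims the free space
-- MOVE marks the table's indexes UNUSABLE, so rebuild them afterwards
alter index myTable_pk rebuild;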
Alter the table to set NOLOGGING, delete the rows, then turn logging back on.

Inserting data in a column avoiding duplicates

Let's say I have a query which fetches col1 after joining multiple tables. I want to insert the values of that col1 into a table on a remote DB, i.e. I would be using a db link to do that.
Now, that col1 would be fetched from 4-5 different DBs. There is a chance that a value fetched from db1 would be in db2 as well. How can I avoid duplicates?
In my remote DB, I have made col1 a primary key, so when inserting, an error is thrown if there is a duplicate key, which ends up failing the rest of the process, which I don't want. I was thinking about 2 approaches:
Write a PL/SQL script; for each value, determine whether it already exists, and insert it only if it doesn't.
Write a PL/SQL script, insert, and catch the duplicate-key exception. The exception would be ignored and it would keep inserting (it doesn't sound that good).
Which approach would you prefer? Is there anything else I can do?
I would use the MERGE statement and WHEN NOT MATCHED THEN INSERT.
The same MERGE could also update, but it doesn't have to; just leave the update part out.
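A hedged sketch of that MERGE, with hypothetical column names; if your Oracle version does not allow MERGE into a remote table, run the same statement from the target database's side:

merge into customer@dblink.oracle.com tgt
using (select employee_id, employee_name
         from source_table) src
   on (tgt.emp_id = src.employee_id)
 when not matched then
   insert (emp_id, emp_name)
   values (src.employee_id, src.employee_name);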
The different databases can have duplicate primary keys, but that doesn't mean the records are duplicates. The actual data may be different in each case. Or the records may represent the same real-world thing but at different statuses. I don't know; you haven't provided enough explanation.
The point is, you need much more analysis of why duplicate records can exist and probably a more sophisticated approach to handling collisions. Do you need to take all records (in which case you need a synthetic key)? Or do you take only one instance (so how do you decide precedence)? Other scenarios may exist.
In any case, MERGE or PL/SQL loops are likely to be too crude a solution.
First off, I would suggest that your target database drive all of these inserts, because inserting/updating across a database link can create locking issues and further complicate things, especially with multiple databases attempting to access and perform DML on the same table. However, if that isn't possible, the solutions below will work.
I would fix your primary key problem by including a table look-up on the target table for each row.
INSERT INTO customer@dblink.oracle.com
  (emp_name,
   emp_id)
SELECT
  st.employee_name,
  st.employee_id  -- primary key
FROM
  source_table st
WHERE NOT EXISTS
  (SELECT 1
   FROM customer@dblink.oracle.com cust
   WHERE cust.emp_id = st.employee_id);
Again, I would not recommend DML transactions across database links unless absolutely necessary as you can sometimes have weird locking behavior.
A PL/SQL procedure or anonymous PL/SQL block could be used to create a bulk processing solution as follows:
CREATE OR REPLACE PROCEDURE send_unique_data
AS
  TYPE tab_cust IS TABLE OF customer@dblink.oracle.com%ROWTYPE
    INDEX BY PLS_INTEGER;
  t_records tab_cust;
BEGIN
  SELECT
    st.employee_name,
    st.employee_id  -- primary key
  BULK COLLECT
  INTO t_records
  FROM source_table st;
  FORALL i IN 1 .. t_records.COUNT SAVE EXCEPTIONS
    INSERT INTO customer@dblink.oracle.com
    VALUES t_records(i);
END send_unique_data;
You can also use the SQL%BULK_EXCEPTIONS collection in case you want to do anything with the records that threw exceptions (such as unique-constraint violations). Be warned that this solution will cause performance problems on the target table if a lot of duplicate data is being inserted.
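A hedged, self-contained sketch of that, assuming a hypothetical local table demo_target(id number primary key); ORA-24381 is the error FORALL raises once SAVE EXCEPTIONS has collected at least one failure:

declare
  bulk_errors exception;
  pragma exception_init(bulk_errors, -24381);
  type t_ids is table of number index by pls_integer;
  l_ids t_ids;
begin
  l_ids(1) := 1;
  l_ids(2) := 1;  -- duplicate: will violate the primary key
  l_ids(3) := 2;

  forall i in 1 .. l_ids.count save exceptions
    insert into demo_target (id) values (l_ids(i));
exception
  when bulk_errors then
    for j in 1 .. sql%bulk_exceptions.count loop
      dbms_output.put_line(
        'row ' || sql%bulk_exceptions(j).error_index ||
        ' failed with ' || sqlerrm(-sql%bulk_exceptions(j).error_code));
    end loop;
end;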

INSERT trigger for inserting record in same table

I have a trigger that fires on inserting a new record into a table, and inside that trigger I want to insert a new record into the same table.
My trigger is:
create or replace trigger inst_table
  after insert on test_table
  referencing new as new old as old
  for each row
declare
  df_name varchar2(500);
  df_desc varchar2(2000);
begin
  df_name := :new.name;
  df_desc := :new.description;
  if inserting then
    FOR item IN (SELECT pid FROM tbl2 where pid not in (1))
    LOOP
      insert into test_table (name, description, pid) values (df_name, df_desc, item.pid);
    END LOOP;
  end if;
end;
It gives an error like:
ORA-04091: table TEST_TABLE is mutating, trigger/function may not see it
I think it is preventing me from inserting into the same table,
so how can I insert this new record into the same table?
Note: I am using Oracle as the database.
Mutation happens any time you have a row-level trigger that modifies the table that you're triggering on. The problem is that Oracle can't know how to behave: you insert a row, the trigger itself inserts a row into the same table, and Oracle gets confused; are the inserts made by the trigger subject to the trigger action too?
The solution is a three-step process.
1.) Statement level before trigger that instantiates a package that will keep track of the rows being inserted.
2.) Row-level before or after trigger that saves that row info into the package variables that were instantiated in the previous step.
3.) Statement level after trigger that inserts into the table, all the rows that are saved in the package variable.
An example of this can be found here:
http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551198119097816936
Hope that helps.
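A minimal sketch of that three-step approach for the test_table example above; the package name, trigger names and the g_busy guard flag are my own additions (the guard stops the statement-level trigger's own inserts from re-firing the whole chain):

create or replace package test_table_pkg as
  type t_row_tab is table of test_table%rowtype index by pls_integer;
  g_rows t_row_tab;
  g_busy boolean := false;  -- guard against the trigger re-firing on its own inserts
end test_table_pkg;
/

-- 1) statement-level BEFORE trigger: reset the package collection
create or replace trigger test_table_bis
before insert on test_table
begin
  if not test_table_pkg.g_busy then
    test_table_pkg.g_rows.delete;
  end if;
end;
/

-- 2) row-level trigger: remember each inserted row (no DML on test_table here)
create or replace trigger test_table_air
after insert on test_table
for each row
declare
  n pls_integer;
begin
  if not test_table_pkg.g_busy then
    n := test_table_pkg.g_rows.count + 1;
    test_table_pkg.g_rows(n).name        := :new.name;
    test_table_pkg.g_rows(n).description := :new.description;
  end if;
end;
/

-- 3) statement-level AFTER trigger: now it is safe to insert into test_table
create or replace trigger test_table_ais
after insert on test_table
declare
  l_name test_table.name%type;
  l_desc test_table.description%type;
begin
  if not test_table_pkg.g_busy then
    test_table_pkg.g_busy := true;
    for i in 1 .. test_table_pkg.g_rows.count loop
      l_name := test_table_pkg.g_rows(i).name;
      l_desc := test_table_pkg.g_rows(i).description;
      insert into test_table (name, description, pid)
      select l_name, l_desc, t2.pid
        from tbl2 t2
       where t2.pid not in (1);
    end loop;
    test_table_pkg.g_rows.delete;
    test_table_pkg.g_busy := false;
  end if;
exception
  when others then
    test_table_pkg.g_busy := false;
    raise;
end;
/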
I'd say that you should look at any way OTHER than triggers to achieve this. As mentioned in the answer from Mark Bobak, the trigger inserts rows, and then each row inserted by the trigger fires the trigger again to insert more rows.
I'd look at either writing a stored procedure to create the insert or just insert via a sub-query rather than by values.
Triggers can be used to solve simple problems but when solving more complicated problems they will just cause headaches.
It would be worth reading through the answers to this duplicate question posted by APC and also this article from Tom Kyte. BTW, the article is also referenced in the duplicate question but the link is now out of date.
Although after complaining about how bad triggers are, here is another solution.
Maybe you need to look at having two tables. Insert the data into the test_table table as you currently do. But instead of having the trigger insert additional rows into the test_table table, have a detail table with the data. The trigger can then insert all the required rows into the detail table.
You may again encounter the mutating trigger error if you have a delete cascade foreign key relationship between the two tables so it might be best to avoid that.
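A hedged sketch of that two-table approach, assuming a hypothetical detail table test_table_detail with the same name, description and pid columns; because the trigger no longer touches test_table, there is no mutation and no recursion:

create or replace trigger inst_table
after insert on test_table
for each row
begin
  insert into test_table_detail (name, description, pid)
  select :new.name, :new.description, t2.pid
    from tbl2 t2
   where t2.pid not in (1);
end;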
