Trigger performance over different cases

I have a table that collects a log row for every check_in. I want to copy four of its columns into another table for later processing, so I placed a trigger that fires on every insert and copies the columns. I am confused over which way to do it for the best PERFORMANCE. Below are the different ways I figured out. Assume my destination table is indexed.
ALTER TRIGGER GetCheckIn_HA_Transit
ON tableXyz
AFTER INSERT
AS
BEGIN
    DECLARE @cardName nvarchar(max)
    DECLARE @checkIn datetime
    DECLARE @direction varchar(30)
    DECLARE @terminal varchar(30)

    -- read the new row from the inserted pseudo-table, not the base table;
    -- note the trigger fires once per statement, so these scalar variables
    -- capture only one row of a multi-row insert
    SELECT
        @direction = str_direction,
        @terminal = TERMINAL,
        @cardName = card_number,
        @checkIn = transit_date
    FROM inserted

    INSERT INTO logs (direction, terminal, cardName, checkIn)
    VALUES (@direction, @terminal, @cardName, @checkIn)
END
GO
Another way I found was without the declarations:
ALTER TRIGGER GetCheckIn_HA_Transit
ON tableXyz
AFTER INSERT
AS
BEGIN
    -- set-based version: handles multi-row inserts in a single statement
    INSERT INTO logs (direction, terminal, cardName, checkIn)
    SELECT str_direction, TERMINAL, card_number, transit_date
    FROM inserted
END
GO
Also, is copying data from one table to another within the same database better in terms of performance than copying from a table in one database to a table in another database on the same server?
Assume that every second there is an insert that fires our trigger.
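For reference, the cross-database case on the same server only changes the target to a three-part name. A minimal sketch, assuming a second database named LogsDb with a matching dbo.logs table (both names are hypothetical, not from the post):

ALTER TRIGGER GetCheckIn_HA_Transit
ON tableXyz
AFTER INSERT
AS
BEGIN
    -- LogsDb is an assumed database name on the same server
    INSERT INTO LogsDb.dbo.logs (direction, terminal, cardName, checkIn)
    SELECT str_direction, TERMINAL, card_number, transit_date
    FROM inserted
END
GO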

Related

Pseudorecord best practice when trigger fires but value should never change

Consider the following table and trigger: is there a need to specify :new.employee_id in the insert statement, or is it just generally considered best practice? The trigger only fires on the salary column of the table, and the employee_id should not be affected. Is the :new.employee_id syntax just good practice when creating triggers, or unnecessary? Could there be a potential issue if it is not added?
(Reference: the employees table of the Oracle HR sample schema.)
CREATE TABLE salary_log (
    whodidit     VARCHAR2(25),
    whendidit    TIMESTAMP,
    oldsalary    NUMBER,
    newsalary    NUMBER,
    emp_affected NUMBER);
CREATE OR REPLACE TRIGGER saltrig
AFTER INSERT OR UPDATE OF salary ON employees
FOR EACH ROW
BEGIN
INSERT INTO salary_log
VALUES(user,sysdate, :old.salary, :new.salary, :new.employee_id);
END;
If you were to NOT include it:
You would have to alter your trigger code to explicitly name the columns you are supplying values for, and
Your log table would show that someone's salary changed, who did it and what the old and new values are, but you would not know whose salary was changed. The data in that column would be null.
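For illustration, here is roughly what the trigger would look like without it; the column list must now be spelled out, and emp_affected is never populated (an untested sketch):

CREATE OR REPLACE TRIGGER saltrig
AFTER INSERT OR UPDATE OF salary ON employees
FOR EACH ROW
BEGIN
  -- emp_affected is omitted from the column list, so it stays NULL
  INSERT INTO salary_log (whodidit, whendidit, oldsalary, newsalary)
  VALUES (user, sysdate, :old.salary, :new.salary);
END;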
Is the :new.employee_id syntax just good practice when creating triggers or unnecessary?
If you want this value in your log table, then it is required.

Creating a back up table trigger in Oracle

I have a table A which is constantly updated (insert statements) by one application. I want to create another table B which consists of only a few columns of table A. For this I thought of creating a trigger that fires after insert on table A, but I don't know how I should write the insert statement inside the trigger. I am not a database expert, so maybe I am missing something simple. Please help.
Here is my trigger code:
CREATE OR REPLACE TRIGGER "MSG_INSERT_TRIGGER"
AFTER
insert on "ACTIVEMQ_MSGS"
DECLARE
PRAGMA AUTONOMOUS_TRANSACTION;
begin
execute immediate 'truncate table MSG_BACKUP';
insert into MSG_BACKUP select * from ACTIVEMQ_MSGS;
COMMIT;
end;
This does not seem to be a good idea: every time a new record gets inserted into table A, you delete everything from table B and copy all records from table A. This will be a huge performance issue once there are many records in table A.
Wouldn't it be enough to create a view on the desired columns of table A?
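A minimal sketch of such a view, reusing the hypothetical f1, f2, f3 column names assumed below (the view name is also an assumption):

CREATE OR REPLACE VIEW msg_backup_v AS
SELECT f1, f2, f3
FROM ACTIVEMQ_MSGS;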
If not, that is, if you still want to "log" all inserts into another table, then here you go. (I suppose you want to copy the following fields: f1, f2, f3.)
CREATE OR REPLACE TRIGGER TR_AI_ACTIVEMQ_MSGS
AFTER INSERT ON ACTIVEMQ_MSGS
FOR EACH ROW
BEGIN
INSERT INTO MSG_BACKUP (f1, f2, f3)
VALUES (:new.f1, :new.f2, :new.f3);
END TR_AI_ACTIVEMQ_MSGS;
Inside an Oracle trigger the record values are available in the :new pseudo-record. Your insert statement may look like this:
insert into B (column1, column2)
values (:new.tableA_col1,:new.tableA_col2)

Conditional Insert or Update in Oracle

I have one table in Oracle into which data gets inserted by a third party. I want to populate master tables from that table. So, what will be the best way performance-wise, using a collection?
E.g. suppose the table the third-party data gets loaded into is EMP_TMP.
Now I want to populate the EMPLOYEE master table through a procedure that is fed from the EMP_TMP table.
Here again there is one condition: IF a row with the SAME EMPID (this is not the primary key) EXISTS, then we have to UPDATE all rows with that EMPID, ELSE we have to INSERT a NEW RECORD.
[Note: Here EMPID is VARCHAR2 and EMPNO will be the primary key, populated from a SEQUENCE.]
I think MERGE will not perform much better performance-wise here, since we can't use a collection in a MERGE statement.
Well, if performance is your primary consideration and you don't like MERGE, then how about this (run as a script, in a single transaction):
delete from EMPLOYEE where emp_id IN (
select emp_id from EMP_TMP);
insert into EMPLOYEE
select * from EMP_TMP;
commit;
Obviously not the "safest" approach (as written it assumes exactly the same table definitions, and that you have the rollback), but it should be fast (you could also experiment with IN vs EXISTS, etc.). I couldn't quite tell from your post whether emp_id or emp_no is the common key in these two tables; use whichever makes sense in your situation.
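For comparison, the MERGE the question rules out would look roughly like this; the ename column and the emp_seq sequence are assumptions, only EMPID/EMPNO come from the question:

MERGE INTO employee e
USING emp_tmp t
ON (e.empid = t.empid)
WHEN MATCHED THEN
  UPDATE SET e.ename = t.ename
WHEN NOT MATCHED THEN
  INSERT (empno, empid, ename)
  VALUES (emp_seq.NEXTVAL, t.empid, t.ename);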
Create a procedure; you need to be using PL/SQL.
Do an UPDATE first, then test SQL%ROWCOUNT.
If it is 0, no rows were updated and you have to do an INSERT instead.
I think this is fairly efficient.
Pseudo code:
UPDATE table;
IF SQL%ROWCOUNT = 0 THEN
  -- get a new sequence number
  INSERT INTO table;
END IF;
COMMIT;
HTH
Harv
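A concrete version of that pseudocode, under the same assumptions as the MERGE sketch above (ename column, emp_seq sequence); untested:

BEGIN
  FOR r IN (SELECT empid, ename FROM emp_tmp) LOOP
    UPDATE employee
    SET ename = r.ename
    WHERE empid = r.empid;
    IF SQL%ROWCOUNT = 0 THEN
      -- no existing row with this empid: insert with a fresh key
      INSERT INTO employee (empno, empid, ename)
      VALUES (emp_seq.NEXTVAL, r.empid, r.ename);
    END IF;
  END LOOP;
  COMMIT;
END;
/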

How do I UPDATE a large table in oracle pl/sql in batches to avoid running out of undospace?

I have a very large table (5 million records). I'm trying to obfuscate the table's VARCHAR2 columns with random alphanumerics for every record in the table. My procedure executes successfully on smaller datasets, but it will eventually be used on a remote DB whose settings I can't control, so I'd like to execute the UPDATE statement in batches to avoid running out of undo space.
Is there some kind of option I can enable, or a standard way to do the update in chunks?
I'll add that there won't be any distinguishing features of the records that haven't been obfuscated so my one thought of using rownum in a loop won't work (I think).
If you are going to update every row in a table, you are better off doing a Create Table As Select, then dropping/truncating the original table and re-appending the new data. If you've got the partitioning option, you can create your new table as a table with a single partition and simply swap it in with EXCHANGE PARTITION.
Inserts require a LOT less undo, and a direct-path insert with NOLOGGING (the /*+ APPEND */ hint) won't generate much redo either.
With either mechanism, there would probably still be 'forensic' evidence of the old values (e.g. preserved in undo or in "available" space allocated to the table due to row movement).
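A rough sketch of the CTAS-plus-exchange idea; the staging table, partition name and column names here are hypothetical, and the staged table's structure must match the partitioned table exactly:

-- build the obfuscated copy with a minimally logged CTAS
CREATE TABLE my_table_staged NOLOGGING AS
SELECT pk_col, random_function(my_column) AS my_column
FROM my_table;

-- if my_table were a single-partition table, swap the staged copy in
-- as a dictionary-only operation
ALTER TABLE my_table
  EXCHANGE PARTITION p_all WITH TABLE my_table_staged;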
The following is untested, but should work:
declare
  l_fetchsize number := 10000;
  cursor cur_getrows is
    select rowid, random_function(my_column)
    from my_table;
  type rowid_tbl_type     is table of urowid;
  type my_column_tbl_type is table of my_table.my_column%type;
  rowid_tbl     rowid_tbl_type;
  my_column_tbl my_column_tbl_type;
begin
  open cur_getrows;
  loop
    -- fetch the next batch of rowids and pre-computed new values
    fetch cur_getrows bulk collect
      into rowid_tbl, my_column_tbl
      limit l_fetchsize;
    exit when rowid_tbl.count = 0;
    -- apply the batch with a single bulk update, then commit it
    forall i in rowid_tbl.first .. rowid_tbl.last
      update my_table
      set my_column = my_column_tbl(i)
      where rowid = rowid_tbl(i);
    commit;
  end loop;
  close cur_getrows;
end;
/
This isn't optimally efficient -- a single update would be -- but it'll do smaller, user-tunable batches, using ROWID.
I do this by mapping the primary key to an integer (mod n), and then performing the update for each x, where 0 <= x < n.
For example, maybe you are unlucky and the primary key is a string. You can hash it with your favorite hash function and break it into three partitions:
UPDATE myTable SET a=doMyUpdate(a) WHERE MOD(ORA_HASH(ID), 3)=0
UPDATE myTable SET a=doMyUpdate(a) WHERE MOD(ORA_HASH(ID), 3)=1
UPDATE myTable SET a=doMyUpdate(a) WHERE MOD(ORA_HASH(ID), 3)=2
You may have more partitions, and may want to put this into a loop (with some commits).
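Putting that into a loop with commits, as suggested, might look like this (the batch count n is arbitrary, and doMyUpdate is the same hypothetical function as above):

DECLARE
  n CONSTANT PLS_INTEGER := 10;  -- number of hash buckets / batches
BEGIN
  FOR x IN 0 .. n - 1 LOOP
    UPDATE myTable
    SET a = doMyUpdate(a)
    WHERE MOD(ORA_HASH(ID), n) = x;
    COMMIT;  -- commit after each batch to cap undo usage
  END LOOP;
END;
/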
If I had to update millions of records I would probably opt NOT to update.
I would more likely create a temp table and then insert the data from the old table, since an insert doesn't take up a lot of redo space and takes less undo.
CREATE TABLE new_table AS SELECT <do the update "here"> FROM old_table;

index new_table
grant on new_table
add constraints on new_table
etc. on new_table

drop table old_table;
rename new_table to old_table;
You can do that using parallel query, with NOLOGGING on most operations generating very little redo and no undo at all, in a fraction of the time it would take to update the data.
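A concrete sketch of those steps, with hypothetical names (pk_col for the key, a for the obfuscated column, random_function as in the earlier answer):

CREATE TABLE new_table NOLOGGING PARALLEL AS
SELECT pk_col, random_function(a) AS a
FROM old_table;

CREATE INDEX new_table_ix ON new_table (pk_col);
ALTER TABLE new_table ADD CONSTRAINT new_table_pk PRIMARY KEY (pk_col);
-- re-create grants and any other dependent objects here

DROP TABLE old_table;
RENAME new_table TO old_table;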

Insert into oracle database

Hi, I have a table with loads of columns and I want to insert a couple of records for testing. To insert anything into that table I'd have to write a large query. Is it possible to do something like this:
INSERT INTO table (SELECT FROM table WHERE id='5')
I try to insert the row with ID 5, but I think this will create a problem because it will try to duplicate a record. Is it possible to change this ID 5 to, let's say, 1000? Then I'd be able to insert data without writing a complex query while avoiding duplication of data. Thanks.
In PL/SQL you can do something like this:
declare
  l_rec table%rowtype;
begin
  select * into l_rec from table where id = '5';
  l_rec.id := 1000;
  insert into table values l_rec;
end;
If you have a trigger on the table to populate the primary key from a sequence (:NEW.id := seq_sequence.NEXTVAL), then you should be able to do:
INSERT INTO table
(SELECT columns_needed FROM table WHERE whatever)
This will allow you to add many rows at once (the number being limited by the WHERE clause). You'll need to select values for every column that is NOT NULL and has no default. Beware of any unique constraints as well.
Otherwise you'll be looking at PL/SQL or some other form of script to insert multiple rows.
For each column that has no default value, or whose value you want to differ from the default, you will need to provide the explicit name and value.
You can only use an implicit list (*) if you want to select all columns and insert them as they are.
Since you are changing the PRIMARY KEY, you need to enumerate the columns.
However, you can create a BEFORE INSERT trigger and change the value of the PRIMARY KEY in that trigger.
Note that the trigger cannot reference the table itself, so you will need some other way to get the unique number (like a sequence):
CREATE TRIGGER trg_mytable_bi BEFORE INSERT ON mytable FOR EACH ROW
BEGIN
:NEW.id := s_mytable.nextval;
END;
This way you can use the asterisk but it will always replace the value of the PRIMARY KEY.
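With that trigger in place, the duplication asked about becomes a plain INSERT ... SELECT; the trigger overwrites the copied id with the next sequence value:

INSERT INTO mytable
SELECT * FROM mytable WHERE id = 5;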
