Executing triggers in Oracle for copying the old values to a mirror table

We are trying to copy the current row of a table to a mirror table using a BEFORE UPDATE OR DELETE trigger. Below is the working trigger:
CREATE OR REPLACE TRIGGER trg_mirror  -- trigger name not shown in the original
BEFORE UPDATE OR DELETE
ON CurrentTable FOR EACH ROW
BEGIN
  INSERT INTO MirrorTable
    ( EMPFIRSTNAME,
      EMPLASTNAME,
      CELLNO,
      SALARY )
  VALUES
    ( :old.EMPFIRSTNAME,
      :old.EMPLASTNAME,
      :old.CELLNO,
      :old.SALARY );
END;
/
But the problem is we have more than 50 columns in the current table and don't want to list all those column names. Is there a way to select all columns, something like
:old.*
or
SELECT * INTO MirrorTable FROM CurrentTable
Any suggestions would be helpful.
Thanks,

Realistically, no. You'll need to list all the columns.
You could, of course, dynamically generate the trigger code pulling the column names from DBA_TAB_COLUMNS. But that is going to be dramatically more work than simply typing in 50 column names.
If your table happens to be an object table, :old would be an instance of that object, so you could insert that. But it would be rather rare to have an object table.

If your 'current' and 'mirror' tables have EXACTLY the same structure you may be able to use something like
INSERT INTO MirrorTable
SELECT *
  FROM CurrentTable
 WHERE CurrentTable.primary_key_column = :old.primary_key_column;
Honestly, I think this is a poor choice and wouldn't do it (for one thing, a row-level trigger that queries the very table it is defined on will generally raise ORA-04091, the "mutating table" error), but it's a more-or-less free world and you're free (more or less :-) to make your own choices.
Share and enjoy.

For what it's worth, I've been writing the same stuff and used this to generate the code:
SQL> set pagesize 0
SQL> select ':old.'||COLUMN_NAME||',' from all_tab_columns where table_name='BIGTABLE' and owner='BOB';
:old.COL1,
:old.COL2,
:old.COL3,
:old.COL4,
:old.COL5,
...
If you supply a value for every column, there is no need to list the column names twice (and you may use NULL for columns you don't want to populate):
INSERT INTO bigtable VALUES (
:old.COL1,
:old.COL2,
:old.COL3,
:old.COL4,
:old.COL5,
NULL,
NULL);
People who design tables with that many columns should get no dessert ;-)

Related

How can I alter a table in Oracle to add a column of the same type as a column from a different table

I have two tables: table1 and table2.
There is a customerId field in table1 of data type Varchar2(30). How can I alter table2 to add a customerId field of the same data type as in table1, using %type?
I tried the below code but no luck.
alter table table2
add customer_id table1.CUSTOMER_ID%type;
Is it possible to alter using %type? Will this work? Please advise.
If it does not work, shall I do it manually by stating
alter table table2
add customer_id varchar2(30);
%type is a PL/SQL construct. We use it to define local variables in a program which are based on table columns. It does not work in SQL.
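For completeness, a minimal sketch of where %type does work, using the same table1.customer_id column:

DECLARE
  -- v_customer_id inherits whatever datatype table1.customer_id currently has
  v_customer_id table1.customer_id%TYPE;
BEGIN
  SELECT customer_id
    INTO v_customer_id
    FROM table1
   WHERE ROWNUM = 1;
END;
/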
"if the data type of customerId changes, we have to manually change everywhere, instead if it is a copy, we just need to change at one place, "
This is not how Oracle (and most, if not all, other) databases work. They are engines for storing and retrieving data. They make this easy by enforcing strong data-typing and by making it hard to lose data carelessly. The rigour of the data dictionary is there to protect us from our lazy selves.
As a thought experiment, consider the impact on table2.customer_id if we did any one of the following:
alter table table1 modify customer_id not null;
alter table table1 modify customer_id number(6,0); -- from number(9,0)
alter table table1 modify customer_id number(6,0); -- from varchar2(6)
alter table table1 drop column customer_id;
All of these are possible real-life cases. For any of them, the state of data in table2.customer_id could cause the statement to fail on table2 (even though it would succeed on table1). Is that desirable? Almost certainly yes. But it now means we cannot change table1, which greatly reduces the utility of having a template column.
"i thought is more of a good practice."
The best practice is to get it right the first time. Obviously that's not always possible, because circumstances evolve over time. We need to accept there will be change, and the good practice for handling change is to run an impact assessment: if we change table1.customer_id, what else might be affected? What else will need to change after that? What about all the program code which uses these columns?
Data management is hard, but it's hard for a good reason. Unlike source code, databases have state. Changing state is expensive, and reverting to a previous state even more so. Changing the datatype of a column means changing the state of all the data in that column. This is not something which should be done lightly.
So. Do proper analysis. Have a decent data model. Understand your data structures. These are good practices.
This is not an answer (mathguy told it to you already), but a comment is a little bit "short" for what I'd like to say.
While attending HrOUG conference, I saw a man wearing a T-shirt saying
Thank you for spending months in coding & saving us days in planning
In other words, carefully choose the CUSTOMERID data type. If you are selling products to 13 customers today, don't set it to NUMBER(2), because (if your company develops and becomes prosperous) you'll soon be selling products to thousands of customers. Will you first alter it (and all its dependent column data types, as well as all its appearances in your application(s)) to NUMBER(3), and then to NUMBER(4), etc.? Think about the future!
Similarly, at the same conference, there were guys who said that they have tables with 570 columns. Gosh! 5-7-0! What are they doing with such tables? Their answer was: "We pay Oracle a lot of money. It allows us to create tables with 1000 columns, and we are going to use every single one of them." The audience was kind of puzzled (hint: normalization?), but hey, it's their choice.
Yes, I noticed that you chose a VARCHAR2 data type for that ID column. (I'm not saying that it is wrong, but I somehow prefer numbers over strings for such purposes.) So, what do you think: will 30 characters be enough? How much would it cost if you set it to 50 characters? Or 100? They won't take any additional space on disk; if there is 'A234' in your VARCHAR2(100 BYTE) column, it'll take only 4 bytes. Memory is a different story, as Oracle will pre-allocate space when you use such a variable in your PL/SQL code, so you might end up wasting space unnecessarily. Adding more RAM? Sure, that's an option, but it costs money.
Therefore, once again - design your data model carefully and you should be OK, following the supported ALTER TABLE syntax.
Note: Use this with caution.
Use this only after you have read all other answers and still think you want it that way.
Use a DDL trigger. The below is just a sample which handles customer_id as a NUMBER type. For VARCHAR2, DATE, etc., you need a generic way to construct the DDL; refer to Issue in dynamic table creation.
CREATE OR REPLACE TRIGGER trg_alter_table1
AFTER ALTER ON SCHEMA
WHEN (ORA_DICT_OBJ_TYPE = 'TABLE' AND ORA_DICT_OBJ_NAME = 'TABLE1')
DECLARE
  v_ddl VARCHAR2 (200);
BEGIN
  SELECT 'ALTER TABLE TABLE2 MODIFY '
         || column_name
         || ' '
         || data_type
         || '('
         || data_precision
         || ')'
    INTO v_ddl
    FROM user_tab_columns
   WHERE table_name = 'TABLE1' AND column_name = 'CUSTOMER_ID';

  EXECUTE IMMEDIATE v_ddl;
END;
/

trigger insert and update oracle error

Friends, I have a question about a cascading trigger.
I have 2 tables: table data, which has 3 attributes (id_data, sum, and id_tool), and table tool, which has 3 attributes (id_tool, name, sum_total). Tables data and tool are joined on id_tool.
I want to create a trigger to keep sum_total up to date, so that when I insert into table data, sum_total in table tool (where tool.id_tool = data.id_tool) is updated too.
I created this trigger, but I get error ORA-04090.
create or replace trigger aft_ins_tool
after insert on data
for each row
declare
  v_stok number;
  v_jum  number;
begin
  select sum into v_jum
    from data
   where id_data = :new.id_data;

  select sum_total into v_stok
    from tool
   where id_tool = (select id_tool
                      from data
                     where id_data = :new.id_data);

  if inserting then
    v_stok := v_stok + v_jum;

    update tool
       set sum_total = v_stok
     where id_tool = (select id_tool
                        from data
                       where id_data = :new.id_data);
  end if;
end;
/
Please give me your opinion.
Thanks.
The ora-04090 indicates that you already have an AFTER INSERT ... FOR EACH ROW trigger on that table. Oracle doesn't like that, because the order in which the triggers fire is unpredictable, which may lead to unpredictable results, and Oracle really doesn't like those.
So, your first step is to merge the two sets of code into a single trigger. Then the real fun begins.
Presumably there is only one row in data matching the current value of id_data (if not, your data model is really messed up and there's no hope for your situation). Anyway, that means the current row already gives you access to the values of :new.sum and :new.id_tool. So you don't need those queries on the data table: removing those selects will remove the possibility of "mutating table" errors.
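As a minimal sketch, the merged trigger might then look like this (assuming sum and id_tool are, as described, columns of the row being inserted):

create or replace trigger aft_ins_tool
after insert on data
for each row
begin
  -- :new.sum and :new.id_tool come straight from the inserted row,
  -- so there is no need to query the data table (and no mutating-table error)
  update tool
     set sum_total = sum_total + :new.sum
   where id_tool = :new.id_tool;
end;
/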
As a general observation, maintaining aggregate or summary tables like this is generally a bad idea. Usually it is better just to query the information when it is needed. If you really have huge volumes of data then you should use a materialized view to maintain the summary, rather than hand-rolling something.
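For illustration, a sketch of the materialized view route, using the tables from the question; note that a fast-refresh-on-commit aggregate typically needs a materialized view log with INCLUDING NEW VALUES, plus the COUNT columns shown:

create materialized view log on data
  with rowid (id_tool, sum) including new values;

create materialized view tool_totals
  refresh fast on commit
as
  select id_tool,
         sum(sum)   as sum_total,
         count(sum) as cnt_sum,  -- required for fast refresh of SUM
         count(*)   as cnt
    from data
   group by id_tool;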

Stored Procedure: Cursor is bad?

I read somewhere that 99% of the time you don't need to use a cursor.
But I can't think of any other way besides using a cursor in the following situation.
Select t.flag
From Dual t;
Let's say this returns 4 rows of either 'Y' or 'N'. I want the procedure to trigger something if it finds a 'Y'. I usually declare a cursor and loop until %NOTFOUND. Please tell me if there is a better way.
Also, if you have any ideas: when is the best time to use a cursor?
EDIT: Instead of inserting the flags, what if I want to do "if 'Y' then trigger something"?
Your case definitely falls into the 99%.
You can easily do the conditional insert using insert into ... select .... It's just a matter of writing a select that returns the result you want to insert.
If you want to insert one record for each 'Y', use a query with where flag = 'Y'. If you only want to insert a single record when there is at least one 'Y', you can add distinct to the query.
A cursor is useful when you do something more complicated. I, for example, use a cursor when I need to insert or update records in one table and, for each record, also insert or update one or more records in several other tables.
Something like this:
INSERT INTO TBL_FLAG (col)
SELECT ID FROM Dual where flag = 'Y'
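As for the EDIT ("if 'Y' then trigger something"), you still don't need a cursor. A sketch, assuming the flags actually live in a table called flags_table (a name made up here; the query against DUAL would not return four rows):

DECLARE
  v_found NUMBER;
BEGIN
  -- we only care whether at least one 'Y' exists, so stop at the first hit
  SELECT COUNT(*)
    INTO v_found
    FROM flags_table t
   WHERE t.flag = 'Y'
     AND ROWNUM = 1;

  IF v_found > 0 THEN
    NULL;  -- trigger something here
  END IF;
END;
/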
You will usually see a performance gain when using set-based instead of procedural operations, because most modern DBMSs are optimized for set-based operations.
Well, the example doesn't quite make sense, but you can always write an INSERT ... SELECT statement instead of what I think you are describing.
Cursors are best used when a column value from one table will be used repeatedly in multiple queries on different tables.
Suppose the values of the id_test column are fetched from MY_TEST_TBL using a cursor CUR_TEST, and id_test is a foreign key in other tables. If we want to use id_test to insert or update rows in tables A_TBL, B_TBL and C_TBL, then it's best to use a cursor instead of complex queries, as sketched below.
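A sketch of that scenario (the processed column and the exact DML are made up for illustration):

DECLARE
  CURSOR cur_test IS
    SELECT id_test FROM my_test_tbl;
BEGIN
  FOR r IN cur_test LOOP
    -- reuse the same id_test value for DML against several tables
    UPDATE a_tbl SET processed = 'Y' WHERE id_test = r.id_test;
    UPDATE b_tbl SET processed = 'Y' WHERE id_test = r.id_test;
    INSERT INTO c_tbl (id_test) VALUES (r.id_test);
  END LOOP;
END;
/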
Hope this might help to understand the purpose of cursors

select * through dblink

I have some trouble when trying to update a table by looping over a cursor which selects from a source table through a dblink.
I have two databases, DB1 and DB2.
They are two different database instances.
And I am using the following statement in DB1:
DECLARE
  CURSOR TestCursor IS
    SELECT a.*, 'A' TEST_COL_A, 'B' TEST_COL_B
      FROM rpt.SOURCE@DB2 a;
BEGIN
  FOR C1 IN TestCursor LOOP
    INSERT INTO RPT.TARGET
    (
      /* COMPANY_NAME and CUST_ID come from the SOURCE table on DB2 */
      COMPANY_NAME, CUST_ID, TEST_COL_A, TEST_COL_B
    )
    VALUES
    (
      C1.COMPANY_NAME, C1.CUST_ID, C1.TEST_COL_A, C1.TEST_COL_B
    );
  END LOOP;
  /* Some code... */
END;
/
Everything worked fine until I added a column NEW_COL to SOURCE on DB2.
Now the inserted data gets the wrong values.
The value of TEST_COL_A, as I expect, should be 'A'.
However, it contains the value of NEW_COL, the column I added to SOURCE.
And the value of TEST_COL_B contains 'A'.
Has anyone encountered the same issue?
It seems like Oracle caches the table's columns when it compiles.
Is there any way to add a column to the source table without recompiling?
According to this:
Oracle Database does not manage dependencies among remote schema objects other than local-procedure-to-remote-procedure dependencies.
For example, assume that a local view is created and defined by a query that references a remote table. Also assume that a local procedure includes a SQL statement that references the same remote table. Later, the definition of the table is altered.
Therefore, the local view and procedure are never invalidated, even if the view or procedure is used after the table is altered, and even if the view or procedure now returns errors when used. In this case, the view or procedure must be altered manually so that errors are not returned. In such cases, lack of dependency management is preferable to unnecessary recompilations of dependent objects.
In this case you aren't quite seeing errors, but the cause is the same. You also wouldn't have a problem if you used explicit column names instead of *, which is usually safer anyway. If you're using * you can't avoid recompiling (unless, I suppose, the * is the last item in the select list, in which case any extra columns on the end wouldn't cause a problem - as long as their names didn't clash).
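For instance, the cursor from the question could name its columns explicitly (a sketch using the column names mentioned there); a NEW_COL added to SOURCE later can then no longer shift the select list:

DECLARE
  CURSOR TestCursor IS
    SELECT a.COMPANY_NAME, a.CUST_ID, 'A' TEST_COL_A, 'B' TEST_COL_B
      FROM rpt.SOURCE@DB2 a;
BEGIN
  FOR C1 IN TestCursor LOOP
    NULL;  -- same processing as before
  END LOOP;
END;
/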
I recommend that you use a single set-processing insert statement in DB1 rather than a row-at-a-time cursor FOR loop, for example:
INSERT INTO RPT.TARGET (COMPANY_NAME, CUST_ID, TEST_COL_A, TEST_COL_B)
SELECT COMPANY_NAME, CUST_ID, 'A' TEST_COL_A, 'B' TEST_COL_B
  FROM rpt.SOURCE@DB2;
Rationale:
Set processing will almost always outperform row-at-a-time processing (which is really slow-at-a-time processing).
The set-based insert is a scalable solution: if the application needs to handle tens of thousands or millions of rows, the row-at-a-time version will not likely scale.
Also, the SELECT * construct is dangerous, for the reason you encountered (and other similar reasons).

select only new row in oracle

I have a table with a VARCHAR2 primary key.
It gets about 1,000,000 transactions per day.
My app wakes up every 5 minutes to generate a text file by querying only the new records.
It remembers the last point it processed and handles only records newer than that.
Any ideas how to query this with good performance?
I am able to add a new column if necessary.
What should this process be written in?
PL/SQL?
Java?
Everyone here is really, really close. However:
Scott Bailey's wrong about using a bitmap index if the table's under any sort of continuous DML load. That's exactly the wrong time to use a bitmap index.
Everyone else's answer about the PROCESSED CHAR(1) CHECK (processed IN ('Y','N')) column is right, but missing how to index it; you should use a function-based index like this:
CREATE INDEX MY_UNPROCESSED_ROWS_IDX ON MY_TABLE
(CASE WHEN PROCESSED_FLAG = 'N' THEN 'N' ELSE NULL END);
You'd then query it using the same expression:
SELECT * FROM MY_TABLE
WHERE (CASE WHEN PROCESSED_FLAG = 'N' THEN 'N' ELSE NULL END) = 'N';
The reason to use the function-based index is that Oracle doesn't write index entries for entirely NULL values being indexed, so the function-based index above will only contain the rows with PROCESSED_FLAG = 'N'. As you update your rows to PROCESSED_FLAG = 'Y', they'll "fall out" of the index.
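The processing pass might then look like this (a sketch, keeping the MY_TABLE and PROCESSED_FLAG names from above):

-- grab the unprocessed rows via the function-based index ...
SELECT * FROM MY_TABLE
 WHERE (CASE WHEN PROCESSED_FLAG = 'N' THEN 'N' ELSE NULL END) = 'N';

-- ... and once they're handled, this update drops them out of the index
UPDATE MY_TABLE
   SET PROCESSED_FLAG = 'Y'
 WHERE (CASE WHEN PROCESSED_FLAG = 'N' THEN 'N' ELSE NULL END) = 'N';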
Well, if you can add a new column, you could create a Processed column to mark processed records, and create an index on that column for performance.
Then query only those rows that have been newly added and not yet processed.
This should be easily done using SQL queries.
Ah, I really hate to add another answer when the others have come so close to nailing it. But:
As Ponies points out, Oracle does have a hidden column (ORA_ROWSCN, a System Change Number) that can pinpoint when each row was modified. Unfortunately, by default it gets the information from the block instead of storing it with each row, and changing that behavior will require you to rebuild a really large table. So while this answer is good for quieting the SQL Server fella, I'd not recommend it.
Astander is right there but needs a few caveats. Add a new column needs_processed CHAR(1) DEFAULT 'Y' and add a BITMAP index. For low-cardinality columns ('Y'/'N') the bitmap index will be faster. Once you have that, the rest is pretty easy. But you've got to be careful not to select the new rows, process them, and mark them as processed in one step. Otherwise, rows could be inserted while you are processing that will get marked processed even though they have not been.
The easiest way would be to use PL/SQL to open a cursor that selects unprocessed rows, processes them, and then updates each row as processed. If you have an aversion to walking cursors, you could collect the pks or rowids into a nested table, process them, and then update using the nested table, as sketched below.
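A sketch of the nested-table variant (reusing the needs_processed flag from above; my_table is a made-up name):

DECLARE
  TYPE t_rid_tab IS TABLE OF ROWID;
  v_rids t_rid_tab;
BEGIN
  -- collect the rowids of the unprocessed rows
  SELECT rowid
    BULK COLLECT INTO v_rids
    FROM my_table
   WHERE needs_processed = 'Y';

  -- process the collected rows here ...

  -- then mark exactly those rows as done
  FORALL i IN 1 .. v_rids.COUNT
    UPDATE my_table
       SET needs_processed = 'N'
     WHERE rowid = v_rids(i);
END;
/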
In the MS SQL Server world where I work, we have a 'version' column of type 'timestamp' on our tables.
So, to answer #1, I would add a new column.
To answer #2, I would do it in PL/SQL for performance.
Mark
"astander" pretty much did the work for you. You need to ALTER your table to add one more column (lets say PROCESSED)..
You can also consider creating an INDEX on the PROCESSED ( a bitmap index may be of some advantage, as the possible value can be only 'y' and 'n', but test it out ) so that when you query it will use INDEX.
Also if sure, you query only for every 5 mins, check whether you can add another column with TIMESTAMP type and partition the table with it. ( not sure, check out again ).
I would also think about writing job or some thing and write using UTL_FILE and show it front end if it can be.
If performance is really a problem and you want to create your file asynchronously, you might want to use Oracle Streams, which will actually get modification data from your redo log without affecting performance of the main database. You may not even need a separate job, as you can configure Oracle Streams to do asynchronous replication of the changes, through which you can trigger the file creation.
Why not create an extra table that holds two columns, an ID column and a processed-flag column? Have an insert trigger on the original table place its ID in this new table. Your logging process can then select records from this new table and mark them as processed. Finally, delete the processed records from this table. A sketch follows.
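A sketch of that queue-table idea (all names made up for illustration):

CREATE TABLE change_queue (
  id        NUMBER  NOT NULL,        -- PK value of the original row
  processed CHAR(1) DEFAULT 'N'
);

CREATE OR REPLACE TRIGGER trg_queue_insert
AFTER INSERT ON original_table
FOR EACH ROW
BEGIN
  INSERT INTO change_queue (id) VALUES (:new.id);
END;
/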
I'm pretty much in agreement with Adam's answer. But I'd want to do some serious testing compared to an alternative.
The issue I see is that you need to not only select the rows, but also do an update of those rows. While that should be pretty fast, I'd like to avoid the update. And avoid having any large transactions hanging around (see below).
The alternative would be to add a CREATE_DATE date default sysdate column. Index that, and then select records where create_date >= (the start date/time of your previous select); see the sketch below.
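A sketch of that alternative (my_table is a made-up name; :last_run stands for the remembered start time of the previous run):

ALTER TABLE my_table ADD (create_date DATE DEFAULT SYSDATE NOT NULL);

CREATE INDEX my_table_created_idx ON my_table (create_date);

SELECT *
  FROM my_table
 WHERE create_date >= :last_run;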
But I don't have enough data on the relative costs of setting a sysdate default vs. setting a value of 'Y', of updating the function-based index vs. the date index, and of doing a range select on the date vs. a specific select on a single 'Y' value. You'll probably want to preserve stats or hint the query to use the index on the Y/N column, and definitely want to use a hint on the date column: the stats on the date column will almost certainly be old.
If data are also being added to the table continuously, including during the period when your query is running, you need to watch out for transaction control. After all, you don't want to read 100,000 records that have the flag = 'Y', then do your update on 120,000, including the 20,000 that arrived while your query was running.
In the flag case, there are two easy ways: SET TRANSACTION before your select and commit after your update, or start by doing an update from 'Y' to 'Q', then do your select for those that are 'Q', and then update them to 'N' (see the sketch below). Oracle's read consistency is wonderful but needs to be handled with care.
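The second, flag-based option as a sketch ('Y' = new, 'Q' = claimed for this run, 'N' = done; names as above):

-- claim the current batch
UPDATE my_table SET needs_processed = 'Q' WHERE needs_processed = 'Y';
COMMIT;

-- process only the claimed rows; inserts arriving meanwhile stay at 'Y'
SELECT * FROM my_table WHERE needs_processed = 'Q';

-- mark the batch done
UPDATE my_table SET needs_processed = 'N' WHERE needs_processed = 'Q';
COMMIT;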
For the date column version, if you don't mind a risk of processing a few rows more than once, just update your table that has the last processed date/time immediately before you do your select.
If there's not much information in the table, consider making it Index Organized.
What about using Materialized view logs? You have a lot of options to play with:
SQL> create table test (id_test number primary key, dummy varchar2(1000));
Table created
SQL> create materialized view log on test;
Materialized view log created
SQL> insert into test values (1, 'hello');
1 row inserted
SQL> insert into test values (2, 'bye');
1 row inserted
SQL> select * from mlog$_test;
ID_TEST    SNAPTIME$$  DMLTYPE$$  OLD_NEW$$  CHANGE_VECTOR$$
---------- ----------- ---------- ---------- ---------------
         1 01/01/4000  I          N          FE
         2 01/01/4000  I          N          FE
SQL> delete from mlog$_test where id_test in (1,2);
2 rows deleted
SQL> insert into test values (3, 'hello');
1 row inserted
SQL> insert into test values (4, 'bye');
1 row inserted
SQL> select * from mlog$_test;
ID_TEST    SNAPTIME$$  DMLTYPE$$  OLD_NEW$$  CHANGE_VECTOR$$
---------- ----------- ---------- ---------- ---------------
         3 01/01/4000  I          N          FE
         4 01/01/4000  I          N          FE
I think this solution should work.
You need to do the following steps.
For the first run, you will have to copy all records; in that first run, execute the following query:
insert into new_table (max_rowid)
select max(rowid) from yourtable;
The next time you want to get only the newly inserted rows, you can do it by executing the following command:
Select * from yourtable where rowid > (select max_rowid from new_table);
Once you are done processing the above query, simply truncate new_table and insert max(rowid) from yourtable again.
I think this should work and would be the fastest solution.
