Oracle Sequence Trigger

I am not too familiar with Oracle, but I have written a trigger for my application that generates record numbers from a sequence. The problem is that some numbers may already be in use, and I want to add a check so that if a number is already taken, the next available value is selected from the sequence. Can this be done, and if so, any assistance would be really appreciated.
DROP TRIGGER COMPLAIN_TRG_ENQUIRYNO;

CREATE OR REPLACE TRIGGER COMPLAIN_TRG_ENQUIRYNO
BEFORE INSERT
ON COMPLAIN REFERENCING NEW AS NEW OLD AS OLD
FOR EACH ROW
DECLARE
  l_enquiry_no_end complain.enquiry_no_end%TYPE;
BEGIN
  SELECT seq_enquiryno.NEXTVAL INTO l_enquiry_no_end FROM dual;
  IF :NEW.ENQUIRY_NO_END = ' ' THEN
    :NEW.ENQUIRY_NO_END := l_enquiry_no_end;
  END IF;
EXCEPTION
  WHEN OTHERS THEN
    -- Consider logging the error and then re-raise
    RAISE;
END;

Don't use a sequence if you have existing (numeric) data in the column, as this can lead to duplicates. Either start from an empty table and use a sequence, or, if you are really stuck, find the maximum primary key you have and reset the sequence's START WITH value above it.
Alternatively, you could use GUIDs instead of sequence numbers, which have the advantage of always being globally unique - call the SYS_GUID() function in your trigger. They can lead to other issues with indexing etc. though.
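A minimal sketch of both options, assuming the COMPLAIN table and SEQ_ENQUIRYNO sequence from the question (option 2 additionally assumes ENQUIRY_NO_END can hold a RAW/VARCHAR2 value):
-- Option 1: push the sequence above the highest number already in use.
-- Recreating the sequence is the simplest way to change its START WITH value.
DECLARE
  v_max complain.enquiry_no_end%TYPE;
BEGIN
  SELECT NVL(MAX(enquiry_no_end), 0) INTO v_max FROM complain;
  EXECUTE IMMEDIATE 'DROP SEQUENCE seq_enquiryno';
  EXECUTE IMMEDIATE 'CREATE SEQUENCE seq_enquiryno START WITH ' || (v_max + 1);
END;
/

-- Option 2: use a GUID instead of a sequence number.
CREATE OR REPLACE TRIGGER complain_trg_enquiryno
BEFORE INSERT ON complain
FOR EACH ROW
BEGIN
  IF :NEW.enquiry_no_end IS NULL THEN
    :NEW.enquiry_no_end := SYS_GUID();
  END IF;
END;
/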

Related

Oracle - Fetch returns more than requested number of rows - using triggers

So I am trying to use triggers to set some rules. If anyone has an ID number lower than 3, he only has to pay 100 dollars, but if someone has an ID above that, he has to pay more. I did some research and was told to use triggers, and that triggers are very useful when fetching multiple rows. So I tried that, but it didn't work. The trigger gets created, but when I try to add values I get the following error:
ORA-01422: exact fetch returns more than requested number of rows
ORA-06512: at "S.PRICTICKET", line 6
ORA-04088: error during execution of trigger 'S.PRICTICKET'
Here is what I did to make the trigger:
CREATE OR REPLACE TRIGGER PRICTICKET
BEFORE INSERT OR UPDATE OR DELETE ON PAYS
FOR EACH ROW ENABLE
DECLARE
  V_PRICE PAYS.PRICE%TYPE;
  V_ID    PAYS.ID%TYPE;
  V_NAME  PAYS.NAME%TYPE;
BEGIN
  SELECT ID, NAME INTO V_ID, V_NAME FROM PAYS;
  IF INSERTING AND V_ID < 3 THEN
    V_PRICE := 100;
    INSERT INTO PAYS(ID, NAME, PRICE) VALUES (V_ID, V_NAME, V_PRICE);
  ELSIF INSERTING AND V_ID >= 3 THEN
    V_PRICE := 130;
    INSERT INTO PAYS(ID, NAME, PRICE) VALUES (V_ID, V_NAME, V_PRICE);
  END IF;
END;
The thing is, when I execute this code I actually do get a message saying the trigger has been compiled, but when I try to insert values into the table with the following statement, I get the error message mentioned above.
INSERT INTO PAYS(ID,NAME) VALUES (19,'SS');
You're getting the error you specified, ORA-01422, because you're returning more than one row with the following SELECT:
SELECT ID,NAME INTO V_ID,V_NAME FROM PAYS;
You need to restrict the result set. For example, I'll use the :NEW pseudorecord to grab the row's new ID value which, if unique, will restrict the SELECT to one row:
SELECT ID,NAME INTO V_ID,V_NAME FROM PAYS WHERE ID = :NEW.ID;
Here is the Oracle documentation on using triggers: https://docs.oracle.com/database/121/TDDDG/tdddg_triggers.htm#TDDDG99934
However, I believe your trigger has other issues; please see my comments and we can discuss.
EDIT: Based on our discussion.
ORA-04088: error during execution of trigger
Using INSERT inside a BEFORE INSERT trigger on the same table will create an infinite loop. Please consider using an AFTER INSERT and change your INSERTS to UPDATES, or an INSTEAD OF INSERT.
Additionally, remove DELETE from the trigger definition. That makes no sense in this context.
Let's clear up a few things. You were told "triggers are very useful when fetching multiple rows"; as a general rule, and without additional context, this is false. There are 4 types of DML triggers:
Before Statement - fires once for the statement, regardless of the number of rows processed.
Before Row - fires once for each row processed during the statement, before old and new values are merged into a single set of values. At this point you are allowed to change the values in the columns.
After Row - fires once for each row processed during the statement, after old and new values are merged into a single set of values. At this point you cannot change the column values.
After Statement - fires once for the statement, regardless of the number of rows processed.
Keep in mind that the trigger is effectively part of the statement.
A trigger can be fired for Insert, Update, or Delete, but there is no need to fire on each. In this case, as suggested, remove the Delete - and also the Update, as your trigger does nothing with it. (NOTE: there are compound triggers, but they contain segments for each of the above.)
In general a trigger cannot reference the table that it is fired upon. See error ORA-04091.
If a trigger fires on an Insert, it cannot insert into that same table (also see ORA-04091), and even if you got around that, the Insert would fire the trigger again, creating a recursive and perhaps never-ending loop - which is exactly what would happen here.
Use :New.column_name and :Old.column_name as appropriate to refer to column values. Do not attempt to select them.
Since you are attempting to determine the value of a column you must use a Before trigger.
So applying this to your trigger the result becomes:
CREATE OR REPLACE TRIGGER PRICTICKET
BEFORE INSERT ON PAYS
FOR EACH ROW ENABLE
BEGIN
  if :new.id is not null then
    if :new.id < 3 then
      :new.price := 100;
    else
      :new.price := 130;
    end if;
  else
    null;  -- what should happen here?
  end if;
END PRICTICKET;
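A quick sanity check of the rewritten trigger, assuming the PAYS table from the question (the second INSERT is the same one that previously raised ORA-01422):
INSERT INTO PAYS (ID, NAME) VALUES (2, 'AA');
INSERT INTO PAYS (ID, NAME) VALUES (19, 'SS');

SELECT id, name, price FROM pays ORDER BY id;
-- Expected: ID 2 gets PRICE 100 and ID 19 gets PRICE 130,
-- because the BEFORE ROW trigger fills :new.price before each row is stored.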

PL/SQL Trigger on INSERT OR UPDATE : ":NEW" IS NULL => ambiguous?

I'm quite a newbie in PL/SQL and I'm trying to do quite complex data integrity checks via triggers.
I've already understood how to avoid problems when querying the same table that a trigger is defined on (via a temporary external table), but now I'm facing a really mind-blowing problem: I thought that ":NEW" referenced the value in my table AFTER an update, but things don't look that simple... It is the new value SET by the update or insert, which looks to be NULL if nothing has been specified, even if the corresponding field value is NOT NULL after the update... which is driving me crazy.
My trigger fires when inserting or updating several columns:
CREATE OR REPLACE TRIGGER TRG_INS_UP_INSTRUMENT_EVENT
AFTER INSERT OR UPDATE OF EVENT_ID, DATE_BEGIN,DATE_END,INSTR_ID,TYPE_EVENT_ID ON AIS_INSTRUMENT_EVENT
But now... If there already is a line with non-null fields and I do an
UPDATE AIS_INSTRUMENT_EVENT SET INSTR_ID='642' WHERE EVENT_ID='6479'
I actually get a ":NEW.DATE_BEGIN" which is NULL... even though neither the old nor the new value is NULL (because I just didn't update it).
How can I distinguish, in my trigger, the case where DATE_BEGIN is updated and deliberately SET to NULL from the case where nothing has been specified (and the field must thus remain the same, but not necessarily NULL)? I have too many possible combinations to check one by one...
Thanks in advance for your help!
What you are saying is not true: :new contains the full row, regardless of whether the column is referenced in the UPDATE statement:
CREATE TABLE test (test INTEGER, last_changed DATE);

CREATE OR REPLACE TRIGGER TRG_INS_UP_TEST
AFTER INSERT OR UPDATE OF test, last_changed ON test
FOR EACH ROW
BEGIN
  dbms_output.put_line('LAST CHANGED IS ' || :new.last_changed);
END;
/

INSERT INTO test (test, last_changed) VALUES (1, SYSDATE);
COMMIT;
UPDATE test SET test = test + 1;
DBMS Output:
LAST CHANGED IS 01.09.17
To achieve what you want, the mechanism works slightly differently. There are two different use cases to look at:
1.) You want the trigger not to fire unless a certain column is mentioned. This use case is handled by the column list in the trigger declaration (INSERT OR UPDATE OF column_name). If the INSERT/UPDATE statement only affects columns that are not mentioned, the trigger will not fire.
2.) You want the trigger not to fire unless a certain value is modified, i.e. the trigger should only fire if a value has actually changed. This is done with the WHEN restriction of the trigger. It is usually used in conjunction with DECODE, like so:
CREATE OR REPLACE TRIGGER TRG_INS_UP_TEST
AFTER INSERT OR UPDATE OF test, last_changed ON test
FOR EACH ROW
WHEN (DECODE(new.test, old.test, 0, 1) = 1 OR DECODE(new.last_changed, old.last_changed, 0, 1) = 1)
BEGIN
  ...
END;
So, to answer your original question: if you want the trigger to fire only in cases where the column DATE_BEGIN is set to NULL, you will have to declare your trigger using both approaches:
CREATE OR REPLACE TRIGGER TRG_INS_UP_INSTRUMENT_EVENT
AFTER INSERT OR UPDATE OF DATE_BEGIN ON AIS_INSTRUMENT_EVENT
FOR EACH ROW
WHEN (DECODE(new.DATE_BEGIN, old.DATE_BEGIN, 0, 1) = 1 AND new.DATE_BEGIN IS NULL)
The limitation to certain columns ("INSERT OR UPDATE OF DATE_BEGIN") is not strictly necessary, but it is good practice: it improves performance because the trigger does not fire at all when unrelated columns are updated.
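For illustration, a few statements and whether they would fire the trigger declared above (assuming the AIS_INSTRUMENT_EVENT table from the question):
-- Fires: DATE_BEGIN is in the column list, its value changes, and the new value is NULL.
UPDATE ais_instrument_event SET date_begin = NULL WHERE event_id = '6479';

-- Does not fire: DATE_BEGIN is not in the column list of the trigger.
UPDATE ais_instrument_event SET instr_id = '642' WHERE event_id = '6479';

-- Does not fire: DATE_BEGIN is mentioned, but the WHEN clause rejects it
-- because the value does not actually change (old = new).
UPDATE ais_instrument_event SET date_begin = date_begin WHERE event_id = '6479';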
Sorry, I think I jumped to a conclusion too quickly... the bug was mine. I tested on a "toy" table and, indeed, :NEW was not null even when not set by the UPDATE. I found the bug in the meantime. All this is too new to me ;-).
Sorry for disturbing.

Problems inserting data into Oracle table with sequence column via SSIS

I am inserting data into an Oracle table where one of the columns, say the Id column, is populated from a sequence. I would like to know how to do data loads into such tables.
I followed the link below:
It's possible to use OleDbConnections with the Script Component?
and tried to create a function to get the sequence's NEXTVAL from Oracle, but I am getting the following error:
Error while trying to retrieve text for error ORA-01019
I realized that manually setting the value in the package, i.e. by using the Script task to enumerate the values, does not increment the sequence, and that is what is causing the problem. How do we deal with it? Any links that can help me solve it?
I am using SSIS 2014, but I am not able to tag it as such due to a paucity of reputation points.
I created a workaround for this problem. I created staging tables that mirror the destination tables but without the column that takes the sequence Id. After the data is inserted, I run a SQL statement that moves the data from the staging table into the main table, using the sequence's NEXTVAL for the Id column, and finally truncate/drop the staging table as needed. It would still be interesting to know how this can be handled via a script rather than with this workaround.
For instance, something like this:
insert into table_main
select table_main_sequence_name.nextval
      ,s.*
from table_stg s;
ORA-01019 may be related to the fact that you have multiple Oracle clients installed. Please check that the ORACLE_HOME variable points to only one client.
One workaround I'm thinking about is creating two stored routines for handling the sequence. One to get the value you start with:
create or replace function get_first_from_seq return number as
  seqid number;
begin
  select seq_name.nextval into seqid from dual;
  return seqid;
end;
/
Then do your incrementing in the script. After that, call a second procedure to push the sequence forward:
create or replace procedure setseq(val number) as
begin
  execute immediate 'ALTER SEQUENCE seq_name INCREMENT BY ' || val;
end;
/
This is not a good approach, but maybe it will solve your problem.
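For completeness, a hedged sketch of how the two routines above might be driven around the script step; seq_name and the 100-row batch size are assumptions, and the exact arithmetic depends on how the script assigns its Ids:
DECLARE
  v_first NUMBER;
  v_dummy NUMBER;
  v_rows  CONSTANT PLS_INTEGER := 100;  -- rows the script will load (assumption)
BEGIN
  v_first := get_first_from_seq;  -- script assigns v_first .. v_first + v_rows - 1
  -- ... the SSIS Script task loads its rows here ...
  setseq(v_rows);                 -- temporarily widen the increment
  SELECT seq_name.NEXTVAL INTO v_dummy FROM dual;  -- one fetch jumps past the used range
  setseq(1);                      -- restore the normal increment
  -- (the discarded fetch leaves a one-number gap, which is usually harmless)
END;
/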

PL/SQL Trigger - Dynamically reference :NEW or :OLD

Is it possible to dynamically reference the :NEW/OLD pseudo records, or copy them?
I'm writing an audit trigger for a very wide table, so I would like to avoid having separate triggers for insert/delete/update.
When updating/inserting I want to record the :NEW values in the audit table, when deleting I want to record the :OLD values.
create or replace trigger audit_tgr
before insert or update or delete on table_name
for each row
begin
  if (inserting or updating) then
    insert into audit_table (a, b, c) values (:new.a, :new.b, :new.c);
  else
    insert into audit_table (a, b, c) values (:old.a, :old.b, :old.c);
  end if;
end;
You could try:
declare
  l_deleting_ind varchar2(1) := case when DELETING then 'Y' end;
begin
  insert into audit_table (col1, col2)
  values
  ( CASE WHEN l_deleting_ind = 'Y' THEN :OLD.col1 ELSE :NEW.col1 END
  , CASE WHEN l_deleting_ind = 'Y' THEN :OLD.col2 ELSE :NEW.col2 END
  );
end;
I found that the variable was required - you can't access DELETING directly in the insert statement.
WOW, you want to have only ONE insert in your trigger to avoid what?
"I have a single insert statement INSERT INTO HIST ( EMP_ID, NAME ) VALUES (:NEW.EMP_ID , :NEW.NAME ) ; when deleting though, I want to use :OLD , not not have a seperate insert statement for that. "
It's a wide table. SO? It's not as if text editors lack find-and-replace; you're not going to write the insert again - just copy, paste, select, and replace :NEW with :OLD.
Tony does have a solution, but I seriously doubt it performs better than 2 inserts would.
What's the big deal?
EDIT
"the main thing I'm trying to avoid is having to manage 2 inserts when the table changes." – Matthew Watson
I battle this attitude all the time. Those who write Java or C++ or .Net have a built-in RBO... Do this, this is good. Don't do that, that's bad. They write code according to these rules and that's fine. The problem is when these rules are applied to databases. Databases don't behave the same way code does.
In the code world, having essentially the same code in two "places" is bad. We avoid it. One would abstract that code to a function and call it from the two places and thus avoid maintaining it twice, and possibly missing one, etc. We all know the drill.
In this case, while it's true that in the end I recommend two inserts, they are separated by an ELSE. You won't change one and forget the other one. IT'S Right There. It's not in a different package, or in some compiled code, or even somewhere else in the same trigger. They're right beside each other, there's an ELSE and the Insert is repeated with :NEW, instead of :OLD. Why am I so crazed about this? Does it really make a difference here? I know two inserts won't be worse than other ideas, and it could be better.
The real reason is being prepared for the times when it does matter. If you're avoiding two inserts just for the sake of maintenance, you're going to miss the times when this makes a HUGE difference.
IF <condition> THEN  -- condition omitted for clarity
  INSERT INTO log
  SELECT * FROM myTable
  WHERE flag = 'TRUE';
ELSE
  INSERT INTO log
  SELECT * FROM myTable
  WHERE flag = 'FALSE';
END IF;
Some, including Matthew, would say this is bad code because there are two inserts. I could easily replace 'TRUE' and 'FALSE' with a bind variable and flip it at will, and that's what most people would do. But if TRUE is 0.1% of the values and 99.9% is FALSE, you want two inserts, because you want two execution plans: one is better off with an index and the other with a full table scan (FTS). So, yes, you do have two inserts to maintain. That's not always bad, and in this case it's good and desirable.
You can use a compound trigger and programmatically check whether it is an INSERT, UPDATE, or DELETE.
Compound Triggers
Why don't you use Oracle's built-in standard or fine-grained auditing?
Use a compound trigger, as others have suggested. Save the old or new values, as appropriate, to variables, and use the variables in your insert statement:
declare
  v_col1 table_name.col1%type;
  v_col2 table_name.col2%type;
begin
  if deleting then
    v_col1 := :old.col1;
    v_col2 := :old.col2;
  else
    v_col1 := :new.col1;
    v_col2 := :new.col2;
  end if;
  insert into audit_table (col1, col2)
  values (v_col1, v_col2);
end;
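For illustration, here is how that body might be wrapped in an ordinary row-level trigger; table_name, audit_table, and the col1/col2 columns are placeholders, and a compound trigger would carry the same logic in its row-level section:
CREATE OR REPLACE TRIGGER audit_tgr
BEFORE INSERT OR UPDATE OR DELETE ON table_name
FOR EACH ROW
DECLARE
  v_col1 table_name.col1%TYPE;
  v_col2 table_name.col2%TYPE;
BEGIN
  -- body as shown above: copy :old.* when DELETING, otherwise :new.*,
  -- then insert the variables into the audit table
  IF DELETING THEN
    v_col1 := :old.col1;
    v_col2 := :old.col2;
  ELSE
    v_col1 := :new.col1;
    v_col2 := :new.col2;
  END IF;
  INSERT INTO audit_table (col1, col2) VALUES (v_col1, v_col2);
END;
/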

ORA-04091: table [blah] is mutating, trigger/function may not see it

I recently started working on a large complex application, and I've just been assigned a bug due to this error:
ORA-04091: table SCMA.TBL1 is mutating, trigger/function may not see it
ORA-06512: at "SCMA.TRG_T1_TBL1_COL1", line 4
ORA-04088: error during execution of trigger 'SCMA.TRG_T1_TBL1_COL1'
The trigger in question looks like
create or replace TRIGGER TRG_T1_TBL1_COL1
BEFORE INSERT OR UPDATE OF t1_appnt_evnt_id ON TBL1
FOR EACH ROW
WHEN (NEW.t1_prnt_t1_pk is not null)
DECLARE
  v_reassign_count number(20);
BEGIN
  select count(t1_pk) INTO v_reassign_count from TBL1
  where t1_appnt_evnt_id = :new.t1_appnt_evnt_id and t1_prnt_t1_pk is not null;
  IF (v_reassign_count > 0) THEN
    RAISE_APPLICATION_ERROR(-20013, 'Multiple reassignments not allowed');
  END IF;
END;
The table has a primary key "t1_pk", an "appointment event id"
t1_appnt_evnt_id and another column "t1_prnt_t1_pk" which may or may
not contain another row's t1_pk.
It appears the trigger is trying to make sure that no other row with the same t1_appnt_evnt_id is also referring to another row (i.e. has a non-null t1_prnt_t1_pk), if this row is referring to another row.
The comment on the bug report from the DBA says "remove the trigger, and perform the check in the code", but unfortunately they have a proprietary code generation framework layered on top of Hibernate, so I can't even figure out where the check would actually get written. So I'm hoping there is a way to make this trigger work. Is there?
I think I disagree with your description of what the trigger is trying to
do. It looks to me like it is meant to enforce this business rule: For a
given value of t1_appnt_event, only one row can have a non-NULL value of
t1_prnt_t1_pk at a time. (It doesn't matter if they have the same value in the second column or not.)
Interestingly, it is defined for UPDATE OF t1_appnt_event but not for the other column, so I think someone could break the rule by updating the second column, unless there is a separate trigger for that column.
There might be a way you could create a function-based index that enforces this rule so you can get rid of the trigger entirely. I came up with one way but it requires some assumptions:
The table has a numeric primary key
The primary key and the t1_prnt_t1_pk are both always positive numbers
If these assumptions are true, you could create a function like this:
create or replace function f (a number, b number) return number deterministic as
begin
  if a is null then return 0 - b; else return a; end if;
end;
/
and an index like this:
CREATE UNIQUE INDEX my_index ON my_table
( t1_appnt_event, f( t1_prnt_t1_pk, primary_key_column) );
So rows where the t1_prnt_t1_pk column is NULL would appear in the index with the inverse of the primary key as the second value, so they would never conflict with each other. Rows where it is not NULL would use the actual (positive) value of the column. The only way you could get a constraint violation would be if two rows had the same non-NULL values in both columns.
This is perhaps overly "clever", but it might help you get around your problem.
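To make that behaviour concrete, a few hypothetical inserts against such an index (column names as in the CREATE INDEX above):
INSERT INTO my_table (primary_key_column, t1_appnt_event, t1_prnt_t1_pk) VALUES (1, 123, NULL);  -- indexed as (123, -1): OK
INSERT INTO my_table (primary_key_column, t1_appnt_event, t1_prnt_t1_pk) VALUES (2, 123, NULL);  -- indexed as (123, -2): OK
INSERT INTO my_table (primary_key_column, t1_appnt_event, t1_prnt_t1_pk) VALUES (3, 123, 55);    -- indexed as (123, 55): OK
INSERT INTO my_table (primary_key_column, t1_appnt_event, t1_prnt_t1_pk) VALUES (4, 123, 55);    -- same key (123, 55) again: ORA-00001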
Update from Paul Tomblin: I went with a variation on the original idea that igor suggested in the comments:
CREATE UNIQUE INDEX cappec_ccip_uniq_idx
ON tbl1 (t1_appnt_event,
CASE WHEN t1_prnt_t1_pk IS NOT NULL THEN 1 ELSE t1_pk END);
I agree with Dave that the desired result probably can and should be achieved using built-in constraints such as unique indexes (or unique constraints).
If you really need to get around the mutating table error, the usual way is to create a package with a package-scoped variable holding something that identifies the changed rows (I think ROWID works; otherwise you have to use the PK - I don't use Oracle currently, so I can't test it). The FOR EACH ROW trigger fills this variable with all rows modified by the statement, and an AFTER statement trigger then reads those rows and validates them.
Something like this (the syntax may still be a little off, I haven't worked with Oracle for a few years):
CREATE OR REPLACE PACKAGE trigger_pkg AS
  PROCEDURE before_stmt_trigger;
  PROCEDURE for_each_row_trigger(p_rowid IN ROWID);
  PROCEDURE after_stmt_trigger;
END trigger_pkg;
/

CREATE OR REPLACE PACKAGE BODY trigger_pkg AS
  TYPE rowid_tbl IS TABLE OF ROWID;
  modified_rows rowid_tbl := rowid_tbl();

  PROCEDURE before_stmt_trigger IS
  BEGIN
    modified_rows := rowid_tbl();  -- reset the list at the start of each statement
  END before_stmt_trigger;

  PROCEDURE for_each_row_trigger(p_rowid IN ROWID) IS
  BEGIN
    modified_rows.EXTEND;
    modified_rows(modified_rows.COUNT) := p_rowid;
  END for_each_row_trigger;

  PROCEDURE after_stmt_trigger IS
  BEGIN
    FOR i IN 1 .. modified_rows.COUNT LOOP
      SELECT ... INTO ... FROM the_table WHERE rowid = modified_rows(i);
      -- do whatever you want to
    END LOOP;
  END after_stmt_trigger;
END trigger_pkg;
/

CREATE OR REPLACE TRIGGER before_stmt_trigger
BEFORE INSERT OR UPDATE ON mytable
BEGIN
  trigger_pkg.before_stmt_trigger;
END;
/

CREATE OR REPLACE TRIGGER after_stmt_trigger
AFTER INSERT OR UPDATE ON mytable
BEGIN
  trigger_pkg.after_stmt_trigger;
END;
/

CREATE OR REPLACE TRIGGER for_each_row_trigger
AFTER INSERT OR UPDATE ON mytable  -- AFTER row trigger so the rowid exists for inserted rows
FOR EACH ROW
WHEN (new.mycolumn IS NOT NULL)
BEGIN
  trigger_pkg.for_each_row_trigger(:new.rowid);
END;
/
With any trigger-based (or application code-based) solution you need to
put in locking to prevent data corruption in a multi-user environment.
Even if your trigger worked, or was rewritten to avoid the mutating table issue, it would not prevent 2 users from simultaneously updating t1_appnt_evnt_id to the same value on rows where t1_prnt_t1_pk is not null. Assume there are currently no rows where t1_appnt_evnt_id=123 and t1_prnt_t1_pk is not null:
Session 1> update tbl1
set t1_appnt_evnt_id=123
where t1_prnt_t1_pk =456;
/* OK, trigger sees count of 0 */
Session 2> update tbl1
set t1_appnt_evnt_id=123
where t1_prnt_t1_pk =789;
/* OK, trigger sees count of 0 because
session 1 hasn't committed yet */
Session 1> commit;
Session 2> commit;
You now have a corrupted database!
The way to avoid this (in trigger or application code) would be to lock
the parent row in the table referenced by t1_appnt_evnt_id=123 before performing the check:
select appe_id
into v_app_id
from parent_table
where appe_id = :new.t1_appnt_evnt_id
for update;
Now session 2's trigger must wait for session 1 to commit or rollback before it performs the check.
It would be much simpler and safer to implement Dave Costa's index!
Finally, I'm glad no one has suggested adding PRAGMA AUTONOMOUS_TRANSACTION to your trigger: this is often suggested on forums and "works" insofar as the mutating table issue goes away - but it makes the data integrity problem even worse! So just don't...
I had a similar error with Hibernate, and flushing the session by using
getHibernateTemplate().saveOrUpdate(o);
getHibernateTemplate().flush();
solved this problem for me. (I'm not posting my code block, as I was sure that everything was written properly and should work - but it did not until I added the flush() call.) Maybe this can help someone.
