I have a table on which I am trying to do an insert or update depending on the values I am given, but the insert is not working for this particular table, even though it worked for the previous tables the script ran on.
To test the problem, I put a few anonymous blocks into Oracle's SQL Developer that insert or update depending on whether a key is present. Updates work fine, but when it comes to inserting a new row, nothing is inserted.
If I had this table:
COFFEE_ID  TEA_ID  NAME
11         100     combo 1
12         101     combo 2
13         102     combo 3
Doing this will not insert anything and will instead move on to the next anonymous block:
begin
  insert into COFFEE_TEA (COFFEE_ID, TEA_ID, NAME) values (14, 103, 'combo 4');
exception when dup_val_on_index then
  update ....
end;
....
I suspect it has something to do with the trigger on this table. It is a BEFORE ... FOR EACH ROW trigger that inserts data into some other table. There is no exception handling in the trigger, so I'm guessing it fails without reporting the failure (nothing shows up in SQL Developer when I run the script).
My two questions are:
When the trigger runs, what happens if the ID it is trying to insert into the other table already exists? It looks like it fails silently?
How best should I fix this? I am not sure I can change the trigger code itself, but would it be possible to catch the error inside my anonymous block (assuming that it is actually the trigger causing the problem)? If so, how would I know which exception to catch if it fails silently?
I removed the exception handler in SQL Developer and it tells me that a unique constraint was violated; specifically, the data being inserted into the other table through the trigger is the cause.
Your additional information tells us that your trigger is hurling ORA-00001, a unique key violation. This is the error which the DUP_VAL_ON_INDEX exception handles. So your exception handler, which is supposed to deal with key violations on COFFEE_TEA, is also swallowing the exceptions from your trigger. Messy.
There are two possible solutions. One is to put decent error handling in the trigger code. The other is to use MERGE for your data loading routine.
I always prefer MERGE as a mechanism for performing upserts, because I don't like using exceptions to handle legitimate expected states. Find out more.
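For this table, a minimal MERGE sketch (assuming COFFEE_ID is the key the unique index is built on) might look like this:
merge into coffee_tea t
using (select 14 as coffee_id, 103 as tea_id, 'combo 4' as name from dual) s
on (t.coffee_id = s.coffee_id)
when matched then
  update set t.tea_id = s.tea_id, t.name = s.name
when not matched then
  insert (coffee_id, tea_id, name)
  values (s.coffee_id, s.tea_id, s.name);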
Ideally you should do both. Triggers are supposed to be self-contained code: imposing unhandled exceptions on routines which interact with their tables breaks the encapsulation.
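For the first option, the trigger could deal with its own key violation instead of letting it escape. A sketch only: the real trigger wasn't posted, so the trigger and audit-table names here are made up:
create or replace trigger coffee_tea_bir
  before insert on coffee_tea
  for each row
begin
  insert into coffee_tea_audit (coffee_id, tea_id, name)
  values (:new.coffee_id, :new.tea_id, :new.name);
exception
  when dup_val_on_index then
    null; -- the audit row already exists; decide here what that should mean
end;
/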
A trigger will not change the outcome of the DML on the table. Remove the exception block and, provided COFFEE_TEA is a table (and not, say, a view), the insert will either succeed or fail with an error.
In other words, the following script will never output 0 if COFFEE_TEA is a table:
BEGIN
  INSERT INTO coffee_tea (COFFEE_ID, TEA_ID, NAME) values (14, 103, 'combo 4');
  dbms_output.put_line(sql%rowcount);
END;
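If you run this in SQL Developer and see no count at all, remember to enable server output first:
set serveroutput on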
Related
So I am trying to use triggers to set some rules. If anyone has an ID number lower than 3, he will have to pay only 100 dollars, but someone with an ID above that will have to pay more. I did some research and was told to use triggers, and that triggers are very useful when fetching multiple rows. So I tried that, but it didn't work. The trigger gets created, but when I try to add values I get the following error:
ORA-01422: exact fetch returns more than requested number of rows
ORA-06512: at "S.PRICTICKET", line 6
ORA-04088: error during execution of trigger 'S.PRICTICKET'
Here is what I did to make the trigger:
CREATE OR REPLACE TRIGGER PRICTICKET
BEFORE INSERT OR UPDATE OR DELETE ON PAYS
FOR EACH ROW ENABLE
DECLARE
  V_PRICE PAYS.PRICE%TYPE;
  V_ID    PAYS.ID%TYPE;
  V_NAME  PAYS.NAME%TYPE;
BEGIN
  SELECT ID, NAME INTO V_ID, V_NAME FROM PAYS;
  IF INSERTING AND V_ID < 3 THEN
    V_PRICE := 100;
    INSERT INTO PAYS(ID, NAME, PRICE) VALUES (V_ID, V_NAME, V_PRICE);
  ELSIF INSERTING AND V_ID >= 3 THEN
    V_PRICE := 130;
    INSERT INTO PAYS(ID, NAME, PRICE) VALUES (V_ID, V_NAME, V_PRICE);
  END IF;
END;
The thing is, when I execute this code I actually do get a message saying the trigger has been compiled. But when I try to insert values into the table with the following statement, I get the error message mentioned above.
INSERT INTO PAYS(ID,NAME) VALUES (19,'SS');
You're getting the error you specified, ORA-01422, because you're returning more than one row with the following SELECT:
SELECT ID,NAME INTO V_ID,V_NAME FROM PAYS;
You need to restrict the result set. For example, I'll use the :NEW pseudorecord to grab the row's new ID value which, if unique, will restrict the SELECT to one row:
SELECT ID,NAME INTO V_ID,V_NAME FROM PAYS WHERE ID = :NEW.ID;
Here are the Oracle docs on using triggers: https://docs.oracle.com/database/121/TDDDG/tdddg_triggers.htm#TDDDG99934
However, I believe your trigger has other issues, please see my comments and we can discuss.
EDIT: Based on our discussion.
ORA-04088: error during execution of trigger
Using an INSERT inside a BEFORE INSERT trigger on the same table will create an infinite loop. Please consider using an AFTER INSERT and changing your INSERTs to UPDATEs, or an INSTEAD OF INSERT trigger.
Additionally, remove DELETE from the trigger definition; it makes no sense in this context.
Let's begin by clearing up a few things. You were told that "triggers are very useful when fetching multiple rows"; as a general rule, and without additional context, this is false. There are 4 types of DML triggers:
Before Statement - fires once for the statement, regardless of the number of rows processed.
Before Row - fires once for each row processed by the statement, before old and new values are merged into a single set of values. At this point you are allowed to change the values in the columns.
After Row - fires once for each row processed by the statement, after old and new values are merged into a single set of values. At this point you cannot change the column values.
After Statement - fires once for the statement, regardless of the number of rows processed.
Keep in mind that the trigger is effectively part of the statement.
A trigger can be fired for Insert, Update, or Delete, but there is no need to fire on all of them. In this case, as suggested, remove the Delete; and also the Update, as your trigger does nothing with it. (Note: there are compound triggers, but they contain a segment for each of the above.)
In general a trigger cannot reference the table it is fired upon; see error ORA-04091.
If a trigger fires on an Insert, it cannot insert into that same table (again, see ORA-04091), and even if you got around that, the insert would fire the trigger again, creating a recursive and perhaps never-ending loop - which is exactly what would happen here.
Use :New.column_name and :Old.column_name as appropriate to refer to column values. Do not attempt to select them.
Since you are setting the value of a column, you must use a Before Row trigger.
So applying this to your trigger the result becomes:
CREATE OR REPLACE TRIGGER PRICTICKET
BEFORE INSERT ON PAYS
FOR EACH ROW ENABLE
BEGIN
  if :new.id is not null then
    if :new.id < 3 then
      :new.price := 100;
    else
      :new.price := 130;
    end if;
  else
    null; -- what should happen here?
  end if;
END PRICTICKET;
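A quick way to verify the behaviour (assuming PAYS has the ID, NAME and PRICE columns used above):
INSERT INTO PAYS (ID, NAME) VALUES (2, 'AA');  -- trigger sets PRICE to 100
INSERT INTO PAYS (ID, NAME) VALUES (19, 'SS'); -- trigger sets PRICE to 130
SELECT ID, NAME, PRICE FROM PAYS;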
I get this error whenever I try to fire a trigger after insert on the passengers table. The trigger is supposed to call a procedure that takes two parameters from the newly inserted values and, based on those, updates another table, the booking table. However, I am getting this error:
ORA-04091: table AIRLINESYSTEM.PASSENGER is mutating, trigger/function may not see it
ORA-06512: at "AIRLINESYSTEM.CALCULATE_FLIGHT_PRICE", line 11
ORA-06512: at "AIRLINESYSTEM.CALCULATE_FLIGHT_PRICE", line 15
ORA-06512: at "AIRLINESYSTEM.CALCULATE_FLIGHT_PRICE_T1", line 3
ORA-04088: error during execution of trigger 'AIRLINESYSTEM.CALCULATE_FLIGHT_PRICE_T1' (Row 3)
I compiled and tested the procedure at the SQL command line and it works fine. The problem seems to be with the trigger. This is the trigger code:
create or replace trigger "CALCULATE_FLIGHT_PRICE_T1"
AFTER
insert on "PASSENGER"
for each row
begin
CALCULATE_FLIGHT_PRICE(:NEW.BOOKING_ID);
end;
Why isn't the trigger calling the procedure?
You are using database triggers in a way they are not supposed to be used: the trigger tries to read the table it is currently modifying. If Oracle allowed you to do so, you would be performing dirty reads.
Fortunately, Oracle warns you about this behaviour, and you can modify your design.
The best solution would be to create an API: a procedure, preferably in a package, that inserts passengers in exactly the way you would like. In pseudo-PL/SQL code:
procedure insert_passenger
( p_passenger_nr in number
, p_passenger_name in varchar2
, ...
, p_booking_id in number
, p_dob in number
)
is
begin
insert into passenger (...)
values
( p_passenger_nr
, p_passenger_name
, ...
, p_booking_id
, p_dob
);
calculate_flight_price
( p_booking_id
, p_dob
);
end insert_passenger;
/
Instead of your insert statement, you would now call this procedure. And your mutating table problem will disappear.
If you insist on using a database trigger, then you would need to avoid the select statement in cursor c_passengers. This doesn't make any sense: you have just inserted a row into table passengers and know all the column values. Then you call calculate_flight_price to retrieve the column DOB, which you already know.
Just add a parameter P_DOB to your calculate_flight_price procedure and call it with :new.dob, like this:
create or replace trigger calculate_flight_price_t1
after insert on passenger
for each row
begin
calculate_flight_price
( :new.booking_id
, :new.dob
);
end;
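On the procedure side, the change might look something like this. A sketch only: the original procedure body isn't shown in the question, the BOOKING columns are assumed, and the pricing rule is invented purely for illustration:
create or replace procedure calculate_flight_price
( p_booking_id in number
, p_dob        in date
)
is
begin
  -- p_dob now arrives as a parameter, so there is no need to select it
  -- from PASSENGER (that select is what caused the mutating-table error)
  update booking
     set price = case
                   when p_dob < date '1950-01-01' then price * 0.9 -- made-up rule
                   else price
                 end
   where booking_id = p_booking_id;
end calculate_flight_price;
/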
Oh my goodness... you are attempting a dirty read in the cursor. This is bad design.
If you allow a dirty read, it can return a wrong answer, and even an answer that never existed in the table. In a multiuser database, a dirty read is a dangerous feature.
The point here is that a dirty read is not a feature; rather, it's a liability. In Oracle Database it is simply not needed: you get the main advantage of a dirty read (no blocking) without any of the incorrect results.
Read more about the "READ UNCOMMITTED isolation level", which allows dirty reads; it provides a standards-based definition that allows for non-blocking reads.
The other way round
You are misusing the trigger; I mean, the wrong kind of trigger is being used. You insert or update a row in table A, and a trigger on table A (for each row) executes a query on table A (through a procedure)?!
Oracle throws ORA-04091, which is expected and normal behaviour. Oracle wants to protect you from yourself, since it guarantees that each statement is atomic (it will either fail or succeed completely) and that each statement sees a consistent view of the data.
You would expect the query inside the trigger not to see the row being inserted by the triggering statement; letting the query run would contradict that consistency guarantee.
Solution: use BEFORE instead of AFTER
CREATE OR REPLACE TRIGGER SOMENAME
BEFORE INSERT OR UPDATE ON SOMETABLE
I've created a trigger that only allows a user to have 10 orders currently being processed. Now, when the customer tries to place order number 11, the Oracle database throws back an error. Well, 3 errors:
ORA-20000: You currently have 10 or more orders processing.
ORA-06512: at "C3283535.TRG_ORDER_LIMIT", line 12
ORA-04088: error during execution of trigger 'C3283535.TRG_ORDER_LIMIT'
The top error is the one I've created using:
raise_application_error(-20000, 'You currently have 10 or more orders processing.');
I just wondered, after searching and trying many ways, how to change the error messages for the other two errors, or even not show them to the user altogether?
Here is the code I've used:
create or replace trigger trg_order_limit
before insert on placed_order
for each row
declare
  v_count number;
begin
  -- Get current order count
  select count(order_id)
    into v_count
    from placed_order
   where fk1_customer_id = :new.fk1_customer_id;
  -- Raise exception if there are too many
  if v_count >= 10 then
    raise_application_error(-20000, 'You currently have 10 or more orders processing.');
  end if;
end;
Thanks a lot
Richard
Exception propagation goes from the internal to the external block, as opposed to variable scope, which goes from the external to the internal block. For more on this, read McLaughlin's "Programming with PL/SQL", Chapter 5.
What you are getting here is an exception stack: exceptions raised from the innermost blocks to the outermost blocks.
When you raise an exception from a trigger, your raise_application_error statement returns an error.
It is then propagated to the trigger block, which reports ORA-06512: at "C3283535.TRG_ORDER_LIMIT", line 12. This is because the trigger treats the raised exception as an error and stops.
The error is then propagated to the session, which raises ORA-04088: error during execution of trigger 'C3283535.TRG_ORDER_LIMIT'. This error tells us where, as in which part of the program, the error was raised.
If you are using a front-end program like Java Server Pages or PHP, you will catch the raised error (20000) first, so you can display it to your end user.
EDIT :
About the first error, ORA-20000: you can change its message in the RAISE_APPLICATION_ERROR statement itself.
If you want to handle the ORA-06512, you can use Uday Shankar's answer which is helpful in taking care of this error and showing an appropriate error message.
But you will still get the last ORA-04088. If I were in your place I wouldn't worry: after getting the ORA-20000 I would raise an application error at the front end itself, hiding all the other details from the user.
In fact, this is the nature of Oracle's exception stack: all the errors from the innermost to the outermost block are raised. This often helps us identify the exact source of an error.
In the trigger you can add the exception handling part as shown below:
EXCEPTION
WHEN OTHERS THEN
raise_application_error(-20000, 'You currently have 10 or more orders processing.');
I see that this is quite an old post, but I think readers should be aware that this does not really enforce the business rule (max 10 orders). If it is just "some" number to avoid excessively high amounts, and you don't care if people sometimes have 12 orders, this may be fine. But if not, consider a scenario where you already have 9 orders and orders for the same customer are then inserted from 2 different sessions/transactions simultaneously. In that case you will end up with 11 orders without detecting the overflow, so you cannot actually rely on this trigger.
Besides that, you might need the trigger to fire on update too, if fk1_customer_id can be updated (I have seen implementations where a NULL is first put into the FK column and later updated to the actual value). Consider whether this scenario is realistic.
There is a fundamental flaw in the trigger: you are inside a transaction, and inside a statement that is currently being executed but is not yet complete. What if the insert is not a single-row insert but something like
insert into placed_order (select ... from waiting_orders ...)
What do you expect the trigger to see?
This kind of business rule is not easy to enforce. But if you choose to do it in a trigger, you had better do it in an after-statement trigger (thus, not in a before-row trigger). An after-statement trigger still will not see the results of other uncommitted transactions, but at least the current statement is in a defined state.
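A sketch of the after-statement variant (it re-checks every customer, so in practice you would probably narrow the scan; names follow the question):
create or replace trigger trg_order_limit
after insert or update on placed_order
declare
  v_offenders number;
begin
  -- statement-level trigger: no ORA-04091, the table may be queried freely
  select count(*)
    into v_offenders
    from (select fk1_customer_id
            from placed_order
           group by fk1_customer_id
          having count(*) > 10);
  if v_offenders > 0 then
    raise_application_error(-20000, 'A customer has more than 10 orders processing.');
  end if;
end;
/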
In fact, the business rule can fundamentally only be enforced at commit time; but there is no such thing as an ON COMMIT trigger in the Oracle database.
What you can do is denormalise the count of orders into the customers table (add a column ORDER_COUNT) and place a deferred constraint (ORDER_COUNT <= 10) on that table.
But then you are still relying on every piece of code correctly maintaining this field.
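A sketch of that approach (the column and constraint names are made up):
alter table customers add order_count number default 0 not null;

alter table customers add constraint chk_order_count
  check (order_count <= 10) deferrable initially deferred;

-- every transaction that adds a placed_order row must also do:
update customers
   set order_count = order_count + 1
 where customer_id = :the_order_customer_id;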
A fully reliable, but somewhat cumbersome, alternative is to create a materialized view (something like SELECT fk1_customer_id, count(*) order_count FROM placed_order GROUP BY fk1_customer_id) with FAST REFRESH ON COMMIT on the placed_order table, and create a check constraint order_count <= 10 on the materialized view.
This is about the only way to reliably enforce this type of constraints, without having to think of all possible situations like concurrent sessions, updates etc.
Note however that FAST REFRESH ON COMMIT will slow down your commits, so this solution is not usable for high volumes (sigh... why does Oracle not just provide an ON COMMIT trigger...).
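A sketch of the materialized view approach (it assumes you can create a materialized view log on PLACED_ORDER):
create materialized view log on placed_order
  with rowid (fk1_customer_id) including new values;

create materialized view mv_order_count
  refresh fast on commit
as
  select fk1_customer_id, count(*) order_count
    from placed_order
   group by fk1_customer_id;

-- the constraint is checked when the view refreshes, i.e. at commit time
alter table mv_order_count
  add constraint chk_order_count_max check (order_count <= 10);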
I have ORM set up and working with Oracle on an existing database, and I have been able to get inserts to work when I access the sequence. But because triggers were used in the original application, the sequence skips a number.
Is there a way to get ORM to use the trigger?
Disabling the trigger is not an option, since it is used by the existing app and cannot be disabled during migration.
component persistent="true" table="table_name" schema="schema_name" {
property name="table_id" column="table_id" fieldtype="id" generator="sequence" sequence="schema_name.sequence_name";
...
}
Triggers are not accessible program units. The only way to "call" a trigger is to execute the appropriate DML against the owning table.
There are two possible resolutions to your problem.
Rewrite the trigger. You say another application still needs the trigger to populate the ID, but you could change the trigger's logic with a conditional....
if :new.id is null then
  :new.id := whatever_seq.nextval; -- 11g syntax for brevity
end if;
This will populate the ID when the other application inserts into the table, but it won't overwrite your value.
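Wrapped in a complete trigger, using the placeholder names from your component, that might look like this:
create or replace trigger table_name_bi
before insert on schema_name.table_name
for each row
begin
  if :new.table_id is null then
    :new.table_id := schema_name.sequence_name.nextval; -- 11g syntax
  end if;
end;
/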
Stop worrying. Sequences are merely generators of unique identifiers: the numbers ascend, but it really doesn't matter if there are gaps. Unless you are handling billions of rows, it is extremely unlikely your sequence will run out of numbers before your applications are retired.
Do you mean that the DB normally assigns the ID using an insert trigger? That would explain why you're skipping a number. You could try generator="select", which gets Hibernate to read the ID back after the insert has occurred (and the trigger has fired). It exists to handle exactly the situation I think you're describing.
The PostgreSQL documentation manual (http://www.postgresql.org/docs/8.3/interactive/populate.html) says that to disable autocommit in PostgreSQL you can simply place all insert statements between BEGIN; and COMMIT;.
However, I have difficulty capturing any exceptions that may happen between the BEGIN; and COMMIT;, and if an error occurs (like trying to insert a duplicate PK) I have no way to explicitly call the ROLLBACK or COMMIT commands. Although all insert statements are automatically rolled back, PostgreSQL still expects an explicit call to either COMMIT or ROLLBACK before it considers the transaction terminated. Otherwise, the script has to wait for the transaction to time out, and any statements executed thereafter raise an error.
In a stored procedure you can use the EXCEPTION clause to do this, but the same does not apply in my situation of performing bulk inserts. I tried it, and the exception block did not work for me: every statement executed after the error fails with:
ERROR: current transaction is aborted, commands ignored until end of transaction block
The transaction remains open because it has not been explicitly finalised with a call to COMMIT or ROLLBACK.
Here is a sample of the code I used to test this:
BEGIN;
SET search_path TO testing;
INSERT INTO friends (id, name) VALUES (1, 'asd');
INSERT INTO friends (id, name) VALUES (2, 'abcd');
INSERT INTO friends (id, nsame) VALUES (2, 'abcd'); /*note the deliberate mistake in attribute name and also the deliberately repeated pk value number 2*/
EXCEPTION /* this part does not work for me */
WHEN OTHERS THEN
ROLLBACK;
COMMIT;
When using this technique, do I really have to guarantee that all statements will succeed? Why is that? Isn't there a way to trap errors and explicitly call a rollback?
Thank you
If you do it between BEGIN and COMMIT, everything is automatically rolled back in case of an error.
Excerpt from the URL you posted:
"An additional benefit of doing all insertions in one transaction is that if the insertion of one row were to fail then the insertion of all rows inserted up to that point would be rolled back, so you won't be stuck with partially loaded data."
When I initialize databases, i.e. create a series of tables/views/functions/triggers/etc. and/or load the initial data, I always use psql and its variables to control the flow. I always add:
\set ON_ERROR_STOP
to the top of my scripts, so whenever psql hits an error it aborts. It looks like this might help in your case too.
And in cases when I need to do some exception handling, I use anonymous code blocks like this:
DO $$
DECLARE
    _rec record;
BEGIN
    FOR _rec IN SELECT * FROM schema WHERE schema_name != 'master' LOOP
        EXECUTE 'DROP SCHEMA ' || _rec.schema_name || ' CASCADE';
    END LOOP;
EXCEPTION WHEN others THEN
    NULL;
END;$$;
DROP SCHEMA master CASCADE;
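Applied to the friends example from the question, the same pattern could look like this (a sketch; unique_violation catches the duplicate primary key, and all of the block's inserts are rolled back when it fires):
DO $$
BEGIN
    INSERT INTO friends (id, name) VALUES (1, 'asd');
    INSERT INTO friends (id, name) VALUES (2, 'abcd');
    INSERT INTO friends (id, name) VALUES (2, 'abcd'); -- duplicate PK
EXCEPTION WHEN unique_violation THEN
    RAISE NOTICE 'duplicate key: the inserts in this block were rolled back';
END;$$;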