PL/SQL Oracle Error Handling

I've created a trigger that only allows a user to have 10 currently placed orders. So now when the customer tries to place order number 11, the Oracle database throws back an error. Well, three errors:
ORA-20000: You currently have 10 or more orders processing.
ORA-06512: at "C3283535.TRG_ORDER_LIMIT", line 12
ORA-04088: error during execution of trigger 'C3283535.TRG_ORDER_LIMIT'
The top error is one I've created using:
raise_application_error(-20000, 'You currently have 10 or more orders processing.');
After searching around and trying many approaches, I just wondered: how can I change the error messages for the other two errors, or even hide them from the user altogether?
Here is the code I've used:
create or replace trigger trg_order_limit
before insert on placed_order for each row
declare
  v_count number;
begin
  -- Get current order count
  select count(order_id)
  into   v_count
  from   placed_order
  where  fk1_customer_id = :new.fk1_customer_id;
  -- Raise exception if there are too many
  if v_count >= 10 then
    raise_application_error(-20000, 'You currently have 10 or more orders processing.');
  end if;
end;
Thanks a lot
Richard

Exception propagation goes from the inner block to the outer block, as opposed to variable scope, which goes from the outer block to the inner. For more on this, read McLaughlin's "Programming with PL/SQL", Chapter 5.
What you are getting here is an exception stack - exceptions raised from the innermost blocks to the outermost blocks.
When you raise an exception from a trigger, your raise_application_error statement returns an error.
It is then propagated to the trigger block, which reports ORA-06512: at "C3283535.TRG_ORDER_LIMIT", line 12. This is because the trigger treats the raised exception as an error and stops executing.
The error is then propagated to the session, which raises ORA-04088: error during execution of trigger 'C3283535.TRG_ORDER_LIMIT'. This error tells us where, i.e. in which part of the program, the error was raised.
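A minimal sketch of that inner-to-outer propagation (not from the original post): the unhandled error escapes the inner block and is caught by the enclosing block's handler.

begin
  begin
    raise_application_error(-20000, 'raised in the inner block');
  end;  -- no handler here, so the error propagates outwards
exception
  when others then
    dbms_output.put_line('caught in the outer block: ' || sqlerrm);
end;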
If you are using a front-end program like Java Server Pages or PHP, you will catch the raised error (-20000) first, so you can display that message to your end user.
EDIT :
About the first error, ORA-20000: you can change it in the RAISE_APPLICATION_ERROR statement itself.
If you want to handle the ORA-06512, you can use Uday Shankar's answer, which is helpful in taking care of this error and showing an appropriate error message.
But you will still get the last error, ORA-04088. If I were in your place I wouldn't worry about it: after getting the ORA-20000 I would raise an application error at the front end itself, hiding all the other details from the user.
In fact, this is the nature of Oracle's exception stack: all the errors from the innermost to the outermost block are raised. A lot of the time this helps us identify the exact source of an error.

In the trigger you can add the exception handling part as shown below:
EXCEPTION
  WHEN OTHERS THEN
    raise_application_error(-20000, 'You currently have 10 or more orders processing.');
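For context, that handler sits at the bottom of the trigger body, so whatever fails inside the trigger is re-raised as the single ORA-20000 application error. A sketch, reusing the trigger from the question:

create or replace trigger trg_order_limit
before insert on placed_order for each row
declare
  v_count number;
begin
  select count(order_id)
  into   v_count
  from   placed_order
  where  fk1_customer_id = :new.fk1_customer_id;
  if v_count >= 10 then
    raise_application_error(-20000, 'You currently have 10 or more orders processing.');
  end if;
exception
  when others then
    -- re-raise everything as the one application error
    raise_application_error(-20000, 'You currently have 10 or more orders processing.');
end;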

I see that this is quite an old post, but I think that readers should be aware of the following:
This does not really enforce the business rule (max 10 orders). If it is just "some" number to avoid overly high amounts, and you don't care if people sometimes have 12 orders, then this may be fine. But if not, think of a scenario where you have 9 orders already, and then orders for the same customer are inserted from 2 different sessions / transactions simultaneously. In that case you will end up with 11 orders without detecting the overflow. So you cannot actually rely on this trigger.
Besides that, you might need this trigger to fire on update too, if fk1_customer_id can be updated (I have seen implementations where a NULL is put into the FK column at first and updated to the actual value later). Consider whether this scenario is realistic for you.
There is a fundamental flaw in the trigger. You are inside a transaction, and inside a statement that is currently being executed but is not complete yet. So what if the insert is not a single-row insert but something like
insert into placed_order (select ... from waiting_orders ...)
What do you expect the trigger to see?
This kind of business rule is not easy to enforce. But if you choose to do it in a trigger, you had better do it in an AFTER statement trigger (thus, not in a BEFORE row trigger), as sketched below. An AFTER statement trigger still won't see the results of other uncommitted transactions, but at least the current statement is in a defined state.
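A rough sketch of such an AFTER statement trigger, reusing the column names from the question; note it re-checks every customer, which is simple but not cheap on large tables:

create or replace trigger trg_order_limit_stmt
after insert on placed_order
declare
  v_over number;
begin
  -- how many customers now exceed the limit?
  select count(*)
  into   v_over
  from  (select fk1_customer_id
         from   placed_order
         group  by fk1_customer_id
         having count(*) > 10);
  if v_over > 0 then
    raise_application_error(-20000, 'A customer has more than 10 orders processing.');
  end if;
end;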
In fact the business rule CAN fundamentally only be enforced at commit time; but there is no such thing as an ON-COMMIT trigger in the Oracle database.
What you can do is denormalise the count of records into the customers table (add a column ORDER_COUNT) and place a deferred constraint (ORDER_COUNT <= 10) on that table.
But then you are still relying on this column being maintained correctly throughout your code.
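A minimal sketch of that idea, assuming a CUSTOMERS table; because the constraint is deferred, it is only checked at commit time:

alter table customers
  add order_count number default 0 not null;

alter table customers
  add constraint chk_order_count
  check (order_count <= 10) deferrable initially deferred;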
A fully reliable but somewhat cumbersome alternative is to create a materialized view (something like SELECT fk1_customer_id, count(*) order_count FROM placed_order GROUP BY fk1_customer_id) with FAST REFRESH ON COMMIT on the placed_order table, and create a check constraint order_count <= 10 on the materialized view.
This is about the only way to reliably enforce this type of constraint without having to think of all possible situations like concurrent sessions, updates, etc.
Note however that FAST REFRESH ON COMMIT will slow down your commits, so this solution is not usable for high volumes (sigh... why does Oracle just not provide an ON COMMIT trigger...).
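A sketch of that approach; the materialized view log options below are the typical requirements for a fast-refreshable aggregate view, but verify them against your Oracle version:

create materialized view log on placed_order
  with rowid, sequence (fk1_customer_id)
  including new values;

create materialized view mv_order_count
  refresh fast on commit
as
  select fk1_customer_id, count(*) order_count
  from   placed_order
  group  by fk1_customer_id;

-- the MV's container table can carry a check constraint
alter table mv_order_count
  add constraint chk_order_count_max check (order_count <= 10);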

Related

Deadlock detected due to Pragma Autonomous Transaction Procedure

I have the below procedure, which has a pragma autonomous_transaction clause. This procedure is called from Java code after validating some business logic; after the proc executes, some Java processing follows...
create or replace procedure UPDATE_INSTRUMENT (
  -- parameter list inferred from the body; the original post omitted it
  p_AUTHSTATUS   in varchar2,
  p_STATUS       in varchar2,
  p_USERID       in varchar2,
  p_LASTUPDATED  in varchar2,
  p_USERDATETIME in varchar2,
  p_TRANSACNO    in varchar2
)
is
  pragma autonomous_transaction;
begin
  begin
    update abc
    set    AUTHSTATUS   = p_AUTHSTATUS,
           STATUS       = p_STATUS,
           USERID       = p_USERID,
           LASTUPDATED  = TO_DATE(p_LASTUPDATED, 'DD/MM/YYYY'),
           USERDATETIME = TO_DATE(p_USERDATETIME, 'DD/MM/YYYY')
    where  TRANSACNO = p_TRANSACNO;
    commit;
  end;
  begin
    update xyz
    set    AUTHSTATUS  = p_AUTHSTATUS,
           USERID      = p_USERID,
           AUTHDATE    = TO_DATE(SYSDATE, 'DD/MM/YYYY'),
           LASTUPDATED = TO_DATE(SYSDATE, 'DD/MM/YYYY')
    where  TRANSACNO = p_TRANSACNO;
    commit;
  end;
end UPDATE_INSTRUMENT;
Table 'xyz' has three triggers: one fires on insert and two fire before update.
Note: Table 'xyz' is not updated or locked anywhere before this procedure is called.
I am getting the below errors.
ORA-00060: deadlock detected while waiting for resource
ORA-06512: at "ADTTRG_xyz", line 277
ORA-04088: error during execution of trigger 'ADTTRG_xyz'
The table abc is getting updated properly, but it is failing to update table xyz.
Please explain why this deadlock is occurring.
"How deadlock is occurring."
Deadlocks occur when two sessions simultaneously attempt to change a common resource - such as a table or unique index - in such a way that neither session can commit without the other committing first. This is always an application design flaw, in that deadlocks are the result of complicated flow and poorly-implemented logic strategies.
There are a few clues that this is the case here.
ORA-06512: at "ADTTRG_xyz", line 277
For a start, a trigger with several hundred lines of code is a code smell. That's a lot of code to have in a trigger, and it suggests plenty of opportunity for contention there. Especially as...
Table 'xyz' has three triggers and ... 2 are on Before update event.
You have two BEFORE UPDATE triggers on the 'xyz' table and the event which generates the deadlock is an update of 'xyz'. This may not be a coincidence.
You must investigate these two triggers and establish which tables and indexes they need, so that you can spot whether they are in contention.
pragma autonomous_transaction;
The PL/SQL documentation says "If an autonomous transaction tries to access a resource held by the main transaction, a deadlock can occur." Autonomous transactions are another code smell. There are very few valid use cases for autonomous transactions; more often they are used to wrangle a bad data model into submission.
So you have a lot of things to investigate. Oracle offers diagnostics to help with this.
When deadlocks happen, Oracle produces a trace file. This file is written to an OS directory; if you don't know where that is, you can query the V$DIAG_INFO view. Find out more. The trace file will tell you the two rowids which generated the deadlock; you can find the object id using dbms_rowid.rowid_object() and plug that into select object_name from all_objects where object_id = :oid;. Depending on how your organisation arranges things you may not have access to the trace file directory, in which case you will need to ask a DBA for help.
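For instance, a hypothetical walk-through of those two lookups (the rowid literal below is made up for illustration):

-- where are the trace files?
select value from v$diag_info where name = 'Diag Trace';

-- map a rowid reported in the deadlock trace to its object
select object_name
from   all_objects
where  object_id = dbms_rowid.rowid_object(chartorowid('AAAWvTAAJAAAACbAAA'));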
Once you know what table is in deadlock you know what you must change in your application logic. Potentially that's quite a big change, as your code has a number of red flags (long trigger bodies, two triggers on same event, autonomous transaction). Good luck!

PostgreSQL vs. Oracle default transaction management

In PostgreSQL, if you encounter an error in a transaction (for example when your insert statement violates a unique constraint), the whole transaction is aborted: you cannot commit it and no rows are inserted:
database=# begin;
BEGIN
database=# insert into mytable (id, something) values ('1','whatever');
INSERT 0 1
database=# insert into mytable (id, something) values ('1','whatever');
ERROR:  duplicate key value violates unique constraint "mytable_id_key"
DETAIL:  Key (id)=(1) already exists.
database=# insert into mytable (id, something) values ('2','whatever');
ERROR:  current transaction is aborted, commands ignored until end of transaction block
database=# rollback;
ROLLBACK
database=# select * from mytable;
 id | something
----+-----------
(0 rows)
You can change that by setting psql's ON_ERROR_ROLLBACK variable to "on" or "interactive"; after that you can do multiple inserts, ignoring errors, then commit and have only the successfully inserted rows in the table after the transaction ends.
database=# \set ON_ERROR_ROLLBACK interactive
In Oracle, this is the default transaction management behaviour, which surprises me. Isn't this completely counterintuitive and dangerous?
When I start a transaction I want to be sure that all the statements were successful. What if my multiple inserts comprise some kind of object or data structure? I end up completely unaware of the state of the data in my database and have to check it after the commit.
If one of the inserts fails I want to be sure that the other inserts will be rolled back or not even evaluated after the first error, which is exactly how it's done in PostgreSQL.
Why does Oracle have this kind of transaction management as the default, and why is it considered good practice?
For example, some random guy here in the comments:
This is a very neat feature. I don't understand this, though: "Normally, any error you make will throw an exception and cause your current transaction to be marked as aborted. This is sane and expected behavior..."
No, it's really not. Oracle doesn't work this way, nor does MySQL. I have no experience with MSSQL or DB2 but I'll bet a dollar each they don't work this way either. There's no intuitive reason why a syntax error, or any other error for that matter, should abort a transaction. I can only assume there's either some limitation deep in the Postgres guts that requires this behavior, or that it conforms to some obscure part of the SQL standard that everyone else sensibly ignores. There's certainly no API / UX reason why it should work this way.
We really shouldn't be too proud of any workarounds we've developed for this pathological behavior. It's like IT Stockholm Syndrome.
Doesn't it even violate the definition of a transaction?
Transactions provide an "all-or-nothing" proposition, stating that each work-unit performed in a database must either complete in its entirety or have no effect whatsoever.
I agree with you. I think it's a mistake not to abort the whole tx. But people are used to that, so they think it's reasonable and correct. Like people who use MySQL think that the DBMS should accept 0000-00-00 as a date, or people using Oracle expect that '' IS NULL.
The idea that there's a clear distinction between a syntax error and something else is flawed.
If I write
BEGIN;
CREATE TABLE new_customers (...);
INSET INTO new_customers (...)   -- deliberate typo
SELECT ... FROM customers;
DROP TABLE customers;
COMMIT;
I don't care that it's a typo resulting in a syntax error that caused me to lose my data. I care that the transaction didn't successfully execute all its statements but still committed.
It'd be technically feasible to allow soft rollback in PostgreSQL before any rows are actually written by a statement - probably before we even enter the executor. So failures in the parse and parameter binding phases could allow the tx not to be aborted. We have a statement memory context we could use to clean up.
However, once the statement starts changing rows, it's doing so on disk with the same transaction ID as the prior statements in the tx. So you can't roll it back without rolling back the whole tx. To allow statement rollback, Pg needs to assign a new subtransaction ID. That costs resources. You can do it explicitly with SAVEPOINTs when you want to, and internally that's what psql is doing.
In theory we could allow the server to do this implicitly for each statement to implement statement rollback, just at a performance cost. But I doubt any patch implementing this would get committed, at least not without a LOT of argument, because most of the PostgreSQL team are (IMO reasonably) not fond of "whoops, that broke but we'll continue anyway" transaction semantics.
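A small illustration of the explicit SAVEPOINT approach described above (which is what psql's ON_ERROR_ROLLBACK automates), reusing the hypothetical mytable from the question:

BEGIN;
INSERT INTO mytable (id, something) VALUES ('1', 'whatever');
SAVEPOINT s1;
INSERT INTO mytable (id, something) VALUES ('1', 'whatever');  -- fails: duplicate key
ROLLBACK TO SAVEPOINT s1;  -- undoes only the failed statement
INSERT INTO mytable (id, something) VALUES ('2', 'whatever');
COMMIT;  -- rows '1' and '2' are committed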

Re-run PL/SQL script command

I have a simple insert script that I want to expand upon.
DECLARE
  i varchar2(3000) := dbms_random.string('A', 8);
BEGIN
  INSERT INTO BUYERS
    (USER_ID, BUYER_CD, BUYER_ENG_NM, REG_DT)
  VALUES
    (i, 'tes', 'test', 'test');
EXCEPTION
  WHEN OTHERS THEN
    null;  -- (this is where I need help)
END;
We have dynamic replication going on between two DBs. However, for some odd reason we have to run a script twice for the changes to commit to both DBs. For that reason I am creating a script that will attempt to do an insert across all tables. As of now I'm only working on one table. Within the exception handler, how do I make the script run again when the initial insert fails? Any help is appreciated.
If a problem happens with the insert, then the best approach is to find out what the error is and raise it. This is best accomplished by an autonomous logging procedure that records the what, where and when, and then RAISEs the error again so that processing stops. You do not want to take a chance on inserting records once, twice, or not at all, which could happen if the errors are not raised again.
The LOG_ERROR procedure below can be created from the answers to your previous questions about error handling.
DECLARE
  i varchar2(3000) := dbms_random.string('A', 8);
BEGIN
  INSERT INTO BUYERS
    (USER_ID, BUYER_CD, BUYER_ENG_NM, REG_DT)
  VALUES
    (i, 'tes', 'test', 'test');
EXCEPTION
  WHEN OTHERS THEN
    -- by the time you get here there is no point in trying to insert again
    LOG_ERROR(SQLERRM, LOCATION);  -- LOCATION: however you identify this code block
    RAISE;
END;
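The answer leaves LOG_ERROR to earlier questions; a minimal sketch of what such an autonomous logging procedure might look like (the ERROR_LOG table and its columns are assumptions, not from the original post):

create or replace procedure log_error (
  p_message  in varchar2,
  p_location in varchar2
) is
  pragma autonomous_transaction;  -- the log row survives the caller's rollback
begin
  insert into error_log (logged_at, location, message)
  values (systimestamp, p_location, p_message);
  commit;  -- an autonomous transaction must commit before returning
end log_error;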

PLSQL Trigger may be causing INSERT INTO to fail silently?

I have a table on which I am trying to do an insert/update depending on the values I am given, but the insert is not working for this particular table, even though it works for the previous tables the script ran on.
To test this problem, I put a few anonymous blocks into Oracle's SQL Developer which insert or update depending on whether a key is present. Updates seem to work fine, but when it comes to inserting a new row, nothing is inserted.
If I had this table:
COFFEE_ID  TEA_ID  NAME
       11     100  combo 1
       12     101  combo 2
       13     102  combo 3
Doing this will not insert anything and will instead move on to the next anonymous block:
begin
  insert into COFFEE_TEA (COFFEE_ID, TEA_ID, NAME)
  values (14, 103, 'combo 4');
exception
  when dup_val_on_index then
    update ....
end;
....
I suspect it has something to do with the trigger on this table. It is a BEFORE ... FOR EACH ROW trigger, and it inserts data into some other table. There is no exception handling in the trigger, so I'm guessing it must fail without reporting it (nothing shows up in SQL Developer when I run the script).
My two questions would be:
When the trigger runs, what happens if the ID it's trying to insert into the other table already exists? Does it just fail silently?
How best should I fix this? I am unsure whether I can change the trigger code itself, but would it be possible to catch the error inside my anonymous block (assuming it's actually the trigger causing the problem)? If so, how would I know which exception to catch if it fails silently?
I removed the exception handler in SQL Developer, and it tells me that a unique constraint was violated: namely, the data being inserted into the other table through the trigger is the cause.
Your additional information tells us that your trigger is hurling ORA-00001, a unique key violation. This is the error which the DUP_VAL_ON_INDEX exception handles. So it seems like your exception handler which is supposed to be dealing with key violations on COFFEE_TEA is also swallowing the exceptions from your trigger. Messy.
There are two possible solutions. One is to put decent error handling in the trigger code. The other is to use MERGE for your data loading routine.
I always prefer MERGE as a mechanism for performing upserts, because I don't like using exceptions to handle legitimate expected states. Find out more.
Ideally you should do both. Triggers are supposed to be self-contained code: imposing unhandled exceptions on routines which interact with their tables breaks the encapsulation.
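A sketch of the MERGE upsert for the table in the question:

merge into coffee_tea t
using (select 14 coffee_id, 103 tea_id, 'combo 4' name from dual) s
   on (t.coffee_id = s.coffee_id)
when matched then
  update set t.tea_id = s.tea_id, t.name = s.name
when not matched then
  insert (coffee_id, tea_id, name)
  values (s.coffee_id, s.tea_id, s.name);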
A trigger will not modify the outcome of the DML on a table. Remove the exception block: if COFFEE_TEA is a table, the insert will either succeed or fail with an error.
In other words, the following script will never output 0 if COFFEE_TEA is a table:
BEGIN
  INSERT INTO coffee_tea (COFFEE_ID, TEA_ID, NAME) VALUES (14, 103, 'combo 4');
  dbms_output.put_line(sql%rowcount);
END;

Oracle triggers error are not captured while using ADODB

I have an application which uses ADOdb to insert data into an Oracle table (in the customer's database).
Data is successfully inserted if there are no errors.
If there is any error, like an invalid datatype etc., the error is raised, captured by my application, and dumped in a log file.
My customer has written their own triggers on this particular table. When a record is inserted, a few other checks are done before the data insertion.
All was fine until now.
But recently we found that quite often data is not inserted into the Oracle table.
When we checked the log file, no error was found.
Then I logged the query that was executed.
Copying the query to an Oracle SQL prompt and executing it there gave a trigger error.
My issues are:
Customer is not ready to share the details of the trigger.
The error is not raised while inserting into the Oracle table, so we are not able to log it or take any action.
The same query, when executed directly in Oracle, shows the trigger errors.
Help needed for:
Why the error is not raised in ADOdb.
Whether I have to ask the customer to implement any error raising.
Anything you can suggest for resolving the issue.
I have 0% to 10% knowledge of Oracle.
"Copied the query to oracle Sql prompt and executed it gave error of trigger." Since the ADO session doesn't report an error, it may be that the error from the trigger is misleading. It may simply be a check on the lines of "Hey, you are not allowed to insert into this table except though the application".
"Error is not raised while inserting to oracle table so we are not able to log it or take any action."
If the error isn't raised at the time of insert, it MAY be raised at the time of committing. Deferred constraints and materialized views could give this.
Hypothetically, I could reproduce your experience as follows (see the sketch after this list):
1. Create a table tab_a with a deferrable, initially deferred constraint (eg val_a > 10).
2. The ADO session inserts a row violating the constraint, but it doesn't error because the constraint check is deferred.
3. The commit happens, the constraint-violation exception fires, and the transaction is rolled back instead of committed.
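A sketch of that repro (the error numbers are what Oracle typically raises in this situation):

create table tab_a (
  val_a number,
  constraint chk_val_a check (val_a > 10)
    deferrable initially deferred
);

insert into tab_a values (5);  -- succeeds: the check is deferred
commit;                        -- fails: ORA-02091 transaction rolled back
                               -- (caused by ORA-02290 check constraint violation)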
So see if you are catering for the possibility of an error in the commit.
It may also be something else later in the transaction which results in a rollback of the whole transaction (eg a deadlock). Session tracing would be good. Failing that, look into a SERVERERROR trigger on the user to log the error (eg to a file, so it won't be rolled back); a sketch follows the link below.
http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_7004.htm#i2153530
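A rough sketch of such a trigger; here it logs to an assumed ERROR_LOG table via an autonomous transaction instead of a file:

create or replace trigger trg_log_server_errors
after servererror on schema
declare
  pragma autonomous_transaction;
begin
  insert into error_log (logged_at, location, message)
  values (systimestamp, 'SERVERERROR', dbms_utility.format_error_stack);
  commit;  -- keeps the log row even though the failing transaction rolls back
end;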
You can log your business logic to a log table, but you have to use a stored procedure to log the message. The stored procedure should be declared with PRAGMA AUTONOMOUS_TRANSACTION so that your log data is saved in the log table even when the triggering transaction rolls back. Your trigger should have error handling, and in the error handling you call the logging stored procedure (which has the pragma).
I've never used ADOdb (and I assume that is what you are using, not ADO.NET?). But a quick look at its references leads to this question: are you actually checking the return state of your query?
$ok = $DB->Execute("update atable set aval = 0");
if (!$ok) mylogerr($DB->ErrorMsg());
