Oracle trigger errors are not captured while using ADOdb - Oracle

I have an application which uses ADOdb to insert data into an Oracle table (the customer's database).
Data is successfully inserted if there are no errors.
If there is any error, like an invalid datatype, the error is raised, captured by my application, and dumped in a log file.
My customer has written their own triggers on this particular table. When a record is inserted, a few other checks are done before the data insertion.
All was fine until now.
But recently we found that, many a time, data is not inserted into the Oracle table.
When we checked the log file, no error was found.
Then I logged the query which was executed.
When I copied the query to an Oracle SQL prompt and executed it, it gave a trigger error.
My issue is:
- The customer is not ready to share the details of the trigger.
- The error is not raised while inserting into the Oracle table, so we are not able to log it or take any action.
- The same query, when executed directly in Oracle, shows the trigger errors.
Help needed on:
- Why is the error not raised in ADOdb?
- Do I have to ask the customer to implement any error raising?
- Anything you can suggest for resolving the issue.
I have 0% to 10% knowledge of Oracle.

"Copied the query to oracle Sql prompt and executed it gave error of trigger." Since the ADO session doesn't report an error, it may be that the error from the trigger is misleading. It may simply be a check on the lines of "Hey, you are not allowed to insert into this table except though the application".
"Error is not raised while inserting to oracle table so we are not able to log it or take any action."
If the error isn't raised at the time of insert, it MAY be raised at the time of committing. Deferred constraints and materialized views could give this.
Hypothetically, I could reproduce your experience as follows (see the sketch after this list):
1. Create a table tab_a with a deferrable constraint, initially deferred (eg val_a > 10)
2. The ADO session inserts a row violating the constraint, but it doesn't error because the constraint is deferred
3. The commit happens, the constraint violation exception fires, and the transaction is rolled back instead of being committed.
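A minimal sketch of that reproduction (the table, column, and constraint names are illustrative):

CREATE TABLE tab_a (
  val_a NUMBER,
  CONSTRAINT chk_val_a CHECK (val_a > 10)
    DEFERRABLE INITIALLY DEFERRED
);

INSERT INTO tab_a (val_a) VALUES (5);  -- succeeds: the check is deferred
COMMIT;  -- fails here (ORA-02091 / ORA-02290) and the transaction is rolled back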
So see if you are catering for the possibility of an error in the commit.
It may also be something else later in the transaction which results in a rollback of the whole transaction (eg a deadlock). Session tracing would be good. Failing that, look into a SERVERERROR trigger on the user to log the error (eg in a file, so it won't be rolled back):
http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_7004.htm#i2153530
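A minimal sketch of such a SERVERERROR trigger, here logging to a table via an autonomous transaction so the log row survives the rollback (table and trigger names are illustrative):

CREATE TABLE server_error_log (ts TIMESTAMP, err VARCHAR2(4000));

CREATE OR REPLACE TRIGGER trg_log_server_errors
AFTER SERVERERROR ON SCHEMA
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;  -- commits independently of the failing transaction
BEGIN
  INSERT INTO server_error_log (ts, err)
  VALUES (SYSTIMESTAMP, SUBSTR(DBMS_UTILITY.format_error_stack, 1, 4000));
  COMMIT;
END;
/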

You can log your business logic in a log table, but you have to use a stored procedure to log the message.
The stored procedure should be declared with PRAGMA AUTONOMOUS_TRANSACTION so that your log data is saved in the log table even if the main transaction rolls back.
Your trigger should have error handling, and in the error handler you call the logging stored procedure (which has the autonomous transaction pragma).
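A minimal sketch of such a logging procedure, assuming a log table app_log (both names are illustrative):

CREATE OR REPLACE PROCEDURE log_message (p_msg IN VARCHAR2) IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO app_log (ts, msg)
  VALUES (SYSTIMESTAMP, SUBSTR(p_msg, 1, 4000));
  COMMIT;  -- commits only this autonomous transaction, not the caller's
END;
/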

I've never used ADOdb (and I assume that is what you are using, not ADO.NET?). But a quick look at its references leads to this question: are you actually checking the return state of your query?
// ADOdb's Execute() returns false on failure rather than throwing,
// so the result must be checked explicitly:
$ok = $DB->Execute("update atable set aval = 0");
if (!$ok) mylogerr($DB->ErrorMsg());

Related

I keep receiving Fatal Error/Reset Connection for PL/SQL in Oracle

I am working on a script for a school project, and it requires using a trigger to maintain a history table. I successfully created this trigger earlier; however, I had to drop tables and revise a bit. Now that I have finished revising the database, when I try to create the trigger again, I keep receiving the following message.
Error starting at line : 189 in command -
CREATE OR REPLACE TRIGGER WeightChangeTrigger
BEFORE UPDATE OF Current_weight_lbs ON Account
FOR EACH ROW
BEGIN
INSERT INTO WeightChange (WeightChange_id, Account_id, OldWeight, NewWeight, ChangeDate)
VALUES (NVL((SELECT MAX(WeightChange_id)+1 FROM WeightChange), 1),
:NEW.Account_id,
:OLD.Current_weight_lbs,
:NEW.Current_weight_lbs,
TRUNC(SYSDATE));
END;
Error report -
ORA-00603: ORACLE server session terminated by fatal error
ORA-00600: internal error code, arguments: [kqlidchg0], [], [], [], [], [], [], [], [], [], [], []
ORA-00604: error occurred at recursive SQL level 1
ORA-00001: unique constraint (SYS.I_PLSCOPE_SIG_IDENTIFIER$) violated
00603. 00000 - "ORACLE server session terminated by fatal error"
*Cause: An Oracle server session was in an unrecoverable state.
*Action: Log in to Oracle again so a new server session will be created
automatically. Examine the session trace file for more
information.
My script starts with DROP TABLE commands so that I can alter and re-run the script. The script runs perfectly through all of the table and index creation, and then crashes when it reaches the trigger creation. As I mentioned, it was not doing this earlier (the trigger previously compiled without errors), so perhaps I just need to give it some time, as it may be a server issue with Oracle?
Additionally, I receive a pop-up window with the following message:
Your database connection has been reset. Any pending transactions or session state has been lost.
I have closed the developer only to reconnect like it suggests, and have the same thing continually happen. Any advice or explanation for why this is happening would be really appreciated. My project is due Thursday morning and so this is giving me much unnecessary anxiety.
ORA-00600 is never good news; it means an Oracle bug, and there's not much you can do about it but search My Oracle Support or raise a Service Request.
Apparently, if you run the following statement, you might get out of the problem:
ALTER SESSION SET PLSCOPE_SETTINGS = 'IDENTIFIERS:NONE';
Additionally, this might help:
ORA-00001: unique constraint (SYS.I_PLSCOPE_SIG_IDENTIFIER$) violated
Try simply renaming the trigger and running the script again.

PostgreSQL vs. Oracle default transaction management

In PostgreSQL, if you encounter an error in a transaction (for example when your insert statement violates a unique constraint), the whole transaction is aborted, you cannot commit it, and no rows are inserted:
database=# begin;
BEGIN
database=# insert into mytable (id, something) values ('1','whatever');
INSERT 0 1
database=# insert into mytable (id, something) values ('1','whatever');
ERROR:  duplicate key value violates unique constraint "mytable_id_key"
DETAIL:  Key (id)=(1) already exists.
database=# insert into mytable (id, something) values ('2','whatever');
ERROR:  current transaction is aborted, commands ignored until end of transaction block
database=# rollback;
ROLLBACK
database=# select * from mytable;
 id | something
----+-----------
(0 rows)
You can change that by setting ON_ERROR_ROLLBACK to "on" or "interactive"; after that you can do multiple inserts ignoring errors, commit, and have only the successfully inserted rows in the table after the transaction ends.
database=# \set ON_ERROR_ROLLBACK interactive
In Oracle, this is the default transaction management behaviour, which surprises me. Isn't this completely counterintuitive and dangerous?
When I start a transaction I want to be sure that all the statements succeeded. What if my multiple inserts make up some kind of object or data structure? I end up completely unaware of the state of the data in my database and have to check it after the commit.
If one of the inserts fails, I want to be sure that the other inserts will be rolled back or not even evaluated after the first error, which is exactly how it's done in PostgreSQL.
Why does Oracle have this kind of transaction management as the default, and why is it considered good practice?
For example, some random guy here in the comments:
This is a very neat feature.
I don't understand this, though: "Normally, any error you make will
throw an exception and cause your current transaction to be marked as
aborted. This is sane and expected behavior..."
No, it's really not. Oracle doesn't work this way, nor does MySQL. I
have no experience with MSSQL or DB2 but I'll bet a dollar each they
don't work this way either. There's no intuitive reason why a syntax
error, or any other error for that matter, should abort a transaction.
I can only assume there's either some limitation deep in the Postgres
guts that requires this behavior, or that it conforms to some obscure
part of the SQL standard that everyone else sensibly ignores. There's
certainly no API / UX reason why it should work this way.
We really shouldn't be too proud of any workarounds we've developed
for this pathological behavior. It's like IT Stockholm Syndrome.
Doesn't it violate the very definition of a transaction?
Transactions provide an "all-or-nothing" proposition, stating that
each work-unit performed in a database must either complete in its
entirety or have no effect whatsoever.
I agree with you. I think it's a mistake not to abort the whole tx. But people are used to that, so they think it's reasonable and correct. Like people who use MySQL think that the DBMS should accept 0000-00-00 as a date, or people using Oracle expect that '' IS NULL.
The idea that there's a clear distinction between a syntax error and something else is flawed.
If I write
BEGIN;
CREATE TABLE new_customers (...);
INSET INTO new_customers (...)
SELECT ... FROM customers;
DROP TABLE customers;
COMMIT;
I don't care that it's a typo resulting in a syntax error that caused me to lose my data. I care that the transaction didn't successfully execute all its statements but still committed.
It'd be technically feasible to allow soft rollback in PostgreSQL before any rows are actually written by a statement - probably before we even enter the executor. So failures in the parse and parameter binding phases could allow the tx not to be aborted. We have a statement memory context we could use to clean up.
However, once the statement starts changing rows, it's doing so on disk with the same transaction ID as the prior statements in the tx. So you can't roll it back without rolling back the whole tx. To allow statement rollback, Pg needs to assign a new subtransaction ID. That costs resources. You can do it explicitly with SAVEPOINTs when you want to, and internally that's what psql is doing.
In theory we could allow the server to do this implicitly for each statement to implement statement rollback, just at a performance cost. But I doubt any patch implementing this would get committed, at least not without a LOT of argument, because most of the PostgreSQL team are (IMO reasonably) not fond of "whoops, that broke but we'll continue anyway" transaction semantics.
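For illustration, here is roughly what psql does under the hood with ON_ERROR_ROLLBACK, written out with explicit savepoints (table name as in the transcript above):

BEGIN;
SAVEPOINT stmt;
INSERT INTO mytable (id, something) VALUES ('1', 'whatever');  -- fails: duplicate key
ROLLBACK TO SAVEPOINT stmt;  -- undoes only the failed statement
INSERT INTO mytable (id, something) VALUES ('2', 'whatever');  -- transaction continues
COMMIT;  -- only the second row is committed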

PL/SQL Oracle Error Handling

I've created a trigger that only allows a user to have 10 currently placed orders. So now, when the customer tries to place order number 11, the Oracle database throws back an error. Well, three errors.
ORA-20000: You currently have 10 or more orders processing.
ORA-06512: at "C3283535.TRG_ORDER_LIMIT", line 12
ORA-04088: error during execution of trigger
'C3283535.TRG_ORDER_LIMIT'
The top error is one I've created using:
raise_application_error(-20000, 'You currently have 10 or more orders processing.');
After searching and trying many approaches, I just wondered how to change the error messages for the other two errors, or even not show them to the user altogether?
Here is the code I've used:
create or replace trigger trg_order_limit
before insert on placed_order for each row
declare
  v_count number;
begin
  -- Get current order count
  select count(order_id)
    into v_count
    from placed_order
   where fk1_customer_id = :new.fk1_customer_id;
  -- Raise exception if there are too many
  if v_count >= 10 then
    raise_application_error(-20000, 'You currently have 10 or more orders processing.');
  end if;
end;
Thanks a lot
Richard
Exception propagation goes from the internal to the external block, as opposed to variable scope, which goes from the external to the internal block. For more on this, read McLaughlin's "Programming with PL/SQL", Chapter 5.
What you are getting here is an exception stack - exceptions raised from the innermost blocks to the outermost blocks.
When you raise an exception from a trigger, your raise_application_error statement returns an error.
It is then propagated to the trigger block, which reports ORA-06512: at "C3283535.TRG_ORDER_LIMIT", line 12. This is because the trigger treats the raised exception as an error and stops execution.
The error is then propagated to the session, which raises ORA-04088: error during execution of trigger 'C3283535.TRG_ORDER_LIMIT'. This error tells us where, as in which part of the program, the error was raised.
If you are using a front-end program like Java Server Pages or PHP, you will catch the raised error - 20000 - first, so you can display that to your end user.
EDIT:
About the first error - ORA-20000 - you can change it in the RAISE_APPLICATION_ERROR statement itself.
If you want to handle the ORA-06512, you can use Uday Shankar's answer, which is helpful for taking care of this error and showing an appropriate error message.
But you will still get the last ORA-04088. If I were in your place I wouldn't worry: after getting the ORA-20000 I would raise an application error at the front end itself, hiding all the other details from the user.
In fact, this is the nature of Oracle's exception stack. All the errors from the innermost to the outermost block are raised. This often helps us identify the exact source of an error.
In the trigger you can add the exception handling part as shown below:
EXCEPTION
WHEN OTHERS THEN
raise_application_error(-20000, 'You currently have 10 or more orders processing.');
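Placed into the original trigger, that gives something like the sketch below; note that the EXCEPTION section belongs at the end of the executable block, not inside the IF:

create or replace trigger trg_order_limit
before insert on placed_order for each row
declare
  v_count number;
begin
  select count(order_id)
    into v_count
    from placed_order
   where fk1_customer_id = :new.fk1_customer_id;
  if v_count >= 10 then
    raise_application_error(-20000, 'You currently have 10 or more orders processing.');
  end if;
exception
  when others then
    -- re-raise with only the application-level message
    raise_application_error(-20000, 'You currently have 10 or more orders processing.');
end;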
I see that this is quite an old post, but I think readers should be aware of the following.
This does not really enforce the business rule (max 10 orders). If it is just "some" number to avoid excessive amounts, and you don't care if people sometimes have 12 orders, then this may be fine. But if not, think of a scenario where you have 9 orders already, and then orders for the same customer are inserted from 2 different sessions / transactions simultaneously. In that case you will end up with 11 orders without detecting the overflow. So you cannot actually rely on this trigger.
Besides that, you might need this trigger to fire on update too, if fk1_customer_id may be updated (I have seen implementations where a NULL is put into the FK column at first and later updated to the actual value). Consider whether this scenario is realistic.
There is a fundamental flaw in the trigger. You are inside a transaction, and inside a statement that is currently being executed but is not complete yet. So what if the insert is not a single-row insert but something like
insert into placed_order (select ... from waiting_orders ...)
What do you expect the trigger to see?
This kind of business rule is not easy to enforce. But if you choose to do it in a trigger, you had better do it in an AFTER statement trigger (thus, not in a BEFORE row trigger; see the sketch below). An after-statement trigger still will not see the results of other uncommitted transactions, but at least the current statement is in a defined state.
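A sketch of that after-statement variant, reusing the question's table and column names (it still cannot see other sessions' uncommitted rows, as noted above):

create or replace trigger trg_order_limit_stmt
after insert or update on placed_order
declare
  v_violations number;
begin
  -- the statement is complete here, so the counts are well-defined
  select count(*)
    into v_violations
    from (select fk1_customer_id
            from placed_order
           group by fk1_customer_id
          having count(order_id) > 10);
  if v_violations > 0 then
    raise_application_error(-20000, 'A customer has more than 10 orders processing.');
  end if;
end;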
In fact, the business rule CAN fundamentally only be enforced at commit time; but there is no such thing as an ON COMMIT trigger in the Oracle database.
What you can do is denormalise the count of records into the customers table (add a column ORDER_COUNT) and place a deferred constraint (ORDER_COUNT <= 10) on that table.
But then you are still relying on correctly maintaining this field throughout your code.
A fully reliable, but somewhat cumbersome, alternative is to create a materialized view (something like SELECT fk1_customer_id, count(*) order_count FROM placed_order GROUP BY fk1_customer_id) with FAST REFRESH ON COMMIT on the placed_order table, and create a check constraint order_count <= 10 on the materialized view.
This is about the only way to reliably enforce this type of constraint without having to think of all possible situations like concurrent sessions, updates, etc.
Note however that the FAST REFRESH ON COMMIT will slow down your commits, so this solution is not usable for high volumes (sigh... why does Oracle just not provide an ON COMMIT trigger...).
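A sketch of that materialized-view approach (object names are illustrative; a materialized view log is also required for FAST REFRESH):

CREATE MATERIALIZED VIEW LOG ON placed_order
  WITH ROWID (fk1_customer_id) INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW mv_order_count
  REFRESH FAST ON COMMIT
AS
SELECT fk1_customer_id, COUNT(*) AS order_count
  FROM placed_order
 GROUP BY fk1_customer_id;

-- the constraint is checked when the view refreshes at commit time,
-- so a transaction pushing a customer past 10 orders fails to commit
ALTER TABLE mv_order_count
  ADD CONSTRAINT chk_order_count CHECK (order_count <= 10);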

Where to see the error message from RAISE_APPLICATION_ERROR?

I created a trigger in an Oracle database. This trigger executes before an insert procedure, to reject duplicate data. The procedure is executed by a C# application.
TRIGGER Kill_Duplicates
BEGIN
  IF ( xxx ) THEN
    -- note: user-defined error numbers must be in the range -20000 .. -20999;
    -- -22222 is outside that range and would itself raise ORA-21000
    Raise_application_error(-22222, ' is duplicate!');
  END IF;
END;
Where do I read the message from Raise_application_error? For example, if duplicate data enters the database and triggers the Raise_application_error, where do I read this - "(-22222, ' is duplicate!')"?
Is there any way to debug a trigger? If my trigger isn't correct - for example, a syntax problem or a logic problem - how do I read the exception message of the trigger itself? How would I know, and how do I get the exceptions/errors?
The exception will be passed to the session that executed the DML statement that caused the trigger to be executed.
I'm suspicious of your error message: it suggests that you are trying to enforce integrity with a trigger. That's usually a Bad Thing.

How to create error log for stored procedure in oracle 10g?

I need an example of creating an error log for a stored procedure in Oracle.
Please give me an example including the table creation, the stored procedure creation, and the error log creation.
Thanks in advance.
EDIT (relevant info from other question)
Suppose there is a stored procedure. When I execute that stored procedure, some expected error/exception may occur, so I need to create an error log table in which all the errors are automatically stored whenever I execute the stored procedure.
For example, if there is some column which does not allow null values, but the user enters a null value, then that error should be raised and stored in the error log table.
You haven't really given a lot of detail about your requirements. Here is a simple error log table and a procedure to log error messages into it:
CREATE TABLE error_log (ts TIMESTAMP NOT NULL, msg VARCHAR2(4000));

CREATE PROCEDURE log_error (msg IN VARCHAR2) IS
BEGIN
  INSERT INTO error_log (ts, msg)
  VALUES (SYSTIMESTAMP, SUBSTR(log_error.msg, 1, 4000));
END log_error;
You might or might not need it to be an autonomous transaction. That depends on whether you want the log to record errors from procedures that roll back their changes.
Typically, this would be implemented as part of a more generic logging system which logs not only errors, but warnings and debug info too.
If you want a DML statement (insert/update/delete) to log an error for each row (instead of just failing on the first row that errors), you can use the LOG ERRORS clause: instead of the statement failing, the statement will succeed, and the rows that were not inserted/updated/deleted will be written to the error log table you specify, along with the applicable error code and message (see the sketch below). Refer to the link provided by vettipayyan.
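A sketch of that DML error logging (table names are illustrative; DBMS_ERRLOG creates the ERR$_ log table):

BEGIN
  DBMS_ERRLOG.create_error_log(dml_table_name => 'CUSTOMERS');
END;
/

INSERT INTO customers (id, name)
SELECT id, name FROM staging_customers
LOG ERRORS INTO err$_customers ('nightly load') REJECT LIMIT UNLIMITED;
-- failing rows (e.g. NOT NULL violations) land in err$_customers with
-- ORA_ERR_NUMBER$ and ORA_ERR_MESG$ populated; the statement itself succeeds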
If you want all exceptions that are raised within a procedure to be logged, you can catch them with WHEN OTHERS:
BEGIN
-- your code here
EXCEPTION
WHEN OTHERS THEN
log_error(DBMS_UTILITY.format_error_stack);
log_error(DBMS_UTILITY.format_error_backtrace);
RAISE;
END;
Here's the page with code samples:
DML Error Logging
