I want to create an error logging and debugging mechanism for my PL/SQL code.
There will be a log_Error table into which I will insert the errors/debug messages.
I am planning to insert debug messages periodically in my code so that it will be easy for me to identify up to which point my code executed.
In the exception section I will insert the error messages into this log table.
Also, I want a mechanism by which I can enable this logging for a particular session instead of for all sessions by default.
If this logging happens by default, it will create an unnecessary performance impact and unwanted logs.
Can you please suggest an approach in which I am able to enable/disable the logging mechanism for a session manually?
You can create a small logging package where you set a flag per session, like this:
create or replace package debug_log_pk as
    -- session-level switch; every session starts with logging disabled
    bLogflag boolean := false;
end debug_log_pk;
then create a procedure that inserts the data into your table:
create or replace procedure log_error( ..... )
as
    pragma autonomous_transaction;
begin
    if debug_log_pk.bLogflag then
        insert into logging_table (...) values (...);
        commit;  -- the autonomous transaction commits its own work
    end if;
end;
Somewhere in your program, set:
debug_log_pk.bLogflag := true;
That can be done anywhere in your application code before you want to log, and it will apply for the rest of the session. And you can turn logging off again the same way :)
Also, pragma autonomous_transaction puts the logging into a separate transaction, so the log rows survive a rollback in the database.
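For example, a session that wants logging could do something like this (a minimal sketch, assuming the package above and a log_error procedure whose first parameter is a message; adjust the call to your real signature):
begin
    -- enable logging for this session only
    debug_log_pk.bLogflag := true;
    -- from here on, every call in this session really inserts
    log_error( 'batch job started' );
    -- ... your business logic ...
    -- switch logging off again when you are done with it
    debug_log_pk.bLogflag := false;
end;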
Also have a look at this:
https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1171766400346817259
Guys: I wonder if there is a way to write a trigger in Oracle that does both things: saving data to a log table and raising a user-defined exception as well?
I am trying to figure out a strange error on my team's database, which causes data inconsistency per our business logic. Multiple teams' applications can access this database, so I wrote a trigger to monitor the column in the table that causes the problem. I want to save data such as the user ID, the time of the save, etc. to a log table if the value is incorrect, but I also want to raise an exception to attract attention. However, whenever my trigger raises the user-defined exception, the data never makes it into the log table. Can anyone give a suggestion about it? Thank you in advance.
You can write a logging procedure that uses an autonomous transaction
create or replace procedure log_autonomous( p_log_message in varchar2,
                                            p_other_parameters... )
as
    pragma autonomous_transaction;
begin
    insert into log_table ...
    commit;  -- commits independently of the transaction that fired the trigger
end;
and then call that logging procedure from your trigger
create or replace trigger my_trigger
    before insert or update on some_table
    for each row
declare
begin
    if( some_bad_thing )
    then
        log_autonomous( 'Some message', ... );
        raise_application_error( -20001, 'Some error' );
    end if;
end;
The log_table message will be preserved because it was inserted in a separate (autonomous) transaction. The triggering transaction will be rolled back because the trigger raises an exception.
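A quick way to convince yourself (a sketch, assuming the trigger and procedure above, plus a msg column in log_table and a hypothetical val column in some_table):
begin
    -- this insert trips the trigger, which logs and then raises
    insert into some_table ( val ) values ( 'some bad thing' );
exception
    when others then
        dbms_output.put_line( sqlerrm );  -- prints ORA-20001: Some error
end;
-- the row never arrives in some_table, but the autonomous insert is committed:
select msg from log_table;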
I have a simple insert script that I want to expand upon.
DECLARE
    i varchar2(3000) := dbms_random.string('A',8);
BEGIN
    INSERT INTO BUYERS
        (USER_ID, BUYER_CD, BUYER_ENG_NM, REG_DT)
    VALUES
        (i, 'tes', 'test', 'test');
EXCEPTION
    WHEN OTHERS THEN
        NULL;  -- (this is where I need help)
END;
We have dynamic replication going on between two DBs. However, for some odd reason we have to run a script twice for the changes to commit to both DBs. For that reason I am creating a script that will attempt to do an insert across all tables. As of now I'm only working on one table. Within the exception handler, how do I make the script run again when the initial insert fails? Any help is appreciated.
If a problem happens with the insert, the best approach is to find out what the error is and raise the error. This is best accomplished by an autonomous logging procedure that records the what, where and when, and then RAISEs the error again so processing stops. You do not want to take a chance of inserting records once, twice or not at all, which could happen if the errors are not raised again.
The LOG_ERROR procedure below can be created from the answers to your previous questions about error handling.
DECLARE
    i varchar2(3000) := dbms_random.string('A',8);
BEGIN
    INSERT INTO BUYERS
        (USER_ID, BUYER_CD, BUYER_ENG_NM, REG_DT)
    VALUES
        (i, 'tes', 'test', 'test');
EXCEPTION
    WHEN OTHERS THEN
        -- by the time you get here there is no point in trying the insert again
        LOG_ERROR(SQLERRM, LOCATION);
        RAISE;
END;
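If you do not already have LOG_ERROR, a minimal version along those lines could look like this (a sketch; the error_log table, its columns and the p_location parameter are assumptions to adapt to your own schema, and LOCATION in the block above would be a literal such as 'BUYERS insert' describing where the error occurred):
create or replace procedure log_error( p_error_message in varchar2,
                                       p_location      in varchar2 )
as
    pragma autonomous_transaction;
begin
    -- record the what, where and when in a transaction of its own,
    -- so the log row survives even though the caller re-raises and rolls back
    insert into error_log ( error_message, location, logged_at )
    values ( p_error_message, p_location, sysdate );
    commit;  -- an autonomous transaction must end with commit or rollback
end;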
I created a trigger in an Oracle database. This trigger is executed before an insert procedure, to kill all duplicate data. The procedure is executed by a C# application.
TRIGGER Kill_Duplicates
BEGIN
    IF ( xxx ) THEN
        Raise_application_error(-22222, ' is duplicate!');
    END IF;
END
Where do I read the message from Raise_application_error? For example, if duplicate data enters the database and the trigger calls Raise_application_error, where do I read this "(-22222, ' is duplicate!')"?
Is there any way to debug a trigger? If my trigger isn't correct, for example a syntax problem or a logic problem, how do I read the exception message of the trigger itself? How would I know, and how do I get the exceptions/errors?
The exception will be passed to the session that executed the DML statement that caused the trigger to be executed.
I'm suspicious because your error message suggests that you are trying to enforce integrity with a trigger. That's usually a Bad Thing.
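On the calling side the error simply arrives as an ordinary Oracle exception, so the session that issued the insert (your C# application included) sees the number and text you passed to Raise_application_error. One thing to watch: user-defined error numbers must lie in the range -20000 to -20999, so -22222 would itself fail at run time. A PL/SQL sketch of catching it (some_table and its column are placeholders):
begin
    insert into some_table ( some_column ) values ( 'a duplicate value' );  -- trigger fires and raises
exception
    when others then
        -- SQLCODE and SQLERRM carry the number and text from raise_application_error
        dbms_output.put_line( 'code: ' || sqlcode );
        dbms_output.put_line( 'text: ' || sqlerrm );
end;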
I am Kanagaraj. In our stored procedure we are logging messages at 500 places, and the logs are stored in a table where we are having some performance issues. We need to split these messages into Debug, Info and Error messages. Based on the level, only a limited set of messages should be logged. If necessary, we will enable the next level and see more logs. What would be an effective way of introducing this level-based logging in our procedure?
Thanks in advance.
Kanagaraj.
Something like this...
create or replace package logger as
    min_level number := 2;  -- only messages at this level or above are written
    procedure ins_log(p_level number, p_message varchar2);
end;
create or replace package body logger as
    procedure ins_log(p_level number, p_message varchar2) is
        pragma autonomous_transaction;
    begin
        if p_level >= min_level then
            insert into loggin(ts, msg) values (sysdate, p_message);
        end if;
        commit;  -- autonomous_transaction requires that
    end;
end;
EDIT: Added pragma autonomous_transaction; thanks Adam.
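Usage would then look something like this (a sketch; the level convention, e.g. 1 = debug, 2 = info, 3 = error, is up to you):
begin
    -- normal runs: only level 2 and above get written
    logger.ins_log( 1, 'detailed debug message' );    -- skipped, below min_level
    logger.ins_log( 3, 'something went wrong' );      -- written to the log table

    -- when you need more detail, lower the threshold for this session
    logger.min_level := 1;
    logger.ins_log( 1, 'now this one is written too' );
end;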
There is a port of log4j for Oracle PL/SQL that can be found on SourceForge. It allows logging to be enabled/disabled at various levels, and for specific packages/functions/procedures, simply by modifying a configuration. It also supports redirection to different destinations.
A bit late; I recommend you use Logger: https://github.com/tmuth/Logger---A-PL-SQL-Logging-Utility It will handle your requirements.
Check out PLJ-Logger at https://sourceforge.net/p/plj-logger/home/. It's really easy to implement and has your desired functionality and a lot more. Proper logging built into PL/SQL code will transform it.
A few questions about bulk binding and triggers (Oracle 10g):
1) Will a row level trigger execute in case of bulk binding?
2) If yes, is there any option to suppress the execution only for bulk binding?
3) If no, is there a way to execute a row level trigger in bulk binding?
4) Will performance suffer in case a row level trigger executes for bulk binding?
Triggers are still enabled and fired when bulk-bind inserts are performed. There is nothing intrinsic you can do to stop that, but of course you can put your own logic in the trigger and in the code that does the bulk insert, as follows...
In a package specification:
create or replace package my_package is
    in_bulk_mode boolean default false;
    ... -- rest of package spec
end;
In the trigger:
begin
    if NOT my_package.in_bulk_mode then
        -- do the trigger stuff
    end if;
end;
In the calling code:
my_package.in_bulk_mode := true;
-- do the bulk insert
my_package.in_bulk_mode := false;
Triggers execute within the SQL engine. Bulk-binding impacts the way that the calling language (pl/sql or any OCI language) calls the SQL engine, by reducing the number of calls/statements, but should not bypass any triggers.
(Imagine you have used a trigger to add validation, logging or other constraint to a database, but a third-party application would bypass it simply through using a bulk operation - this would be a recipe for data corruption and security issues).
Your statement level trigger should fire once.
You could 'disable' your trigger by making it check an in-memory session variable before doing anything else, and explicitly setting it before a bulk operation.
Row level triggers would still fire on a per-row basis, which could have a lot more impact.
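To illustrate the last two points (a minimal sketch; some_table, its id column and the my_package flag from the earlier answer are assumptions): the row level trigger fires once per row of a FORALL insert, unless it checks the session flag and skips its work:
declare
    type t_id_list is table of some_table.id%type;
    l_ids t_id_list := t_id_list( 1, 2, 3 );
begin
    my_package.in_bulk_mode := true;   -- tell the row trigger to do nothing
    forall i in 1 .. l_ids.count
        insert into some_table ( id ) values ( l_ids(i) );  -- still fires the trigger per row
    my_package.in_bulk_mode := false;
end;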