Oracle: simulating a "post-commit" trigger

How can I get the equivalent of an "on commit" trigger after inserting some rows into a table?
After inserting several rows into a table, I would like to send a message to an external process that there are rows ready to process. Using a statement-level trigger causes one message per insert, and I would like to send just one message saying "there are rows to be processed."

Create a job. The job won't actually be submitted (and therefore won't run) until the transaction commits, which gives you the "on commit" behaviour. (Note: DBMS_SCHEDULER is usually better than DBMS_JOB, but in this case you need the transactional behaviour of the old DBMS_JOB package; DBMS_SCHEDULER.CREATE_JOB commits immediately and is not tied to your transaction.)
declare
  jobnumber number;
begin
  -- the job is queued as part of the current transaction and only becomes
  -- runnable once that transaction commits
  dbms_job.submit(job  => jobnumber,
                  what => 'insert into test values(''there are rows to process'');');
  -- do some work here...
  commit;
end;
/

As you need to trigger an external process, have a look at DBMS_ALERT instead of DBMS_JOB.
The external process would actively listen for the alert by calling a stored procedure; that call returns as soon as the alert has been signalled and committed.
Note that DBMS_ALERT is a serialization device: multiple sessions signalling the same alert name will block one another, just as if they were updating the same row in a table.
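A minimal sketch of both sides, assuming an alert name of 'rows_ready' (the alert name and message text are made up):

-- producer session: signal after the inserts; the alert is only delivered on COMMIT
begin
  -- ... insert the rows here ...
  dbms_alert.signal('rows_ready', 'there are rows to process');
  commit;
end;
/

-- listener session (the external process): blocks until the alert arrives or the timeout expires
declare
  l_message varchar2(1800);
  l_status  integer;  -- 0 = alert received, 1 = timed out
begin
  dbms_alert.register('rows_ready');
  dbms_alert.waitone(name    => 'rows_ready',
                     message => l_message,
                     status  => l_status,
                     timeout => 60);
  if l_status = 0 then
    null;  -- kick off the processing here
  end if;
  dbms_alert.remove('rows_ready');
end;
/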

You can set a flag to say "I've already sent the message".
To make sure the flag is effectively reset on commit, key it to dbms_transaction.local_transaction_id: each new transaction gets a new id, so you can simply do
IF v_flag IS NULL OR dbms_transaction.local_transaction_id != v_flag THEN
  v_flag := dbms_transaction.local_transaction_id;
  -- generate the message here
END IF;
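A fuller sketch of the same idea, with the flag held in a package variable so it persists across the row-level trigger calls within one transaction (the package, trigger, table and column names are made up):

create or replace package msg_state is
  g_last_txn varchar2(200);
end;
/

create or replace trigger t1_notify
after insert on t1
for each row
declare
  l_txn varchar2(200) := dbms_transaction.local_transaction_id;
begin
  if msg_state.g_last_txn is null or msg_state.g_last_txn != l_txn then
    msg_state.g_last_txn := l_txn;
    null;  -- send the "there are rows to process" message here, once per transaction
  end if;
end;
/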

Using Oracle Advanced Queuing, you can enqueue an array of records and attach a listener to the queue.
Once the records are committed, the listener can kick off any process you wish, even a web service call.
http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28419/d_aq.htm#i1001754
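A rough sketch of the enqueue side, assuming a JMS text payload (the queue names are made up, and the one-off setup would normally be done by a DBA):

-- one-off setup
begin
  dbms_aqadm.create_queue_table(queue_table        => 'rows_ready_qt',
                                queue_payload_type => 'SYS.AQ$_JMS_TEXT_MESSAGE');
  dbms_aqadm.create_queue(queue_name => 'rows_ready_q', queue_table => 'rows_ready_qt');
  dbms_aqadm.start_queue(queue_name => 'rows_ready_q');
end;
/

-- in the transaction that inserts the rows
declare
  l_opts  dbms_aq.enqueue_options_t;
  l_props dbms_aq.message_properties_t;
  l_msg   sys.aq$_jms_text_message;
  l_msgid raw(16);
begin
  l_msg := sys.aq$_jms_text_message.construct;
  l_msg.set_text('there are rows to process');
  dbms_aq.enqueue(queue_name         => 'rows_ready_q',
                  enqueue_options    => l_opts,
                  message_properties => l_props,
                  payload            => l_msg,
                  msgid              => l_msgid);
  commit;  -- the message only becomes visible to the listener on commit
end;
/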

Related

Submitted Oracle job using dbms_job.submit and it failed but I don't know where to look for an error message

We are initiating the rebuild of many materialized views by using dbms_job.submit to execute a stored procedure that performs the rebuild. However, I am having trouble figuring out how to determine whether a submitted job has failed. The job is failing, but I cannot identify what the issue is. So I am starting with a simple example, which is probably failing on a permission issue, but I don't know where to look for the error message.
I have the following test procedure that I want to initiate using dbms_job.submit.
CREATE OR REPLACE PROCEDURE MYLANID.JUNKPROC
AS
  lv_msg varchar2(3000);
BEGIN
  INSERT INTO MYLANID.junk_log ( msg ) VALUES ('Hello World');
  commit;
EXCEPTION
  WHEN OTHERS THEN
    lv_msg := SUBSTR(sqlerrm, 1, 3000);
    INSERT INTO MYLANID.junk_log ( msg ) VALUES (lv_msg);
END;
/
Here is the table that is used above:
CREATE TABLE MYLANID.JUNK_LOG (
  EVENT_TIME TIMESTAMP(6) DEFAULT systimestamp,
  MSG        VARCHAR2(3000 BYTE));
To submit the above procedure as a job, I execute the following anonymous block.
declare
  l_jobid binary_integer;
BEGIN
  dbms_job.submit(job => l_jobid, what => 'BEGIN MYLANID.JUNKPROC; END;');
  DBMS_OUTPUT.PUT_LINE('l_jobid:' || l_jobid);
  commit;
END;
/
I then execute the following SQL...
select * from all_jobs;
...to see one record that represents my submitted job. When I re-query the all_jobs view, I see that this record quickly disappears from the view within a few seconds, presumably when the job completes. All is happy so far. I would like to use the presence of a record in the all_jobs view to determine whether a submitted job is running or has failed. I expect to be able to tell if it failed by looking at the ALL_JOBS.FAILURES column having a non null value > 0.
The problem, probably a permission issue, begins when I switch to another schema: I take all of the SQL above and replace every occurrence of "MYLANID" with "ANOTHERSCHEMA", a schema that I also have access to. For example, I create the following
Table: ANOTHERSCHEMA.JUNK_LOG
Procedure: ANOTHERSCHEMA.JUNKPROC
I am even able to execute the stored procedure successfully in a query window while logged in as MYLANID:
EXEC ANOTHERSCHEMA.JUNKPROC
However, if I execute the following code to submit a job that involves running the same ANOTHERSCHEMA procedure but by submitting it as a JOB...
declare
  l_jobid binary_integer;
BEGIN
  dbms_job.submit(job => l_jobid, what => 'BEGIN ANOTHERSCHEMA.JUNKPROC; END;');
  DBMS_OUTPUT.PUT_LINE('l_jobid:' || l_jobid);
  commit;
END;
/
...then, when I query the jobs ALL_JOBS view...
select * from all_jobs;
...I see that the job has a positive value for the column FAILURE and I have no record of what the error was. This FAILURE count value continues to gradually increment over time as Oracle presumably retries up to 16? times until the job is marked BROKEN in the ALL_JOBS view.
But this is just a simple example, and I don't know where to look for the error message that would tell me why the job using the ANOTHERSCHEMA references failed.
Where do I look for the error log of failed jobs? I'm wondering if this is somewhere only the DBA can see...
Update:
The above is just a simple test example. In my actual real world situation, my log shows that the job was submitted but I never see anything in USER_JOBS or even DBA_JOBS, which should show everything. I don't understand why the dbms_job.submit procedure would return the job number of the submitted job indicating that it was submitted but no job is visible in the DBA_JOBS view! The job that I did submit should have taken a long time to run, so I don't expect that it completed faster than I could notice.
First off, you probably shouldn't be using dbms_job. That package has been superseded for some time by the dbms_scheduler package which is significantly more powerful and flexible. If you are using Oracle 19c or later, Oracle automatically migrates dbms_job jobs to dbms_scheduler.
If you are using an Oracle version prior to 19c and a dbms_job job fails, the error information is written to the database alert log. That tends to be a bit of a pain to query from SQL particularly if you're not a DBA. You can define an external table that reads the alert log to make it queryable. Assuming you're on 11g, there is a view, x$dbgalertext, that presents the alert log information in a way that you can query it but DBAs generally aren't going to rush to give users permission on x$ tables.
If you use dbms_scheduler instead (or if you are on 19c or later and your dbms_job jobs get converted to dbms_scheduler jobs), errors are written to dba_scheduler_job_run_details. dbms_scheduler in general gives you a lot more logging information than dbms_job does so you can see things like the history of successful runs without needing to add a bunch of instrumentation code to your procedures.
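For example, a sketch of the dbms_scheduler equivalent and of where its errors end up (the job name is made up; query dba_scheduler_job_run_details instead of the user_ view when looking at another schema's jobs):

begin
  dbms_scheduler.create_job(
    job_name   => 'JUNKPROC_JOB',
    job_type   => 'PLSQL_BLOCK',
    job_action => 'BEGIN MYLANID.JUNKPROC; END;',
    enabled    => true);
end;
/

select log_date, status, error#, additional_info
from   user_scheduler_job_run_details
where  job_name = 'JUNKPROC_JOB'
order  by log_date desc;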

How to wait until a query returns rows?

On Oracle DB, how can I implement the logic below (that is, "wait until at least one row is returned and return a column value from it") without the polling (looping, wasting CPU and possibly I/O), using some wait/block mechanism instead?
So when calling the get_one() function it should not return until it can fetch a row from the table matching some conditions.
function get_one
  return number
is
  c1 sys_refcursor;
  n  number;
begin
  loop
    open c1 for select number_column from t1 where some_conditions;
    fetch c1 into n;
    if c1%found then
      close c1;
      return n;
    end if;
    close c1;
    dbms_lock.sleep(1); -- this wait reduces load, but it is still polling and it delays reaction time
  end loop;
end;
The solution should work for external applications (like application servers with J2EE, .NET and similar), so use of triggers would probably not fit.
There are two Oracle DB features that can meet these requirements:
Database Change Notification (DCN)
Java - Oracle Database Change Notification
Oracle Advanced Queuing (AQ)
Oracle Advanced Queue In Java
function get_one
  return number
is
  n number;
begin
  loop
    select (select number_column from t1 where some_conditions) into n from dual;
    if n is null then
      dbms_lock.sleep(1); -- wait 1 second
    else
      return n;
    end if;
  end loop;
end;
The DBMS_LOCK package provides an interface to Oracle Lock Management
services. You can request a lock of a specific mode, give it a unique
name recognizable in another procedure in the same or another
instance, change the lock mode, and release it.
You may need some grants to execute this Oracle package.
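For example, a DBA would typically have to grant execute on it (the grantee name is a placeholder):

grant execute on sys.dbms_lock to some_user;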
I am not in favour of implementing code that keeps waiting or polling directly in Oracle. That might skew Oracle statistics like DB Time and wait times.
In order to implement server-side code that needs to act upon a certain set of rows being created or modified, you can resort to:
A scheduled job that wakes up at a predetermined interval and queries for the rows. If rows are present, it calls the procedure that acts on the new rows (see the sketch after this list).
Triggers
Depending on what is being inserted, you can have a trigger that is called upon creation of the rows. Beware of the mutating-table errors that might arise if you try to modify the original row that fired the trigger.
If it is a client application that calls "get_one", you might as well have the client application poll every few seconds based on a timer (no client or DB CPU is wasted in between calls).
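A sketch of the first option as a repeating dbms_scheduler job (the job and procedure names are made up):

begin
  dbms_scheduler.create_job(
    job_name        => 'POLL_NEW_ROWS',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN process_new_rows; END;',  -- hypothetical procedure that checks for and handles new rows
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=1',
    enabled         => true);
end;
/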

where to see the error message from RAISE_APPLICATION_ERROR?

I created a trigger in an Oracle database. This trigger will be executed before an insert procedure, to kill all duplicate data. The procedure is executed by a C# application.
TRIGGER Kill_Duplicates
BEGIN
  IF ( xxx ) THEN
    -- user-defined error numbers must be in the range -20000 .. -20999
    Raise_application_error(-20222, ' is duplicate!');
  END IF;
END;
Where do I read the message from Raise_application_error? For example, if duplicate data enters the database and the trigger fires Raise_application_error, where do I read this - "(-20222, ' is duplicate!')"?
Is there any way to debug a trigger? If my trigger isn't correct - for example, a syntax problem or a logic problem - how do I read the exception message of the trigger itself? How would I know, and how do I get the exceptions/errors?
The exception will be passed to the session that executed the DML statement that caused the trigger to be executed.
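For example, if the same insert is issued from another PL/SQL block, the error raised by the trigger surfaces like any other exception (a sketch; the table name and values are placeholders):

begin
  insert into my_table (id) values (1);
exception
  when others then
    -- sqlerrm contains the error number and text passed to raise_application_error by the trigger;
    -- a client such as a C# application receives the same thing as a database exception
    dbms_output.put_line(sqlcode || ': ' || sqlerrm);
end;
/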
Your error message makes me suspect that you are trying to enforce integrity with a trigger. That's usually a Bad Thing.

Oracle PL/SQL: a scheduled procedure, leading to firing of a trigger?

Okay, I'm new to Oracle PL/SQL and I've stumbled across a problem that I cannot figure out.
I have a procedure that leads to transferring data from one table to another and a trigger that activates on the insertion in the second table. I scheduled that procedure to run every minute (for testing - would be daily once I've figured it out), using the DBMS_JOB.SUBMIT - the scheduled part works perfectly, however after the completion of the procedure the trigger is not fired. I tried with before and after insert clauses, but it is still not working. If I call the procedure directly it works and it does fire the trigger just fine. So... I'm already wondering whether the scheduled procedure can fire the trigger at all?!
This is the schedule's code:
DECLARE
VJOBN BINARY_INTEGER;
BEGIN
DBMS_JOB.SUBMIT(
JOB => VJOBN,
INTERVAL => 'SYSDATE + 1/2880',
WHAT => 'BEGIN my_procedure(); END;'
);
END;
create or replace TRIGGER TO_PRJ
AFTER INSERT ON PROJECTS
FOR EACH ROW
BEGIN
  IF INSERTING THEN
    DBMS_OUTPUT.PUT_LINE('INSERTED PROJECT WITH ID: ' || :NEW.PROJECT_ID);
  END IF;
END;
Table PROJECTS has ID number, name varchar2, and some other that are not important.
The procedure transfers the ID and the name from orders to projects.
P.S. I'm using http://apex.oracle.com and when I get the timestamp from it the time is actually 6 hours behind me - not sure if it can be of any significance...
DBMS_OUTPUT and DBMS_JOB do not work the way you are trying to use them. The scheduled job is probably running and the trigger is firing - but since DBMS_OUTPUT needs to be activated in the session that executes the DBMS_OUTPUT commands (i.e. the internal session used by DBMS_JOB), you will never see any output.
DBMS_OUTPUT's output is not visible across sessions, so the session that issues the DBMS_JOB.submit command will NOT receive the output, even if DBMS_OUTPUT is activated for that session.
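One way to see evidence that the trigger really does fire from the job's session is to write to a table instead of relying on DBMS_OUTPUT (a sketch; the log table name is made up):

create table trigger_log (
  logged_at timestamp default systimestamp,
  msg       varchar2(200)
);

create or replace trigger to_prj
after insert on projects
for each row
begin
  insert into trigger_log (msg)
  values ('INSERTED PROJECT WITH ID: ' || :new.project_id);
end;
/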
Try using DBMS_SCHEDULER; it's much better than DBMS_JOB. Also, post the code of the trigger and the tables here - it may help.

bulk upload and trigger

A few questions about bulk binding and triggers (Oracle 10g):
1) Will a row-level trigger execute in the case of bulk binding?
2) If yes, is there any option to suppress the execution only for bulk binding?
3) If no, is there a way to execute a row-level trigger with bulk binding?
4) Will performance suffer if a row-level trigger executes during bulk binding?
Triggers are still enabled and fired when bulk-bind inserts are performed. There is nothing intrinsic you can do to stop that, but of course you can coordinate your own logic between the trigger and the code that does the bulk insert, as follows...
In a package specification:
create or replace package my_package is
  in_bulk_mode boolean default false;
  ... -- rest of package spec
end;
In the trigger:
begin
  if NOT my_package.in_bulk_mode then
    -- do the trigger stuff
  end if;
end;
In the calling code:
my_package.in_bulk_mode := true;
-- do the bulk insert
my_package.in_bulk_mode := false;
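For example, around a FORALL bulk insert (a sketch of the calling pattern above; staging_table and my_table are made-up names):

declare
  type t_ids is table of number;
  l_ids t_ids;
begin
  select id bulk collect into l_ids
  from   staging_table;

  my_package.in_bulk_mode := true;
  forall i in 1 .. l_ids.count
    insert into my_table (id) values (l_ids(i));  -- the row trigger sees in_bulk_mode = true and skips its work
  my_package.in_bulk_mode := false;               -- reset the flag (ideally also in an exception handler)

  commit;
end;
/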
Triggers execute within the SQL engine. Bulk binding affects the way the calling language (PL/SQL or any OCI language) calls the SQL engine, by reducing the number of calls/statements, but it does not bypass any triggers.
(Imagine you had used a trigger to add validation, logging or some other constraint to the database, and a third-party application could bypass it simply by using a bulk operation - that would be a recipe for data corruption and security issues.)
Your statement level trigger should fire once.
You could 'disable' your trigger by making it check an in-memory session variable before doing anything else, and explicitly setting it before a bulk operation.
Row level triggers would still fire on a per-row basis, which could have a lot more impact.
