Oracle: How to execute an insert trigger without delaying the insert response? - oracle

The trigger below is delaying my insert response. How can I prevent this?
CREATE OR REPLACE TRIGGER GETHTTPONINSERT
BEFORE INSERT ON TABLENAME
FOR EACH ROW
DECLARE
  --
BEGIN
  -- The inserted data is transferred via HTTP to a remote location
END;
EDIT: People are telling me to use batch jobs, but I would rather have the data early than have 100% consistency. The advantage of the trigger is that it fires as soon as the data arrives, but I can't afford the delay in the insert response.

One approach is to have the trigger submit a one-time dbms_job that performs the HTTP transfer. Creating the job is relatively quick, and you can think of it as effectively spawning a new thread that runs in parallel.
See http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:7267435205059 for further info - his example deals with sending email, but the idea is the same.
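A minimal sketch of that idea, assuming a helper procedure SEND_OVER_HTTP and an ID column on the table (both names are illustrative):

```sql
-- The trigger queues the HTTP call as a one-off DBMS_JOB instead of
-- doing it inline, so the insert returns immediately.
CREATE OR REPLACE TRIGGER gethttponinsert
BEFORE INSERT ON tablename
FOR EACH ROW
DECLARE
  l_job BINARY_INTEGER;
BEGIN
  DBMS_JOB.SUBMIT(
    job  => l_job,
    what => 'send_over_http(''' || :NEW.id || ''');'
  );
  -- The job only becomes visible to the job queue when this transaction
  -- commits, so a rollback also discards the pending HTTP call.
END;
/
```

A nice side effect: because DBMS_JOB is transactional, the rollback-inconsistency problem mentioned in another answer below goes away.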

There is a perfect solution for this exact situation called Database Change Notification.
You can think of it almost exactly like an async trigger.
You use the DBMS_Change_Notification package to tell Oracle which tables to watch and what to do when a change occurs. You can monitor for DML and DDL, and you can have Oracle batch the changes (i.e. wait for 10 changes to occur before firing). Oracle will call a stored procedure with an object containing the rowids of all the changed rows; you decide how to handle them, including making an HTTP call. The notification does not have to finish for the insert to commit.
Documentation for 10gR2
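For reference, a 10g-style registration might look roughly like this; it assumes a callback procedure named CHNF_CALLBACK already exists with the documented signature, and uses TABLENAME from the question:

```sql
-- Register interest in changes to TABLENAME; Oracle will invoke
-- chnf_callback asynchronously whenever the table changes.
DECLARE
  l_regds SYS.CHNF$_REG_INFO;
  l_regid NUMBER;
  l_dummy VARCHAR2(30);
BEGIN
  l_regds := SYS.CHNF$_REG_INFO(
               'chnf_callback',                        -- callback procedure
               DBMS_CHANGE_NOTIFICATION.QOS_ROWIDS,    -- include rowids
               0, 0, 0);                               -- timeout, op filter, txn lag
  l_regid := DBMS_CHANGE_NOTIFICATION.NEW_REG_START(l_regds);
  -- Any query executed between NEW_REG_START and REG_END is registered:
  SELECT 'x' INTO l_dummy FROM tablename WHERE ROWNUM = 1;
  DBMS_CHANGE_NOTIFICATION.REG_END;
END;
/
```

In later releases the package was renamed DBMS_CQ_NOTIFICATION (see the answer further down).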

Maybe you could create a local table that stores the information you have to transfer, and create a job that executes every X minutes. The job reads from the table, transfers all the data, and deletes the transferred rows from the table.

Isn't it possible to use the Oracle replication options? If you send your inserted data via HTTP to a remote location in a before or after statement trigger, what will happen when there is a rollback? Your HTTP message will not be rolled back, so you will have inconsistent data.

well obviously, you could prevent the delay by removing the Trigger....
Otherwise, the trigger will ALWAYS be executed before your insert; that's what BEFORE INSERT triggers are made for.
Or maybe you could give us more details on what you need exactly?

If you are getting to this question after 2020, look at DBMS_CQ_NOTIFICATION:
https://docs.oracle.com/en/database/oracle/oracle-database/19/arpls/DBMS_CQ_NOTIFICATION.html

Related

Avoid waiting on sending HTTP request via Oracle database

I have an application that is connected to an Oracle database where it can execute select, insert and update statements. I want to call a function in that application that would tell the Oracle database to send a http request.
My objectives:
Fire and forget - the application and database don't care about the response or whether the request was successful
The application and database should never wait for the response
I was thinking about these approaches:
Application inserts a row to a table, trigger after insert calls a db function that uses UTL_HTTP to perform the request
Application calls a db function that uses UTL_HTTP to perform the request via a select statement
What are the advantages/pitfalls of said approaches with regard to my objectives?
Here are some thoughts off the top of my head; I'm sure there are other considerations:
If you do it in a trigger, the http request will go out even if you then decide to rollback the insert. So are you okay with having the http request sent but no row ends up in the table? If so, a trigger would be fine.
If you do it in a trigger, then any other interfaces or humans manually needing to load rows to the table will cause the http request to fire. Is that what you want? Then a trigger is great. If not, a trigger isn't so great.
Triggers are dropped if the table is dropped. If you do maintenance of the kind we do all the time in data warehousing (CTAS a new table, drop the old, rename the new to the old, etc..) you can easily lose your code because the code is contained in the trigger and the trigger disappears with the table drop. Triggers are not safe places for complex code. If you do end up using a trigger, consider having the trigger simply call a procedure, and store all your real code in that procedure.
If you decide to not use a trigger at all, you are better off writing a procedure than a function you call through a select statement. The point of a function is to return something, and in this case, you don't have anything from the http process you are wishing to return. And the select statement would be arbitrary. Just use a normal procedure call.
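As a sketch of that last point, a plain procedure (rather than a function called from a SELECT) could look like this; the URL, timeout value, and error handling are illustrative assumptions:

```sql
-- Fire the request and swallow any error; the caller never sees a failure.
CREATE OR REPLACE PROCEDURE send_notification(p_payload IN VARCHAR2) IS
  l_req  UTL_HTTP.REQ;
  l_resp UTL_HTTP.RESP;
BEGIN
  UTL_HTTP.SET_TRANSFER_TIMEOUT(2);  -- bound how long the caller can wait
  l_req  := UTL_HTTP.BEGIN_REQUEST('http://remote.example/notify?d=' || p_payload);
  l_resp := UTL_HTTP.GET_RESPONSE(l_req);
  UTL_HTTP.END_RESPONSE(l_resp);     -- the response body is ignored
EXCEPTION
  WHEN OTHERS THEN
    NULL;                            -- fire and forget: ignore failures
END;
/
```

Note that UTL_HTTP still waits for the response up to the transfer timeout, so for true fire-and-forget you would combine a procedure like this with a job queue, as discussed in the first question on this page.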

PL/SQL - retrieve output

Is there a way to retrieve output from PL/SQL continuously, rather than waiting until the stored procedure completes its execution? By "continuously" I mean as it executes each execute immediate.
Any other mechanism to retrieve pl/sql output?
As per Oracle docs
Output that you create using PUT or PUT_LINE is buffered in the SGA. The output cannot be retrieved until the PL/SQL program unit from which it was buffered returns to its caller. So, for example, Enterprise Manager or SQL*Plus do not display DBMS_OUTPUT messages until the PL/SQL program completes.
As far as I know, there is a way, but not with DBMS_OUTPUT.PUT_LINE. Technique I use is:
create a log table which will accept values you'd normally display using DBMS_OUTPUT.PUT_LINE. Columns I use are
ID (a sequence, to be able to sort data)
Date (to know what happened when; might not be enough for sorting purposes because operations that take very short time to finish might have the same timestamp)
Message (a VARCHAR2 column, large enough to accept the whole information)
create a logging procedure which will be inserting values into that table. It should be an autonomous transaction so that you could COMMIT within (and be able to access data from other sessions), without affecting the main transaction
Doing so, you'd
start your PL/SQL procedure
call the logging procedure whenever appropriate (basically, where you'd put the DBMS_OUTPUT.PUT_LINE call)
in another session, periodically query the log table as select * from log_table order by ID desc
Additionally, you could write a simple Apex application with one report page which selects from the logging table and refreshes periodically (for example, every 10 seconds or so) and view the main PL/SQL procedure's execution.
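The log table and logging procedure described above might be sketched like this (all names are illustrative):

```sql
-- Table and sequence backing the logging procedure.
CREATE TABLE log_table (
  id      NUMBER          PRIMARY KEY,
  dt      TIMESTAMP       DEFAULT SYSTIMESTAMP,
  message VARCHAR2(4000)
);

CREATE SEQUENCE log_seq;

-- Autonomous transaction: the COMMIT here makes the row visible to other
-- sessions immediately, without affecting the caller's main transaction.
CREATE OR REPLACE PROCEDURE log_msg(p_message IN VARCHAR2) IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO log_table (id, dt, message)
  VALUES (log_seq.NEXTVAL, SYSTIMESTAMP, p_message);
  COMMIT;
END;
/
```

Then replace each DBMS_OUTPUT.PUT_LINE call in the long-running procedure with log_msg, and tail the table from another session.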
The approach that Littlefoot has provided is what I normally use as well.
However, there is another approach that you can try for a specific use case. Let's say you have a long-running batch job (like a payroll process for example). You do not wish to be tied down in front of the screen monitoring the progress. But you want to know as soon as the processing of any of the rows of data hits an error so that you can take action or inform a relevant team. In this case, you could add code to send out emails with all the information from the database as soon as the processing of a row hits an error (or meets any condition you specify).
You can do this using the functions and procedures provided in the 'UTL_MAIL' package. UTL_MAIL Documentation from Oracle
For monitoring progress without the overhead of logging to tables and autonomous transactions, I use:
DBMS_APPLICATION_INFO.SET_CLIENT_INFO( TO_CHAR(SYSDATE, 'HH24:MI:SS') || ' On step A' );
and then monitor v$session.client_info for your session. It's all in memory and won't persist, of course, but it's a quick, easy, ~zero-cost way of posting progress.
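From another session, the progress can then be read back with a query along these lines (illustrative):

```sql
-- Show the latest progress message posted by each session.
SELECT sid, serial#, client_info
FROM   v$session
WHERE  client_info IS NOT NULL;
```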
Another option (Linux/UNIX) that I like for centralised logging that is persistent, avoids logging in the database, and is more generally viewable, is interfacing to syslog and having Splunk or similar pick the messages up. If you have Splunk or similar, this makes the monitoring viewable without having to connect to the database directly. See this post for how to do it:
https://community.oracle.com/thread/2343125

select query to wait for insertion of other record

In my application multiple requests simultaneously read record from one table and based on that insert new record in table.
I want to execute request serially so that the second request reads the latest value inserted by the first request.
I tried to achieve this using a select for update query, but that only locks the row being waited on for update; since I can't update the existing record, the second request reads the same value the previous request got.
Is it possible using Oracle locking mechanism? How?
Dude - that's what transactions are for!
Strong suggestion:
Put your code into a PL/SQL stored procedure
Wrap the select/insert in a single transaction (commit only at the end)
Don't even think about locks, if you can avoid it!
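The answer above recommends a procedure plus a transaction. One caveat: because of Oracle's read consistency, a transaction alone does not block other readers, so this sketch additionally serializes callers with LOCK TABLE (a deliberate addition, with illustrative table and column names):

```sql
-- Each request calls this procedure; LOCK TABLE makes concurrent callers
-- queue up, so the second caller sees the first caller's committed insert.
CREATE OR REPLACE PROCEDURE add_next_record IS
  l_last NUMBER;
BEGIN
  LOCK TABLE records IN EXCLUSIVE MODE;          -- released at COMMIT
  SELECT MAX(val) INTO l_last FROM records;      -- read the latest value
  INSERT INTO records (val) VALUES (NVL(l_last, 0) + 1);
  COMMIT;
END;
/
```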

Oracle, save/map csv string to a table using utl_file and external tables

I use a PL/SQL procedure that calls a web service. The web service returns a large CSV string, which I hold in a CLOB. Since I do not want to parse the CSV by hand, I thought of using external tables. So what I need to do is store the CSV data in a corresponding table.
What I am doing at the moment is storing the CLOB in a file using utl_file; that file is defined as the data source of an external table. This works very well when I am the only user, but since databases are multi-user, I have to watch out for someone else calling the procedure and overwriting the external table's data source file. What is the best way to avoid a mess in the table's data source? Or what is the best way to store a CSV string into a table?
Thanks
Chris
You want to make sure that the procedure is run by at most one session. There are several ways to achieve this goal:
The easiest way would be to lock a specific row at the beginning of your procedure (SELECT ... FOR UPDATE NOWAIT). If the lock succeeds, go on with your batch. If it fails it means the procedure is already being executed by another session. When the procedure ends, either by success or failure, the lock will be released. This method will only work if your procedure doesn't perform intermediate commits (which would release the lock before the end of the procedure).
You could also use the DBMS_LOCK package to request a lock specific to your procedure. Use the DBMS_LOCK.request procedure to request a lock. You can ask for a lock that will only be released at the end of your session (this would allow intermediate commits to take place).
You could also use AQ (Oracle queuing system), I have little experience with AQ though so I have no idea if it would be a sensible method.
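The DBMS_LOCK approach from the second option could be sketched like this (the lock name and error messages are illustrative):

```sql
-- Serialize the procedure with a named user lock. With
-- release_on_commit => FALSE the lock survives intermediate commits
-- and is released when the session ends (or releases it explicitly).
DECLARE
  l_handle VARCHAR2(128);
  l_status INTEGER;
BEGIN
  DBMS_LOCK.ALLOCATE_UNIQUE('csv_load_lock', l_handle);
  l_status := DBMS_LOCK.REQUEST(lockhandle        => l_handle,
                                lockmode          => DBMS_LOCK.X_MODE,
                                timeout           => 0,      -- fail fast
                                release_on_commit => FALSE);
  IF l_status <> 0 THEN
    RAISE_APPLICATION_ERROR(-20001, 'Another session is loading the CSV');
  END IF;
  -- ... do the UTL_FILE write and external-table processing here ...
END;
/
```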
Maybe you should generate a temporary filename for each CSV? Something like:
SELECT TO_CHAR(systimestamp, 'YYYYMMDDHH24MISSFF') filename FROM dual
You can use UTL_FILE.FRENAME.
In similar situations, I have the external_table pointing to a file (eg "fred.txt").
When I get a new source file in, I use UTL_FILE.FRENAME to try to rename it to fred.txt. If the rename fails, then another process is running, so you return a busy error or wait or whatever.
When the file has finished processing, I rename it again (normally with some date_timestamp).
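The rename-as-lock idea might be sketched like this (the directory object and file names are illustrative):

```sql
-- Try to claim the external table's data source by renaming the incoming
-- file to it. With overwrite => FALSE, the rename fails if fred.txt
-- already exists, i.e. another load is in progress.
BEGIN
  UTL_FILE.FRENAME(src_location  => 'CSV_DIR', src_filename  => 'incoming.csv',
                   dest_location => 'CSV_DIR', dest_filename => 'fred.txt',
                   overwrite     => FALSE);
EXCEPTION
  WHEN UTL_FILE.RENAME_FAILED THEN
    RAISE_APPLICATION_ERROR(-20002, 'Loader busy: fred.txt already in use');
END;
/
```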

How to find out when an Oracle table was updated the last time

Can I find out when the last INSERT, UPDATE or DELETE statement was performed on a table in an Oracle database and if so, how?
A little background: The Oracle version is 10g. I have a batch application that runs regularly, reads data from a single Oracle table and writes it into a file. I would like to skip this if the data hasn't changed since the last time the job ran.
The application is written in C++ and communicates with Oracle via OCI. It logs into Oracle with a "normal" user, so I can't use any special admin stuff.
Edit: Okay, "Special Admin Stuff" wasn't exactly a good description. What I mean is: I can't do anything besides SELECTing from tables and calling stored procedures. Changing anything about the database itself (like adding triggers), is sadly not an option if want to get it done before 2010.
I'm really late to this party but here's how I did it:
SELECT SCN_TO_TIMESTAMP(MAX(ora_rowscn)) from myTable;
It's close enough for my purposes.
Since you are on 10g, you could potentially use the ORA_ROWSCN pseudocolumn. That gives you an upper bound of the last SCN (system change number) that caused a change in the row. Since this is an increasing sequence, you could store off the maximum ORA_ROWSCN that you've seen and then look only for data with an SCN greater than that.
By default, ORA_ROWSCN is actually maintained at the block level, so a change to any row in a block will change the ORA_ROWSCN for all rows in the block. This is probably quite sufficient if the intention is to minimize the number of rows you process multiple times with no changes if we're talking about "normal" data access patterns. You can rebuild the table with ROWDEPENDENCIES which will cause the ORA_ROWSCN to be tracked at the row level, which gives you more granular information but requires a one-time effort to rebuild the table.
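Putting that together, the incremental check might look like this (mytable and the bind variable are illustrative):

```sql
-- After each run, remember the highest SCN seen so far:
SELECT MAX(ora_rowscn) FROM mytable;

-- On the next run, fetch only rows whose (block-level, by default)
-- SCN is newer than the stored value:
SELECT *
FROM   mytable
WHERE  ora_rowscn > :last_seen_scn;
```

If the table was built with ROWDEPENDENCIES, the second query returns only the rows that actually changed rather than all rows in changed blocks.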
Another option would be to configure something like Change Data Capture (CDC) and to make your OCI application a subscriber to changes to the table, but that also requires a one-time effort to configure CDC.
Ask your DBA about auditing. They can start an audit with a simple command like:
AUDIT INSERT ON user.table
Then you can query the table USER_AUDIT_OBJECT to determine if there has been an insert on your table since the last export.
google for Oracle auditing for more info...
SELECT * FROM all_tab_modifications;
Could you run a checksum of some sort on the result and store that locally? Then when your application queries the database, you can compare its checksum and determine if you should import it?
It looks like you may be able to use the ORA_HASH function to accomplish this.
Update: Another good resource: 10g’s ORA_HASH function to determine if two Oracle tables’ data are equal
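A crude sketch of the hash-comparison idea with ORA_HASH (table and column names are illustrative; concatenating columns with a separator is one simple way to hash whole rows):

```sql
-- A single order-independent hash over the table's rows. Store the result
-- after each export and compare it on the next run: a different value
-- means the data changed.
SELECT SUM(ORA_HASH(t.id || '|' || t.col1 || '|' || t.col2)) AS tab_hash
FROM   mytable t;
```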
Oracle can watch tables for changes and when a change occurs can execute a callback function in PL/SQL or OCI. The callback gets an object that's a collection of tables which changed, and that has a collection of rowid which changed, and the type of action, Ins, upd, del.
So you don't even go to the table, you sit and wait to be called. You'll only go if there are changes to write.
It's called Database Change Notification. It's much simpler than CDC as Justin mentioned, but both require some fancy admin stuff. The good part is that neither of these require changes to the APPLICATION.
The caveat is that CDC is fine for high volume tables, DCN is not.
If the auditing is enabled on the server, just simply use
SELECT *
FROM ALL_TAB_MODIFICATIONS
WHERE TABLE_NAME IN ()
You would need to add a trigger on insert, update, delete that sets a value in another table to sysdate.
When you run application, it would read the value and save it somewhere so that the next time it is run it has a reference to compare.
Would you consider that "Special Admin Stuff"?
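That trigger-plus-timestamp approach could be sketched like this (names are illustrative):

```sql
-- One-row table holding the time of the last change.
CREATE TABLE tab_last_change (last_change DATE);
INSERT INTO tab_last_change VALUES (SYSDATE);

-- Statement-level trigger: fires once per DML statement on mytable.
CREATE OR REPLACE TRIGGER trg_track_change
AFTER INSERT OR UPDATE OR DELETE ON mytable
BEGIN
  UPDATE tab_last_change SET last_change = SYSDATE;
END;
/
```

The application then just selects last_change and compares it with the value saved from its previous run.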
It would be better to describe what you're actually doing so you get clearer answers.
How long does the batch process take to write the file? It may be easiest to let it go ahead and then compare the file against a copy of the file from the previous run to see if they are identical.
If anyone is still looking for an answer, they can use the Oracle Database Change Notification feature introduced in Oracle 10g. It requires the CHANGE NOTIFICATION system privilege. You can register listeners to trigger a notification back to the application.
Please use the statement below (note that all_objects only reflects DDL changes via LAST_DDL_TIME, not DML):
select * from all_objects ao where ao.OBJECT_TYPE = 'TABLE' and ao.OWNER = 'YOUR_SCHEMA_NAME'
