I have two procedures A and B. Procedure A performs certain tasks. Procedure B has to monitor how many times procedure A is called in a day.
How to achieve this?
Add a statement to the procedure:
update statistics_table
set proc_a_count = proc_a_count + 1;
Of course, you'll have to create a suitable table to hold the count and initialize it with a zero in the field.
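For example, a minimal setup (the table and column names match the snippet above but are otherwise just placeholders) might be:
create table statistics_table (proc_a_count number);
insert into statistics_table (proc_a_count) values (0);
commit;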
Insert a row into a log table.
Oracle does not track this sort of thing by default but if you just want to record some simple information then switch on the built-in AUDIT functionality:
AUDIT EXECUTE PROCEDURE BY ACCESS;
You can view the accesses in the view dba_audit_trail. Find out more.
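Once auditing is switched on, you could then check the trail with something like the following ('PROC_A' is a placeholder for your procedure's name):
select username, timestamp, owner, obj_name, action_name
from   dba_audit_trail
where  obj_name = 'PROC_A'
and    timestamp >= trunc(sysdate)
order by timestamp;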
If for some reason you don't want to use the audit trail - say you want to capture more information - then you will need to use your own logging mechanism. This is a good use for the AUTONOMOUS TRANSACTION pragma. Just be careful that writing the log records doesn't have an undue impact on the performance of your application.
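A minimal sketch of such a logger, with made-up table and procedure names, might look like this:
create table proc_a_log (run_date date, info varchar2(200));

create or replace procedure log_proc_a_call (p_info in varchar2) is
  pragma autonomous_transaction;
begin
  insert into proc_a_log (run_date, info) values (sysdate, p_info);
  -- commits only this autonomous transaction, not the caller's work
  commit;
end;
Procedure A would simply call log_proc_a_call at the top; counting calls per day is then just a GROUP BY on trunc(run_date).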
Edit:
The role of procedure B in your question is entirely superfluous: either the database records how often procedure A runs or else A writes its own trace records. Unless B is a packaged query on the log (however implemented)?
Related
Is there a way to retrieve output from PL/SQL continuously rather than waiting until the stored procedure completes its execution? By "continuously" I mean as it executes each EXECUTE IMMEDIATE.
Any other mechanism to retrieve pl/sql output?
As per the Oracle docs:
Output that you create using PUT or PUT_LINE is buffered in the SGA. The output cannot be retrieved until the PL/SQL program unit from which it was buffered returns to its caller. So, for example, Enterprise Manager or SQL*Plus do not display DBMS_OUTPUT messages until the PL/SQL program completes.
As far as I know, there is a way, but not with DBMS_OUTPUT.PUT_LINE. The technique I use is:
create a log table which will accept the values you'd normally display using DBMS_OUTPUT.PUT_LINE. The columns I use are:
ID (a sequence, to be able to sort data)
Date (to know what happened when; this might not be enough for sorting because operations that finish within a very short time can end up with the same timestamp)
Message (a VARCHAR2 column, large enough to accept the whole information)
create a logging procedure which will insert values into that table. It should be an autonomous transaction so that you can COMMIT within it (and access the data from other sessions) without affecting the main transaction. (A minimal sketch follows after the steps below.)
Doing so, you'd
start your PL/SQL procedure
call the logging procedure whenever appropriate (basically, where you'd put the DBMS_OUTPUT.PUT_LINE call)
in another session, periodically query the log table as select * from log_table order by ID desc
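Putting those pieces together, a minimal sketch (names are only illustrative) could be:
create sequence log_seq;

create table log_table (
  id       number,
  log_date date,
  message  varchar2(4000)
);

create or replace procedure p_log (p_message in varchar2) is
  pragma autonomous_transaction;
begin
  insert into log_table (id, log_date, message)
  values (log_seq.nextval, sysdate, p_message);
  -- safe to commit here: it only commits the log row
  commit;
end;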
Additionally, you could write a simple Apex application with one report page which selects from the logging table and refreshes periodically (for example, every 10 seconds or so) and view the main PL/SQL procedure's execution.
The approach that Littlefoot has provided is what I normally use as well.
However, there is another approach that you can try for a specific use case. Let's say you have a long-running batch job (like a payroll process for example). You do not wish to be tied down in front of the screen monitoring the progress. But you want to know as soon as the processing of any of the rows of data hits an error so that you can take action or inform a relevant team. In this case, you could add code to send out emails with all the information from the database as soon as the processing of a row hits an error (or meets any condition you specify).
You can do this using the functions and procedures provided in the 'UTL_MAIL' package. UTL_MAIL Documentation from Oracle
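A minimal sketch of such a notification, assuming UTL_MAIL has been installed and SMTP_OUT_SERVER configured (the addresses and message text are placeholders):
declare
  l_emp_id number := 12345;  -- placeholder for the row that failed
begin
  utl_mail.send(
    sender     => 'batch_job@example.com',
    recipients => 'payroll_team@example.com',
    subject    => 'Payroll batch: row failed',
    message    => 'Processing failed for employee ' || l_emp_id);
end;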
For monitoring progress without the overhead of logging to tables and autonomous transactions, I use:
DBMS_APPLICATION_INFO.SET_CLIENT_INFO( TO_CHAR(SYSDATE, 'HH24:MI:SS') || ' On step A' );
and then monitor v$session.client_info for your session. It's all in memory and won't persist, of course, but it's a quick and easy, near-zero-cost way of posting progress.
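From another session you can then watch the progress with a simple query, for example:
select sid, client_info
from   v$session
where  client_info is not null;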
Another option (on Linux/UNIX) that I like for centralised, persistent logging, which again avoids writing to the database, is interfacing to syslog and having Splunk or similar pick the messages up. If you already have Splunk, the monitoring becomes viewable without having to connect to the database and query it directly. See this post for how to do it:
https://community.oracle.com/thread/2343125
I have two blocks that are based on procedure and I want to create master-detail relationship between them.
I do this using the Data Block Wizard. It creates an ON-CHECK-DELETE-MASTER trigger; this trigger assumes that my detail block is based on a table (FRL_XXX.TRIGGERS_QUERY, which is actually a procedure) and generates a cursor:
CURSOR TRIGGERS_cur IS
SELECT 1 FROM FRL_XXX.TRIGGERS_QUERY F
WHERE F.PTG_PST_CODE = :S_TYPES.PST_CODE;
Is there any workaround to solve this problem?
When I try to delete this trigger or remove the cursor, I get an error:
FRM-30409: Delete Record Behavior for the relation is invalid.
I've never done that, but - let me think aloud.
If a data block is based on a procedure, it means that the procedure returns an array (as its IN OUT parameter). I'd say that you'll have to:
create your own trigger (i.e. replace the one created by the Wizard)
note that these triggers usually carry a comment saying "don't modify it!"; if you run the Wizard again, it might overwrite your code, so the safer approach is to create a procedure which does the job and call that procedure from the trigger
declare a local variable (array) and fetch data into it; pass all parameters to the procedure as you did while calling it in order to populate data block
review array's contents and check whether there's any row that satisfies condition PTG_PST_CODE = :S_TYPES.PST_CODE
if so, do what Wizard's trigger does in that case
Basically, I think that you'll have to write your own process which will replace default Forms behavior.
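Purely as an illustration, and assuming the procedure returns a collection whose records have a PTG_PST_CODE field (the collection type and parameter list below are invented, since they depend on how FRL_XXX.TRIGGERS_QUERY is actually declared), the replacement check could look roughly like this:
declare
  l_rows  frl_xxx.triggers_tab;  -- assumed name of the collection type the procedure returns
  l_found boolean := false;
begin
  frl_xxx.triggers_query(l_rows);  -- pass the same parameters used to populate the detail block
  for i in 1 .. l_rows.count loop
    if l_rows(i).ptg_pst_code = :s_types.pst_code then
      l_found := true;
      exit;
    end if;
  end loop;
  if l_found then
    -- do whatever the Wizard-generated ON-CHECK-DELETE-MASTER trigger would have done
    message('Cannot delete master record: detail records exist.');
    raise form_trigger_failure;
  end if;
end;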
In procedure-based blocks, if you choose the relation type as Isolated, Oracle Forms will allow the master-detail relationship between the two blocks because in that case the ON-CHECK-DELETE-MASTER trigger won't be there.
You will be able to retrieve records in the detail block as the ON-POPULATE-DETAILS trigger will work as usual.
In my case, only the detail block was based on a procedure, and it works fine.
Note: the Isolated type of relationship will delete master records even if child records exist. You need to handle this case separately.
Please share your approach step by step if you went ahead as per Littlefoot's suggestion.
We know that it is possible to dynamically figure out the name of the procedure or package that is currently executing as explained here and here. This generally applies to statements being executed from other stored procedures (compiled) in the database.
The problem:
We have been trying to log all UPDATE activity on a specific column (called STATE) by placing a trigger on the table and invoking who_called_me from within the trigger. The reason is that, per the application design, the column STATE can be updated by multiple pieces of code (residing in the database) based on certain business conditions. In addition, the column can also be updated by the application, which is Hibernate-based, and when the update comes from a Hibernate query the who_called_me function returns nothing. There are multiple parts of the application that can UPDATE the column STATE based on certain conditions.
The who_called_me strategy is working well for us in cases where a stored procedure (which resides in the database) issues the UPDATE statement and who_called_me is successfully capturing the corresponding owner, name, line no. etc. of the stored procedure. But in case the UPDATE happens from hibernate, the function captures no details.
Is there a way to capture which hibernate query UPDATEd the row through the trigger? Or is there any other way?
Note: The trigger code is similar to the answer posted on this question.
You can track the query with the ora_sql_txt function, e.g. this is the function I use for that:
-- getting the sql code which fired the current event, as a clob
function getEventSQLtext
  return clob
is
  sqllob   clob;
  sql_text ora_name_list_t;
  dummy    integer;
begin
  -- ora_sql_txt fills sql_text with the statement split into chunks
  -- and returns the number of chunks
  dummy := ora_sql_txt(sql_text);
  dbms_lob.createtemporary(sqllob, false);
  -- stitch the chunks back together into a single clob
  for i in 1 .. sql_text.count loop
    dbms_lob.writeappend(sqllob, length(sql_text(i)), sql_text(i));
  end loop;
  if dummy is null then null; end if; -- removing warning about the unused variable :)
  return sqllob;
end;
This will be a query which is generated by hibernate and this is the only information you can get because this should be the only thing hibernate can do with DB.
It turns out the who_called_me approach works better for stored procedure calls, where the stack trace can point to exactly which line invoked a DML. In the case of Hibernate, the code may not call a stored procedure at all but instead issue individual DMLs based on certain conditions. As opposed to the answer given by @simon, the ora_sql_txt function may only work in system event triggers (or I may be wrong); either way, it is not capable of capturing the SQL statement issued by Hibernate (I tested it: it does not work and returns a NULL value).
So, at the end of the day, DB trace files and Hibernate debug-level logs are the only way for now to find what SQL Hibernate is issuing.
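On the database side, a session-level SQL trace is one way to capture exactly what Hibernate sends; for example, with DBMS_MONITOR (the SID and SERIAL# below are placeholders you'd first look up in v$session):
begin
  dbms_monitor.session_trace_enable(
    session_id => 123,
    serial_num => 4567,
    waits      => true,
    binds      => true);
end;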
I've been testing Oracle AQ for the first time. I have managed to create 2000 rows of test inserts into the queue I created.
Now, I'd like to clear those out. As I was teaching myself, I set the expiry time to be a month. I can't wait that long. And I don't think I should just delete them from the queue table.
What's the best way to do this?
You can use the DBMS_AQADM.PURGE_QUEUE_TABLE procedure.
SOLUTION
The SQL looks something like this:
-- purge queue
DECLARE
po_t dbms_aqadm.aq$_purge_options_t;
BEGIN
dbms_aqadm.purge_queue_table('MY_QUEUE_TABLE', NULL, po_t);
END;
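If the queue table holds more than one queue and you only want to clear one of them, the same procedure accepts a purge condition; the queue name below is a placeholder:
DECLARE
  po_t dbms_aqadm.aq$_purge_options_t;
BEGIN
  dbms_aqadm.purge_queue_table(
    queue_table     => 'MY_QUEUE_TABLE',
    purge_condition => q'[qtview.queue = 'MY_SCHEMA.MY_QUEUE']',
    purge_options   => po_t);
END;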
Just do a delete on the queue table.
Never mind, just did a check and that's not right:
Oracle Streams AQ does not support data manipulation language (DML) operations on queue tables or associated index-organized tables (IOTs), if any. The only supported means of modifying queue tables is through the supplied APIs. Queue tables and IOTs can become inconsistent and therefore effectively ruined, if DML operations are performed on them.
So, you'll have to create a little PL/SQL routine to pull the items off.
Use the dbms_aq package. Check the example from the documentation: Dequeuing Messages.
Scroll down a little bit and there's a complete example.
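A bare-bones dequeue loop might look like the sketch below; the queue name and payload type are placeholders and need to match your own setup:
DECLARE
  deq_options dbms_aq.dequeue_options_t;
  msg_props   dbms_aq.message_properties_t;
  payload     my_payload_type;  -- the object type the queue was created with
  msg_id      RAW(16);
BEGIN
  -- don't block when the queue is empty
  deq_options.wait := dbms_aq.no_wait;
  LOOP
    dbms_aq.dequeue(
      queue_name         => 'MY_QUEUE',
      dequeue_options    => deq_options,
      message_properties => msg_props,
      payload            => payload,
      msgid              => msg_id);
  END LOOP;
EXCEPTION
  WHEN OTHERS THEN
    -- ORA-25228 is raised once there are no more messages with NO_WAIT
    IF SQLCODE = -25228 THEN
      COMMIT;
    ELSE
      RAISE;
    END IF;
END;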
I have a situation where I have an Oracle procedure that is being called from at least 3 or 4 different places. I need to be able to call custom code depending on some data. The custom code is customer-specific - so customer A might want to do A-B-C, customer B might want to do 6-7-8, and customer C doesn't need to do anything extra. When customers D...Z come along, I don't want to have to modify my existing procedure.
I'd like to be able to enter the customer-specific procedure name into a table. In the existing procedure, check that database table to see whether a custom-code procedure exists and, if so, execute it. Each of the customer-specific procedures would have the same parameters.
For instance:
My application (3+ places) calls this "delete" procedure
In this delete procedure, look up the name of a child-procedure to call (if one exists at all)
If one exists, execute that delete procedure (passing the parameters in)
I know I can do this by building a string that contains the call to the stored procedure. But I'd like to know if Oracle 10g has anything built in for doing this kind of thing?
Does each of your customers have their own database? If so, the best option would be to use conditional compilation. This has the advantage of not requiring dynamic SQL. Have the main program always call the custom procedure, and use CC flags to vary the code it contains.
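A rough sketch of that idea, with an invented flag and procedure names, could look like this:
alter session set plsql_ccflags = 'customer_a:true';

create or replace procedure delete_thing (p_id in number) is
begin
  -- common deletion logic goes here
  $if $$customer_a $then
    custom_delete_for_a(p_id);  -- compiled in only on customer A's database
  $end
  null;
end;
On databases where the flag is not set, the call to custom_delete_for_a is never compiled, so that procedure does not even need to exist there.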
Otherwise, Oracle does have a Rule Engine but it is not really intended for our use.
The final solution that we went with was to store the name of the procedure in a database table. We then build the SQL call and execute it with EXECUTE IMMEDIATE.
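For what it's worth, a stripped-down sketch of that approach (the look-up table and names are invented for the example) is:
declare
  l_proc_name varchar2(128);
  l_customer  number := 42;  -- placeholder customer id
begin
  select proc_name
  into   l_proc_name
  from   custom_procs        -- hypothetical look-up table
  where  customer_id = l_customer;

  execute immediate 'begin ' || l_proc_name || '(:p_id); end;' using 101;
exception
  when no_data_found then
    null;  -- no custom procedure registered for this customer
end;
Since the procedure name ends up concatenated into dynamic SQL, validating it with dbms_assert.sql_object_name before the call is a sensible precaution.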
I agree with APC's answer; just to expand on it: in this white paper, if you look for "Component based installation", it describes a similar problem solved using conditional compilation.
Your solution seems reasonable given the requirements, so I voted it up.
Another option would be to loop through the results from your table look-up and put calls to the procedures inside a big case statement. It would be more code, but it would have the advantage of making the dependency chain visible so you could more easily catch missing permissions and invalid procedures.