I have a procedure in Oracle that runs in about 40 minutes when run directly in Oracle.
I have a pass-through query in MS Access that looks like this:
Begin
MyProcedure;
End;
This exact code runs just fine in Oracle, but hangs in MS Access. I don't know if it will ever finish; it's been going for 6 hours already, and I guess it doesn't matter whether it finishes or not, because this is unacceptable.
Can someone explain what the difference is between running it from Oracle and from MS Access, and how I can fix this?
I presume MyProcedure is an Oracle stored procedure.
If that's so, I suggest you add logging to it. How? Create an autonomous transaction procedure (so that it can insert logging information into a log table and commit) and call it from MyProcedure, for example before every statement it contains (some nasty selects, updates, whatever). Doing so, you'd be able to trace MyProcedure's execution and see what takes that much time.
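A minimal sketch of such a logging procedure, assuming made-up names (my_log, log_it):

create table my_log (ts timestamp, message varchar2(4000));

create or replace procedure log_it (p_message in varchar2) is
  pragma autonomous_transaction;  -- lets the procedure commit without affecting the caller
begin
  insert into my_log (ts, message) values (systimestamp, p_message);
  commit;                         -- the row becomes visible to other sessions immediately
end;

Call log_it('before update of X') and similar before and after each statement in MyProcedure, then query my_log from another session while the Access call is running.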
Apart from that, check whether there are uncommitted (or not yet rolled back) transactions in other sessions that hold certain rows (or tables) locked, so that - when you called MyProcedure - it has to wait for another session to commit (or roll back) before it can continue its execution.
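If you have SELECT privilege on the dynamic performance views, something like this shows who is waiting on whom:

select sid, serial#, blocking_session, event, seconds_in_wait
from v$session
where blocking_session is not null;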
Related
I'm facing a problem with an Oracle job.
This job runs every 10 minutes and calls a procedure from a package.
Inside the procedure, there is a select and then a loop.
The select could return from 10 to 1000 rows
For one week everything was running fine, but suddenly it is as if the job is no longer calling the procedure.
It runs successfully every 10 minutes, but the procedure is not affecting any rows.
I run the procedure on its own and it works properly.
The DBMS Scheduler run details are not showing anything; every run was successful. The only difference is that before the problem the run duration was 5 to 30 seconds, and after the problem the duration is just one second.
Do you know what else to look at?
Log what's going on within the procedure. How? Create an autonomous transaction procedure which inserts log info into a separate table and commits; as it is an autonomous transaction procedure, that commit won't affect the rest of the transaction (i.e. the main procedure itself).
Log every step of the procedure and then review the result. There's probably something going on, but it is difficult to guess what. One option might be that you used the
exception
when others then null;
exception handler which successfully hides the problem.
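If that's the case, a safer pattern is to log the error and re-raise it, so failures become visible again (log_error here stands for whatever autonomous-transaction logging procedure you create):

exception
  when others then
    log_error(sqlerrm);  -- hypothetical autonomous-transaction logger
    raise;               -- re-raise so the job run actually reports the failure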
I have an Oracle Stored Procedure that does some inserts and updates on a table in DB.
There is no explicit Commit or Rollback statement at the end of the procedure.
However, when I call this SP through a Java class, I see that the inserts and updates are committed to the DB.
So can anyone help me understand if we really need a commit statement at the end of the stored procedure in Oracle?
I don't have much Java experience, but as far as I know, when you close the database connection the data is committed (unless you roll it back first). Now, to get back to your question of when to use COMMIT in a stored procedure:
When the procedure performs DML (INSERT, UPDATE, DELETE) on a table, the affected rows are locked; any other session that tries to modify those same rows has to wait until you commit or roll back. So if your procedure takes a long time, because of a long loop or a badly optimized query, other users can end up blocked. Committing releases those locks sooner.
The other reason is the undo tablespace: everything that is not yet committed keeps its undo there until you commit, so if, for example, you insert a lot of data (millions of rows), your undo might fill up, depending on its size, and you'll get an error.
So, the short answer: if your procedure doesn't perform many operations on big tables and it is fast, you can skip the commit; otherwise it is better to add commits.
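As an illustration, a sketch of a procedure with an explicit commit (the table is hypothetical):

create or replace procedure update_salaries is
begin
  update employees              -- hypothetical table
  set salary = salary * 1.1;
  -- until the commit below, the updated rows stay locked and their undo stays active
  commit;                       -- releases the row locks and lets the undo be reused
end;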
Is there a way to retrieve output from PL/SQL continuously, rather than waiting until the SP completes its execution? By "continuously" I mean as it executes each EXECUTE IMMEDIATE.
Is there any other mechanism to retrieve PL/SQL output?
As per Oracle docs
Output that you create using PUT or PUT_LINE is buffered in the SGA. The output cannot be retrieved until the PL/SQL program unit from which it was buffered returns to its caller. So, for example, Enterprise Manager or SQL*Plus do not display DBMS_OUTPUT messages until the PL/SQL program completes.
As far as I know, there is a way, but not with DBMS_OUTPUT.PUT_LINE. The technique I use is:
create a log table which will accept the values you'd normally display using DBMS_OUTPUT.PUT_LINE. The columns I use are:
ID (a sequence, to be able to sort the data)
Date (to know what happened when; it might not be enough for sorting purposes, because operations that take a very short time to finish might share the same timestamp)
Message (a VARCHAR2 column, large enough to accept the whole information)
create a logging procedure which will insert values into that table. It should be an autonomous transaction so that you can COMMIT within it (and make the data visible to other sessions) without affecting the main transaction (a full sketch follows the next list)
Doing so, you'd
start your PL/SQL procedure
call the logging procedure whenever appropriate (basically, where you'd put the DBMS_OUTPUT.PUT_LINE call)
in another session, periodically query the log table as select * from log_table order by ID desc
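Here is a minimal sketch of the whole setup, with made-up names (log_seq, log_table with id, datum, and message columns, p_log):

create sequence log_seq;

create table log_table
  (id      number,
   datum   timestamp,
   message varchar2(4000));

create or replace procedure p_log (p_message in varchar2) is
  pragma autonomous_transaction;  -- commits independently of the main transaction
begin
  insert into log_table (id, datum, message)
  values (log_seq.nextval, systimestamp, p_message);
  commit;
end;

-- in the main procedure, wherever you'd have used dbms_output.put_line:
--   p_log('loop iteration ' || i || ' done');

-- in another session, while the main procedure runs:
--   select * from log_table order by id desc;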
Additionally, you could write a simple Apex application with one report page which selects from the logging table and refreshes periodically (for example, every 10 seconds or so), letting you watch the main PL/SQL procedure's execution.
The approach that Littlefoot has provided is what I normally use as well.
However, there is another approach that you can try for a specific use case. Let's say you have a long-running batch job (like a payroll process for example). You do not wish to be tied down in front of the screen monitoring the progress. But you want to know as soon as the processing of any of the rows of data hits an error so that you can take action or inform a relevant team. In this case, you could add code to send out emails with all the information from the database as soon as the processing of a row hits an error (or meets any condition you specify).
You can do this using the functions and procedures provided in the 'UTL_MAIL' package. UTL_MAIL Documentation from Oracle
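A sketch of what that might look like inside the per-row error handler; UTL_MAIL must be installed and the SMTP_OUT_SERVER parameter set, and the addresses and l_row_id variable are placeholders:

begin
  -- ... process one row ...
  null;
exception
  when others then
    utl_mail.send(
      sender     => 'batch@example.com',
      recipients => 'payroll-team@example.com',
      subject    => 'Batch row failed',
      message    => 'Row ' || l_row_id || ' failed with: ' || sqlerrm);
    -- then decide: continue with the next row, or re-raise
end;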
For monitoring progress without the overhead of logging to tables and autonomous transactions, I use:
DBMS_APPLICATION_INFO.SET_CLIENT_INFO( TO_CHAR(SYSDATE, 'HH24:MI:SS') || ' On step A' );
and then monitor v$session.client_info for your session. It's all in memory and won't persist, of course, but it is a quick, easy, near-zero-cost way of posting progress.
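Put together, a short sketch (the step names are placeholders):

begin
  dbms_application_info.set_client_info(
    to_char(sysdate, 'HH24:MI:SS') || ' starting step A');
  -- ... step A ...
  dbms_application_info.set_client_info(
    to_char(sysdate, 'HH24:MI:SS') || ' starting step B');
  -- ... step B ...
end;

-- from another session:
select sid, client_info
from v$session
where client_info is not null;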
Another option (on Linux/UNIX) for centralised logging that is persistent, and that again avoids logging in the database, is interfacing to syslog and having Splunk or similar pick the messages up. If you have Splunk or similar, this makes the monitoring viewable without having to connect to the database directly. See this post for how to do it:
https://community.oracle.com/thread/2343125
I'm working on a stored procedure. Inside this one there are many calls to other stored procedures; there are a bunch of them.
I was wondering if there is an option to get the execution time of every stored procedure and every function involved (with a start and end time, or something like that).
The idea is that I need to optimise it and I may have to touch every part, and since I'm not sure where the longest execution time is, it's a bit difficult. And after a modification I would like to see whether the whole process got shorter or not.
If I call the procedure from Unix, using SQL*Plus, I have no log.
If I call it from TOAD, it's blocked until the end.
Any idea?
I'm not a dba, so I don't have many rights on the database, I'm just a regular user.
If you are using Oracle 11g, you should check out the built-in Hierarchical Profiler, DBMS_HPROF. It does pretty much exactly what you're proposing to do. Unfortunately, rights on DBMS_HPROF are not granted to PUBLIC by default, so you'll need to ask your DBA to grant you the EXECUTE privilege. As it's to help you with tuning, I'm sure they'll be only too happy to comply.
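A rough sketch of a profiling session, assuming a directory object named PROF_DIR exists and you can write to it, and that the analysis tables have been created (they come from ?/rdbms/admin/dbmshptab.sql):

begin
  dbms_hprof.start_profiling(location => 'PROF_DIR', filename => 'main_proc.trc');
end;

begin
  main_procedure;  -- whatever you want to profile
end;

begin
  dbms_hprof.stop_profiling;
end;

declare
  l_runid number;
begin
  l_runid := dbms_hprof.analyze(location => 'PROF_DIR', filename => 'main_proc.trc');
  dbms_output.put_line('runid = ' || l_runid);
end;

-- dbmshp_function_info then shows elapsed time per subprogram for that runid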
I have seen a logging procedure that was transaction-independent (PRAGMA AUTONOMOUS_TRANSACTION;) and was called from the main procedure. It saved the following in a funtime_log table:
current time (wall clock),
sequential number,
thread (session) id,
and text (e.g. the name of the procedure)
This way you can select all events from one session, ordered by sequential number, and see where the time differs most. In a production environment you can simply make this procedure do nothing to disable logging.
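Reviewing a run then boils down to something like this (the column names are assumed to match the list above):

select seq_no, wall_clock, text
from funtime_log
where session_id = 42  -- the session you're interested in
order by seq_no;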
The trigger below is delaying my insert response. How can I prevent this?
create or replace
TRIGGER GETHTTPONINSERT
BEFORE INSERT ON TABLENAME
FOR EACH ROW
Declare
--
BEGIN
-- The inserted data is transferred via HTTP to a remote location
END;
EDIT: People are telling me to do batch jobs, but I would rather have the data earlier than have 100% consistency. The advantage of the trigger is that it fires as soon as the data arrives, but I can't afford the insert response delay.
One approach is to have the trigger create a dbms_job that runs once (each time) to perform the HTTP transfer. The dbms_job creation is relatively quick, and you can think of this as effectively spawning a new thread in parallel.
See http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:7267435205059 for further info - his example deals with sending email, but the idea is the same.
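A sketch of the idea; transfer_via_http is a placeholder for your own procedure that does the HTTP call, and :new.id assumes the table has an id column:

create or replace trigger gethttponinsert
before insert on tablename
for each row
declare
  l_job binary_integer;
begin
  -- the job is submitted as part of the current transaction: it only
  -- becomes runnable when the insert commits, and disappears on rollback
  dbms_job.submit(
    job  => l_job,
    what => 'transfer_via_http(''' || :new.id || ''');');
end;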
There is a perfect solution for this exact situation called Database Change Notification.
You can think of it almost exactly like an asynchronous trigger.
You use the DBMS_Change_Notification package to tell Oracle which tables to watch and what to do when a change occurs. You can monitor for DML and DDL, and you can have Oracle batch the changes (i.e. wait for 10 changes to occur before firing). It will call a stored procedure with an object containing all the rowids of the changed rows... you can decide how to handle them, including calling HTTP. It does not have to finish for the insert to commit.
Documentation for 10gR2
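Very roughly, a registration looks like this in 10gR2/11g; my_callback and tablename are placeholders, and the exact types and constants are in the DBMS_CHANGE_NOTIFICATION documentation:

-- the callback Oracle invokes asynchronously after changes commit
create or replace procedure my_callback (ntfnds in sys.chnf$_desc) is
begin
  -- inspect ntfnds (tables, rowids, operation) and, for example, post via HTTP
  null;
end;

declare
  l_reg   sys.chnf$_reg_info;
  l_regid number;
begin
  l_reg := sys.chnf$_reg_info(
             'my_callback',
             dbms_change_notification.qos_reliable
               + dbms_change_notification.qos_rowids,
             0, 0, 0);
  l_regid := dbms_change_notification.new_reg_start(l_reg);
  -- every table queried before reg_end becomes part of the registration
  for r in (select rowid from tablename) loop
    null;
  end loop;
  dbms_change_notification.reg_end;
end;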
Maybe you could create a local table that stores the information you have to transfer, and create a job that executes every X minutes. The job reads from the table, transfers all the data, and deletes the transferred data from the table.
Isn't it possible to use the Oracle replication options? You send your inserted data via HTTP to a remote location in an after- or before-statement trigger. What will happen when there is a rollback? Your HTTP message will not be rolled back, so you end up with inconsistent data.
Well, obviously, you could prevent the delay by removing the trigger...
Otherwise, the trigger will ALWAYS be executed before your insert; that's what BEFORE INSERT triggers are made for.
Or maybe you could give us more details on what exactly you need?
If you are getting to this question after 2020, look at DBMS_CQ_NOTIFICATION:
https://docs.oracle.com/en/database/oracle/oracle-database/19/arpls/DBMS_CQ_NOTIFICATION.html