Strange Oracle job behavior

I face a problem with an Oracle job.
The job runs every 10 minutes and calls a procedure from a package.
Inside the procedure there is a select and then a loop.
The select can return anywhere from 10 to 1000 rows.
For one week everything ran fine, but suddenly it is as if the job is no longer calling the procedure.
It still runs successfully every 10 minutes, but the procedure is not affecting the rows.
If I run the procedure on its own, it works properly.
The DBMS Scheduler run details don't show anything unusual; every run was successful. The only difference is that before the problem the run duration was 5 to 30 seconds, and since the problem it is just one second.
Do you know what else to look at?

Log what's going on within the procedure. How? Create an autonomous transaction procedure which inserts log info into a separate table and commits; as it is an autonomous transaction procedure, that commit won't affect the rest of the transaction (i.e. the main procedure itself).
Log every step of the procedure and then review the result. Something is probably going on, but it is difficult to guess what. One option might be that you used the
exception
when others then null;
exception handler which successfully hides the problem.
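For illustration, a minimal sketch of such an autonomous-transaction logger, assuming a log table and procedure with made-up names (PROC_LOG and LOG_MSG):

CREATE TABLE proc_log (
    log_time TIMESTAMP DEFAULT systimestamp,
    msg      VARCHAR2(4000)
);

CREATE OR REPLACE PROCEDURE log_msg (p_msg IN VARCHAR2)
AS
    PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
    INSERT INTO proc_log (msg) VALUES (p_msg);
    COMMIT;  -- commits only this autonomous transaction, not the caller's work
END;
/

Call log_msg before the select, inside the loop (e.g. log_msg('processing id ' || l_id)), and after the loop, then review PROC_LOG after the next scheduled run to see how far the procedure actually gets.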

Related

Submitted Oracle job using dbms_job.submit and it failed but I don't know where to look for an error message

We are initiating the rebuilding of many materialized views by using dbms_job.submit to execute a stored procedure that performs the rebuilding. However, I am having trouble figuring out how to determine whether a submitted job failed. The issue is that the job is failing but I cannot identify why, so I am starting with a simple example, which is probably failing on a permission issue, but I don't know where to look for the error message.
I have the following test procedure that I want to initiate using dbms_job.submit.
CREATE OR REPLACE PROCEDURE MYLANID.JUNKPROC
AS
lv_msg varchar2(3000);
BEGIN
INSERT INTO MYLANID.junk_log ( msg ) VALUES ('Hello World' );
commit;
EXCEPTION
WHEN OTHERS THEN
lv_msg := SUBSTR(sqlerrm, 1, 3000);
INSERT INTO MYLANID.junk_log ( msg ) VALUES (lv_msg);
END;
/
Note that the procedure above uses this table:
CREATE TABLE MYLANID.JUNK_LOG (
EVENT_TIME TIMESTAMP(6) DEFAULT systimestamp,
MSG VARCHAR2(3000 BYTE))
To submit the above procedure as a job, I execute the following anonymous block.
declare l_jobid binary_integer;
BEGIN
dbms_job.submit(job => l_jobid, what => 'BEGIN MYLANID.JUNKPROC; END;');
DBMS_OUTPUT.PUT_LINE('l_jobid:' || l_jobid);
commit;
END;
I then execute the following SQL...
select * from all_jobs;
...to see one record that represents my submitted job. When I re-query the all_jobs view, I see that this record quickly disappears from the view within a few seconds, presumably when the job completes. All is happy so far. I would like to use the presence of a record in the all_jobs view to determine whether a submitted job is running or has failed. I expect to be able to tell if it failed by looking at the ALL_JOBS.FAILURES column having a non null value > 0.
The problem, probably a permission issue, begins when I switch to another schema: I take all of the occurrences of "MYLANID" in the SQL above and replace them with "ANOTHERSCHEMA", a schema I also have access to. For example, I create the following:
Table: ANOTHERSCHEMA.JUNK_LOG
Procedure: ANOTHERSCHEMA.JUNKPROC
I am even able to execute the stored procedure successfully in a query window while logged in as MYLANID:
EXEC ANOTHERSCHEMA.JUNKPROC
However, if I execute the following code to submit a job that involves running the same ANOTHERSCHEMA procedure but by submitting it as a JOB...
declare l_jobid binary_integer;
BEGIN
dbms_job.submit(job => l_jobid, what => 'BEGIN ANOTHERSCHEMA.JUNKPROC; END;');
DBMS_OUTPUT.PUT_LINE('l_jobid:' || l_jobid);
commit;
END;
...then, when I query the jobs ALL_JOBS view...
select * from all_jobs;
...I see that the job has a positive value in the FAILURES column, and I have no record of what the error was. This failure count continues to gradually increment over time as Oracle presumably retries (up to 16 times?) until the job is marked BROKEN in the ALL_JOBS view.
But this is just a simple example, and I don't know where to look for the error message that would tell me why the job using the ANOTHERSCHEMA references failed.
Where do I look for the error log of failed jobs? I'm wondering if this will be somewhere only the DBA can see...
Update:
The above is just a simple test example. In my actual real-world situation, my log shows that the job was submitted, but I never see anything in USER_JOBS or even DBA_JOBS, which should show everything. I don't understand why dbms_job.submit would return the job number of the submitted job, indicating that it was submitted, yet no job is visible in the DBA_JOBS view! The job that I submitted should have taken a long time to run, so I don't expect that it completed faster than I could notice.
First off, you probably shouldn't be using dbms_job. That package has been superseded for some time by the dbms_scheduler package which is significantly more powerful and flexible. If you are using Oracle 19c or later, Oracle automatically migrates dbms_job jobs to dbms_scheduler.
If you are using an Oracle version prior to 19c and a dbms_job job fails, the error information is written to the database alert log. That tends to be a bit of a pain to query from SQL particularly if you're not a DBA. You can define an external table that reads the alert log to make it queryable. Assuming you're on 11g, there is a view, x$dbgalertext, that presents the alert log information in a way that you can query it but DBAs generally aren't going to rush to give users permission on x$ tables.
If you use dbms_scheduler instead (or if you are on 19c or later and your dbms_job jobs get converted to dbms_scheduler jobs), errors are written to dba_scheduler_job_run_details. dbms_scheduler in general gives you a lot more logging information than dbms_job does so you can see things like the history of successful runs without needing to add a bunch of instrumentation code to your procedures.
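As a sketch of that approach (the job name here is made up), the same test could be scheduled through dbms_scheduler and its outcome checked in the run-details view:

BEGIN
    DBMS_SCHEDULER.CREATE_JOB(
        job_name   => 'JUNKPROC_JOB',
        job_type   => 'PLSQL_BLOCK',
        job_action => 'BEGIN ANOTHERSCHEMA.JUNKPROC; END;',
        enabled    => TRUE,
        auto_drop  => TRUE);
END;
/

-- after the run, any error text shows up here
SELECT log_date, status, error#, additional_info
  FROM user_scheduler_job_run_details
 WHERE job_name = 'JUNKPROC_JOB'
 ORDER BY log_date DESC;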

MS Access pass-through query that calls a procedure hangs

I have a procedure in Oracle that takes about 40 minutes when run from Oracle.
I have a pass-through query in MS Access that looks like this:
Begin
MyProcedure;
End;
This exact code runs fine in Oracle but hangs in MS Access. I don't know if it will ever finish; it has been going for 6 hours already, and I guess it doesn't matter whether it finishes or not, because this is unacceptable.
Can someone explain what the difference is between running it from Oracle and from MS Access, and how I can fix this?
I presume MyProcedure is an Oracle stored procedure.
If that's so, I suggest you include logging into it. How? Create an autonomous transaction procedure (so that it could insert logging information into some log table and commit) and call it from MyProcedure, for example before every statement it contains (some nasty selects, updates, whatever). Doing so, you'd be able to trace MyProcedure's execution and see what takes that much time.
Apart from that, see whether there are uncommitted (or rolled back) transactions that hold certain rows (tables?) locked, so that when you call MyProcedure it waits for another session to commit (or roll back) before it can continue its execution.
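One quick way to check for that while MyProcedure appears to hang (a sketch; it requires access to V$SESSION, which a DBA may have to grant) is to look for blocked sessions:

SELECT sid, serial#, blocking_session, event, seconds_in_wait
  FROM v$session
 WHERE blocking_session IS NOT NULL;

If the session running MyProcedure shows up with a non-null blocking_session, the problem is a lock held by that other session rather than anything MS Access does.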

Procedure to insert records asynchronously

I have a package where I am calling a procedure to insert records into a table, and I am calling this procedure twice with an interval of 2 minutes using sys.DBMS_LOCK.sleep (<>);
The problem I am facing is that the calling form in my application stays open until the insertion completes.
How can I make sure that when I submit my page, the page closes and the insertion happens in the background as some kind of asynchronous call? Are there any asynchronous keywords in database procedures to do this kind of activity?
Thanks
Update
putData(empNo,EmpName);
sys.DBMS_LOCK.sleep (<>);
putData(empNo,EmpName);
Due to the above, my page stays open until the second procedure finishes. I would like to close the page as soon as the first procedure finishes, or when the user submits the page.
Update 2
DBMS_JOB.SUBMIT(ln_dummy, 'begin putData('||empNo,EmpName||'); end;');
gives me a compilation error: wrong number of arguments in call to SUBMIT.
How can I resolve this?
Your PutData() procedure is expecting two parameters. You might think you're passing two parameters but you're not. Also, if EmpName is a string - which seems likely - you'll need to wrap it in escaped quotes. Basically you're writing dynamic SQL here, which is always tricky.
Try this:
DBMS_JOB.SUBMIT(ln_dummy, 'begin putData('||empNo||','''||EmpName||'''); end;');
"Other problem is how to run the these jobs at a interval of 10
minutes"
SUBMIT() can take an INTERVAL parameter. It's in the documentation for DBMS_JOB. Find out more.
However, if you want each iteration to work with different parameter values, you probably need to rethink your application design. You should have a procedure which polls a table for values to process.
Or use a queue. It depends on what you're really trying to achieve.
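Putting those two ideas together, a minimal sketch of a repeating DBMS_JOB submission (the polling procedure name, process_pending_rows, is made up; in this design it reads the values to process from a table instead of taking parameters):

DECLARE
    ln_job BINARY_INTEGER;
BEGIN
    DBMS_JOB.SUBMIT(
        job       => ln_job,
        what      => 'begin process_pending_rows; end;',
        next_date => SYSDATE,
        interval  => 'SYSDATE + 10/1440');  -- run again roughly every 10 minutes
    COMMIT;  -- DBMS_JOB jobs only start running after a commit
END;
/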

Oracle PL/SQL: a scheduled procedure, leading to firing of a trigger?

Okay, I'm new to Oracle PL/SQL and I've stumbled across a problem that I cannot figure out.
I have a procedure that transfers data from one table to another, and a trigger that fires on insert into the second table. I scheduled that procedure to run every minute (for testing; it would be daily once I've figured it out) using DBMS_JOB.SUBMIT. The scheduled part works perfectly; however, when the procedure runs from the job, the trigger does not fire. I tried with both BEFORE and AFTER INSERT clauses, but it is still not working. If I call the procedure directly, it works and it fires the trigger just fine. So... I'm already wondering whether a scheduled procedure can fire the trigger at all?!
This is the schedule's code:
DECLARE
VJOBN BINARY_INTEGER;
BEGIN
DBMS_JOB.SUBMIT(
JOB => VJOBN,
INTERVAL => 'SYSDATE + 1/2880',
WHAT => 'BEGIN my_procedure(); END;'
);
END;
create or replace TRIGGER TO_PRJ
AFTER INSERT ON PROJECTS
FOR EACH ROW
BEGIN
IF INSERTING
THEN DBMS_OUTPUT.PUT_LINE('INSERTED PROJECT WITH ID: '||:NEW.PROJECT_ID);
END IF;
END;
Table PROJECTS has ID number, name varchar2, and some other that are not important.
The procedure transfers the ID and the name from orders to projects.
P.S. I'm using http://apex.oracle.com and when I get the timestamp from it the time is actually 6 hours behind me - not sure if it can be of any significance...
DBMS_OUTPUT and DBMS_JOB do not work the way you are trying to use them. The scheduled job is probably running and the trigger is firing, but since DBMS_OUTPUT needs to be activated in the session that executes the DBMS_OUTPUT commands (i.e. the internal session used by DBMS_JOB), you will never see any output.
DBMS_OUTPUT's output is not visible across sessions, so the session that issues the DBMS_JOB.SUBMIT command will NOT receive the output, even if DBMS_OUTPUT is activated for that session.
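If you want to confirm that the trigger really fires from the job, one option (a sketch, with a made-up log table name) is to have the trigger write to a table instead of relying on DBMS_OUTPUT:

CREATE TABLE trigger_log (
    log_time TIMESTAMP DEFAULT systimestamp,
    msg      VARCHAR2(4000)
);

create or replace TRIGGER TO_PRJ
AFTER INSERT ON PROJECTS
FOR EACH ROW
BEGIN
    INSERT INTO trigger_log (msg)
    VALUES ('INSERTED PROJECT WITH ID: ' || :NEW.PROJECT_ID);
END;
/

After the next scheduled run, SELECT * FROM trigger_log shows whether (and when) the trigger fired.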
Try using the scheduler (DBMS_SCHEDULER); it's much better than jobs. Also, post the code of the trigger and the tables here; it may help.

How do I let my DBA pause and resume a stored procedure that is updating every row in a large table?

I have a table of about a million rows and I need to update every row in the table with the result of a lengthy calculation (the calculation gets a potentially different result for each row). Because it is time consuming, the DBA must be able to control execution. This particular calculation needs to be run once a year (it does a year-end summary). I wanted to create a job using DBMS_SCHEDULER.CREATE_JOB that would grab 100 rows from the table, update them and then stop; the next execution of the job would then pick up where the prior execution left off.
My first thought was to include this code at the end of my stored procedure:
-- update 100 rows, storing the primary key of the last
-- updated row in last_id
-- make a new job that will run in about a minute and will
-- start from the primary key value just after last_id
dbms_scheduler.create_job
( job_name=>'yearly_summary'
, job_type=>'STORED_PROCEDURE'
, job_action=>'yearly_summary_proc(' || last_id || ')'
, start_date=>CURRENT_TIMESTAMP + 1/24/60
, enabled=>TRUE
);
But I get this error when the stored procedure runs:
ORA-27486: insufficient privileges
ORA-06512: at "SYS.DBMS_ISCHED", line 99
ORA-06512: at "SYS.DBMS_SCHEDULER", line 262
ORA-06512: at "JBUI.YEARLY_SUMMARY_PROC", line 37
ORA-06512: at line 1
Suggestions for other ways to do this are welcome. I'd prefer to use DBMS_SCHEDULER and I'd prefer not to have to create any tables; that's why I'm passing in the last_id to the stored procedure.
I would tend to be wary about using jobs like this to control execution. Either the delay between successive jobs would tend to be too short for the DBA to figure out which job to kill/pause/etc., or the delay would be long enough that a significant fraction of the run time would be spent in delays between successive jobs.
Without creating any new objects, you can use the DBMS_ALERT package to allow your DBA to send an alert that pauses the job. Your code could call the DBMS_ALERT.WAITONE method every hundred rows to check whether the DBA has signaled a particular alert (i.e. the PAUSE_YEAREND_JOB alert). If no alert was received, the code could continue on. If an alert was received, you could pause the code either until another alert (i.e. RESUME_YEAREND_JOB) was received or a fixed period of time or based on the message the DBA sent with the PAUSE_YEAREND_JOB alert (i.e. the message could be a number of seconds to pause or a date to pause until, etc.)
Of course, you could do the same thing by creating a new table, having the DBA write a row to the table to pause the job, and reading from the table every N rows.
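A rough sketch of the DBMS_ALERT approach (the alert name and the pause convention are made up here, and DBMS_ALERT/DBMS_LOCK require EXECUTE grants from the DBA):

-- Once, at the start of the year-end procedure:
BEGIN
    DBMS_ALERT.REGISTER('PAUSE_YEAREND_JOB');
END;
/

-- Inside the processing loop, every hundred rows or so:
DECLARE
    l_message VARCHAR2(1800);
    l_status  INTEGER;
BEGIN
    DBMS_ALERT.WAITONE('PAUSE_YEAREND_JOB', l_message, l_status, timeout => 0);  -- poll, don't block
    IF l_status = 0 THEN
        DBMS_LOCK.SLEEP(TO_NUMBER(l_message));  -- treat the alert message as seconds to pause
    END IF;
END;
/

-- From the DBA's session, to pause the job:
BEGIN
    DBMS_ALERT.SIGNAL('PAUSE_YEAREND_JOB', '600');  -- e.g. pause for 600 seconds
    COMMIT;  -- the signal is only delivered on commit
END;
/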
Another avenue to explore would be DBMS_SCHEDULER's support for execution windows and resource plans.
http://www.oracle-base.com/articles/10g/Scheduler10g.php
and also:
http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14231/schedover.htm#sthref3501
With windows and resource plans your DBA can simply configure the system so that your procedure obeys certain rules, including a job window and a cap on resource usage (e.g. CPU).
This way the procedure can run once a year, and CPU usage can be controlled.
This though may not provide the manual control your DBA would like.
Another idea would be to write your procedure to process all records but commit every 1000 or so. Your DBA could stop the job (e.g. with DBMS_SCHEDULER.STOP_JOB) when they want it to stop, and then resume it (by rescheduling or rerunning it) when they're ready to go. The trick is that the procedure needs to keep track of which rows have been processed, e.g. using a 'processed_date' column, or a separate table listing primary keys and processed dates; see the sketch below.
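A minimal sketch of that restartable pattern, assuming a big table called YEAR_SUMMARY_ROWS with a PROCESSED_DATE column and a LENGTHY_CALCULATION function (the table, its columns, and the function are all made up for illustration):

CREATE OR REPLACE PROCEDURE yearly_summary_proc
AS
    l_count PLS_INTEGER := 0;
BEGIN
    FOR r IN (SELECT id
                FROM year_summary_rows
               WHERE processed_date IS NULL)
    LOOP
        UPDATE year_summary_rows
           SET summary_value  = lengthy_calculation(r.id),
               processed_date = SYSDATE
         WHERE id = r.id;

        l_count := l_count + 1;
        IF MOD(l_count, 1000) = 0 THEN
            COMMIT;  -- progress is saved; a stopped job can simply be rerun and resumes where it left off
        END IF;
    END LOOP;
    COMMIT;
END;
/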
In addition to the answer about DBMS_ALERT, your DBA would appreciate the ability to see where your stored procedure is up to. You should use the DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS functionality in Oracle to do this.
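A short sketch of what that might look like inside the update loop (the operation name and totals here are purely illustrative):

DECLARE
    l_rindex    BINARY_INTEGER := DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS_NOHINT;
    l_slno      BINARY_INTEGER;
    l_totalwork NUMBER := 1000000;  -- total rows to process
BEGIN
    FOR i IN 1 .. l_totalwork LOOP
        -- ... process one row here ...
        IF MOD(i, 1000) = 0 THEN
            DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS(
                rindex    => l_rindex,
                slno      => l_slno,
                op_name   => 'Year-end summary',
                sofar     => i,
                totalwork => l_totalwork,
                units     => 'rows');
        END IF;
    END LOOP;
END;
/

The DBA can then watch the SOFAR/TOTALWORK columns advance in V$SESSION_LONGOPS.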
