I have written a procedure and a job; the job runs the procedure. Here is the script that creates the job:
BEGIN
  DBMS_SCHEDULER.create_job (
    job_name        => 'IBPROD2.RUN_FETCH_ACCT_ALERTS',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'FETCH_ACCT_ALERTS',
    start_date      => sysdate,
    repeat_interval => 'FREQ=HOURLY;INTERVAL=2;',
    enabled         => TRUE,
    auto_drop       => FALSE);
END;
/
After creating the job I run the following command to get job details for owner IBPROD2, and I can see the failure_count column has a value of 1 for the RUN_FETCH_ACCT_ALERTS job.
There is no problem with the procedure FETCH_ACCT_ALERTS when I run it manually.
Can anyone help me understand why the job is failing? Am I missing something?
Query the ALL_SCHEDULER_JOB_RUN_DETAILS view (or perhaps the DBA equivalent).
select *
from   all_scheduler_job_run_details
where  owner = 'IBPROD2'
and    job_name = 'RUN_FETCH_ACCT_ALERTS'
You'll be particularly interested in the ERROR# column, which will give you an Oracle error number you can look up. Also, the ADDITIONAL_INFO column might have some, er, additional info.
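For example, to pull just the columns that matter for diagnosing the failure (assuming the job owner is IBPROD2, as in your script):
select log_date, status, error#, additional_info
from   all_scheduler_job_run_details
where  owner = 'IBPROD2'
and    job_name = 'RUN_FETCH_ACCT_ALERTS'
order  by log_date desc;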
The error code means this:
ORA-28179: client user name not provided by proxy
Cause: No user name was provided by the proxy user for the client user.
Action: Either specify a client database user name, a distinguished name or an X.509 certificate.
So it's something to do with your security setup. Authentication is failing for some reason. As I lack detailed knowledge of your architecture (and I'm not a security specialist) I'm not in a position to help you further.
Because I have already created many jobs that run different procedures under the same owner, and all of them are running successfully.
So in what way does this procedure differ from all the others?
Related
I'm fairly new to SQL and my team has been tasked with running a daily report which requires SQL data. Is there a way to use Oracle SQL Developer to automatically export a query to csv/excel on a daily basis?
The program Oracle SQL Developer itself won't automate it, but there are plenty of options to get it done.
Basically, some computer somewhere needs to have a scheduler that can wake it up and do the work. Some people use their own computers for this, using Windows scheduled tasks, cron jobs, or 3rd party programs.
But what "tasks" will your computer run? The task is basically to connect to the database, authenticate, send the query text, retrieve the results in memory, and then export them as a file. Lots of people use Python for this, because it can handle all of those steps.
You'll notice that we're already at 2 external "things" that could fail.
(1) your computer,
(2) your Python code.
A simpler option is the Oracle database itself. Is it on a remote server that's always running? If so, you might take advantage of that and have it do all the work.
You're looking at a few steps, but I think they are easier (you might have to get permissions from an admin, though).
Specify a file location - remember, this is on the server, not your computer
create or replace directory csv_dir as '/destination/for/results';
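If the schema that owns the procedure is not the one that created the directory, a DBA may also need to grant access to it (the schema name below is just a placeholder):
grant read, write on directory csv_dir to report_user;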
Create a stored procedure - acts like a container for your SQL logic.
create or replace procedure write_file is
  file_handle utl_file.file_type;
begin
  file_handle := utl_file.fopen('CSV_DIR', 'csv_filename.csv', 'w', 32767);
  for rws in (
    select * from t -- your query here
  ) loop
    utl_file.put_line(file_handle,
      rws.c1 || ',' || rws.c2 || ',' || rws.c3 -- your columns here
    );
  end loop;
  utl_file.fclose(file_handle);
end write_file;
/
Create a scheduler job - runs the stored procedure. This is a feature of your Oracle server.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name            => 'EXPORT_CSV_J',
    job_type            => 'PLSQL_BLOCK',
    job_action          => 'begin write_file; end;',
    number_of_arguments => 0,
    start_date          => NULL,
    repeat_interval     => 'FREQ=DAILY',
    end_date            => NULL,
    enabled             => FALSE,
    auto_drop           => FALSE);

  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'EXPORT_CSV_J',
    attribute => 'logging_level',
    value     => DBMS_SCHEDULER.LOGGING_RUNS);

  DBMS_SCHEDULER.enable(
    name => 'EXPORT_CSV_J');
END;
/
I borrowed this code from this website since it has been a while since I did this myself.
If for some reason your database isn't actually Oracle (I know some people use Oracle SQL Developer even though their actual database is something different) then the steps will be similar but the code will be different.
I have a webpage (generated via PL/SQL) that allows someone to toggle a remote device on or off. They are presented with a list of devices and they use checkboxes to select the ones to toggle. UTL_HTTP is used to communicate with the devices. Currently, the devices are toggled serially. Once all have been toggled, an email is sent to the user. Depending on how many devices are selected, doing this serially has the potential to take too long. So I'm looking at using DBMS_SCHEDULER to execute the toggling in parallel.
The problem is that the toggling process returns a status, either 'OK' or the reason it failed. I need that result to include in the email to the user. So, I need the 'main' procedure to create the SCHEDULER jobs and then wait for them to finish (and somehow get their statuses) before sending an email to the user.
Is this possible, short of having each job write its status to a table which is polled by the 'main' process? I've read references to DBMS_PIPE for inter-process communication, but haven't found a good example (i.e., one that makes sense to me) showing how to do it.
If there's a way to do it, I couldn't figure it out. I ended up having each job write its status to a table. The main process knows how many individual jobs were created and polls the table to tell when all jobs are finished (or it times out after a specified amount of time has passed, in case one of the jobs dies for some reason).
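For what it's worth, the polling part of that main process can be a simple loop. A minimal sketch, assuming a hypothetical job_status_table keyed by request_id and a five-second poll interval:
declare
  l_request_id number := 42;               -- hypothetical request identifier
  l_expected   pls_integer := 10;          -- number of jobs created for this request
  l_done       pls_integer;
  l_deadline   date := sysdate + 10/1440;  -- give up after 10 minutes
begin
  loop
    select count(*)
    into   l_done
    from   job_status_table                -- hypothetical status table the jobs write to
    where  request_id = l_request_id
    and    status in ('OK', 'FAILED');
    exit when l_done >= l_expected or sysdate > l_deadline;
    dbms_lock.sleep(5);                    -- wait 5 seconds between polls (needs EXECUTE on DBMS_LOCK)
  end loop;
  -- build and send the summary email here
end;
/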
An Alternate Solution to Watching Parallel Processes Run Through DBMS_SCHEDULER Jobs
New Edit: An Abbreviated Discussion of the Core Problem
(Edit: 03/10/2014) I added this discussion after some helpful feedback from one of the posters watching this thread.
Opening Comments: Other discussion threads here mention the use of some output value from the function call itself. This is not natively possible with the existing DBMS_SCHEDULER features.
There are no OUT type parameter values or function outputs related to notifying a condition encountered by the procedure invoked. The most immediate problem at hand is: How do we run a series of related tasks via PL/SQL stored procedure in parallel? (i.e., without each one waiting for the others to finish before it starts.)
Whatever invokes the procedure tasks should not wait around for a status output. The response time is likely to be widely variable and whatever calls the procedure in this way would also hang. Associated processes would be waiting for the completion of the procedure or the return of a specified output.
A Suggested Approach: Other comments on this problem are on the right track. Have the custom output written to a table where it can be queried later when a response is ready. If you really want to make this a hands-off task, try putting a trigger on the output table. Every time a message of a specific value (representing a completed state of the request) is populated, invoke a procedure with Oracle's Mail package that sends out your notification email.
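A rough sketch of that idea follows; the table job_status_log, its columns, and the addresses are all hypothetical, the mail call is made directly for brevity, and UTL_MAIL needs the SMTP_OUT_SERVER parameter set plus an EXECUTE grant:
create or replace trigger job_status_notify_trg
after insert on job_status_log            -- hypothetical output table the jobs write to
for each row
when (new.status = 'COMPLETED')
begin
  -- note: the mail goes out immediately; it is not rolled back with the transaction
  utl_mail.send(
    sender     => 'scheduler@example.com',
    recipients => 'user@example.com',
    subject    => 'Request ' || :new.request_id || ' finished',
    message    => 'Status: ' || :new.status);
end;
/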
How to Track Your Invoked Jobs By Understanding DBMS_SCHEDULER Features
Using a scheduled job is a good way of initiating and watching a set of procedure calls that are not sequentially dependent on one another. I have been able to accomplish a similar approach using the original DBMS_JOB functionality of previous versions of the Oracle database.
A Case Study: Using a web-based application interface (Oracle Application Express) I had a project which allowed the user to initiate a resource-intensive series of database operations. Initiating the request was all that was needed.
The Actual Use Scenario: The user didn't need to wait around for its completion. The problem was that wiring the web request form directly to a call to this database package and its procedures also tied down control of the form and its session making the user "wait" for the procedure itself to complete.
Firing off a scheduled job that invoked this process separated the interaction with the web page from the wait for the actual process to complete. Scheduling a job task was almost instantaneous, so the delay between submitting the request and getting control back on the web page was negligible.
Using Oracle DBMS_SCHEDULER: An Introduction to the Approach
The Current Problem and Solution Under Discussion: Use the native DBMS_SCHEDULER status views to monitor the progress of your process. There are many, but ALL_SCHEDULER_JOB_LOG is the simplest of the collection and is a good starting point for what we're trying to accomplish.
Use an easily identifiable name for each job... and a naming convention that ties together the jobs that are related to one another.
Initiate an additional job to watch all the parallel tasks until the last one is completed. Alter this "watching" job to end once this condition has been met.
The Basic Syntax to Initiate a Database Job on Scheduler
The procedure DBMS_SCHEDULER.CREATE_JOB creates a job in a single call without using an existing program or schedule:
DBMS_SCHEDULER.CREATE_JOB (
job_name IN VARCHAR2,
job_type IN VARCHAR2,
job_action IN VARCHAR2,
number_of_arguments IN PLS_INTEGER DEFAULT 0,
start_date IN TIMESTAMP WITH TIME ZONE DEFAULT NULL,
repeat_interval IN VARCHAR2 DEFAULT NULL,
end_date IN TIMESTAMP WITH TIME ZONE DEFAULT NULL,
job_class IN VARCHAR2 DEFAULT 'DEFAULT_JOB_CLASS',
enabled IN BOOLEAN DEFAULT FALSE,
auto_drop IN BOOLEAN DEFAULT TRUE,
comments IN VARCHAR2 DEFAULT NULL);
These are the input parameters that you should pay close attention to:
job_name You can leave it to default, or you can help organize your job requests by using a consistent naming convention. JOB_NAME is a special parameter because it has a helper function called *GENERATE_JOB_NAME*, where you can specify a naming prefix to combine with the internal name assignment. This isn't absolutely necessary, but it helps.
DBMS_SCHEDULER.GENERATE_JOB_NAME
(prefix IN VARCHAR2 DEFAULT 'JOB$_') RETURN VARCHAR2;
An example call within the create_job definition:
job_name => dbms_scheduler.generate_job_name( prefix => 'MY_EXAMPLE_JOB_')
So in the example above, we could have a series of jobs with names like: MY_EXAMPLE_JOB_0001, MY_EXAMPLE_JOB_0002, MY_EXAMPLE_JOB_0003...
job_type This one is straight from the Oracle documentation. Most likely it will be the type: *plsql_block* (The value may also be case sensitive).
repeat_interval Do not set this value for your one-time parallel tasks. The job will identify itself as completed once the referenced stored procedure reaches completion, or errors out.
end_date Leave this null or unassigned as well. This value does not apply for a one-time execution of the procedure it is watching.
start_date Leave this null or unassigned as well. No value means to initiate the specified job as soon as the job is ENABLED.
enabled This defaults to FALSE; you will need to set it to TRUE as soon as you create the job, or when you're ready to initiate the process "thread".
auto_drop This is an important one. The rest of this method depends on the meta data of each job remaining in the DBMS_SCHEDULER's log tables even after they have hit an exception or reached completion. Set this to FALSE.
job_action This will vary depending on the number of parallel processes you initiate. First, you should initiate the first of your parallel processes... and also the related "monitoring" process that will be active for a specific request. The job action for a plsql_block type job looks something like this:
Example PL/SQL Block:
BEGIN my_procedure(a, b, c); END;
Creating a Job Monitoring Process
Part of the problem you encountered is that DBMS_SCHEDULER may watch a process of varying execution time, but it's not very good at letting you know when it's done or if it encountered an exception.
Your "watcher" process just needs to be another scheduled job that queries the ALL_SCHEDULER_JOB_LOG table for the procedures it's responsible for, and figures out if all of them have reached the desired closing status.
Assumption: For a given request, you will know the number of parallel processes (remotely initiated switching events) required to complete this type of request... the processes do not all have to start at exactly the same time, but the watcher needs to know how many to expect, so that it keeps waiting even if every related process it can currently see shows as completed.
The kinds of tasks your "watching" procedure will need to include:
WATCHING SQL Criteria Example:
WITH MONITOR_QUERY AS (
SELECT COUNT(LOG_ID) AS COMPLETED_PROCESS_COUNT
FROM ALL_SCHEDULER_JOB_LOG
WHERE JOB_NAME LIKE '001-REQUEST%')
SELECT CASE WHEN COMPLETED_PROCESS_COUNT = <TOTAL_PROCESSES>
THEN 'DONE' ELSE 'IN-PROGRESS' END as REQUEST_STATUS
FROM MONITOR_QUERY
Also note that when you call the job that runs the monitoring process, you may find it useful to generate a unique job name ahead of time before kicking off its repeating job (should be done only once per request or set of parallel jobs):
DECLARE
  who_am_i VARCHAR2(65);
BEGIN
  who_am_i := dbms_scheduler.generate_job_name;
  --
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => who_am_i,
    job_type        => 'plsql_block',
    -- concatenate the watcher's own name into the block so it can shut itself off later
    job_action      => 'BEGIN my_monitoring_task(p_1, p_2, ''' || who_am_i || '''); END;',
    repeat_interval => 'FREQ=SECONDLY;INTERVAL=300',
    comments        => 'Watcher job; polls every 300 seconds',
    ...);
END;
This job is most effective if it is created at the same time or shortly after the first parallel job in the series is launched.
When The Monitored Request is Completed
When the selection criteria is met (i.e., all related processes are closed in some way) then it's time to fire off your notification and also to stop the watcher for this request.
Stopping the Monitoring Job
DBMS_SCHEDULER.STOP_JOB (
  job_name IN VARCHAR2,
  force    IN BOOLEAN DEFAULT FALSE);
job_name If you used a custom naming scheme for each job you initiate, you could also store this value as an input parameter into your watcher procedure call. The watcher would then know how to shut itself off when it's done. Remember, if you use the GENERATE_JOB_NAME function call, you are only specifying the prefix to the entire job_name used in the scheduler.
force Set this one to FALSE (the default) or leave it unspecified. It is better to let Oracle find a way to gracefully bring your watcher job to a halt.
Closing Thoughts and Comments
If the outcome or completion of several of your procedures are related, an additional scheduled job can be repeated as a monitoring "heartbeat" to check if all the dependencies for a discrete process have been met.
A comment on clean-up: This design requires the *auto_drop* parameter set as FALSE. A daily or weekly process could also be scheduled to issue a *drop_job* command that will clean up the scheduler's logs of records related to completed and reported requests.
You can also see that, by including the invoking *job_name* within the scheduled job itself, you give the procedure(s) it contains the ability to turn the job off once the right conditions have been met.
Advanced Queues to the rescue.
Slave sessions (jobs), when they are ready, put their return values into an AQ (practically any data structure is allowed).
The coordinator session, which initiated the slaves, listens on the queue and gathers the return values.
Actually, AQs are the recommended way for intersession communication in Oracle anyway.
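A minimal sketch of that pattern, assuming three slave jobs and made-up names throughout (job_result_t, job_result_q, and the job name are illustrations, not anything from the question):
-- payload type and queue setup (done once)
create type job_result_t as object (job_name varchar2(65), status varchar2(200));
/
begin
  dbms_aqadm.create_queue_table(queue_table        => 'job_result_qt',
                                queue_payload_type => 'JOB_RESULT_T');
  dbms_aqadm.create_queue(queue_name => 'job_result_q', queue_table => 'job_result_qt');
  dbms_aqadm.start_queue(queue_name => 'job_result_q');
end;
/
-- each slave job enqueues its result when it finishes
declare
  enq_opts  dbms_aq.enqueue_options_t;
  msg_props dbms_aq.message_properties_t;
  msg_id    raw(16);
begin
  dbms_aq.enqueue(queue_name         => 'job_result_q',
                  enqueue_options    => enq_opts,
                  message_properties => msg_props,
                  payload            => job_result_t('MY_EXAMPLE_JOB_0001', 'OK'),
                  msgid              => msg_id);
  commit;
end;
/
-- the coordinator dequeues one message per slave, blocking up to 60 seconds for each
declare
  deq_opts  dbms_aq.dequeue_options_t;
  msg_props dbms_aq.message_properties_t;
  msg_id    raw(16);
  result    job_result_t;
begin
  deq_opts.wait       := 60;
  deq_opts.navigation := dbms_aq.first_message;
  for i in 1 .. 3 loop                   -- one dequeue per slave job
    dbms_aq.dequeue(queue_name         => 'job_result_q',
                    dequeue_options    => deq_opts,
                    message_properties => msg_props,
                    payload            => result,
                    msgid              => msg_id);
    dbms_output.put_line(result.job_name || ': ' || result.status);
  end loop;
  commit;
end;
/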
In Oracle 12c the column ALL_SCHEDULER_JOB_RUN_DETAILS.OUTPUT can be used to return values from a job.
For example, create a job and write output using DBMS_OUTPUT:
begin
dbms_scheduler.create_job(
job_name => 'TEST_JOB',
job_type => 'PLSQL_BLOCK',
job_action => q'[begin dbms_output.put_line('Test output'); end; ]',
enabled => true
);
end;
/
Now read the output:
select job_name, to_char(log_date, 'YYYY-MM-DD') log_date, output
from all_scheduler_job_run_details
where owner = user
and job_name = 'TEST_JOB'
order by log_date desc;
JOB_NAME   LOG_DATE     OUTPUT
--------   ----------   -----------
TEST_JOB   2017-12-26   Test output
If you are able to use Oracle version 11, you might use the DBMS_PARALLEL_EXECUTE PL/SQL package, which does what you want. If you cannot upgrade, you can implement C callouts from PL/SQL that provide similar functionality.
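A rough sketch of what that could look like for the device-toggling case; the table DEVICES_TO_TOGGLE, its DEVICE_ID column, and the toggle_device_range procedure are assumptions for illustration, not anything from your schema:
begin
  dbms_parallel_execute.create_task(task_name => 'toggle_devices_task');
  -- one chunk per device id; each chunk becomes an independent unit of work
  dbms_parallel_execute.create_chunks_by_number_col(
    task_name    => 'toggle_devices_task',
    table_owner  => user,
    table_name   => 'DEVICES_TO_TOGGLE',
    table_column => 'DEVICE_ID',
    chunk_size   => 1);
  -- runs the block once per chunk, up to 8 at a time; :start_id/:end_id are bound by the package
  dbms_parallel_execute.run_task(
    task_name      => 'toggle_devices_task',
    sql_stmt       => 'begin toggle_device_range(:start_id, :end_id); end;',
    language_flag  => dbms_sql.native,
    parallel_level => 8);
end;
/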
If you decide to use DBMS_PIPE and you are using the RAC database option, be aware that DBMS_PIPE has limitations with failover.
I have a table in an Oracle 11.2 database. I want the database to run an executable file on a remote server if a specific cell in table1 is updated to a value of 1 AND if the number of existing rows in table2 is > 0. I don't have much experience with what is possible in databases -- is the following approach feasible?
create a job using Oracle Scheduler. When run, the job executes an external program on a remote server. The job exists, but is not run until step 5 below (is this possible?).
http://docs.oracle.com/cd/E11882_01/server.112/e17120/schedadmin001.htm#BAJHIDDC
attach a DML trigger to the column of the table that fires on an UPDATE statement.
http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/triggers.htm#CIHEHBEB
have the trigger invoke a PL/SQL subprogram
http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/triggers.htm#CIHEGACF
in the PL/SQL subprogram, perform the following business logic: if a specific cell in table1 equals 1, AND if the number of rows in table 2 is greater than 0, proceed to step 5, otherwise stop (exit, quit).
Run the job in step 1
Or, if the job/scheduler is not made to provide this functionality, is there another way to achieve the same thing? That is, have a change in a database table trigger an external job.
UPDATE 1:
I wonder if it's possible to implement steps 1-5 above by just using Oracle Scheduler with DBMS_SCHEDULER.CREATE_JOB using parameter event_condition?
http://docs.oracle.com/cd/E11882_01/server.112/e25494/scheduse005.htm#CHDIAJEB
Here's an example from the above link:
BEGIN
DBMS_SCHEDULER.CREATE_JOB (
job_name => 'process_lowinv_j1',
program_name => 'process_lowinv_p1',
event_condition => 'tab.user_data.event_type = ''LOW_INVENTORY''',
queue_spec => 'inv_events_q, inv_agent1',
enabled => TRUE,
comments => 'Start an inventory replenishment job');
END;
The above code creates a job that starts when an application signals the Scheduler that inventory levels for an item have fallen to a low threshold level.
Could the above code somehow be modified to perform the intended steps? For example, could steps 2-4 above be eliminated by using event_condition here instead? etc. If so, what would it look like, for example, how to set the queue_spec?
Assuming that you install the Oracle Scheduler Agent on the remote server, DBMS_SCHEDULER can run an executable on the remote machine. I would have the DML trigger enqueue a message into an Oracle Advanced Queue (AQ) and use that queue to create an event-based job (DML triggers are not allowed to commit or rollback the transaction but running a DBMS_SCHEDULER job implicitly issues a commit so DML triggers cannot directly run a job). The event-based job and the job that runs the remote executable would be part of a job chain.
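A hedged sketch of that shape, assuming a queue device_events_q with an object-type payload device_event_t(event_type varchar2(30)) has already been created, and using made-up table and column names; AQ enqueues are transactional, so they are legal inside a DML trigger:
create or replace trigger table1_upd_trg
after update of status_col on table1        -- hypothetical table/column from step 2
for each row
when (new.status_col = 1)
declare
  enq_opts  dbms_aq.enqueue_options_t;
  msg_props dbms_aq.message_properties_t;
  msg_id    raw(16);
  row_count number;
begin
  select count(*) into row_count from table2;
  if row_count > 0 then
    dbms_aq.enqueue(queue_name         => 'device_events_q',
                    enqueue_options    => enq_opts,
                    message_properties => msg_props,
                    payload            => device_event_t('RUN_EXECUTABLE'),
                    msgid              => msg_id);
  end if;
end;
/
begin
  dbms_scheduler.create_job(
    job_name        => 'run_remote_exe_j',
    program_name    => 'run_remote_exe_p',  -- a program of type EXECUTABLE targeting the remote agent
    event_condition => 'tab.user_data.event_type = ''RUN_EXECUTABLE''',
    queue_spec      => 'device_events_q',
    enabled         => true);
end;
/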
Is this possible with Oracle's scheduler? I just want to track where it is currently executing while the job is running, and get feedback.
begin
  dbms_scheduler.create_job(
    job_name            => 'hello_oracle_scheduler',
    job_type            => 'PLSQL_BLOCK',
    job_action          => 'BEGIN DBMS_OUTPUT.PUT_LINE('' ''); DBMS_OUTPUT.PUT_LINE(''Hello world of scheduler. Time to execute scheduled jobs!!!''); END;',
    number_of_arguments => 0);
end;
/
You had better use a table, and inserts/updates on it, to track your jobs. The DBMS_OUTPUT package only makes sense in the rare cases where you have a console.
I would recommend using Pablo / Shannon's approach of a table insert through a proc with the pragma autonomous_transaction option. However, another option would be to use UTL_MAIL (or UTL_SMTP if on 9i or earlier) to send an email to yourself if this is just a quick-and-dirty need.
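That table-insert-through-a-proc approach looks roughly like this; job_progress_log and its columns are a made-up example:
create or replace procedure log_job_progress (p_job_name in varchar2,
                                              p_message  in varchar2) is
  pragma autonomous_transaction;
begin
  insert into job_progress_log (job_name, log_time, message)
  values (p_job_name, systimestamp, p_message);
  commit;  -- commits only this autonomous transaction, not the caller's work
end log_job_progress;
/
The job's PL/SQL block can then call log_job_progress at each step, and you can watch the table from another session while the job is running.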
We want to move our automated statistics gathering from an external script into Oracle 9i's job scheduler. It's a very simple job, and the code basically looks like this:
DBMS_JOB.SUBMIT(
JOB => <output variable>,
WHAT => 'DBMS_STATS.GATHER_DATABASE_STATS(
cascade => TRUE, options => ''GATHER AUTO'');',
NEXT_DATE => <start date>,
INTERVAL => 'SYSDATE + 7');
The job gets created successfully and runs, but fails with the error:
ORA-12012: error on auto execute of job 25
ORA-20000: Insufficient privileges to analyze an object in Database
ORA-06512: at "SYS.DBMS_STATS", line 11015
...
The part I don't get is that the user I submitted the job under has the right permissions to gather those database statistics -- if I run the command manually it works. I was curious if Oracle was ignoring any role-based privileges the user had like it does with creating procedures so I directly granted the user ANALYZE ANY, but still no dice.
Are there some other permissions I'd have to directly grant the user to make this work? I'd rather not have to make a separate job for each schema (which does work if I submit the job under the schema's owner).
What version of 9i are you on? I recall reading an AskTom thread about 9.2.0.1 having an issue and needing to do a grant select (I will look up the thread).
Also, since you are running database-wide stats rather than analyzing a single schema, you may also need the ANALYZE ANY DICTIONARY privilege.
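If it helps, the direct (non-role) grants usually involved look like this; the schema name is a placeholder, and ANALYZE ANY DICTIONARY applies only if your release supports it:
grant analyze any to stats_job_owner;
grant analyze any dictionary to stats_job_owner;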