I have an Oracle Scheduler job configured in my 19c database. It is an event-based job, with a queue table created and used in this process.
I need to analyse a possible problem in one of its executions. In the DBA_SCHEDULER_JOB_LOG view I have the instance and a JOB_SUBNAME called SCHED$_EVTPARINST_89121.
Note that I don't have DBA privileges. I have two questions:
How can I get the parameter values passed to this Scheduler job instance?
How can I get the log (history) of enqueued messages, with the enqueue date and the set_job_argument_value associated with each enqueued message?
I've tried to check the Oracle documentation (https://docs.oracle.com/en/database/oracle/oracle-database/19/admin/scheduling-jobs-with-oracle-scheduler.html#GUID-E9B234FF-C7F3-44B0-AEC8-A997353C414F or https://docs.oracle.com/database/121/ARPLS/d_aq.htm#ARPLS004) without success.
Thanks.
Related
I have the following requirement:
A product has n users.
Every user has the option to set one day and time per week for a notification.
Time zones must be considered as well.
The complete information is stored in a MongoDB database. But now I ask myself: should I create cron jobs for every user and notification, or set up a task?
What's better with Oracle DBMS_SCHEDULER?
Keeping a job scheduled (disabled) all the time, and enabling and running it when needed.
Creating the job, running it, and dropping it.
I have a table x, and whenever a record gets submitted to that table I need a job to process that record.
We may or may not have record insertions all the time.
Keeping this in mind, which is better?
Processing rows as they appear in a table in an asynchronous process can be done in a number of different ways; choose the one that suits you:
Add a trigger to the table which creates a one-off job to process the row using DBMS_JOB. This is suitable if the volume of data being inserted to the table is quite low, and you don't want your job running all the time. The advantage of DBMS_JOB is that the job will not start until the insert is committed; if it is rolled back, the job is also rolled back, so it doesn't run. The disadvantage is that if there is a sustained spike of activity, all the jobs created will crowd out any other jobs that are running.
Create a single job using DBMS_SCHEDULER which runs regularly, polls the table for new records and processes them. This method needs a column on the table that the job can update to mark each record as "processed". For example, add a VARCHAR2(1) flag which is set to 'Y' on insert and set to NULL by the job after processing. You could add an index on that flag, which will only store entries for unprocessed rows (so it will be small and fast). This method is much more efficient, especially for large data volumes, because each run of the job can process large chunks of data in bulk.
Use Oracle Advanced Queuing: http://docs.oracle.com/cd/E11882_01/server.112/e11013/aq_intro.htm#ADQUE0100
For (1), a separate job is created for each record in the table. You don't need to create the jobs yourself; you do need to monitor them, however: if one fails, you would need to investigate and re-run it manually.
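A minimal sketch of (1), assuming a hypothetical table x with a numeric primary key id and a processing procedure process_record (names are illustrative):

    CREATE OR REPLACE TRIGGER x_after_insert
      AFTER INSERT ON x
      FOR EACH ROW
    DECLARE
      l_job BINARY_INTEGER;
    BEGIN
      -- DBMS_JOB is transactional: the job becomes runnable only when
      -- the insert commits, and vanishes if the insert is rolled back.
      dbms_job.submit(
        job  => l_job,
        what => 'process_record(' || :new.id || ');');
    END;
    /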
For (2), you just create one job and let it run regularly. If one record fails, it can be picked up by the next iteration of the job. I would process each record in a separate transaction so that the failure of one record doesn't affect the processing of the other records still in the queue.
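A sketch of (2) under the same assumptions, with a processed_flag column set to 'Y' on insert and cleared after processing:

    -- NULLs are not stored in a single-column B-tree index, so this
    -- index only ever contains entries for the unprocessed rows.
    CREATE INDEX x_unprocessed_ix ON x (processed_flag);

    BEGIN
      dbms_scheduler.create_job(
        job_name        => 'PROCESS_X_JOB',
        job_type        => 'PLSQL_BLOCK',
        job_action      => q'[
          BEGIN
            FOR r IN (SELECT id FROM x WHERE processed_flag = 'Y') LOOP
              process_record(r.id);
              UPDATE x SET processed_flag = NULL WHERE id = r.id;
              COMMIT;  -- one transaction per record
            END LOOP;
          END;]',
        repeat_interval => 'FREQ=MINUTELY',
        enabled         => TRUE);
    END;
    /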
For (3), you still create a job as in (2), but instead of reading the table it pulls requests off a queue.
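And a bare-bones sketch of (3), using a RAW payload to carry the record ID (the queue names are made up, and error handling is omitted):

    -- One-time setup of the queue.
    BEGIN
      dbms_aqadm.create_queue_table(
        queue_table        => 'x_requests_qt',
        queue_payload_type => 'RAW');
      dbms_aqadm.create_queue(
        queue_name  => 'x_requests_q',
        queue_table => 'x_requests_qt');
      dbms_aqadm.start_queue(queue_name => 'x_requests_q');
    END;
    /

    -- The job dequeues requests instead of scanning the table.
    DECLARE
      l_opts    dbms_aq.dequeue_options_t;
      l_props   dbms_aq.message_properties_t;
      l_msgid   RAW(16);
      l_payload RAW(32);
    BEGIN
      l_opts.wait := dbms_aq.no_wait;  -- raises ORA-25228 when empty
      dbms_aq.dequeue(
        queue_name         => 'x_requests_q',
        dequeue_options    => l_opts,
        message_properties => l_props,
        payload            => l_payload,
        msgid              => l_msgid);
      process_record(utl_raw.cast_to_number(l_payload));
      COMMIT;
    END;
    /

The insert side would enqueue the ID with dbms_aq.enqueue and utl_raw.cast_from_number(:new.id).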
I'm setting up a Quartz-driven job in a Spring application. The job needs a single argument, which is the ID of a database record that it can use to locate the data it needs to process.
The sequence is:
Job starts,
locates the next available record ID,
processes the data.
Because the record ID is unknown until the job starts, I cannot set it up when I create the job. I also need to account for restarts if things go bad. From reading the Quartz documentation, it appears that if I store the record ID in the trigger's JobDataMap, then when the server restarts, the job will automatically restart with the same record ID it was originally started with.
This is where things get tricky: I'm trying to figure out where and when to get the record ID so I can store it in the trigger's JobDataMap. I'm thinking I need to implement a TriggerListener and use it to set the record ID in the JobDataMap when the triggerFired() callback is called. This will involve a call to the database to get the record ID.
I'm not really sure if this approach is the correct one, or whether I'm barking up the wrong tree. Can someone with some Quartz experience tell me if this is correct, or if there is a better way to configure a job's arguments so that they can be set dynamically and preserved across a restart?
Thanks
Derek
I need to execute a certain Oracle procedure from the client application. It usually takes a long time, and I cannot really increase the waiting time for the response, as the execution time is unpredictable.
Is there a way to execute the procedure as a scheduler job, asynchronously, at run time?
If it is executed asynchronously, could "Oracle AQ Asynchronous Notification" be used to notify the application when it completes?
You can use the dbms_scheduler package (or the older dbms_job package) to run a procedure in a separate session asynchronously. Depending on the number of jobs you envision running (and the number of background jobs you want running at once), you may instead want your application to write to some sort of job queue that a fixed number of background jobs read from to pick up and process work. That "job queue" could be an actual Oracle AQ queue or it could be a regular table that the jobs read from.
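A minimal sketch of the first approach, assuming a hypothetical procedure long_running_proc:

    BEGIN
      -- Runs the procedure in a background session; control returns
      -- to the caller immediately.  Note that create_job issues an
      -- implicit commit, unlike dbms_job.submit.
      dbms_scheduler.create_job(
        job_name   => 'RUN_LONG_PROC_ONCE',
        job_type   => 'STORED_PROCEDURE',
        job_action => 'LONG_RUNNING_PROC',
        enabled    => TRUE,   -- start as soon as possible
        auto_drop  => TRUE);  -- drop the job after it completes
    END;
    /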
You could have the procedure send a message to the client using Oracle AQ as well. 99% of the time that I've seen this sort of setup, however, the job wrote some sort of status to a table (or just used the dbms_scheduler data dictionary) and the front-end merely polled the status periodically to determine when the job was done.
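If you go the polling route, the scheduler's own data dictionary already records each run, e.g. (using the hypothetical job name from the sketch above):

    SELECT status, error#, actual_start_date, run_duration
    FROM   user_scheduler_job_run_details
    WHERE  job_name = 'RUN_LONG_PROC_ONCE'
    ORDER  BY actual_start_date DESC;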
I'm trying to initialize my data in my Azure Data Tables, but I only want this to happen once on the server at startup (i.e. via the WebRole OnStart routine). The problem is that if I have multiple instances starting up at the same time, then potentially any of those instances can add records to the same table at the same time, duplicating the data at runtime.
Is there something like an overarching routine for all instances? An application object into which I can shove a value and check it from each of the instances to see whether the tables have been created or not? A singleton of some sort that Azure exposes?
Cheers
Rob
No, but you could use a Blob lease as a mutex. You could also use a table lock in SQL Azure, if you're using that.
You could also use a Queue, and drop a message in there and then just one role would pick up the message and process it.
You could create a new single-instance role that does this job on role start.
To be really paranoid about this, and to handle a failure in the middle of writing the data, you can do something even more complex.
A queue message is a great way to ensure transactional behaviour, as long as the work you are doing is idempotent.
Each instance adds a message to a queue.
Each instance polls the queue and, on receiving a message, reads the locking row from the table:
- If the ‘create data state’ value is ‘unclaimed’, attempt to update the row with an ‘in process’ value and a timeout expiration timestamp based on the amount of time needed to create the data.
  - If the update is successful, the instance owns the task of creating the data: create the data, update the ‘create data state’ to ‘committed’, and delete the message.
  - If the update is unsuccessful, the instance does not own the task, so just delete the message.
- If the ‘create data state’ value is ‘in process’, check whether the current time is past the expiration timestamp. If it is, the ‘in process’ attempt must have failed, so try all over again: set the state to ‘in process’, delete the incompletely written rows, recreate the data, update the state, and delete the message.
- If the ‘create data state’ value is ‘committed’, just delete the queue message, since the work has already been done.