Create One-Time Scheduled Job, Run When Others Not Running - Oracle

I want to create a Scheduled Job in Oracle 11g Express.
I am new to job scheduling and my search so far points to chains, but as I want to create a job out of a function that runs irregularly at a yet-unknown date, I believe chains won't work for my case.
The Job will be created when a Procedure finishes, which determines its Scheduled Date X.
The Job will perform some critical changes which is why I don't want it to start while other regular scheduled jobs are running.
I want it to wait until the other jobs finish.
I only want to have it run once and then drop the job.
Is there some good practice for this case or some Option I have missed?
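
Below is a minimal sketch of how such a one-time, self-dropping job might be created at the end of the procedure. All names (SYNC_ONE_TIME_JOB, CRITICAL_CHANGES_PROC, v_run_date) are placeholders, not from the question; with no repeat_interval and auto_drop => TRUE the job runs once at its start date and is then removed. Keeping it from overlapping with other jobs is a separate concern, e.g. by putting all jobs in a job class whose resource plan allows only one active job, or by having the job itself check USER_SCHEDULER_RUNNING_JOBS before doing its critical work.

-- Sketch only: placeholder names; assumes the procedure has computed
-- the scheduled date X into v_run_date.
DECLARE
  v_run_date TIMESTAMP WITH TIME ZONE := SYSTIMESTAMP + INTERVAL '1' DAY;
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => 'SYNC_ONE_TIME_JOB',
    job_type   => 'STORED_PROCEDURE',
    job_action => 'CRITICAL_CHANGES_PROC',  -- procedure doing the critical changes
    start_date => v_run_date,               -- the date X determined by the procedure
    enabled    => TRUE,
    auto_drop  => TRUE,                     -- drop the job after its single run
    comments   => 'One-time job created when the calling procedure finished');
END;
/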

Related

Should I create SYNC jobs only in SQLake?

Should we always be just creating sync jobs as our general rule of thumb in Upsolver SQLake?
In most cases, yes, you want to use sync jobs. The only case where you don't want a sync job is when there is an input to a table that you don't want to wait for.
Example: you have 5 jobs that write to a table and some jobs that read from that table. If you don't want the entire pipeline to get stuck when one of the 5 jobs is stuck, then your pipeline needs to be unsync (or at least the specific job that you think may get stuck should be unsync).
Note: unsync is not a keyword. CREATE JOB by default creates an unsync job; CREATE SYNC JOB creates a sync job.
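
As a rough illustration of that keyword difference only (the table and job names here are made up, and a real SQLake transformation job would carry its own options and column mapping, so treat the body as a placeholder):

-- Default: CREATE JOB without SYNC creates an unsynchronized job.
CREATE JOB load_orders_unsync
AS INSERT INTO my_catalog.my_db.orders_table MAP_COLUMNS_BY_NAME
   SELECT * FROM my_catalog.my_db.orders_staging;

-- Adding SYNC creates a synchronized job that waits on its synced inputs.
CREATE SYNC JOB load_orders_sync
AS INSERT INTO my_catalog.my_db.orders_table MAP_COLUMNS_BY_NAME
   SELECT * FROM my_catalog.my_db.orders_staging;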

Scheduling a job in databricks Azure

I want to schedule a databricks job that runs every day at 5:00 AM, 8:30 AM and 9:00 PM.
I am looking for cron syntax.
Create a job in the Workflows section.
Create a new job.
Provide the required inputs to create the job for the notebook.
Schedule a time to trigger the notebook in Edit schedule.
Click on the Scheduled trigger type and provide 0 0 5,21 ? * * as the cron expression to run at 5:00 AM and 9:00 PM.
Create another new job to trigger the notebook at 8:30 AM, with 0 30 8 ? * * as the cron expression.
The two jobs then run at the expected times, i.e. 5:00 AM, 8:30 AM and 9:00 PM.
The minutes value must be the same for all of the times in order to trigger the notebook from a single job's cron expression (for example, 0 0,30 5,8,21 ? * * would also fire at 5:30 AM, 8:00 AM and 9:30 PM), so for this requirement there is no option to do it with a single job; we must go with two jobs.

Hadoop schedule jobs to run sequentially (one job after other)?

Let's say I am resource-constrained in my Hadoop environment and I don't want to schedule really long-running jobs (i.e. ones that take days to complete). I am analyzing a vast amount of past time-series data. I want to schedule MapReduce jobs that take a day's worth of data at a time (which takes an hour to crunch).
So how do I schedule it such that a new job is submitted as soon as the previous job has completed?
If you want a quick and simple approach you could just write a shell script that calls hadoop jar in sequence for each job you want to run.
If you want a more robust approach you could use Apache Oozie to define a workflow of jobs that will run your jobs in sequence. If you are new to Hadoop you may find it easiest to define and run your Oozie workflow using the Hue GUI.

Job action string too long

I'm trying to create a job that will sync two databases at midnight. There are 10 tables that need to be synced, and it's a very long PL/SQL script. When I set this script as the job action and try to create the job I get "string value too long for attribute job action". What do you suggest I do? Should I separate the script into 10? Isn't there a way to make the job run the code as a script? If I do it manually, all 10 anonymous blocks get executed one after another. I need something that will kind of press F5 for me at midnight.
What you need is a DBMS_SCHEDULER chain, in which each action is a separate step, and the steps can all be executed as part of the same scheduled job.
http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_sched.htm
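
A minimal sketch of that chain approach, assuming the ten anonymous blocks have been wrapped in procedures with placeholder names like sync_table1 and sync_table2 (only two of the ten steps are shown, and the steps are chained to run one after another at midnight):

-- Sketch only: placeholder program, chain and procedure names.
BEGIN
  DBMS_SCHEDULER.CREATE_PROGRAM(
    program_name   => 'SYNC_TABLE1_PROG',
    program_type   => 'PLSQL_BLOCK',
    program_action => 'BEGIN sync_table1; END;',
    enabled        => TRUE);
  DBMS_SCHEDULER.CREATE_PROGRAM(
    program_name   => 'SYNC_TABLE2_PROG',
    program_type   => 'PLSQL_BLOCK',
    program_action => 'BEGIN sync_table2; END;',
    enabled        => TRUE);

  DBMS_SCHEDULER.CREATE_CHAIN(chain_name => 'NIGHTLY_SYNC_CHAIN');
  DBMS_SCHEDULER.DEFINE_CHAIN_STEP('NIGHTLY_SYNC_CHAIN', 'STEP1', 'SYNC_TABLE1_PROG');
  DBMS_SCHEDULER.DEFINE_CHAIN_STEP('NIGHTLY_SYNC_CHAIN', 'STEP2', 'SYNC_TABLE2_PROG');

  -- Run STEP1 when the chain starts, STEP2 after STEP1, then end the chain.
  DBMS_SCHEDULER.DEFINE_CHAIN_RULE('NIGHTLY_SYNC_CHAIN', 'TRUE', 'START STEP1');
  DBMS_SCHEDULER.DEFINE_CHAIN_RULE('NIGHTLY_SYNC_CHAIN', 'STEP1 COMPLETED', 'START STEP2');
  DBMS_SCHEDULER.DEFINE_CHAIN_RULE('NIGHTLY_SYNC_CHAIN', 'STEP2 COMPLETED', 'END');
  DBMS_SCHEDULER.ENABLE('NIGHTLY_SYNC_CHAIN');

  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'NIGHTLY_SYNC_JOB',
    job_type        => 'CHAIN',
    job_action      => 'NIGHTLY_SYNC_CHAIN',
    repeat_interval => 'FREQ=DAILY;BYHOUR=0;BYMINUTE=0;BYSECOND=0',
    enabled         => TRUE);
END;
/

Note that defining chains additionally requires the CREATE RULE, CREATE RULE SET and CREATE EVALUATION CONTEXT privileges on top of CREATE JOB.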

Periodic hadoop jobs running (best practice)

Customers are able to upload URLs to the database at any time, and the application should process the URLs as soon as possible. So I need periodic Hadoop job runs, or a way to run a Hadoop job automatically from another application (a script that identifies that new links were added, generates the data for the Hadoop job, and runs the job). For a PHP or Python script I could set up a cron job, but what is the best practice for running periodic Hadoop jobs (prepare the data for Hadoop, upload the data, run the Hadoop job and move the data back to the database)?
Take a look at Oozie, the new workflow system from Y!, which can run jobs based on different triggers. A good overview is presented by Alejandro here: http://www.slideshare.net/ydn/5-oozie-hadoopsummit2010
If you want URLs to be processed as soon as possible, you'll end up processing them one at a time. My recommendation is to wait for some number of links (or some number of MB of links, or for example 10 minutes' or a day's worth)
and batch process them (I do my processing daily, but that job takes a few hours).
