Will an AutoSys job run if one of its predecessor jobs is in TERMINATED state?

Suppose JOB A depends on JOB B and JOB C (JOB A is also scheduled to run at its own time).
JOB B is in SUCCESS state, whereas JOB C is TERMINATED.
Will JOB A still run or not?

Related

Hold execution of a box job while a dependent job is in running state, and resume execution of the box job once the dependent job has finished

We have a box job scheduled to run at time X whose execution needs to wait while another normal AutoSys job, Job A, is in progress, and resume once that dependent job has finished. Note that the dependent job is not a box job. We have added a condition in the box job's JIL as follows:
condition:s(Job A)
However, the box job does not wait even when the dependent job is in progress. Is there any way to handle this scenario through AutoSys? Appreciate your help.
Since Job A's outcome is not considered (success or failure does not matter), the box job should only start when Job A is not running. For this the appropriate condition would be:
condition:n(Job A)
Here n represents "not running".

How to run selected Azkaban jobs in parallel via a script?

Since there are too many jobs on Azkaban, I have to test new jobs one by one manually.
Assume I upload some new jobs: is it possible to write a Python (or any other language) script to fetch the dependencies between these jobs and then run them on Azkaban in parallel?
For instance, there are three jobs a, b, and c, and b depends on a. They are supposed to be scheduled like this:
Start jobs a and c.
When job a finishes, start job b.
I did not find any helpful info or API on the Azkaban official website (Maybe I missed useful info).
Any help is appreciated.
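For what it's worth, here is a minimal sketch of only the ordering described above (a and c start together, b starts once a finishes). It does not use any Azkaban API: runJob() is a hypothetical placeholder for whatever call actually triggers an Azkaban execution and waits for it to finish, and the job names are taken from the example above.

    // Sketch: run jobs in parallel while respecting the dependency of b on a.
    // runJob() is a hypothetical placeholder, not part of any Azkaban client library.
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ParallelJobRunner {

        // Placeholder: replace with the real call that starts the job on Azkaban
        // and blocks until it has finished.
        static void runJob(String name) {
            System.out.println("running job " + name);
        }

        public static void main(String[] args) {
            ExecutorService pool = Executors.newFixedThreadPool(4);

            // a and c have no unfinished dependencies, so they start immediately in parallel.
            CompletableFuture<Void> a = CompletableFuture.runAsync(() -> runJob("a"), pool);
            CompletableFuture<Void> c = CompletableFuture.runAsync(() -> runJob("c"), pool);

            // b depends on a, so it is chained to start only after a completes.
            CompletableFuture<Void> b = a.thenRunAsync(() -> runJob("b"), pool);

            // Wait for everything to finish before shutting the pool down.
            CompletableFuture.allOf(b, c).join();
            pool.shutdown();
        }
    }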

"single point of failure" job tracker node goes down and Map jobs are either running or writing the output

I'm new to Hadoop and I would like to know what happens when the "single point of failure" JobTracker node goes down while map jobs are either running or writing their output. Would the JobTracker start all map jobs all over again?
The JobTracker is a single point of failure, meaning that if it goes down you won't be able to submit any additional MapReduce jobs, and existing jobs will be killed.
When you restart the JobTracker, you will need to resubmit the whole job again.

Chaining Spring Batch Job

I have two jobs defined in two different XML files, say Job A and Job B.
I need to call Job B on successful completion of Job A.
What is the best approach for doing this?
I am pretty new to Spring Batch, so I am looking for the best approach to handle this.
You can create a super job and execute Job A and Job B as steps in this super job, specifying that Job B should be executed only on successful completion of Job A.
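A minimal sketch of that approach, assuming Spring Batch's Java configuration and that the two existing jobs are available as beans named jobA and jobB (the bean and step names here are assumptions). With XML-defined jobs, the equivalent is a <job ref="jobA"/> element nested inside a <step>, which creates a JobStep.

    // Sketch only: wraps each existing Job in a JobStep and chains them so that
    // the second step runs only after the first completes successfully.
    import org.springframework.batch.core.Job;
    import org.springframework.batch.core.Step;
    import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
    import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
    import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
    import org.springframework.batch.core.launch.JobLauncher;
    import org.springframework.beans.factory.annotation.Qualifier;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    @EnableBatchProcessing
    public class SuperJobConfig {

        @Bean
        public Job superJob(JobBuilderFactory jobs,
                            StepBuilderFactory steps,
                            JobLauncher jobLauncher,
                            @Qualifier("jobA") Job jobA,   // assumed bean name for Job A
                            @Qualifier("jobB") Job jobB) { // assumed bean name for Job B
            // Each existing job becomes a JobStep inside the super job.
            Step runJobA = steps.get("runJobA").job(jobA).launcher(jobLauncher).build();
            Step runJobB = steps.get("runJobB").job(jobB).launcher(jobLauncher).build();

            // next() advances only when the previous step completes successfully,
            // so Job B runs only after Job A succeeds.
            return jobs.get("superJob")
                    .start(runJobA)
                    .next(runJobB)
                    .build();
        }
    }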

How to set hudson jobs in such a way that downstream jobs are always executed?

I managed to set up the Hudson server, where I have 3 jobs: A, B, and C.
Job A is built whenever anything is checked into the trunk.
Job B is built after job A finishes successfully.
Job C is built after job B finishes successfully.
Job A takes about 25-35 minutes to execute, while jobs B and C are very fast (job B about 1 minute and job C about half a second).
Now, because someone makes a check-in while job A is executing, it always interrupts the process and jobs B and C are not executed.
So, is there a way to force jobs B and C to be executed after job A finishes its execution successfully?
Just moving this out to an answer: as #davek says, the Locks and Latches plugin will work. You can make A, B, and C part of a lock set, so that when B or C is working, A is queued. We use this in our setup to stop builds that share resources from stomping all over each other.
Caveat: there is a known bug in Locks and Latches, hopefully to be fixed soon, where if multiple builds are waiting on the same lock they will occasionally both start when the lock is released, rather than one holding back until the other finishes.
