Skipping AutoSys job execution

I want to start a job within a certain time range. If the starting conditions are not met before the specified time, the job should skip execution for that day and run as usual the next day. Please help.

I would use a run window: the job can only start while the window is open, so if its starting conditions are not met before the window closes, that day's run is skipped and the job is eligible again the next day:
insert_job: job_name
machine: machine
owner: owner@us
date_conditions: 1
days_of_week: mo,tu,we,th,fr
exclude_calendar: s_holidays
start_times: "20:00"
run_window: "20:00-07:00"

Related

How to make squeue display time limits in hours only?

When viewing submitted jobs managed by Slurm, I would like the time limit column (specified by %l) to show only hours, instead of the usual days-hours:minutes:seconds format. This is the command I am currently using:
squeue --format="%.6i %.5P %.25j %.8u %.8T %.10M %.5l %.15b %.5C %.6D %R" --sort=+i --me
and this is the example output:
276350 qgpu jobname username RUNNING 1:14:14 1-00:00:00 gres:gpu:v100:1 18 1 s31n02
So, in this case, I would like the elapsed time to remain as is (1:14:14), but the time limit to change from 1-00:00:00 to 24. Is there a way to do it?
This is the way Slurm displays times: the elapsed time will eventually be displayed the same way (days-hours:minutes:seconds) once it passes 23:59:59.
You can use a wrapper script to convert the output into a different format. Or, if you know the time limit is no more than a day, just set the time limit to 23:59:00 by using --time=1439 (minutes):
salloc -N1 --time=1439 bash
Using your squeue command:
166 mypartition interactive jyvet RUNNING 7:36 23:59:00 N/A 1 1 mynode
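For the wrapper-script approach, here is a minimal sketch (my own, not a Slurm feature) that post-processes the squeue output with awk, rewriting the time-limit column (field 7) as whole hours. It assumes the limit appears in the H:M:S or D-H:M:S forms shown above; note that awk's field reassignment collapses the fixed-width padding, so columns come out whitespace-separated rather than aligned:
#!/bin/bash
squeue --format="%.6i %.5P %.25j %.8u %.8T %.10M %.5l %.15b %.5C %.6D %R" \
       --sort=+i --me |
awk 'NR == 1 { print; next }        # keep the header line unchanged
{
    limit = $7; days = 0
    if (limit ~ /-/) {              # D-H:M:S form: split off the days
        split(limit, d, "-"); days = d[1]; limit = d[2]
    }
    n = split(limit, t, ":")        # H:M:S has 3 fields, M:S has 2
    hours = (n == 3) ? t[1] : 0
    $7 = days * 24 + hours          # e.g. 1-00:00:00 -> 24
    print
}'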

DataStage execute shell script to sleep in a loop sequence job

Currently, I have a sequence job in DataStage.
Here is the flow:
StartLoop Activity --> UserVariables Activity --> Job Activity --> Execute Command --> Endloop Activity
The job runs every 30 minutes (8 AM - 8 PM) to pull near-real-time data. The first loop iteration loads data from 8 PM the previous day to 8 AM the current day; the other iterations load whatever happened in the last 30 minutes.
The UserVariables Activity passes variables (SQL statements) that filter the data fetched by the Job Activity. On the first iteration it passes variable A (SQL statement 1) to the Job Activity; from the second iteration on, it passes variable B (SQL statement 2).
In the Execute Command activity I currently run 'sleep 1800' so the job sleeps 30 minutes before ending each loop iteration. But I realized that this drifts with the running time of each iteration. Knowing nothing about shell scripting, I searched for solutions and put together the script below, which sleeps until the next minute 00 or 30 (with a 0-1 minute delay, which is fine).
I can run the script fine on my system, but I have had no success making it part of the job.
#!/bin/bash
# Sleep until the next half-hour boundary (minute 00 or 30).
# Force base 10 so minutes like "08" or "09" are not parsed as invalid octal.
minute=$((10#$(date +%M)))
if [ "$minute" -le 30 ]; then
    sleep $(( (30 - minute) * 60 ))
else
    sleep $(( (60 - minute) * 60 ))
fi
I am facing two problems that I need your help with.
1. The job runs the first iteration fine with variable A below:
select * from my_table where created_date between trunc(sysdate-1) + 20/24 and trunc(sysdate) + 8/24;
But from the second iteration on, the Job Activity fails with variable B below:
select * from my_table where created_date between trunc(sysdate-1/48, 'hh') + 30*trunc(to_number(to_char(sysdate-1/48,'MI'))/30)/1440 and trunc(sysdate, 'hh') + 30*trunc(to_number(to_char(sysdate,'MI'))/30)/1440;
In the parallel job, the log said:
INPUT,0: The following SQL statement failed: select * from my_table where created_date between trunc(sysdate-1/48, hh) + 30*trunc(to_number(to_char(sysdate-1/48,MI))/30)/1440 and trunc(sysdate, hh) + 30*trunc(to_number(to_char(sysdate,MI))/30)/1440.
I realized that the parallel job probably failed because the single quotes around hh and MI were removed.
Is it because passing variables from the UserVariables Activity to the Job Activity strips the quotes? And how can I fix this?
2. How can I make the shell script above part of the job, via an Execute Command activity or some other stage? I have searched for solutions and I think it involves the ExecSH before/after subroutine or a Routine activity, but after reading the IBM pages I still don't know where to start.
Sorry for putting two questions in one long post, but they are closely related, and splitting them into two posts would mean repeating a lot of context.
Thank you!
Try escaping the single quote characters (precede each with a backslash).
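For example, variable B with each single quote escaped would look like this (a sketch; verify that this matches how your UserVariables Activity consumes the value):
select * from my_table where created_date between trunc(sysdate-1/48, \'hh\') + 30*trunc(to_number(to_char(sysdate-1/48,\'MI\'))/30)/1440 and trunc(sysdate, \'hh\') + 30*trunc(to_number(to_char(sysdate,\'MI\'))/30)/1440;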
Execute the shell script from an Execute Command activity ahead of the Job activity.
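For the second question, a minimal sketch of the wiring (the path /opt/scripts/wait_half_hour.sh is hypothetical; place the script anywhere the DataStage engine host can read it and make it executable):
# In the sequence, point the Execute Command activity at the script:
#   Command:    /bin/bash
#   Parameters: /opt/scripts/wait_half_hour.sh
# The loop then ends on the next :00 or :30 boundary
# instead of after a fixed 30-minute sleep.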

Weekly scheduling of a CircleCI job?

I would like to run JMeter tests on a weekly basis.
I have the below code now:
nightly:
  triggers:
    - schedule:
        cron: "0 0 13 ? * SAT *"
        filters:
          branches:
            only:
              - master
              - beta
Does it run every day because of nightly:, or weekly because of cron: "0 0 13 ? * SAT *"? If it is daily, how can I convert it to run weekly?
"nightly" is just the name of the Circle CI workflow. It could just as easily be "foo" or "bar". The scheduling is in the cron line, where you specify 5 things: the Minute, Hour, Day of the month, Number of the month and Day of the week that you want the job to run.
For example, to run the workflow every Saturday at 1 pm, you could use "0 13 * * 6".
Using the asterisk (*) in a field means any value of that field is acceptable. Be aware that CircleCI will interpret the time as UTC, so you may need to adjust it based on your timezone. https://crontab.guru/ is a nice site for learning about and experimenting with cron entries.
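Putting that together, the workflow trigger would look like this (a sketch keeping your branch filters; 13:00 here is UTC):
nightly:
  triggers:
    - schedule:
        cron: "0 13 * * 6"
        filters:
          branches:
            only:
              - master
              - beta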

SGE submitted job state doesn't change from "qw"

I'm using Sun Grid Engine on Ubuntu 14.04 to queue jobs to run on a multicore CPU.
I've installed and set up SGE on my system. I created a "hello_world" dir containing two shell scripts, "hello_world.sh" and "hello_world_qsub.sh": the first holds a simple command and the second the qsub command that submits the first script as a job.
Here's what "hello_world.sh" includes:
#!/bin/bash
echo "Hello world" > /home/theodore/tmp/hello_world/hello_world_output.txt
And here's what "hello_world_qsub.sh" includes:
#!/bin/bash
qsub \
-e /home/hello_world/hello_world_qsub.error \
-o /home/hello_world/hello_world_qsub.log \
./hello_world.sh
After making the second script executable and running it with ./hello_world_qsub.sh from that directory, the output looks reasonable:
Your job 1 ("hello_world.sh") has been submitted
But the output of the qstat command is frustrating:
job-ID prior name user state submit/start at queue slots ja-task-ID
-----------------------------------------------------------------------------------------------------------------
1 0.50000 hello_worl mhr qw 05/16/2016 20:26:23 1
And the "state" column always remains on "qw" and never changes to "r".
Here's the output of "qstat -j 1" command:
==============================================================
job_number: 1
exec_file: job_scripts/1
submission_time: Mon May 16 20:26:23 2016
owner: mhr
uid: 1000
group: mhr
gid: 1000
sge_o_home: /home/mhr
sge_o_log_name: mhr
sge_o_path: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
sge_o_shell: /bin/bash
sge_o_workdir: /home/mhr/hello_world
sge_o_host: localhost
account: sge
stderr_path_list: NONE:NONE:/home/hello_world/hello_world_qsub.error
mail_list: mhr@localhost
notify: FALSE
job_name: hello_world.sh
stdout_path_list: NONE:NONE:/home/hello_world/hello_world_qsub.log
jobshare: 0
env_list:
script_file: ./hello_world.sh
scheduling info: queue instance "mainqueue#localhost" dropped because it is temporarily not available
All queues dropped because of overload or full
And here's the output of the qhost command:
HOSTNAME ARCH NCPU LOAD MEMTOT MEMUSE SWAPTO SWAPUS
-------------------------------------------------------------------------------
global - - - - - - -
localhost - - - - - - -
What should I do to make my jobs run and finish their task?
From your qhost output, it looks like your machine "localhost" is properly configured in SGE. However, on "localhost" sge_execd is either not running or not configured properly. If it were, qhost would report statistics for "localhost".
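A minimal sketch of how to check, assuming the Ubuntu gridengine packages (service names can differ between installs):
# Is the execution daemon running?
ps -ef | grep [s]ge_execd

# If not, start it (gridengine-exec is the Ubuntu 14.04 package name):
sudo service gridengine-exec start

# qhost should now report load/memory statistics for localhost,
# and the queued job should move from "qw" to "r".
qhost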

Heroku APScheduler: timed job OK but not scheduled job

I am following this to schedule my Django cron job on Heroku.
Procfile:
web: gunicorn tango.wsgi --log-file -
clock: python createStatistics.py
createStatistics.py:
from apscheduler.schedulers.blocking import BlockingScheduler

sched = BlockingScheduler()

@sched.scheduled_job('interval', minutes=1)
def timed_job():
    print('This job is run every minute.')

@sched.scheduled_job('cron', day=14, hour=15, minute=37)
def scheduled_job():
    print('This job is run on day 14 at minute 37, 3pm.')

sched.start()
The timed_job runs OK; however, the scheduled_job has no effect. Do I need to set up any time zone information for APScheduler (I have TIME_ZONE set in settings.py)? If so, how? Or did I miss something?
Specific to Heroku, for reasons I have not been able to figure out yet, it seems that you need to specify the optional id field on cron jobs to make them work. So the cron job definition would look like this:
@sched.scheduled_job('cron', id="job_1", day=14, hour=15, minute=37)
def scheduled_job():
    print('This job is run on day 14 at minute 37, 3pm.')
You must specify the timezone in every job; otherwise Heroku will run it in UTC:
@sched.scheduled_job('cron', day=14, hour=15, minute=37, timezone=YOUR_TIME_ZONE)
def scheduled_job():
    print('This job is run on day 14 at minute 37, 3pm.')
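Putting both suggestions together, a minimal sketch (the timezone string is a placeholder; use your own, e.g. the value of TIME_ZONE in settings.py):
from apscheduler.schedulers.blocking import BlockingScheduler

sched = BlockingScheduler()

@sched.scheduled_job('cron', id='job_1', day=14, hour=15, minute=37,
                     timezone='US/Eastern')  # placeholder: use your own timezone
def scheduled_job():
    print('This job is run on day 14 at minute 37, 3pm.')

sched.start()
Also make sure the clock process from the Procfile is actually running: heroku ps:scale clock=1.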
