How to dynamically choose PBS queues during job submission - bash

I run a lot of small computing jobs on a remote cluster where job submission is managed by PBS. Normally, in a PBS (bash) script I specify the queue to submit the job to through the directive
#PBS -q <queue_name>
The queue I need to choose depends on the load on each queue. Every time before I submit a job, I check this by running the following command in the terminal
qstat -q
which produces output that looks like the following
Queue Memory CPU Time Walltime Node Run Que Lm State
---------------- ------ -------- -------- ---- --- --- -- -----
queue1 -- -- 03:00:00 -- 0 2 -- E R
queue2 -- -- 06:00:00 -- 8 6 -- E R
I would like the job script to automate the queue selection based on two constraints:
The selected queue must have a walltime greater than the job time specified. The job time is specified through the directive #PBS -l walltime=02:30:00.
The queue must have the fewest jobs in the Que column shown in the output above.
I'm having trouble identifying which terminal tools I need to automate the queue selection.

You could wrap your qsub submission in another script that runs qstat -q, parses the output, and selects a queue based on the requested walltime and the number of active jobs in each queue. The wrapper could then append -q <name of desired queue> to the qsub command and submit the job.
However, it seems that you are manually trying to do some of what a scheduler - with appropriate policies - does for you. Why do you need to dynamically switch queues? A better setup would be for the queues to essentially categorize the jobs - as you are already doing with walltime - and then let the scheduler run the jobs appropriately. Any setup where a user needs to carefully select the queue seems a little suspect to me.
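For the wrapper approach, a minimal sketch of the selection logic. The column positions (4 = Walltime, 7 = Que) are an assumption based on the qstat -q layout shown in the question, and pick_queue is a hypothetical helper name:

```shell
#!/bin/bash
# Sketch: pick the queue whose Walltime covers the job and whose Que count
# is lowest. Skips the two header lines and queues without a walltime limit.
pick_queue() {   # stdin: qstat -q output; $1: required walltime HH:MM:SS
    awk -v need="$1" '
        function secs(t,  a) { split(t, a, ":"); return a[1]*3600 + a[2]*60 + a[3] }
        NR > 2 && $4 != "--" && secs($4) >= secs(need) {
            if (best == "" || $7 + 0 < min) { min = $7 + 0; best = $1 }
        }
        END { print best }'
}
# usage: queue=$(qstat -q | pick_queue 02:30:00) && qsub -q "$queue" job.pbs
```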

Related

How to make sbatch wait until last submitted job is *running* when submitting multiple jobs?

I'm running a numerical model whose parameters are in a "parameter.input" file. I use sbatch to submit multiple iterations of the model, with one parameter in the parameter file changing every time. Here is the loop I use:
#!/bin/bash -l
for a in {01..30}
do
sed -i "s/control_[0-9][0-9]/control_${a}/g" parameter.input
sbatch --time=21-00:00:00 run_model.sh
sleep 60
done
The sed line changes a parameter in the parameter file. The run_model.sh file runs the model.
The problem: depending on the resources available, a job might run immediately or stay pending for a few hours. With my default loop, if 60 seconds is not enough time to find resources for job n to run, the parameter file will be modified while job n is pending, meaning job n will run with the wrong parameters. (and I can't wait for job n to complete before submitting job n+1 because each job takes several days to complete)
How can I force sbatch to wait to submit job n+1 until job n is running?
I am not sure how to create an until loop that would grab the status of job n and wait until it changes to 'running' before submitting job n+1. I have experimented with a few things, but the server I use also hosts another 150 people's jobs, and I'm afraid too much experimenting might create some issues...
Use the following to grab the last submitted job's ID and its status, and wait until it isn't pending anymore to start the next job:
sentence=$(sbatch --time=21-00:00:00 run_model.sh) # get the output from sbatch
stringarray=($sentence) # separate the output in words
jobid=(${stringarray[3]}) # isolate the job ID
sentence="$(squeue -j $jobid)" # read job's slurm status
stringarray=($sentence)
jobstatus=(${stringarray[12]}) # isolate the status of job number jobid
Check that the job status is 'running' before submitting the next job with:
if [ "$jobstatus" = "R" ];then
# insert here relevant code to run next job
fi
You can insert that last snippet in an until loop that checks the job's status every few seconds.
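Putting the pieces together, a hedged sketch of the submit-then-wait loop; it assumes squeue's -h (suppress header) and -o %t (compact state code, PD while pending) options, and parse_jobid / wait_until_running are hypothetical helper names:

```shell
#!/bin/bash
# Sketch: submit each job, then poll squeue until the job leaves the PD
# (pending) state before modifying parameter.input again.

parse_jobid() {              # extract the ID from "Submitted batch job NNN"
    echo "$1" | awk '{print $4}'
}

wait_until_running() {       # $1 = job ID; poll every 30 s while pending
    while [ "$(squeue -j "$1" -h -o %t)" = "PD" ]; do
        sleep 30
    done
}

# usage (inside the original for-loop, replacing the fixed sleep 60):
#   jobid=$(parse_jobid "$(sbatch --time=21-00:00:00 run_model.sh)")
#   wait_until_running "$jobid"
```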

Qsub - delaying/staggering job start with a job array

Is it possible to delay or stagger the start of jobs launched through a job array with qsub, e.g. qsub -t 1-4 launch.pbs?
I could do this by sleeping for a small but random amount of time in my PBS script, but I wonder whether there is a direct way to specify this to the scheduler through qsub.
Yes, it is possible.
From http://gridscheduler.sourceforge.net/htmlman/htmlman1/qsub.html :
-a date_time
Available for qsub and qalter only.
Defines or redefines the time and date at which a job
is eligible for execution. Date_time conforms to
[[CC]YY]MMDDhhmm[.SS], for the details, please see
Date_time in: sge_types(1).
If this option is used with qsub or if a corresponding
value is specified in qmon then a parameter named a and
the value in the format CCYYMMDDhhmm.SS will be passed
to the defined JSV instances (see -jsv option below or
find more information concerning JSV in jsv(1))
You can add this option inside your .pbs.
For example,
#PBS -a 1550
makes the task wait until 15:50; if it is too late to run at 15:50 today, it will run tomorrow; with
#PBS -a 010900
the task will run on the morning of the first day of the next month.
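If you want to stagger submissions from the shell side instead of hard-coding a time, one possible sketch (assuming GNU date for the -d option; stagger_time is a hypothetical helper):

```shell
#!/bin/bash
# Hypothetical stagger: give each submission a start time 5 minutes later
# than the previous one, in the [[CC]YY]MMDDhhmm format that -a expects.
stagger_time() {    # $1 = chunk index; requires GNU date for -d
    date -d "+$(( $1 * 5 )) minutes" +%Y%m%d%H%M
}
# usage:
#   for i in 1 2 3 4; do
#       qsub -a "$(stagger_time "$i")" -t "$i" launch.pbs
#   done
```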

wait for already finished jobs

I launch a pbs script once others are completed. For that I use this commands:
$ job1=$(qsub job1.pbs)
$ jobN=$(qsub jobN.pbs)
$ qsub -W depend=afterok:$job1:$jobN join.pbs
This works in most cases. However, if I run the joining script when job1 and jobN have already finished, it goes idle indefinitely, waiting for the already-finished jobs to finish. That sounds insane, but this is what happens. If I run qstat I can clearly see that my joining job is being held ('H'):
$ qstat -u me
Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time
--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
1990613 me workq join.pbs -- 1 1 -- -- H --
However, if at least one of the jobs is still running while the other has already finished, the joining script will not go idle and will finish.
So what are the solutions for dealing with jobs that are already over? We clearly need this job to run.
When the join job is submitted, the server still needs to know about the depended-upon jobs; if either of them has already disappeared from qstat, the dependency can never be satisfied and the hold will never be released. In that case you need to increase keep_completed via qmgr so finished jobs remain visible longer.
To check: $ qmgr -c 'print server keep_completed'
To add/modify: $ qmgr -c 'set server keep_completed=300'
(I also believe you can set keep_completed on queues.)
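As a complementary workaround, a hypothetical wrapper can drop dependencies on jobs the server no longer knows about before submitting join.pbs (build_dep_arg is an assumed helper name; raising keep_completed remains the proper fix):

```shell
#!/bin/bash
# Sketch: build the depend argument only from jobs qstat still reports,
# so join.pbs is never held on a job that has already been purged.
build_dep_arg() {       # args: job IDs; prints a -W option, or nothing
    local deps=""
    for j in "$@"; do
        qstat "$j" >/dev/null 2>&1 && deps="$deps:$j"
    done
    [ -n "$deps" ] && echo "-W depend=afterok$deps"
    return 0
}
# usage: qsub $(build_dep_arg "$job1" "$jobN") join.pbs
```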

How do I create a Stack or LIFO for GNU Parallel in Bash

While my original problem was solved in a different manner (see the comment thread under this question, as well as the edits to it), I was able to create a stack/LIFO for GNU Parallel in Bash. So I have edited my background/question to reflect a situation where it could be needed.
Background
I am using GNU Parallel to process files with a Bash script. As the files are processed, more files are created and new commands need to be added to parallel's list. I am not able to give parallel a complete list of commands, as information is generated as the initial files are processed.
I need a way to add the lines to parallel's list while it is running.
Parallel will also need to wait for a new line if nothing is in the queue and exit once the queue is finished.
Solution
First I created a fifo:
mkfifo /tmp/fifo
Next I created a bash script that cats the FIFO and pipes the output to parallel, which checks for the end_of_file line. (I wrote this with help from the accepted answer as well as from here.)
#!/bin/bash
while true;
do
cat /tmp/fifo
done | parallel --ungroup --gnu --eof "end_of_file" "{}"
Then I write to the pipe with this command, adding lines to parallel's queue:
echo "command here" > /tmp/fifo
With this setup, all new commands are added to the queue. Once the queue is full parallel will begin processing it. This means that if you have slots for 32 jobs (32 processors), then you will need to add 32 jobs in order to start the queue.
If parallel is occupying all of its processors, it will put the job on hold until a processor becomes available.
By using the --ungroup argument, parallel will process/output jobs as they are added to the queue once the queue is full.
Without the --ungroup argument, parallel holds back the output of completed jobs until enough additional jobs have started; the man page excerpt below describes this in detail.
From http://www.gnu.org/software/parallel/man.html#EXAMPLE:-GNU-Parallel-as-queue-system-batch-manager
There is a small issue when using GNU parallel as queue system/batch manager: You have to submit JobSlot number of jobs before they will start, and after that you can submit one at a time, and job will start immediately if free slots are available. Output from the running or completed jobs are held back and will only be printed when JobSlots more jobs has been started (unless you use --ungroup or -u, in which case the output from the jobs are printed immediately). E.g. if you have 10 jobslots then the output from the first completed job will only be printed when job 11 has started, and the output of second completed job will only be printed when job 12 has started.
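The FIFO mechanics above can be demonstrated end to end with a plain shell read loop standing in for parallel, so the sketch runs anywhere; in practice you would swap the consumer for the while/cat/parallel pipeline shown earlier:

```shell
#!/bin/bash
# Runnable demonstration of the FIFO-fed queue. A `read` loop plays the role
# of `parallel` here purely for illustration.
fifo=$(mktemp -u)      # note: mktemp -u only generates a name (minor race)
out=$(mktemp)
mkfifo "$fifo"

# consumer: run each queued command until the sentinel line arrives
while IFS= read -r line; do
    [ "$line" = "end_of_file" ] && break
    eval "$line"
done < "$fifo" &
consumer=$!

exec 3> "$fifo"                  # hold the write end open between commands
echo "echo job1 >> $out" >&3     # enqueue work, as with echo ... > /tmp/fifo
echo "echo job2 >> $out" >&3
echo "end_of_file" >&3           # the --eof sentinel shuts the queue down
exec 3>&-
wait "$consumer"
rm -f "$fifo"
```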

DataStage: how to run multiple instances of the same job in parallel using dsjob

I have a question.
I want to run multiple instances of the same job in parallel from within a script: I have a loop in which I invoke jobs with dsjob, without the "-wait" and "-jobstatus" options.
I want the jobs to complete before the script terminates, but I don't know how to verify whether a job instance has terminated.
I thought of using the wait command, but it is not appropriate here.
Thanks in advance
First, make sure the job compile option "Allow Multiple Instance" is selected.
Second:
#!/bin/bash
. /home/dsadm/.bash_profile
INVOCATION=(1 2 3 4 5)
cd $DSHOME/bin
for id in "${INVOCATION[@]}"
do
./dsjob -run -mode NORMAL -wait test demo.$id
done
project -- test
job -- demo
$id -- invocation ID
The first two lines of the shell script guarantee that the environment is set up so dsjob can run.
Run the jobs as you describe, without -wait, and then loop, running dsjob -jobinfo and parsing the output for a job status of 1 or 2. When all jobs return one of these statuses, they are all finished.
You might find, though, that you check the status of a job before it actually starts running and pick up an old status. You can fix this by first resetting the job instance and waiting for a status of "Not running" before running the job.
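The poll-and-parse idea can be sketched as follows; the exact wording of the dsjob -jobinfo "Job Status" line varies by DataStage version, so the format parsed here (and the is_finished helper name) is an assumption:

```shell
#!/bin/bash
# Sketch: decide whether one dsjob -jobinfo report shows a finished run.
# Status codes 1 (RUN OK) and 2 (RUN with warnings) mean the run completed.
is_finished() {     # $1 = dsjob -jobinfo output; parses the (N) code
    code=$(echo "$1" | awk -F'[()]' '/Job Status/ {print $2; exit}')
    [ "$code" = "1" ] || [ "$code" = "2" ]
}
# usage:
#   for id in 1 2 3 4 5; do
#       until is_finished "$(./dsjob -jobinfo test demo.$id)"; do sleep 30; done
#   done
```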
Invoke the jobs in a loop without the -wait or -jobstatus options.
After the loop, check the jobs' status with the dsjob command.
Example: dsjob -jobinfo projectname jobname.invocationid
You can write one more loop for this check and use the sleep command inside it,
then write your further logic based on the status of the jobs.
However, it is better to create a job sequence to invoke this multi-instance job simultaneously with different invocation IDs:
create one sequence job if these runs are part of the same process, or
create different sequences, or directly create different scripts, to trigger these jobs simultaneously with their invocation IDs and schedule them at the same time.
The best option is to create a standard, generalized script in which everything is created or given a value from command-line parameters.
Example: log files based on jobname + invocation ID.
Then schedule the same script with different parameters or invocations.
