I'm not certain what the problem is with my Slurm script. The error messages I'm receiving are "ambiguous redirect" for my $input, and "command not found" where I'm trying to define my variables.
#!/bin/bash
#SBATCH --job-name=gim
#SBATCH --time=24:00:00
#SBATCH --ntasks=20
#SBATCH --ntasks-per-node=2
#SBATCH --cpus-per-task=1
#SBATCH -o output_%A_%a.out #Standard Output
#SBATCH -e output_%A_%a.err #Standard Error
module load program
input= gim${SLURM_ARRAY_TASK_ID}.gjf
output= gim${SLURM_ARRAY_TASK_ID}.log
program $input > $output
The way I run it is:
sbatch --array=1-500 ./slurm.job
Whitespace matters:
#!/bin/bash
# ...etc...
input=gim${SLURM_ARRAY_TASK_ID}.gjf
output=gim${SLURM_ARRAY_TASK_ID}.log
program "$input" > "$output"
Note the lack of spaces surrounding the = sign in the assignments. The placement of whitespace changes the meaning entirely:
foo = bar # this runs "foo" with "=" as the first argument and "bar" as the second
foo =bar # this runs "foo" with "=bar" as its first argument
foo= bar # this runs "bar" as a command with "foo" set to the empty string in its environment
foo=bar # this assigns the value "bar" to the shell variable "foo"
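To see how this maps to the original errors: with the space, bash runs gim1.gjf as a command (hence "command not found"), and since $input and $output remain unset, the redirection target is empty (hence "ambiguous redirect"). A minimal reproduction, assuming a task ID of 1:
$ input= gim1.gjf
bash: gim1.gjf: command not found
$ program $input > $output      # both variables unset
bash: $output: ambiguous redirect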
I would like my shell script to fail if a specific command fails, BUT in any case run the entire script. So I thought about using return 1 at the end of the command I want to "catch", and maybe adding a condition at the end like if return 1; then exit 1. I'm a bit lost as to how this should look.
#!/bin/bash
command 1
# I want THIS command to make the script fail.
# It runs test and in parallel regex
command 2 &
# bash regex that HAS to run in parallel with command 2
regex='^ID *\| *([0-9]+)'
while ! [[ $(command 3) =~ $regex ]] && jobs -rp | awk 'END{exit(NR==0)}'
do
    sleep 1
done
...
# Final command for creation of a report
# Script has to run this command also if command 2 fails
command 4
trap is your friend, but you have to be careful of those background tasks.
$: cat tst
#!/bin/bash
trap 'let err++' ERR
{ trap 'let err++' ERR; sleep 1; false; } & pid=$!
for str in foo bar baz ;do echo $str; done
wait $pid
echo happily done
exit $err
$: ./tst && echo ok || echo no
foo
bar
baz
happily done
no
Just make sure you test all your logic.
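Applied to the script in the question (command 1 through command 4 are the question's placeholders), the same pattern might look like this:
#!/bin/bash
trap 'let err++' ERR

command 1

# run command 2 in the background so the regex loop can poll in parallel
{ trap 'let err++' ERR; command 2; } & pid=$!

regex='^ID *\| *([0-9]+)'
while ! [[ $(command 3) =~ $regex ]] && jobs -rp | awk 'END{exit(NR==0)}'
do
    sleep 1
done

wait $pid        # collect command 2's exit status; a failure fires the ERR trap

command 4        # the report runs regardless of earlier failures

exit ${err:-0}   # nonzero if anything above failed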
Problem: Inspired by this thread, I'm trying to write a wrapper script that submits SLURM array jobs with bash variables. However, I'm running into issues with SLURM environment variables like $SLURM_ARRAY_TASK_ID, which comes through as an empty variable.
I suspect it has something to do with how test_wrapper.sh parses the as-yet-undefined SLURM variable, but I can't seem to find a solution.
Below I provide a working example with a simple python script that should take an array ID as an input variable; when it is called by the bash wrapper script, the python script crashes because it receives an empty variable.
test_wrapper.sh :
#!/bin/bash
for argument in "$@"
do
    key=$(echo $argument | cut -f 1 -d'=')
    value=$(echo $argument | cut -f 2 -d'=')
    case "$key" in
        "job_name") job_name="$value" ;;
        "cpus") cpus="$value" ;;
        "memory") memory="$value" ;;
        "time") time="$value" ;;
        "array") array="$value" ;;
        *)
    esac
done
sbatch <<EOT
#!/bin/bash
#SBATCH --account=foobar
#SBATCH --cpus-per-task=${cpus:-1}
#SBATCH --mem-per-cpu=${memory:-1}GB
#SBATCH --time=${time:-00:01:00}
#SBATCH --array=${array:-1-2}
#SBATCH --job-name=${job_name:-Default_Job_Name}
if [ -z "$SLURM_ARRAY_TASK_ID" ]
then
    echo "The array ID \$SLURM_ARRAY_TASK_ID is empty"
else
    echo "The array ID \$SLURM_ARRAY_TASK_ID is NOT empty"
fi
srun python foo.py -a $SLURM_ARRAY_TASK_ID
echo "Job finished with exit code $?"
EOT
where foo.py is:
import argparse

def main(args):
    print('array number is : {}'.format(args.array_number))

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("-a", "--array_number",
                        help="the value passed from SLURM_ARRAY_TASK_ID"
                        )
    args = parser.parse_args()
    main(args)
cat slurm-123456789_1.out yields:
The array ID 1 is empty
usage: foo.py [-h] [-a ARRAY_NUMBER]
foo.py: error: argument -a/--array_number: expected one argument
srun: error: nc10931: task 0: Exited with exit code 2
Job finished with exit code 0
I find it strange that "The array ID 1 is empty" correctly prints the $SLURM_ARRAY_TASK_ID (??)
So according to this page:
Job arrays will have two additional environment variables set. SLURM_ARRAY_JOB_ID will be set to the first job ID of the array. SLURM_ARRAY_TASK_ID will be set to the job array index value.
That suggests to me that sbatch is supposed to set these for you. In that case, you need to escape all instances of $SLURM_ARRAY_TASK_ID in the script you pass via the heredoc so that they don't get prematurely substituted before sbatch can set the relevant environment variable.
The two options for this are:
If you don't want any expansions to occur at all, quote the heredoc delimiter.
sbatch <<"EOT"
<your script here>
EOT
If you need some expansions to occur but want to disable others, then escape the ones that should not be expanded by putting a \ in front of them, as you have done in your existing script.
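A quick illustration of the difference, using a throwaway variable:
name=world

cat <<EOT
unquoted delimiter: $name expands here, \$name stays literal
EOT
# prints: unquoted delimiter: world expands here, $name stays literal

cat <<"EOT"
quoted delimiter: $name is not expanded at all
EOT
# prints: quoted delimiter: $name is not expanded at all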
Thanks to the feedback posted in the comments I was able to fix the issue. Posting a "fixed" version of the wrapper script below.
In short, the solution is to escape $SLURM_ARRAY_TASK_ID.
#!/bin/bash
for argument in "$@"
do
    key=$(echo $argument | cut -f 1 -d'=')
    value=$(echo $argument | cut -f 2 -d'=')
    case "$key" in
        "job_name") job_name="$value" ;;
        "cpus") cpus="$value" ;;
        "memory") memory="$value" ;;
        "time") time="$value" ;;
        "array") array="$value" ;;
        *)
    esac
done
{ tee /dev/stderr | sbatch; } <<EOT
#!/bin/bash
#SBATCH --account=foobar
#SBATCH --cpus-per-task=${cpus:-1}
#SBATCH --mem-per-cpu=${memory:-1}GB
#SBATCH --time=${time:-00:01:00}
#SBATCH --array=${array:-1-2}
#SBATCH --job-name=${job_name:-Default_Job_Name}
if [ -z "\$SLURM_ARRAY_TASK_ID" ]
then
    echo "The array ID \$SLURM_ARRAY_TASK_ID is empty"
else
    echo "The array ID \$SLURM_ARRAY_TASK_ID is NOT empty"
fi
python foo.py -a \$SLURM_ARRAY_TASK_ID
EOT
cat slurm-123456789_1.out yields:
The array ID 1 is NOT empty
array number is : 1
Note: the { tee /dev/stderr | sbatch; } is not necessary, but it is very useful for debugging (thanks, Charles Duffy).
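An example invocation (the argument values here are arbitrary):
$ ./test_wrapper.sh job_name=my_test cpus=2 memory=4 time=01:00:00 array=1-10
With the tee /dev/stderr in place, the fully rendered job script is echoed to the terminal as it is submitted, so you can see exactly what sbatch received.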
Say I have a bash script and I want some variables to appear when sourced and others to only be accessible from within the script (both functions and variables). What's the convention to achieve this?
Let's say test.sh is your bash script. What you can do is extract all the common items and put them in common.sh, which can be sourced by other scripts.
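A minimal sketch of that layout (file and identifier names are placeholders):
# common.sh -- items meant to be visible wherever it is sourced
shared_var=42
shared_func() { echo "I am shared"; }

# test.sh -- sources the common part; everything else stays here
source ./common.sh
internal_func() { echo "defined only while test.sh runs"; }
internal_func
Note this separation only helps if test.sh is executed rather than sourced; the BASH_SOURCE approach below shows how to detect which is happening.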
The BASH_SOURCE array helps you here:
Consider this script, source.sh
#!/bin/bash
if [[ ${BASH_SOURCE[0]} == "$0" ]]; then
    # this code is run when the script is _executed_
    foo=bar
    privFunc() { echo "running as a script"; }
    main() {
        privFunc
        publicFunc
    }
fi

# this code is run when script is executed or sourced
answer=42
publicFunc() { echo "Hello, world!"; }

echo "$0 - ${BASH_SOURCE[0]}"

[[ ${BASH_SOURCE[0]} == "$0" ]] && main
Running it:
$ bash source.sh
source.sh - source.sh
running as a script
Hello, world!
Sourcing it:
$ source source.sh
bash - source.sh
$ declare -p answer
declare -- answer="42"
$ declare -p foo
bash: declare: foo: not found
$ publicFunc
Hello, world!
$ privFunc
bash: privFunc: command not found
$ main
bash: main: command not found
I declare functions in one shell file,
# a.sh
foo() { ... }
function bar() { ... }
and import them in another shell file with source:
# b.sh
source ./a.sh
# invoke foo and bar
foo
bar
Now in the shell, I can use foo/bar after executing b.sh
$ source b.sh
...
# I can call foo or bar now in the shell (undesirable)
$ foo
...
How can I make the functions local to the scope of the importing file, and keep them from contaminating the global/environment namespace?
There's no such thing as "file scope" in shell -- just global scope and function scope. The closest you can come is running b.sh in another shell:
$ b.sh # run b.sh rather than reading it into the current shell
then everything in b.sh will exist only in that other shell and will "go away" when it exits. But that applies to everything defined in b.sh -- all functions, aliases, environment and other variables.
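For example, the expected behavior after running it that way:
$ b.sh
...
$ foo
bash: foo: command not found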
It is possible to isolate private shell functions this way.
# sourced a.sh
# my_public_a is exposed as public
my_public_a() (
    private_a() {
        echo "I am private_a only visible to my_public_a"
    }
    private_b() {
        echo "I am private_b only visible to my_public_a"
    }
    case "$1" in
        a) private_a;;
        b) private_b;;
        *) exit;;
    esac
)
# b.sh
source a.sh
my_public_a a
my_public_a b
private_a # command not found
private_b # command not found
Even though bash does not provide direct support, what you need is still achievable:
#!/usr/bin/env bash
# b.sh
if [[ "${BASH_SOURCE[0]}" = "$0" ]]; then
    source ./a.sh
    # invoke foo and bar
    foo
    bar
else
    echo "b.sh is being sourced. foo/bar will not be available."
fi
Above is not 100% reliable, but should cover most cases.
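The expected behavior, assuming a.sh defines foo and bar as above:
$ bash b.sh        # executed: a.sh is sourced and foo/bar run
...
$ source b.sh
b.sh is being sourced. foo/bar will not be available.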
This is my file perl5lib.sh:
export PERL5LIB=`cat |tr '\n' ':'`<<EOF
/home/vul/repository/projects/implatform/daemon/trunk/lib/
/home/vul/repository/projects/platformlib/tool/trunk/cpan_lib
/home/projects/libtrololo
I want to source the file as
. perl5lib.sh
to populate the PERL5LIB variable, but it hangs. What is wrong?
My goal is to leave the folder names at the end of the file, so that I can add new ones simply with:
echo dirname >> myscript
I have tried export PERL5LIB=$(echo blabla) and cat <<EOF; both work separately, but not together.
=================== THE SOLUTION ============================
A function does the trick!
function fun
{
    export PERL5LIB=`cat | tr '\n' ':'`
}
fun<<EOF
/dir1/lib
/dir2/lib
/dir3/lib
EOF
cat is useless here. Put the here document inside the command substitution:
#! /bin/bash
export PERL5LIB=$(tr '\n' ':'<<EOF
/home/vul/repository/projects/implatform/daemon/trunk/lib/
/home/vul/repository/projects/platformlib/tool/trunk/cpan_lib
/home/projects/libtrololo
EOF
)
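Sourcing that version then behaves as intended (note the trailing colon produced by the final newline):
$ . perl5lib.sh
$ echo "$PERL5LIB"
/home/vul/repository/projects/implatform/daemon/trunk/lib/:/home/vul/repository/projects/platformlib/tool/trunk/cpan_lib:/home/projects/libtrololo: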
What you call "EOF" can be googled as "here document". A here document can only be used to feed the standard input of a command.
The example below does what you want without spawning child processes:
#!/bin/bash
multilineconcat=
while read line; do
    #echo $line
    multilineconcat+=$line
    multilineconcat+=":"
done << EOF
path1
path2
EOF
echo $multilineconcat
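A slightly more robust variant of the same loop, guarding against backslashes and stray whitespace in the paths:
#!/bin/bash
multilineconcat=
while IFS= read -r line; do     # -r keeps backslashes; IFS= keeps leading/trailing spaces
    multilineconcat+=$line:
done <<EOF
path1
path2
EOF
echo "$multilineconcat"         # path1:path2: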
Isn't it waiting for the EOF in your heredoc?
I'd expect you to say
$ mycommand <<EOF
input1
input2
...
EOF
Note that EOF isn't a magic keyword. It's just a marker, and could be anything (ABC etc.). It indicates the end-of-file, but people simply write EOF as convention.