bsub option confused with job arguments

I want to submit a job to LSF using the bsub command. One of the job's arguments is "-P argument_1", so the overall command looks like
bsub -P project_name -n 4 -W 10:00 my_job -P argument_1
But bsub takes -P argument_1 as the project_name instead of treating it as an argument of my_job.
Is there any way to resolve this issue?

What version of LSF are you using? You can check by running lsid. Try quoting your command and see if that helps:
bsub -P project_name -n 4 -W 10:00 "my_job -P argument_1"

Use a submission script, script.sh, that includes my_job -P placeholder_arg1. Then use
sed 's/placeholder_arg1/argument_1/g' < script.sh | bsub
to replace the command-line argument on the fly before submitting the job.
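For reference, one possible sketch of such a script.sh, assuming you also move the submission options into #BSUB directives so they survive the pipe (the values here simply mirror the command line above and are illustrative):
#!/bin/bash
#BSUB -P project_name
#BSUB -n 4
#BSUB -W 10:00
# the placeholder below is what the sed call rewrites before piping to bsub
my_job -P placeholder_arg1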

Related

How to pass parameter expansions into qsub?

I'm trying to use qsub to submit multiple parallel jobs, but I'm running into trouble with passing parameter substitutions into qsub. I'm using the -V option, but it doesn't seem to recognize what ${variable} is. Here's some code I tried running:
qsub -cwd -V -pe shared 4 -l h_data=8G,h_rt=00:10:00,highp -N bt2align3 -b y "projPath="$SCRATCH/CUTnTag/data_kayaokur2020"; sample="K4m3_rep1"; cores=8;
bowtie2 --end-to-end --very-sensitive --no-mixed --no-discordant --phred33 -I 10 -X 700
-p ${cores}
-x ${projPath}/bowtie2_index/GRCh38_noalt_analysis/GRCh38_noalt_as
-1 ${projPath}/raw_fastq/${sample}_R1.fastq.gz
-2 ${projPath}/raw_fastq/${sample}_R2.fastq.gz
-S ${projPath}/alignment/sam/${sample}_bowtie2.sam &> ${projPath}/alignment/sam/bowtie2_summary/${sample}_bowtie2.txt"
I just get an error that says "Invalid null command."
Is qsub not able to recognize parameter expansions? Is there a different syntax I should be using? Thanks.

Shell script works from command line but not from cron

I am using https://stackoverflow.com/a/42955871/308851 and it works from the command line but not from cron. I even tried running the script with env -i, but it stubbornly keeps working.
#!/bin/bash
filename=$(date '+%Y-%m-%d').gz
docker exec -t elastic_db.1.$(docker service ps -f 'name=elastic_db.1' elastic_db -q --no-trunc | head -n1) mysqldump example |gzip -9 > /container/$filename
docker exec -t elastic_drupal.1.$(docker service ps -f 'name=elastic_drupal.1' elastic_drupal -q --no-trunc |head -n1) rclone --config /etc/rclone.conf move /app/$filename example:example/dump/
This compresses a 0-byte file when run from cron but works just fine otherwise. What am I doing wrong?
Gordon Davisson's comment is correct: changing docker to /usr/bin/docker worked.
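More generally, cron runs with a minimal PATH, so a hedged sketch of the two usual fixes looks like this (the PATH value and the script name are illustrative):
# crontab entry: give cron an explicit PATH, then schedule the script
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
15 3 * * * /usr/local/bin/backup-dump.sh
# alternatively, leave the crontab alone and call /usr/bin/docker
# (and any other binaries) by absolute path inside the script itself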

Direct group of commands into `docker exec`

I have the following command that works fine and prints foo before returning:
docker exec -i <id> /bin/sh < echo "echo 'foo'"
I want to direct multiple commands into the container with one pipe, for example echo 'foo' and ls /. I have tried the following:
This fails because it runs the commands on the host and pipes the output into the container:
{
echo "foo"
ls /
} | docker exec -i <id> /bin/sh
This one fails, but I would like to not use an array of strings anyway:
for COMMAND in 'echo "foo"' 'ls /'
do
docker exec -i <id> /bin/sh < echo $COMMAND
done
I've also tried several other methods like piping commands into tee or echo but haven't had any luck. If you would like to know why I want to do this seemingly ridiculous thing, it's because:
This is a short script that I would like to keep all in one place
I would like to use syntax highlighting, so I don't want to store it all in a list of strings
The container has the programs the script should run and the host does not
This is an automatic process that I would like to trigger with crontab on the host
You can run a group of commands in the following fashion:
docker exec -i <id> /bin/sh -c 'echo "foo"; ls -l'
OR
docker exec -i 996eee5d121d /bin/sh -c 'echo 'foo'; ls -l'
OR
docker exec -i 996eee5d121d /bin/sh -c 'echo foo; ls -l'
If you want to run more than 2 commands, just append ; after each command like
docker exec -i 996eee5d121d /bin/sh -c 'echo "foo"; ls -l; ls -a'
Use a here document.
docker run -i --rm alpine /bin/sh <<EOF
echo abc
ls /
EOF
Note the difference between quoted and unquoted here document delimiter.
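As a small illustration of that difference (a sketch, using $HOSTNAME only as an example variable): with an unquoted delimiter the host shell expands the variable before docker ever sees it, while a quoted delimiter passes the text through literally so it is expanded by the shell inside the container.
# unquoted delimiter: $HOSTNAME is expanded on the host
docker run -i --rm alpine /bin/sh <<EOF
echo "host says: $HOSTNAME"
EOF
# quoted delimiter: $HOSTNAME reaches the container untouched
docker run -i --rm alpine /bin/sh <<'EOF'
echo "container says: $HOSTNAME"
EOF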
docker exec -i <id> /bin/sh < echo "echo 'foo'"
I think you meant to do:
docker exec -i <id> /bin/sh < <(echo "echo 'foo'")
which is just the same as:
docker exec -i <id> /bin/sh <<<"echo 'foo'"
Edit: There is a cool little trick. The idea is to pipe the script itself, except its first lines, to another subprocess; it's sometimes used by installer scripts:
#!/bin/sh
# output this script except first 4 lines to docker
tail -n+5 "$0" | docker run -i --rm alpine /bin/sh -x
exit # we exit original script
#!/bin/sh
# inside docker now
echo abc
ls /
Execution:
$ bash -x ./script.sh
+ tail -n+5 ./script.sh
+ docker run -i --rm alpine /bin/sh -x
+ echo abc
+ ls /
abc
bin
...
var
+ exit
In a similar fashion you could use sed or another parsing tool to extract only the relevant part between some markers, for example.
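A possible sketch of that idea, using made-up #DOCKER_BEGIN / #DOCKER_END marker comments (both marker names are just placeholders for this illustration):
#!/bin/sh
# send only the marked region of this very file to the container
sed -n '/^#DOCKER_BEGIN$/,/^#DOCKER_END$/p' "$0" | docker run -i --rm alpine /bin/sh
exit
#DOCKER_BEGIN
echo abc
ls /
#DOCKER_END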
I found a gist that explained how to pipe commands into docker exec:
echo "echo foo" | docker exec -i <id> /bin/sh -
Now we need a way to pipe multiple commands. Command groups won't work because they run on the host, and semicolon-separated commands can get messy. I thought of writing a function and getting just its body; it turns out you can do that with a simple declare and sed call.
You can combine all these pieces to pipe a command into the container:
function func {
echo "foo"
ls /
}
declare -f func | sed '1,2d;$d' | docker exec -i <id> /bin/bash -
Syntax highlighting still works in the function and it is easy to read.
If you want to use environment variables from the host inside the container, you have to list them manually in docker exec, like so:
... | docker exec -i -e VAR=$VAR <id> /bin/bash -
Edit: I'm leaving this here as a possible solution, but the accepted answer is the proper solution I am using.

bash config file from remote source with an argument [duplicate]

Say I have a file at the URL http://mywebsite.example/myscript.txt that contains a script:
#!/bin/bash
echo "Hello, world!"
read -p "What is your name? " name
echo "Hello, ${name}!"
And I'd like to run this script without first saving it to a file. How do I do this?
Now, I've seen the syntax:
bash < <(curl -s http://mywebsite.example/myscript.txt)
But this doesn't seem to work as it would if I saved it to a file and then executed it. For example, readline doesn't work, and the output is just:
$ bash < <(curl -s http://mywebsite.example/myscript.txt)
Hello, world!
Similarly, I've tried:
curl -s http://mywebsite.example/myscript.txt | bash -s --
With the same results.
Originally I had a solution like:
timestamp=`date +%Y%m%d%H%M%S`
curl -s http://mywebsite.example/myscript.txt -o /tmp/.myscript.${timestamp}.tmp
bash /tmp/.myscript.${timestamp}.tmp
rm -f /tmp/.myscript.${timestamp}.tmp
But this seems sloppy, and I'd like a more elegant solution.
I'm aware of the security issues regarding running a shell script from a URL, but let's ignore all of that for right now.
source <(curl -s http://mywebsite.example/myscript.txt)
ought to do it. Alternatively, leave off the initial redirection on yours, which is redirecting standard input; bash takes a filename to execute just fine without redirection, and the <(command) syntax provides a path.
bash <(curl -s http://mywebsite.example/myscript.txt)
It may be clearer if you look at the output of echo <(cat /dev/null)
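For instance, you would see something like this (the exact file-descriptor number varies):
$ echo <(cat /dev/null)
/dev/fd/63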
This is the way to execute a remote script, passing it some arguments (arg1 arg2):
curl -s http://server/path/script.sh | bash /dev/stdin arg1 arg2
For bash, Bourne shell and fish:
curl -s http://server/path/script.sh | bash -s arg1 arg2
Flag "-s" makes shell read from stdin.
Use:
curl -s -L URL_TO_SCRIPT_HERE | bash
For example:
curl -s -L http://bitly/10hA8iC | bash
Using wget, which is usually part of default system installation:
bash <(wget -qO- http://mywebsite.example/myscript.txt)
You can also do this:
wget -O - https://raw.github.com/luismartingil/commands/master/101_remote2local_wireshark.sh | bash
The best way to do it is
curl http://domain/path/to/script.sh | bash -s arg1 arg2
which is a slight change of the answer by user77115
You can use curl and send it to bash like this:
bash <(curl -s http://mywebsite.example/myscript.txt)
I often use the following, which is enough:
curl -s http://mywebsite.example/myscript.txt | sh
But on an old system (kernel 2.4) it runs into problems, and doing the following solves them. I tried many others; only the following works:
curl -s http://mywebsite.example/myscript.txt -o a.sh && sh a.sh && rm -f a.sh
Examples
$ curl -s someurl | sh
Starting to insert crontab
sh: _name}.sh: command not found
sh: line 208: syntax error near unexpected token `then'
sh: line 208: ` -eq 0 ]]; then'
$
The problem may be caused by a slow network, or by a bash version too old to handle a slow network gracefully.
However, the following solves the problem
$ curl -s someurl -o a.sh && sh a.sh && rm -f a.sh
Starting to insert crontab
Insert crontab entry is ok.
Insert crontab is done.
okay
$
Also:
curl -sL https://.... | sudo bash -
Just combining amra and user77115's answers:
wget -qO- https://raw.githubusercontent.com/lingtalfi/TheScientist/master/_bb_autoload/bbstart.sh | bash -s -- -v -v
It executes the remote bbstart.sh script, passing it the -v -v options.
In some unattended scripts I use the following command:
sh -c "$(curl -fsSL <URL>)"
I recommend avoiding executing scripts directly from URLs. You should be sure the URL is safe and check the content of the script before executing it; you can use a SHA256 checksum to validate the file before executing.
Instead of executing the script directly, first download it and then execute it:
SOURCE='https://gist.githubusercontent.com/cci-emciftci/123123/raw/123123/sample.sh'
curl $SOURCE -o ./my_sample.sh
chmod +x my_sample.sh
./my_sample.sh
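If you want the checksum validation mentioned above, a possible sketch looks like this (the EXPECTED_SHA256 value is a placeholder for the hash published alongside the script):
SOURCE='https://gist.githubusercontent.com/cci-emciftci/123123/raw/123123/sample.sh'
EXPECTED_SHA256='<published sha256 of sample.sh>'
curl -s "$SOURCE" -o ./my_sample.sh
# abort unless the downloaded file matches the published checksum
echo "${EXPECTED_SHA256}  ./my_sample.sh" | sha256sum -c - || exit 1
chmod +x my_sample.sh
./my_sample.sh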
This way is good and conventional:
17:04:59#itqx|~
qx>source <(curl -Ls http://192.168.80.154/cent74/just4Test) Lord Jesus Loves YOU
Remote script test...
Param size: 4
---------
17:19:31#node7|/var/www/html/cent74
arch>cat just4Test
echo Remote script test...
echo Param size: $#
If you want the script run using the current shell, regardless of what it is, use:
${SHELL:-sh} -c "$(wget -qO - http://mywebsite.example/myscript.txt)"
if you have wget, or:
${SHELL:-sh} -c "$(curl -Ls http://mywebsite.example/myscript.txt)"
if you have curl.
This command will still work if the script is interactive, i.e., it asks the user for input.
Note: OpenWRT has a wget clone but not curl, by default.
bash | curl http://your.url.here/script.txt
actual example:
juan@juan-MS-7808:~$ bash | curl https://raw.githubusercontent.com/JPHACKER2k18/markwe/master/testapp.sh
Oh, wow im alive
juan@juan-MS-7808:~$

Changing script from PBS to SLURM

I have just switched from PBS to SLURM and am trying to change my script accordingly. Before, it looked something like:
qsub -N $JK -e $LOGDIR/JK_MASTER.error -o $LOGDIR/JK_MASTER.log -v
Z="$ZBIN",NBINS="$nbins",MIN="$Theta_min" submit_MASTER_analysis.sh
Now I need something like:
sbatch --job-name=$JK -e $LOGDIR/JK_MASTER.error -o $LOGDIR/JK_MASTER.log --export=Z="$ZBIN",NBINS="$nbins",MIN="$Theta_min"
submit_MASTER_analysis.sh
But for some reason this is not quite executing the job; I think it's a problem with the variables.
I have found out how to do this now, so I thought I'd better update the post for anyone else interested.
In my launch script I now have
sbatch --job-name=REALIZ_${R}_zbin${Z} \
--output=$RAND_DIR/RANDOM_MASTER_${R}_zbin${Z}.log \
--error=$RAND_DIR/RANDOM_MASTER_${R}_zbin${Z}.error \
--ntasks=1 \
--cpus-per-task=1 \
--ntasks-per-core=1 \
--threads-per-core=1 \
submit_RANDOMS_analysis.sh $JK $ZBIN $nbins $R $Theta_min 'LOW'
where $JK $ZBIN $nbins $R $Theta_min 'LOW' are the arguments I pass through to the script I am submitting to the queue, submit_RANDOMS_analysis.sh. These are then picked up in the submitted script, for instance the first argument as JK=$1.
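For completeness, a hedged sketch of how the top of submit_RANDOMS_analysis.sh might read the positional arguments (the variable names simply mirror the launch command; the last name is illustrative):
#!/bin/bash
JK=$1
ZBIN=$2
nbins=$3
R=$4
Theta_min=$5
RES_LEVEL=$6   # 'LOW' in the example call above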
