Invalid threads definition: entries have to be defined as RULE=THREADS pairs (with THREADS being a positive integer). Unparseable value

Did you notice that set-threads does not work with a recent version of Snakemake? It looks long, but you just have to copy/paste. Here is an MRE:
mkdir snakemake-test && cd snakemake-test
touch snakeFile
mkdir profile && touch profile/config.yaml && touch profile/status-sacct.sh && chmod +x profile/status-sacct.sh
mkdir envs && touch envs/environment1.yaml && touch envs/environment2.yaml
In envs/environment1.yaml:
channels:
- bioconda
- conda-forge
dependencies:
- snakemake-minimal=7.3.8
- pandas=1.4.2
- peppy=0.31.2
- eido=0.1.4
In envs/environment2.yaml:
channels:
- bioconda
- conda-forge
dependencies:
- snakemake-minimal=6.15.1
- pandas=1.4.2
- peppy=0.31.2
- eido=0.1.4
In snakeFile:
onstart:
    print("\t Creating jobs output subfolders...\n")
    shell("mkdir -p jobs/downloadgenome")

GENOME = "mm39"
PREFIX = "Mus_musculus.GRCm39"

rule all:
    input:
        expand("data/fasta/{genome}/{prefix}.dna.chromosome.1.fa", genome=GENOME, prefix=PREFIX)

rule downloadgenome:
    output:
        "data/fasta/{genome}/{prefix}.dna.chromosome.1.fa"
    params:
        genomeLinks = "http://ftp.ensembl.org/pub/release-106/fasta/mus_musculus/dna/Mus_musculus.GRCm39.dna.chromosome.1.fa.gz"
    threads: 4
    shell:
        """
        wget {params.genomeLinks}
        gunzip {wildcards.prefix}.dna.chromosome.1.fa.gz
        mkdir -p data/fasta/{wildcards.genome}
        mv {wildcards.prefix}.dna.chromosome.1.fa data/fasta/{wildcards.genome}
        """
In profile/config.yaml:
snakefile: snakeFile
latency-wait: 60
printshellcmds: True
max-jobs-per-second: 1
max-status-checks-per-second: 10
jobs: 400
jobname: "{rule}.{jobid}"
cluster: "sbatch --output=\"jobs/{rule}/slurm_%x_%j.out\" --error=\"jobs/{rule}/slurm_%x_%j.log\" --cpus-per-task={threads} --ntasks=1 --parsable" # --parsable added for handling the timeout exception
cluster-status: "./profile/status-sacct.sh" # Use to handle timeout exception, do not forget to chmod +x
set-threads:
- downloadgenome=2
In profile/status-sacct.sh:
#!/usr/bin/env bash
# Check status of Slurm job
jobid="$1"
if [[ "$jobid" == Submitted ]]
then
echo smk-simple-slurm: Invalid job ID: "$jobid" >&2
echo smk-simple-slurm: Did you remember to add the flag --parsable to your sbatch call? >&2
exit 1
fi
output=`sacct -j "$jobid" --format State --noheader | head -n 1 | awk '{print $1}'`
if [[ $output =~ ^(COMPLETED).* ]]
then
echo success
elif [[ $output =~ ^(RUNNING|PENDING|COMPLETING|CONFIGURING|SUSPENDED).* ]]
then
echo running
else
echo failed
fi
Now build the conda environments:
cd envs
conda env create -p ./smake --file environment1.yaml
conda env create -p ./smake2 --file environment2.yaml
cd ..
If you run the whole thing with smake2 (snakemake-minimal=6.15.1), it indeed runs the job with 2 CPUs:
conda activate envs/smake2
snakemake --profile profile/
conda deactivate
rm -r data
rm -r jobs
If you do the same thing with smake (snakemake-minimal=7.3.8), it will crash with the error: Invalid threads definition: entries have to be defined as RULE=THREADS pairs (with THREADS being a positive integer). Unparseable value: '{downloadgenome :'.
conda activate envs/smake
snakemake --profile profile/
more jobs/downloadgenome/*log
I have tried many things to solve the problem, without success...

This was indeed a bug and has been fixed in PR 1615.
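If you are stuck on an affected version, one possible stopgap (an untested sketch, a suggestion rather than part of the fix) is to drop set-threads from the profile and pass the override on the command line instead, since the --set-threads option takes RULE=THREADS pairs directly:
# Hypothetical workaround: remove set-threads from profile/config.yaml first,
# then pass the per-rule thread override on the command line.
conda activate envs/smake
snakemake --profile profile/ --set-threads downloadgenome=2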

Related

How to convert systemv startup scripts to systemd services?

Advice on converting a SysVinit file to systemd services would be helpful.
Currently, I am using a SysV init script which is executed after every boot on my STM32MP1-based Avenger96 board. Now I have to switch from SysVinit to systemd, but I am not sure how to convert the init file to the relevant systemd files. I am using Yocto with Ubuntu 20.04 as the build system. If someone could help me get started, that would be really great. Below are the init script and the image recipe, which establishes symlinks to the init script.
custom-script.sh, which is installed in the /etc/init.d/ directory of the rootfs:
#!/bin/sh

DAEMON="swupdate"
PIDFILE="/var/run/$DAEMON.pid"

PART_STATUS=$(sgdisk -A 4:get:2 /dev/mmcblk0)
if test "${PART_STATUS}" = "4:2:1" ; then
    ROOTFS=rootfs-2
else
    ROOTFS=rootfs-1
fi

if test -f /update-ok ; then
    SURICATTA_ARGS="-c 2"
    rm -f /update-ok
fi

start() {
    printf 'Starting %s: ' "$DAEMON"
    # shellcheck disable=SC2086 # we need the word splitting
    start-stop-daemon -b -q -m -S -p "$PIDFILE" -x "/usr/bin/$DAEMON" \
        -- -f /etc/swupdate/swupdate.cfg -L -e rootfs,${ROOTFS} -u "${SURICATTA_ARGS}"
    status=$?
    if [ "$status" -eq 0 ]; then
        echo "OK"
    else
        echo "FAIL"
    fi
    return "$status"
}

stop() {
    printf 'Stopping %s: ' "$DAEMON"
    start-stop-daemon -K -q -p "$PIDFILE"
    status=$?
    if [ "$status" -eq 0 ]; then
        rm -f "$PIDFILE"
        echo "OK"
    else
        echo "FAIL"
    fi
    return "$status"
}

restart() {
    stop
    sleep 1
    start
}

case "$1" in
    start|stop|restart)
        "$1";;
    reload)
        # Restart, since there is no true "reload" feature.
        restart;;
    *)
        echo "Usage: $0 {start|stop|restart|reload}"
        exit 1
esac
The image recipe, which creates the init.d directory, installs the above script, and establishes the symlinks in the rc4.d directory:
custom-image.bb
.
.
inherit update-rc.d
SRC_URI = "file://custom-script.sh \
"
S = "${WORKDIR}"
INITSCRIPT_PACKAGES = "${PN}"
INITSCRIPT_NAME = "custom-script.sh"
INITSCRIPT_PARAMS = "start 99 2 3 4 5 . "
do_install_append() {
    install -d ${D}${sysconfdir}/rc4.d
    install -d 644 ${D}${sysconfdir}/init.d
    install -m 0755 ${WORKDIR}/custom-script.sh ${D}${sysconfdir}/init.d
    ln -sf ../init.d/custom-script.sh ${D}${sysconfdir}/rc4.d/S99custom-script.sh
    ln -sf ../init.d/custom-script.sh ${D}${sysconfdir}/rc4.d/K99custom-script.sh
}
FILES_${PN} += "${sysconfdir}/init.d"
Now I am trying to implement the same functionality as custom-script.sh with systemd. Is it possible to make use of systemd-sysv-generator in this case?
Also, will the init.d directory be completely removed once we switch to systemd? What will happen to the other files which are present in /etc/init.d?
Can anyone please help me get started? Your help will be much appreciated. Thanks in advance.
P.S.: Please let me know if any info is missing here.
/etc/init.d will not get deleted by switching to systemd.
Have you checked /etc/systemd and /usr/lib/systemd on your Ubuntu machine for examples of systemd unit files? Along with the systemd manual pages, you should have enough examples to convert your SysV init script to systemd.
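As a starting point, here is a minimal unit file sketch; the unit name, the wrapper script path, and the target choices are assumptions for illustration, not a drop-in conversion of the script above:
# /etc/systemd/system/swupdate-custom.service (hypothetical name)
[Unit]
Description=swupdate with the currently active rootfs (custom)
After=local-fs.target network.target

[Service]
Type=simple
# /usr/bin/swupdate-wrapper.sh is hypothetical: it would contain the sgdisk
# rootfs-detection logic from the init script and end with something like
#   exec /usr/bin/swupdate -f /etc/swupdate/swupdate.cfg -L -e rootfs,${ROOTFS} -u "${SURICATTA_ARGS}"
ExecStart=/usr/bin/swupdate-wrapper.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
With a native unit you no longer need start-stop-daemon or the PID file, since systemd tracks the process itself; alternatively, systemd-sysv-generator can wrap an existing LSB-style init script automatically, but a unit like the above is usually the cleaner long-term option.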

Way to pass a script as a docker build argument?

I want to pass a multi-line script as an argument to the docker build command, something like this:
docker build -t tertparam --build-arg load_cat_agent=true --build-arg deploy_cat_script='
echo "aaa";
echo "bbb"
' --no-cache .
and execute it during the build. My Dockerfile is like this:
FROM python:3-alpine
ARG load_cat_agent
ARG deploy_cat_script
ADD . /root/
WORKDIR /root/
RUN if [ $load_cat_agent == "true" ]; then \
$deploy_cat_script;\
fi
CMD /root/start.sh && /root/wait.sh
but I found that it always just prints:
Step 6/7 : RUN if [ $load_cat_agent == "true" ]; then $deploy_cat_script; fi
---> Running in 7868c310e8e5
"aaa" echo "bbb"
How can I do that?
One way is to write the build arg to a shell script and then run that shell script.
FROM python:3-alpine
ARG load_cat_agent
ARG deploy_cat_script
ADD . /root/
WORKDIR /root/
RUN echo $deploy_cat_script > ./deploy_cat_script.sh
RUN chmod +x ./deploy_cat_script.sh
RUN if [ $load_cat_agent == "true" ]; then \
./deploy_cat_script.sh;\
fi
CMD /root/start.sh && /root/wait.sh
output:
Step 8/9 : RUN if [ $load_cat_agent == "true" ]; then ./deploy_cat_script.sh; fi
---> Running in 08a2f528a14d
aaa
bbb
If you have two images that are so different that the commands you need to build them are different, it's better to just have two separate Dockerfiles. The docker build -f command can specify which Dockerfile to use, and the Docker Compose build: block has a similar dockerfile: option.
# Dockerfile
FROM python:3-alpine
WORKDIR /root/
ADD . ./
CMD ["/root/start.sh"]
# Dockerfile.deploy
FROM python:3-alpine
WORKDIR /root/
ADD . ./
RUN echo "aaa" \
&& echo "bbb"
CMD ["/root/start.sh"]
If you don't mind needing to run docker build multiple times, you can have one image be built FROM the other. It will inherit its filesystem and metadata settings like the default CMD.
# Dockerfile.deploy, version 2
FROM tertparam
RUN echo "aaa" \
&& echo "bbb"
docker build -t tertparam .
docker build -t tertparam-deploy -f Dockerfile.deploy .
In your original example you might be able to get away with eval-ing the string, but that setup is complex enough that you'll need to script it anyway, so the Dockerfile-based approach probably isn't any more difficult.
The problem with $deploy_cat_script is that the shell parses command separators before it performs variable expansion, so the semicolons inside the expanded value are not treated as separators. One solution is to use eval. Make sure to learn about the security issues associated with the eval command before relying on it.
Dockerfile:
FROM python:3-alpine
ARG load_cat_agent
ARG deploy_cat_script
RUN set -x && if [ "$load_cat_agent" == "true" ]; then \
eval "$deploy_cat_script"; \
fi
Something more complicated, such as deploy_cat_script='for i in a b c; do echo $i | sed "s/^/test: /"; done', executes like so:
$ docker build -t tertparam --build-arg load_cat_agent=true --build-arg deploy_cat_script='for i in a b c; do echo $i | sed "s/^/test: /"; done' .
Sending build context to Docker daemon 7.168kB
Step 1/4 : FROM python:3-alpine
---> 59acf2b3028c
Step 2/4 : ARG load_cat_agent
---> Using cache
---> 6e383d31f589
Step 3/4 : ARG deploy_cat_script
---> Using cache
---> 04fc43723e0f
Step 4/4 : RUN set -x && if [ "$load_cat_agent" == "true" ]; then eval "$deploy_cat_script"; fi
---> Running in 72e46c08072e
+ '[' true '==' true ]
+ eval 'for i in a b c; do echo $i | sed "s/^/test: /"; done'
test: a
+ echo a
+ sed 's/^/test: /'
+ sed 's/^/test: /'
+ echo b
test: b
+ sed 's/^/test: /'
+ echo c
test: c
Removing intermediate container 72e46c08072e
---> 765de7cf22a1
Successfully built 765de7cf22a1
Successfully tagged tertparam:latest

If statement doesn't execute argument

File: /home/pi/initDisplay/initDisplay.sh
#!/usr/bin/env bash
#HDMI connection?
rm -f hdmi.name
tvservice -n 2>hdmi.name
HDMI_NAME=`cat hdmi.name`
echo $HDMI_NAME
if [ "$HDMI_NAME" == "[E] No device present" ]; then
LCD_ON=`cat /boot/config.txt | grep "#CONFIGURAZIONEHDMI"`
echo $LCD_ON
if [ "$LCD_ON" == "#CONFIGURAZIONEHDMI" ]; then
echo "reboot con la configurazione LCD"
sudo rm -f /boot/config.txt
sudo cp /boot/config_lcd.txt /boot/config.txt
sleep 2
sudo reboot -n
fi
else
HDMI_ON=`cat /boot/config.txt | grep "#CONFIGURAZIONELCD"`
echo $HDMI_ON
if [ $HDMI_ON == "#CONFIGURAZIONELCD" ]; then
echo "reboot con la configurazione HDMI"
sudo rm -f /boot/config.txt
sudo cp /boot/config_hdmi.txt /boot/config.txt
sleep 2
sudo reboot -n
fi
fi
The if statement that tests $LCD_ON never executes its body. When I run the script, it does not do what I expect. Currently it prints:
[E] no device detected
#CONFIGURAZIONEHDMI
but it does not go on to replace the file and reboot.
P.S.: The user and the file have the privileges to do it, and I have already run chmod 777 on the file.
There might be more on the line that matches, such as extra whitespace, so the equality test doesn't match exactly.
If you want to test whether a matching line exists in a file, you can just test the exit status of grep, rather than storing the output in a variable.
if grep -q "#CONFIGURAZIONEHDMI" /boot/config.txt; then
    echo "reboot con la configurazione LCD"
    sudo rm -f /boot/config.txt
    sudo cp /boot/config_lcd.txt /boot/config.txt
    sleep 2
    sudo reboot -n
fi
The -q option tells grep not to print the matching line, it just sets its exit status.
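Applied to both branches of the original script, the structure could look like this (a sketch that only rearranges commands already present in the question):
if [ "$HDMI_NAME" == "[E] No device present" ]; then
    # LCD configuration requested: switch to the LCD config and reboot
    if grep -q "#CONFIGURAZIONEHDMI" /boot/config.txt; then
        echo "reboot con la configurazione LCD"
        sudo rm -f /boot/config.txt
        sudo cp /boot/config_lcd.txt /boot/config.txt
        sleep 2
        sudo reboot -n
    fi
else
    # HDMI detected: switch to the HDMI config and reboot
    if grep -q "#CONFIGURAZIONELCD" /boot/config.txt; then
        echo "reboot con la configurazione HDMI"
        sudo rm -f /boot/config.txt
        sudo cp /boot/config_hdmi.txt /boot/config.txt
        sleep 2
        sudo reboot -n
    fi
fi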

Jenkins error :54: expecting anything but ''\n''; got it anyway

I am using the Apache Groovy script below in a Jenkins pipeline to deploy my artifact (dev.ear) to a server. I have embedded a shell script in the Groovy code to securely copy dev.ear from the Jenkins slave to the target server (a Unix server).
node('linux') {
    stage('Checkout/Download/Deploy') {
        timeout(time: 30, unit: 'MINUTES') {
            def ziptmp = '.ziptmp'
            output = sh returnStdout: true, script:"/bin/rm -rf ${ziptmp}; /bin/mkdir ${ziptmp}; cd ${ziptmp}; /usr/bin/unzip -qq ${tempdir}/${artifactFilename}; ls -ltr; echo *;
if [ -e dev.ear ]
then
scp dev.ear lsfi#${serverName57}:/apps/wls/dev/applications;
echo "COPIED DEV ARTIFACT TO SERVER"
else
echo "DEPLOYMENT PACKAGE DOESNT CONTAIN DEV ARTIFACT"
fi"
            echo "RESULT::: ${output}"
        }
    }
}
I am getting the below error when I trigger the Jenkins job:
WorkflowScript: 54: expecting anything but ''\n''; got it anyway # line 54, column 171.
ctFilename}; ls -ltr; echo *;
I removed the newlines in the shell script and updated the code as below:
def ziptmp = '.ziptmp'
output = sh returnStdout: true, script:"/bin/rm -rf ${ziptmp}; /bin/mkdir ${ziptmp}; cd ${ziptmp}; /usr/bin/unzip -qq ${tempdir}/${artifactFilename}; ls -ltr; echo *; if [ -e dev.ear ] then scp dev.ear lsfi#${serverName57}:/apps/wls/dev/applications; fi;"
echo "RESULT::: ${output}"
But I am getting the below error:
line 2: syntax error near unexpected token `fi'
How do I resolve this error?
Groovy doesn't like a newline in a GString. According to the Grails cookbook you can make multiline Strings using either '''Your multiline String''' or """Your multiline ${GString}""".
I'm not very sure on bash syntax, but you also seem to be missing a semicolon after if [ -e dev.ear ] according to these docs.
Putting it all together:
output = sh returnStdout: true, script: """/bin/rm -rf ${ziptmp}; /bin/mkdir ${ziptmp}; cd ${ziptmp}; /usr/bin/unzip -qq ${tempdir}/${artifactFilename}; ls -ltr; echo *;
if [ -e dev.ear ];
then
scp dev.ear lsfi#${serverName57}:/apps/wls/dev/applications;
echo "COPIED DEV ARTIFACT TO SERVER"
else
echo "DEPLOYMENT PACKAGE DOESNT CONTAIN DEV ARTIFACT"
fi"
echo "RESULT::: ${output}"""

graceful stop inotifywait pipeline inside a bash script

I am using a Docker container to watch and sync data in a folder with inotify and the AWS CLI, but when I try to kill the container with SIGTERM it exits with code 143, whereas I want a zero exit code. If I kill the inotify process inside the container, it does return a zero code.
So how can I kill entrypoint.sh with a TERM signal and get a 0 exit code?
The Docker image is linked here. The bash script is below:
#!/usr/bin/env bash
# S3Sync Entry Point
# Bash strict mode
set -euo pipefail
IFS=$'\n\t'
# VARs
S3PATH=${S3PATH:-}
SYNCDIR="${SYNCDIR:-/sync}"
CRON_TIME="${CRON_TIME:-10 * * * *}"
INITIAL_DOWNLOAD="${INITIAL_DOWNLOAD:-true}"
# Log message
log(){
    echo "[$(date "+%Y-%m-%dT%H:%M:%S%z") - $(hostname)] ${*}"
}

# Sync files
sync_files(){
    local src="${1:-}"
    local dst="${2:-}"

    mkdir -p "$dst" # Make sure directory exists
    log "Sync '${src}' to '${dst}'"
    if ! aws s3 sync --no-progress --delete --exact-timestamps "$src" "$dst"; then
        log "Could not sync '${src}' to '${dst}'" >&2; exit 1
    fi
}

# Download files
download_files(){
    sync_files "$S3PATH" "$SYNCDIR"
}

# Upload files
upload_files(){
    sync_files "$SYNCDIR" "$S3PATH"
}

# Run initial download
initial_download(){
    if [[ "$INITIAL_DOWNLOAD" == 'true' ]]; then
        if [[ -d "$SYNCDIR" ]]; then
            # directory exists
            if [[ $(ls -A "$SYNCDIR" 2>/dev/null) ]]; then
                # directory is not empty
                log "${SYNCDIR} is not empty; skipping initial download"
            else
                # directory is empty
                download_files
            fi
        else
            # directory does not exist
            download_files
        fi
    elif [[ "$INITIAL_DOWNLOAD" == 'force' ]]; then
        download_files
    fi
}

# Watch directory using inotify
watch_directory(){
    initial_download # Run initial download

    log "Watching directory '${SYNCDIR}' for changes"
    inotifywait \
        --event create \
        --event delete \
        --event modify \
        --event move \
        --format "%e %w%f" \
        --monitor \
        --quiet \
        --recursive \
        "$SYNCDIR" |
        while read -r changed
        do
            log "$changed"
            upload_files
        done
}

# Install cron job
run_cron(){
    local action="${1:-upload}"

    # Run initial download
    initial_download

    log "Setup the cron job (${CRON_TIME})"
    echo "${CRON_TIME} /entrypoint.sh ${action}" > /etc/crontabs/root
    exec crond -f -l 6
}

# Main function
main(){
    if [[ ! "$S3PATH" =~ s3:// ]]; then
        log 'No S3PATH specified' >&2; exit 1
    fi

    mkdir -p "$SYNCDIR" # Make sure directory exists

    # Parse command line arguments
    cmd="${1:-download}"
    case "$cmd" in
        download)
            download_files
            ;;
        upload)
            upload_files
            ;;
        sync)
            watch_directory
            ;;
        periodic_upload)
            run_cron upload
            ;;
        periodic_download)
            run_cron download
            ;;
        *)
            log "Unknown command: ${cmd}"; exit 1
            ;;
    esac
}

main "$@"
I tried a trap like this, but it failed:
trap "exit" INT TERM
trap "kill 0" EXIT
This was answered by the contributor of the Docker image:
https://github.com/vladgh/docker_base_images/issues/62
This image uses Tini, which does not make any assumptions about the meaning of the signal it receives and simply forwards it to its child.
In order for your traps to work you need to add the -g flag to Tini in the Dockerfile (krallin/tini#process-group-killing):
ENTRYPOINT ["/sbin/tini", "-g", "--", "/entrypoint.sh"]
And only then can you set a trap at the top of entrypoint.sh:
trap "exit 0" INT TERM EXIT

Resources