Re-using a key in a systemd unit

My systemd unit looks something like this:
[Unit]
Description=%p
....
[Service]
Type=notify
....
ExecStart=/bin/bash -c '\
/usr/libexec/sdnotify-proxy /run/%p.sock \
....-s LOG_LEVEL=DEBUG \'
WatchdogSec=2
How can I re-use the WatchdogSec value in ExecStart, rather than duplicating the value 2?
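One possibility (a hedged sketch, not from the thread): when WatchdogSec= is set, systemd exports the timeout to the service itself as WATCHDOG_USEC (in microseconds), so the shell line can read it instead of repeating the number. The $$ passes a literal $ through systemd to the shell:

```ini
[Service]
Type=notify
WatchdogSec=2
# systemd derives WATCHDOG_USEC from WatchdogSec= (2s -> 2000000) and
# places it in the service's environment; $$ defers expansion to bash.
ExecStart=/bin/bash -c 'echo "watchdog timeout: $${WATCHDOG_USEC} us"; \
    exec /usr/libexec/sdnotify-proxy /run/%p.sock'
```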

Related

Iterating the values of a submake

I'm trying to automate some commands I use regularly in a Makefile, but I can't seem to figure out the right syntax for it.
Given the following targets:
.PHONY: params-list
params-list:
	@aws ssm get-parameters-by-path --path /${SERVICE}/${ENV} | jq -c -r '.Parameters[] | .Name'
.PHONY: params-get
params-get:
	@aws ssm get-parameter --name ${PARAM} --with-decryption | jq -c -r .Parameter.Value
I was attempting to call params-list and then feed the results into params-get. My best attempt was along the lines of:
.PHONY: params
params:
	for param in $(MAKE) params-list; do \
		$(MAKE) params-get PARAM=$${param}; \
	done
But that obviously doesn't work. What's the right way of achieving this?
One hint is to eschew shell constructs in favour of make ones. Let's go one step at a time.
For the get operation, we encode one target for each param. So for param p1 (say), we invent a target params-get<p1>. We note that inside the recipe (i.e., the shell commands), $@ will expand to params-get<p1>. Therefore ${@:params-get<%>=%} will expand to p1.
Writing this out in make syntax:
.PHONY: params-get<p1>
params-get<p1>:
	@aws ssm get-parameter --name ${@:params-get<%>=%} --with-decryption | jq -c -r .Parameter.Value
I note that with an intermediate variable we can have exactly the same recipe you used in your earlier incantation.
PARAM = ${@:params-get<%>=%}
.PHONY: params-get<p1>
params-get<p1>:
	@aws ssm get-parameter --name ${PARAM} --with-decryption | jq -c -r .Parameter.Value
If we have a second param pa (say), that's easy to add:
PARAM = ${@:params-get<%>=%}
.PHONY: params-get<p1> params-get<pa>
params-get<p1> params-get<pa>:
	@aws ssm get-parameter --name ${PARAM} …
I hope you can see some boilerplate appearing here.
Moving towards more dynamically generated make, rather than hard-coding each param, let's put all the possible params in a list called $PARAMS:
PARAMS := p1 pa another
get-targets := ${PARAMS:%=params-get<%>}
PARAM = ${@:params-get<%>=%}
.PHONY: ${get-targets}
${get-targets}:
	@aws ssm get-parameter --name ${PARAM} …
.PHONY: params-get
params-get: ${get-targets}
	echo $@ Done
I have added your original params-get target. Now all the work is done by dependencies.
No shell loop in sight.
The exit code of each aws ssm get-parameter is checked by make.
The code is parallel safe: make -j9 will run 9 jobs in parallel until the work is exhausted. Nice if you have 8 CPUs. (If your make is not parallel safe then it is broken by the way.)
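To see the fan-out (and the -j behaviour) without touching AWS, here is a self-contained sketch of the same pattern, with echo standing in for the aws/jq pipeline and invented param names:

```make
# Stand-alone sketch: echo replaces the aws/jq calls.
PARAMS := p1 pa another
get-targets := ${PARAMS:%=params-get<%>}
PARAM = ${@:params-get<%>=%}

.PHONY: ${get-targets}
${get-targets}:
	@echo fetching ${PARAM}

.PHONY: params-get
params-get: ${get-targets}
	@echo $@ Done
```

Running make -j9 params-get prints one "fetching" line per param (in any order under -j) followed by "params-get Done".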
We are pretty close now. We just have to ensure $PARAMS is set to the output of ssm get-parameters-by-path.
Something like:
.PHONY: params-list
params-list:
	@aws ssm get-parameters-by-path --path /${SERVICE}/${ENV} | jq -c -r '.Parameters[] | .Name'
PARAMS := $(shell ${MAKE} params-list)
get-targets := ${PARAMS:%=params-get<%>}
PARAM = ${@:params-get<%>=%}
.PHONY: ${get-targets}
${get-targets}:
	@aws ssm get-parameter --name ${PARAM} --with-decryption | jq -c -r .Parameter.Value
.PHONY: params-get
params-get: ${get-targets}
	echo $@ Done
I'm definitely not a fan of using $(shell …), but this is just a sketch.
You could do a single (long) target/recipe as follows:
.PHONY: params-get
params-get:
	@for p in $$(aws ssm get-parameters-by-path --path /${SERVICE}/${ENV} | jq -c -r '.Parameters[] | .Name'); do \
		aws ssm get-parameter --name $$p --with-decryption | jq -c -r .Parameter.Value; \
	done
This assumes that your parameter names don't have whitespace in them.
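If names could contain spaces, a hedged variant (same aws/jq commands, assumed unchanged) reads one name per line instead of word-splitting; names containing embedded newlines would still break it:

```make
.PHONY: params-get
params-get:
	@aws ssm get-parameters-by-path --path /${SERVICE}/${ENV} \
	| jq -c -r '.Parameters[] | .Name' \
	| while IFS= read -r p; do \
		aws ssm get-parameter --name "$$p" --with-decryption | jq -c -r .Parameter.Value; \
	done
```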
------------ EDIT ---------------
To access the parameters in another recipe, you can have one rule write the list of parameters to a file, and then read that file in the other recipe, like so:
# note that in this case .params-list is actually the name of a file,
# but it is declared as phony to force it to be rebuilt every time.
.PHONY: .params-list
.params-list:
	aws ssm get-parameters-by-path --path /${SERVICE}/${ENV} | jq -c -r '.Parameters[] | .Name' > $@
.PHONY: params-get
params-get: .params-list
	@for param in $$(cat .params-list); do \
		aws ssm get-parameter --name $${param} --with-decryption | jq -c -r .Parameter.Value; \
	done

Snakemake with Singularity

I'm trying to use Singularity within one of my Snakemake rules. This works as expected when running my Snakemake pipeline locally. However, when I try to submit using sbatch onto my computing cluster, I run into errors. I'm wondering if you have any suggestions about how to translate the local pipeline to one that can work on the cluster. Thank you in advance!
The rule which causes errors uses Singularity to call variants with DeepVariant:
# Call variants with DeepVariant.
rule deepvariant_call:
    input:
        ref_path='/labs/jandr/walter/varcal/data/refs/{ref}.fa',
        bam='results/{samp}/bams/{samp}_{mapper}_{ref}.rmdup.bam'
    params:
        nshards='1',
        version='0.7.0'
    threads: 8
    output:
        vcf='results/{samp}/vars/{samp}_{mapper}_{ref}_deep.g.vcf.gz'
    shell:
        'singularity exec --bind /srv/gsfs0 --bind /labs/jandr/walter/ /home/kwalter/.singularity/shub/deepvariant-docker-deepvariant:0.7.0.simg \
        /labs/jandr/walter/tb/test/scripts/call_deepvariant.sh {input.ref_path} {input.bam} {params.nshards} {params.version} {output.vcf} '
#
# Error in rule deepvariant_call:
# jobid: 17
# output: results/T1-XX-2017-1068_S51/vars/T1-XX-2017-1068_S51_bowtie2_H37Rv_deep.g.vcf.gz
# shell:
# singularity exec --bind /srv/gsfs0 --bind /labs/jandr/walter/ /home/kwalter/.singularity/shub/deepvariant-docker-deepvariant:0.7.0.simg; /labs/jandr/walter/tb/test/scripts/call_deepvariant.sh /labs/jandr/walter/varcal/data/refs/H37Rv.fa results/T1-XX-2017-1068_S51/bams/T1-XX-2017-1068_S51_bowtie2_H37Rv.rmdup.bam 1 0.7.0 results/T1-XX-2017-1068_S51/vars/T1-XX-2017-1068_S51_bowtie2_H37Rv_deep.g.vcf.gz
# (one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)
I submit jobs to the cluster with the following:
snakemake -j 128 --cluster-config cluster.json --cluster "sbatch -A {cluster.account} --mem={cluster.mem} -t {cluster.time} -c {threads}"
As can be seen in the resolved command in the error message, where a semicolon separates the two lines of shell: instead of whitespace, this error is caused by the string formatting in the shell: directive.
You could use triple-quoted format:
shell:
    '''
    singularity exec --bind /srv/gsfs0 --bind /labs/jandr/walter/ /home/kwalter/.singularity/shub/deepvariant-docker-deepvariant:0.7.0.simg \
    /labs/jandr/walter/tb/test/scripts/call_deepvariant.sh {input.ref_path} {input.bam} {params.nshards} {params.version} {output.vcf}
    '''
Or, each line within single quotes:
shell:
    'singularity exec --bind /srv/gsfs0 --bind /labs/jandr/walter/ /home/kwalter/.singularity/shub/deepvariant-docker-deepvariant:0.7.0.simg '
    '/labs/jandr/walter/tb/test/scripts/call_deepvariant.sh {input.ref_path} {input.bam} {params.nshards} {params.version} {output.vcf}'
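In the second form the first fragment must end in a space rather than a backslash, because Python concatenates adjacent string literals with nothing in between. A quick illustration with made-up file names:

```python
# Adjacent string literals are joined with no separator, so each
# fragment must carry its own trailing space.
cmd = (
    'singularity exec image.simg '   # note the trailing space
    'script.sh ref.fa sample.bam'
)
print(cmd)
```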

How can I load the docker images before the service starts?

I've spent some time with Vagrant, CoreOS and Docker; there's so much to learn...
I work in a development environment and constantly run up and destroy operations, so I don't want to download the Docker images every time... It takes too much time, and the images are very heavy.
Well, I pull the images I use most frequently and save them:
core@core-01 ~ $ docker save ubuntu:latest > /home/core/share/ubuntu.tar
core@core-01 ~ $ docker save mysql > /home/core/share/mysql.tar
core@core-01 ~ $ docker save wordpress:latest > /home/core/share/wordpress.tar
I load them again when required:
core@core-03 ~ $ docker load -i=/home/core/share/wordpress.tar
core@core-04 ~ $ docker load -i=/home/core/share/mysql.tar
So far everything is OK.
But I'm having problems when I try to build the cluster.
I have two simple services, database and web.
database.1.service
[Unit]
Description=Run database_1
After=docker.service
Requires=docker.service
[Service]
Restart=always
RestartSec=10s
ExecStartPre=/usr/bin/docker ps -a -q | xargs docker rm
ExecStart=/usr/bin/docker run --rm --name database_1 -e "MYSQL_DATABASE=demo" -e "MYSQL_ROOT_PASSWORD=password" -p 3306:3306 mysql
ExecStartPost=/usr/bin/docker ps -a -q | xargs docker rm
ExecStop=/usr/bin/docker kill database_1
ExecStopPost=/usr/bin/docker ps -a -q | xargs docker rm
[Install]
WantedBy=local.target
web.1.service
[Unit]
Description=Run web_1
After=database.1.service
Requires=database.1.service
[Service]
Restart=always
RestartSec=10s
ExecStartPre=/usr/bin/docker ps -a -q | xargs docker rm
ExecStart=/usr/bin/docker run --rm --name web_1 --link database_1:database_1 -e "DB_USER=root" -e "DB_PASSWORD=password" -p 80:80 wordpress
ExecStartPost=/usr/bin/docker ps -a -q | xargs docker rm
ExecStop=/usr/bin/docker kill web_1
ExecStopPost=/usr/bin/docker ps -a -q | xargs docker rm
[Install]
WantedBy=local.target
How do I load the mysql image (/home/core/share/mysql.tar) before the service starts?
If I just start the services as-is, the images are downloaded again:
$ fleetctl start database.1.service
$ fleetctl start web.1.service
Can I load the images as follows?
ExecStartPre=/usr/bin/docker load -i=/home/core/share/mysql.tar
The question is: how do I create a development environment that works without an internet connection?
I think you might be over-complicating things. You should not have to explicitly ask for an image to be saved and/or reused.
According to the CoreOS documentation:
The overlay filesystem works similar to git: our image now builds off of the ubuntu base and adds another layer with Apache on top. These layers get cached separately so that you won't have to pull down the ubuntu base more than once.
While this still requires an internet connection for the initial image download, subsequent launches of the container should reuse the cached image.
If you require more control, you might want to look into maintaining a private Docker registry within your CoreOS cluster. The best way I've found to do this is using Deis, which comes with a load of goodies, including a cluster-wide fault-tolerant file-system and a private Docker registry as standard.
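That said, the ExecStartPre= idea from the question should work too. A hedged sketch (docker flag spellings assumed from the classic CLI of that era) that loads the tarball only when the image is missing from the local cache:

```ini
[Service]
# "docker images -q mysql" prints an image ID only if the image is
# already cached; otherwise load it from the shared tarball.
ExecStartPre=/bin/sh -c 'docker images -q mysql | grep -q . || docker load -i /home/core/share/mysql.tar'
ExecStart=/usr/bin/docker run --rm --name database_1 -p 3306:3306 mysql
```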

Systemd string escaping

If I run this command
/bin/bash -c 'while true;do /usr/bin/etcdctl set my-container "{\"host\": \"1\", \"port\": $(/usr/bin/docker port my-container 5000 | cut -d":" -f2)}" --ttl 60;sleep 45;done'
I get back from etcd what I expect {"host":"1", "port":49155}
But if I put it in a systemd file
ExecStart=/bin/bash -c 'while true;do /usr/bin/etcdctl set my-container "{\"host\": \"1\", \"port\": $(/usr/bin/docker port my-container 5000 | cut -d":" -f2)}" --ttl 60;sleep 45;done'
I get back {host:1, port:49155}
Any idea why the escaping is different inside the file? How can I fix it? Thanks!
You can use systemd-escape to see how a string must be written to survive systemd's parsing:
systemd-escape '\"a fun thing"\'
output: \x5c\x22a\x20fun\x20thing\x22\x5c
Used in a unit:
[Service]
ExecStart=/bin/sh -c 'echo "\x5c\x22a\x20fun\x20thing\x22\x5c"'
will print a fun thing
What systemd is doing isn't like bash, as you now know; hence the escaping problem. In fact, systemd removes single and double quotes after parsing them. That fact is right out of the documentation (I went through this too, then read :D).
The solution: call a script that echoes back the info you need (with escaped quotes), if your purpose allows that.
In short -- it's different because systemd does its own string-splitting, unescaping and expansion, and the logic it uses isn't POSIX-compliant.
You can still do what you want, but you'll need more backslashes:
ExecStart=/bin/bash -c '\
  while :; do \
    port=$(/usr/bin/docker port my-container 5000 | cut -d: -f2); \
    /usr/bin/etcdctl set my-container "{\\\"host\\\": \\\"1\\\", \\\"port\\\": $$port}" --ttl 60; \
    sleep 45; \
  done'
Note the use of \\\" for every literal " character in the desired output, and of $$ wherever a variable expansion should be left to the shell rather than consumed by systemd.
By the way -- personally, I advise against trying to generate JSON through string concatenation -- it's prone to injection vulnerabilities (if someone could put content of their choice in the output of the docker port command, they could potentially insert other key/value pairs into your data by having , "evil": true be in the port variable). This class of issues is avoided by using jq:
ExecStart=/bin/bash -c '\
  while :; do \
    port=$(/usr/bin/docker port my-container 5000 | cut -d: -f2); \
    json=$(jq -nc \
      --arg host 1 \
      --arg port "$$port" \
      \'{} | .host=$$host | .port=($$port | tonumber)\'); \
    /usr/bin/etcdctl set my-container "$$json" --ttl 60; \
    sleep 45; \
  done'
As a happy side effect, the above avoids needing any literal double-quote characters (the only ones used are syntactic to the copy of sh), so we don't need any backslashes to be passed through from systemd to the shell.

Socat and systemd

I need to run a script which, among many other things, runs socat.
Running the script from the command line works fine; now what I want is for this script to run as a service.
This is the script I have:
#!/usr/bin/env sh
set -e
TTY=${AQM_TTY:-/dev/ttyUSB0}
/reg_sesion/create
DESTINOS=(http://127.0.0.1)
LOG_DIR=./logs-aqm
mkdir -p "${LOG_DIR}"
###ADDED####
echo $$ > /var/run/colector.pid
socat -b 115200 ${TTY},echo=0,crnl - |
  grep --line-buffered "^rs" |
  while read post; do
    for destino in ${DESTINOS[@]}; do
      wget --post-data="$(echo "${post}" | tr -d "\n")" \
        -O /dev/null \
        --no-verbose \
        --background \
        --append-output="${LOG_DIR}/${destino//\/}.log" \
        "${destino}/reg_sesion/create"
    done
    echo "${post}" | tee -a "${LOG_DIR}/aqm.log"
  done
And the service file:
[Unit]
Description=colector
[Service]
Type=simple
PIDFile=/var/run/colector.pid
User=root
Group=root
#ExecStart=/root/socat.sh
ExecStart=/bin/sh -c '/root/socat.sh'
[Install]
WantedBy=multi-user.target
When I start the service, the process starts and then exits quickly.
Any ideas?
Thanks for your time
Remove PIDFile= from your service file, and see whether it works.
PIDFile= is mainly for Type=forking, where your startup program forks a sub-process, so you tell systemd (via PIDFile=) which process to watch. In the case of Type=simple with your long-running service, systemd itself spawns the process that runs your service, so it already knows exactly what the PID is.
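A trimmed unit along those lines (a hedged sketch, reusing the paths from the question):

```ini
[Unit]
Description=colector

[Service]
Type=simple
# No PIDFile= needed: with Type=simple, systemd spawned the process
# itself and already tracks the main PID.
User=root
Group=root
ExecStart=/root/socat.sh

[Install]
WantedBy=multi-user.target
```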
