How to call multiple multiline commands in a yml script? - bash

I'm not quite sure how to add a multiline script with multiple commands to the yml file of my CI, which in my case is a .gitlab-ci.yml:
production:
  stage: deploy
  image: ${DOCKER_IMAGE}
  script:
    - while IFS='-' read -r dom app; do
      docker stop "$dom-$app" || true &&
      docker rm "$dom-$app" || true
      docker run
      --name "$dom-$app"
      --detach
      --restart=always
      -e VIRTUAL_HOST=$dom
      "$dom-$app":latest
    done < $FILE
So what I'm doing here is reading a file with a list of apps. For each line I have to stop the existing docker container, remove it and run the new one with some parameters.
How do I have to connect the docker commands (stop, rm and run)? Maybe with &&?
Do I have to put quotes (") around $dom-$app?

There are several ways to create multiline strings in YAML.
The way you are writing it, as a multiline plain scalar, all lines will be folded together with spaces.
Also, your last line of the string isn't indented enough.
Longer strings like that should be quoted anyway, because chances are high that they contain a : or #, which are special in YAML.
I suggest using literal block style, because that means the text will be interpreted exactly as you see it:
script:
- |
  while IFS='-' read -r dom app; do
    docker stop "$dom-$app" || true
    docker rm "$dom-$app" || true
    docker run \
      --name "$dom-$app" \
      --detach \
      --restart=always \
      -e VIRTUAL_HOST=$dom \
      "$dom-$app":latest
  done < $FILE
(Note that sequences (items starting with -) don't have to be indented, that's why the - is directly below script.)
You can find more information about YAML quoting styles on my website:
https://www.yaml.info/learn/quote.html
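If you want to sanity-check the loop itself before wiring it into CI, here is a minimal local sketch; the file name apps.txt, the sample lines, and the echo stand-ins for the docker commands are all placeholders:

#!/bin/bash
# Hypothetical input file: one "domain-app" pair per line.
FILE=apps.txt
printf '%s\n' 'example.com-frontend' 'example.com-api' > "$FILE"

while IFS='-' read -r dom app; do
    # echo stands in for the docker stop/rm/run calls from the CI job
    echo "would stop and remove container: $dom-$app"
    echo "would run: $dom-$app with VIRTUAL_HOST=$dom"
done < "$FILE"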

Related

Escape $ (dollar) into bash command in docker compose is not interpreted

I would need some help with docker compose and the $ character.
Here is an example of a container in docker compose:
services:
  setup:
    image: image:tag
    container_name: setup
    user: "0"
    command: >
      /bin/sh -c '
      sed -i 's/before with a $$/after with a $$/' /foo/bar/something.txt;
      '
When I try this, the sed command comes out without any $, even though I escaped it by doubling. What am I missing?
Best regards,
On the docker compose side, I see there is no other way to escape the $ character: I have to double it with another $, i.e. $$.
However, in the bash command, no $ is interpreted, so the sed command doesn't work
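There is no accepted fix quoted in this thread, but the one rule the asker already found can at least be checked in isolation: docker compose turns $$ into a single literal $ before the shell ever sees it. A minimal sketch, where the file name compose-probe.yml and the service name probe are made up and docker compose v2 is assumed:

# Hypothetical throwaway compose file, only to inspect what the container's
# shell receives after compose's $-interpolation.
cat > compose-probe.yml <<'EOF'
services:
  probe:
    image: alpine
    command: /bin/sh -c 'printf "%s\n" "one literal dollar: $$"'
EOF

# If $$ really reaches the shell as a single $, this prints: one literal dollar: $
docker compose -f compose-probe.yml run --rm probe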

How to pass ALL environment variables to container with docker exec

It's possible to set one or more environment variables in the container while doing docker exec, for example:
docker exec -ti -e VAR=1 -e HOME container_name command
But I would like to pass all the shell's environment variables without explicitly specifying them individually. Essentially the equivalent of sudo -E, although it's a different thing.
According to the documentation, there is no such option. But one hack would be something like:
env > env_vars && docker exec -ti --env-file ./env_vars container_name command
Which works, but I'm looking for a simple one step solution that doesn't involve creating a temporary file. Perhaps a bash trick I don't know or haven't thought of yet. Thanks.
Please note: Passing all environment variables is not recommended and defeats the purpose of container process isolation. This question is for knowledge, not about what should be done. Also, the question is specifically about running a temporary command in an existing container with docker exec, not about docker run.
With Bash, it seems using process substitution works:
docker run --rm -ti --env-file <(env) alpine sh
Note that this creates a temporary FIFO behind the scenes anyway.
Note also that this will not work properly with variables containing newlines; they get cut off at the newline. You should do something along these lines instead (I tried to keep it short):
readarray -d '' -t args < <(env -0 | sed -z 's/^/--env\x00/')
docker run --rm -ti "${args[@]}" alpine sh
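The same trick should carry over to docker exec, which is what the question actually asks about; a sketch, where the container name web is only a placeholder:

# Build one "--env VAR=value" pair per variable, NUL-delimited so that values
# containing newlines survive intact, then splice them into docker exec.
readarray -d '' -t args < <(env -0 | sed -z 's/^/--env\x00/')
docker exec -ti "${args[@]}" web sh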

Passing Variables in Makefile

I'm using a Makefile to run various docker-compose commands and I'm trying to capture the output of a script run on my local machine and pass that value to a Docker image.
start-service:
    VERSION=$(shell aws s3 ls s3://redact/downloads/1.2.3/) && \
    docker-compose -f ./compose/docker-compose.yml run \
    -e VERSION=$$(VERSION) \
    connect make run-service
When I run this I can see the variable being assigned but it still errors. Why is the value not getting passed into the -e argument:
VERSION=1.2.3-build342 && \
docker-compose -f ./compose/docker-compose.yml run --rm \
-e VERSION?=$(VERSION) \
connect make run-connect
/bin/sh: VERSION: command not found
You're mixing several different Bourne shell and Make syntaxes here. The Make $$(VERSION) translates to shell $(VERSION), which is command-substitution syntax; GNU Make $(shell ...) generally expands at the wrong time and isn't what you want here.
If you were writing this as an ordinary shell command it would look like
# Set VERSION using $(...) substitution syntax
# Refer to just plain $VERSION
VERSION=$(aws s3 ls s3://redact/downloads/1.2.3/) && ... \
-e VERSION=$VERSION ... \
So when you use this in a Make context, if none of the variables are Make variables (they get set and used in the same command), you just need to double the $ to $$ to escape them:
start-service:
    VERSION=$$(aws s3 ls s3://redact/downloads/1.2.3/) && \
    docker-compose -f ./compose/docker-compose.yml run \
    -e VERSION=$$VERSION \
    connect make run-service
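To see the shell-level difference in isolation (outside Make), compare command substitution with parameter expansion; this snippet is purely illustrative and not part of the Makefile:

VERSION=1.2.3-build342

# $(VERSION) is command substitution: the shell tries to run a program called
# VERSION, which is exactly the "VERSION: command not found" error above.
echo "$(VERSION)"

# $VERSION is plain parameter expansion and prints 1.2.3-build342.
echo "$VERSION"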

How to pass arguments with space by environment variable?

On the bash shell, I want to pass arguments via an environment variable, like this...
$ export DOCKER_OPTIONS="-p 9200:9200 -e ES_JAVA_OPTS='-Xmx1g -Xms1g' -d "
$ docker run -d $DOCKER_OPTIONS elasticsearch
I expect that "ES_JAVA_OPTS='-Xmx1g -Xms1g'" is passed as an option value of "-e". But I couldn't find a way.
$ set -x
$ docker run -d $DOCKER_OPTIONS elasticsearch
+ docker run -d -p 9200:9200 -e 'ES_JAVA_OPTS='\''-Xmx1g' '-Xms1g'\''' elasticsearch
unknown shorthand flag: 'X' in -Xms1g'
This split -Xms1g off as a separate option.
$ docker run -d "$DOCKER_OPTIONS" elasticsearch
+ docker run -d '-p 9200:9200 -e ES_JAVA_OPTS='\''-Xmx1g -Xms1g'\''' elasticsearch
docker: Invalid containerPort: 9200 -e ES_JAVA_OPTS='-Xmx1g -Xms1g'.
This bundled the parameters together.
What should I do?
Use an array to circumvent these awkward parsing problems. Arrays are great because you don't need to do any special quoting when defining them. The only place you have to be careful with quotes is when expanding them: always put quotes around "${array[@]}".
dockerOptions=(-p 9200:9200 -e ES_JAVA_OPTS='-Xmx1g -Xms1g' -d)
docker run -d "${dockerOptions[@]}" elasticsearch
Note that export isn't needed since you're passing the options to docker via its command-line rather than as an environment variable.
Also, all-uppercase names are conventionally reserved for the shell and for environment variables; it's best to avoid them when defining your own.
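To double-check what the array actually expands to (the same kind of inspection as the set -x trace above), printf can show one word per line; a small sketch using nothing beyond the array from this answer:

dockerOptions=(-p 9200:9200 -e ES_JAVA_OPTS='-Xmx1g -Xms1g' -d)

# printf prints each argument on its own line, so the word boundaries the
# shell will hand to docker are easy to see.
printf '<%s>\n' "${dockerOptions[@]}"
# <-p>
# <9200:9200>
# <-e>
# <ES_JAVA_OPTS=-Xmx1g -Xms1g>
# <-d>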

Looping over arguments in bash array for docker commands?

I seem to be stuck here. I'm attempting to write a bash function that starts a number of docker containers, with an array that holds the exposed port for each app. I don't want to loop over the array, just over the commands, while referencing the array to get the value. The function looks like this:
#!/bin/bash
declare -A HOSTS=( ["app1"]="8002"
                   ["app2"]="8003"
                   ["app3"]="8008"
                   ["app4"]="8009"
                   ["app5"]="8004"
                   ["app6"]="8007"
                   ["app7"]="8006" )
start() {
    for app in "$@"; do
        if [ "docker ps|grep $app" == "$app" ]; then
            docker stop "$app"
        fi
        docker run -it --rm -d --network example_example \
            --workdir=/home/docker/app/src/projects/"$app" \
            --volume "${PWD}"/example:/home/docker/app/src/example \
            --volume "${PWD}"/projects:/home/docker/app/src/projects \
            --volume "${PWD}"/docker_etc/example:/etc/example \
            --volume "${PWD}"/static:/home/docker/app/src/static \
            --name "$app" --hostname "$app" \
            --publish "${HOSTS["$app"]}":"${HOSTS["$app"]}" \
            example ./manage.py runserver 0.0.0.0:"${HOSTS[$app]}";
        echo "$app"
    done
}
And I want to pass arguments like so:
./script.sh start app1 app2 app4
Right now it isn't echoing the app so that points towards the for loop being declared incorrectly...could use some pointers on this.
This line:
if [ "docker ps|grep $app" == "$app" ];
doesn't do what you want. It looks like you mean to say:
if [ "$(docker ps | grep "$app")" == "$app" ];
but you could fail to detect two copies of the application running, and you aren't looking for the application as a word (so if you look for rm you might find perform running and think rm was running).
You should consider, therefore, using:
if docker ps | grep -w -q "$app"
then …
fi
This runs the docker command and pipes the result to grep, and reports the exit status of grep. The -w looks for the value of "$app" as a complete word, and -q keeps grep quiet, so grep only reports success (exit status 0) if it found at least one matching line, and failure (non-zero exit status) otherwise.
docker ps -f lets you conveniently check programmatically whether a particular container is running.
for app in "$@"; do
    if docker ps -q -f name="$app" | grep -q .; then
        docker stop "$app"
    :
Unfortunately, docker ps does not set its exit code (at least not in the versions I have available -- I think it has been fixed in some development version after 17.06 but I'm not sure) so we have to use an ugly pipe to grep -q . to check whether the command produced any output. The -q flag just minimizes the amount of stuff it prints (it will print just the container ID instead of a bunch of headers and columnar output for each matching container).
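Putting both answers together, a condensed sketch of how the corrected script could look; the two-entry HOSTS map and the image name example are only placeholders:

#!/bin/bash
# Associative array (declare -A) so the app names can be used as keys.
declare -A HOSTS=( [app1]=8002 [app2]=8003 )

start() {
    for app in "$@"; do
        # name= is a substring match, so e.g. app1 would also match app10;
        # good enough for this sketch.
        if docker ps -q -f name="$app" | grep -q .; then
            docker stop "$app"
        fi
        docker run --rm -d \
            --name "$app" --hostname "$app" \
            --publish "${HOSTS[$app]}:${HOSTS[$app]}" \
            example ./manage.py runserver 0.0.0.0:"${HOSTS[$app]}"
        echo "$app"
    done
}

"$@"   # ./script.sh start app1 app2  ->  calls: start app1 app2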
