Scriptable args in docker-compose file - shell

In my docker-compose file (docker-compose.yaml), I would like to set an argument based on a small shell script like this:
services:
  backend:
    [...]
    build:
      [...]
      args:
        PKG_NAME: $(dpkg -l <my_package>)
In the Dockerfile, I read this argument like this:
ARG PKG_NAME
First of all: I know that this approach is OS-dependent (requires dpkg), but for starters I would be happy to make it run on Debian. Also, it's fine if the value is an empty string.
However, docker-compose up throws this error:
ERROR: Invalid interpolation format for "build" option in service "backend": "$(dpkg -l <my_package>)"
Is there a way to dynamically specify an argument in the docker-compose file through a shell script (or another way)?

You can only use variable substitution as described in the Compose file documentation.
You are trying to inject a shell construct, and this is not supported.
The documentation has several examples of how to pass variables to the compose file. In your case, you could:
export the var in your environment:
export MY_PACKAGE=$(dpkg -l <my_package>)
use that var in your compose file with default:
args:
  PKG_NAME: "${MY_PACKAGE:-some_default_pkg}"

Related

Run local script with arguments with docker

I am trying to run a local script with docker bash in Windows PowerShell, but it is not working.
My script is part of a larger program, but the final goal is to process a media file and zip it with the shell script.
The cmd: docker exec -it containername /bin/bash < myscript.sh -f fileone.mp4 -h output
I get this error in PowerShell:
The '<' operator is reserved for future use.
The parameters (and the files) change each time the shell script is rerun. After processing finishes, the script creates a zip file (which is what I need) with the given output name, plus a random string appended to the zipped filename.
Has anyone tried to use Docker this way on Windows?
I figured out a solution to my own question. I'll leave it here in case someone needs it.
The docker-compose file:
version: '3.8'
services:
  somename:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: 'name_if_you_need'
The Dockerfile:
FROM debian:latest
# Install and/or configure anything you need
ADD . /newfolder
WORKDIR /newfolder
ENTRYPOINT [ "/newfolder/myscript.sh" ]
To call it (with arguments and/or flags if your script needs them): docker run --rm -v ${PWD}:/newfolder image_name -flag1 sample.mp4 -flag2 sample (no TTY error, no need for winpty)
Please note that if your script works with files and you pass them via arguments as I do, you need to copy them into your current folder before running docker run.
With this solution, if your script generates files during or after execution, you will see them automatically in your current folder.
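For completeness, a rough sketch of what myscript.sh could look like inside the container (the flag names match the original question, but the processing and zip steps are only placeholders):

#!/bin/bash
# myscript.sh - hypothetical argument handling for the entrypoint above
while getopts "f:h:" opt; do
  case "$opt" in
    f) INPUT_FILE="$OPTARG" ;;   # e.g. sample.mp4
    h) OUTPUT_NAME="$OPTARG" ;;  # base name for the zip
  esac
done

# ... process "$INPUT_FILE" here ...

# Write the result into the working directory (/newfolder), which is the
# mounted host folder, so it appears next to the input file on the host.
zip "${OUTPUT_NAME}_$(date +%s).zip" "$INPUT_FILE"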

How to export hostname in make file and use in compose file

I am working with Docker and a docker-compose file, and I need the hostname. I am also using a Makefile to start the container, but the container needs the hostname.
The following is my Makefile, with the command and the sub-commands it executes.
It does not export the MY_HOST variable value from hostname -i:
start:
	export MY_HOST=`hostname -i`
	echo ${MY_HOST}
	docker -f test.yml up -d
The following is my docker-compose YAML file, where I want to use the exported variable:
MyImage:
  image: registry.test:latest
  restart: always
  environment:
    MY_HOST=${MY_HOST}
What's wrong with this code? Can someone help with this?
Unfortunately, it is impossible to pass environment variables from one Makefile command to another, because each recipe line executes in a separate shell. But you can define a variable and reuse it later this way:
MY_HOST := `hostname`
start:
	MY_HOST=${MY_HOST} \
	docker-compose run --rm shell env
docker-compose.yml
MyImage:
  image: registry.test:latest
  restart: always
  environment:
    - MY_HOST
https://www.gnu.org/software/make/manual/make.html#Values
also
https://makefiletutorial.com/#variables
and
pass env variables in make command in makefile
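Another common workaround, since each recipe line runs in its own shell, is to chain the export and the docker-compose call on a single recipe line (a sketch assuming the test.yml from the question; recipe lines must start with a tab):

start:
	export MY_HOST=`hostname -i` && \
	docker-compose -f test.yml up -d

Because both commands run in the same shell, the exported MY_HOST is visible to docker-compose when it interpolates the compose file.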

Correct formatting for docker-compose environment variables that are command line arguments

I have the following docker-compose service:
maxwell:
  image: zendesk/maxwell:latest
  restart: always
  environment:
    # <irrelevant config here>
    MAXWELL_PRODUCER: stdout
    MAXWELL_OPTIONS: >-
      # <irrelevant config here>
      --filter="exclude:*.*"
The file running inside the image starts the server like this:
exec `dirname $0`/maxwell --user=$MYSQL_USERNAME --password=$MYSQL_PASSWORD --host=$MYSQL_HOST --producer=$MAXWELL_PRODUCER $MAXWELL_OPTIONS
The server this image runs, Maxwell, is not starting because it complains about malformed options. However, if I open a shell in the container (with docker-compose run maxwell bash) and enter the above filter option exactly as written, the server starts normally.
I tried parsing the above YAML and looking at the string. It looks correct, so I don't think it's a YAML formatting issue.
What could be causing the command line option to get mangled?

docker-compose.yml passing arg to build from file contents

I would like to read contents of a file specified by an environment variable and pass it to docker-compose as build arg.
So then in my Dockerfile I can do:
ARG MY_FILE
RUN echo "$MY_FILE" > /my-file
This works perfectly:
docker-compose -f ./docker-compose.yml build --build-arg MY_FILE="$(cat $PATH_TO_MY_FILE)"
However, if I try to do this in docker-compose.yml like so:
build:
  context: .
  args:
    - MY_FILE="$(cat $PATH_TO_MY_FILE)"
it fails with this error:
ERROR: Invalid interpolation format for "build" option in service "my-service": "MY_FILE="$(cat $PATH_TO_MY_FILE)""
Any idea how I have to construct this string to have the same effect? I tried $$ etc., but it doesn't seem to work...
Thanks for your help :)
Docker Compose doesn't support this, so you have to use a workaround. That means either pre-processing the compose file, or generating the docker-compose command in bash by reading the YAML and doing the interpolation yourself.
You can use something like yq to parse the parameters from docker-compose.yml and generate your command. But honestly, what you are doing right now is simple and effective.
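As a concrete sketch of that workaround (the wrapper script name is made up; PATH_TO_MY_FILE is the variable from the question), the compose call can simply be wrapped in a small script that does the cat itself:

#!/bin/sh
# build.sh - read the file contents and hand them to compose as a build arg
: "${PATH_TO_MY_FILE:?set PATH_TO_MY_FILE to the file you want to embed}"
exec docker-compose -f ./docker-compose.yml build \
  --build-arg MY_FILE="$(cat "$PATH_TO_MY_FILE")"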
With Compose file format version 3, you can do that now:
web:
  image: xxxx
  env_file:
    - web-variables.env
If you have specified a Compose file with docker-compose -f FILE, paths in env_file are relative to the directory that file is in.
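For reference, web-variables.env is just a file of plain KEY=value lines, for example (the values are purely illustrative):

# web-variables.env
DB_HOST=db
DB_USER=web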

Dockerfile: how to set env variable from file contents

I want to set an environment variable in my Dockerfile.
I've got a .env file that looks like this:
FOO=bar.
Inside my Dockerfile, I've got a command that parses the contents of that file and assigns it to FOO.
RUN 'export FOO=$(echo "$(cut -d'=' -f2 <<< $(grep FOO .env))")'
The problem I'm running into is that the script above doesn't return what I need it to. In fact, it doesn't return anything.
When I run docker-compose up --build, it fails with this error.
The command '/bin/sh -c 'export FOO=$(echo "$(cut -d'=' -f2 <<< $(grep FOO .env))")'' returned a non-zero code: 127
I know that the command /bin/sh -c 'echo "$(cut -d'=' -f2 <<< $(grep FOO .env))"' will generate the correct output, but I can't figure out how to assign that output to an environment variable.
Any suggestions on what I'm doing wrong?
Environment Variables
If you want to set a number of environment variables for your containers, you can simply use the env_file configuration option in your docker-compose.yml file. With that option, all the entries in the .env file will be set as environment variables in the containers.
More Info about env_file
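For instance, a minimal service definition using that option (the service name and file name are placeholders) might look like:

services:
  backend:
    build: .
    env_file:
      - .env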
Build ARGS
If your requirement is to use some variables only within your Dockerfile, then you specify them as below:
ARG FOO
ARG FOO1
ARG FOO2
etc...
And you have to specify these arguments under the build key in your docker-compose.yml
build:
  context: .
  args:
    FOO: BAR
    FOO1: BAR1
    FOO2: BAR2
More info about args
Accessing .env values within the docker-compose.yml file
If you are looking to pass some values into your docker-compose file from the .env file, you can simply put the .env file in the same location as the docker-compose.yml file and set the configuration values as below:
ports:
  - "${HOST_PORT}:80"
So, as an example, you can set the host port for the service by setting it in your .env file.
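For example, a .env file next to the docker-compose.yml containing just (the value is illustrative):

HOST_PORT=8080

would publish the service on host port 8080.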
Please check this
First, the error you're seeing: I suspect there's a "not found" error message not included in the question. If that's the case, then the first issue is that you tried to run an executable that is the entire quoted string. Rather than running the shell builtin export, the shell is trying to find a binary whose name is the full string, spaces and all. To get past that error, you'd need to unquote your RUN string:
RUN export FOO=$(echo "$(cut -d'=' -f2 <<< $(grep FOO .env))")
However, that won't solve your underlying problem. The result of a RUN command is that docker saves the changes to the filesystem as a new layer to the image. Only changes to the filesystem are saved. The shell command you are running changes the shell state, but then that shell exits, the run command returns, and the state of that shell, including environment variables, is gone.
To solve this for your application, there are two options I can think of:
Option A: inject build args into your build for all the .env values, and write a script that calls build with the proper --build-arg flag for each variable. Inside the Dockerfile, you'll have two lines for each variable:
ARG FOO1="default value1"
ARG FOO2="default value2"
ENV FOO1=${FOO1} \
    FOO2=${FOO2}
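A rough sketch of such a wrapper script (assuming the variables live in ./.env; the image tag myimage is just an example):

#!/bin/sh
# build-with-env.sh - pass every KEY=value line in .env as a --build-arg
# (simple sketch: values containing spaces or '#' would need extra care)
ARGS=""
for kv in $(grep -v '^#' .env); do
  ARGS="$ARGS --build-arg $kv"
done
docker build $ARGS -t myimage .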
Option B: inject your .env file and process it with an entrypoint in your container. This entrypoint could run your export command before kicking off the actual application. You'll also need to do this for each RUN command during the build where you need these variables. One shorthand I use for pulling in the file contents to environment variables is:
set -a && . .env && set +a
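A hypothetical entrypoint built around that shorthand might look like this (the /app path inside the image is an assumption):

#!/bin/sh
# entrypoint.sh - load .env into the environment, then run the real command
set -a          # export every variable assigned from here on
. /app/.env     # source the file copied into the image
set +a
exec "$@"       # hand off to the container's CMD/arguments

In the Dockerfile it would be wired up with something like ENTRYPOINT ["/app/entrypoint.sh"], followed by the usual CMD.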

Resources