Docker compose won't find $PWD environment variable - shell

Here's my docker-compose:
version: '2'
services:
  couchpotato:
    build:
      context: ./couchpotato
      dockerfile: Dockerfile
    ports:
      - 5050:5050
    volumes:
      - "${PWD}/couchpotato/data:/home/CouchPotato/data/"
      - "${PWD}/couchpotato/config:/home/CouchPotato/config/"
When I run it inside the shell, in the directory of the docker-compose.yml, I get:
WARNING: The PWD variable is not set. Defaulting to a blank string.
and the compose starts with PWD being empty.
I don't see any error in the file, going by the documentation here: https://docs.docker.com/compose/environment-variables/

You don't need ${PWD} for this: you can just make the path relative, and Compose will expand it relative to the location of the docker-compose.yml file (one major difference between Compose paths and those processed by docker run).
version: '2'
services:
  couchpotato:
    build:
      context: ./couchpotato
      dockerfile: Dockerfile
    ports:
      - 5050:5050
    volumes:
      - "./couchpotato/data:/home/CouchPotato/data/"
      - "./couchpotato/config:/home/CouchPotato/config/"
As for why compose doesn't see this variable, that depends on your shell. Compose looks for an exported environment variable, contents of the .env file, and command line flags to the docker-compose command. If each of those comes up empty for the variable, you'll get that warning.
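As a concrete sketch (using the PWD variable from the question), either of these makes the value visible to Compose:
# make sure the variable is exported in the shell that runs compose
export PWD="$(pwd)"
docker-compose up

# or put it in a .env file next to docker-compose.yml
echo "PWD=$(pwd)" > .env
docker-compose up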

My advice: change all $PWD to .

$PWD will not work if you are running docker-compose with sudo. Try the recommended post-install settings for Docker on Linux instead: https://docs.docker.com/engine/install/linux-postinstall/.
sudo runs as a different user, with a different environment:
$ sudo env | grep -i pwd
$ env | grep -i pwd
PWD=/home/user
OLDPWD=/
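If you do need to keep using sudo, one option (a sketch, and subject to your sudoers env_reset/env_keep policy) is to let sudo carry the caller's environment through:
sudo -E docker-compose up   # -E preserves the invoking user's environment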

If you really need absolute paths and you are on Windows (this is cmd.exe syntax), set PWD before calling docker-compose up:
set PWD=%CD%
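On Linux/macOS or Git Bash, a rough equivalent for a single invocation (just a sketch) would be an inline assignment:
PWD="$(pwd)" docker-compose up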

I had the same issue with one of my env vars. On looking at my bashrc file more closely, I found out that I hadn't exported that variable.
Before:
VAR=<value>
After:
export VAR=<value>
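To double-check afterwards (a quick sanity check, not specific to this answer):
source ~/.bashrc        # reload the shell configuration
env | grep VAR          # the variable should now appear in the exported environment
docker-compose config   # prints the compose file with all variables interpolated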

Related

docker-compose.yml passing arg to build from file contents

I would like to read the contents of a file specified by an environment variable and pass it to docker-compose as a build arg.
So then in my Dockerfile I can do:
ARG MY_FILE
RUN echo "$MY_FILE" > /my-file
This works perfectly:
docker-compose -f ./docker-compose.yml build --build-arg MY_FILE="$(cat $PATH_TO_MY_FILE)"
However, if I try to do this in docker-compose.yml like so:
build:
  context: .
  args:
    - MY_FILE="$(cat $PATH_TO_MY_FILE)"
it fails with this error:
ERROR: Invalid interpolation format for "build" option in service "my-service": "MY_FILE="$(cat $PATH_TO_MY_FILE)""
Any idea how I have to construct this string to have the same effect? I tried $$ etc., but that doesn't seem to work...
Thanks for your help :)
Docker Compose doesn't support command substitution in the compose file, so you have to use a workaround: either pre-process the compose file, or read the YAML in a shell script and generate the docker-compose command with the interpolated value yourself.
You can use something like yq to parse the parameters from docker-compose.yml and generate your command. But honestly, what you are doing right now is simple and effective.
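A lighter variant of the same idea (a sketch; MY_FILE_CONTENTS is a name I'm introducing here): read the file in the shell, export the result, and let Compose's normal ${...} interpolation hand it to the build arg:
export MY_FILE_CONTENTS="$(cat "$PATH_TO_MY_FILE")"
docker-compose -f ./docker-compose.yml build
with the args entry in docker-compose.yml changed to - MY_FILE=${MY_FILE_CONTENTS}.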
In Compose file format version 3, you can do that now:
web:
  image: xxxx
  env_file:
    - web-variables.env
If you have specified a Compose file with docker-compose -f FILE, paths in env_file are relative to the directory that file is in.

How to force Git for Windows' bash-shell to not convert path-string to windows path?

I'm using the bash shell provided by Git for Windows (for Docker Toolbox for Windows). I want to export a string representing a Unix path to an environment variable and then use it in a docker container. Something like:
export MY_VAR=/my/path; docker-compose up
The problem is that in my container the variable will be something like:
echo $MY_VAR # prints c:/Program Files/Git/my/path
So it seems the shell (my guess) recognizes the string as a path and converts it to Windows format. Is there a way to stop this?
I've attempted to use MSYS_NO_PATHCONV=1:
MSYS_NO_PATHCONV=1; export LOG_PATH=/my/path; docker-compose up
But it did not have any effect.
I don't think it's an issue with my docker-compose file and Dockerfile, but I'll attach them in case someone is interested.
My Dockerfile:
FROM node:8-slim
RUN mkdir /test \
&& chown node:node /test
USER node
ENTRYPOINT [ "/bin/bash" ]
My docker-compose.yml:
version: '2'
services:
  test:
    build:
      context: .
    image: test
    environment:
      - MY_VAR
    volumes:
      - ${MY_VAR}:/test
    command: -c 'sleep 100000'
The final goal here is to make a directory on the host machine accessible from the docker container (for logs and such). The directory should be set by an environment variable. Setting the directory directly in the docker-compose.yml does work, just not for my use case.
If you want your docker-compose up command to be run with MSYS_NO_PATHCONV=1, you have two options:
export LOG_PATH=/c/Windows; export MSYS_NO_PATHCONV=1; docker-compose up
This will affect the rest of your bash session, as the variable is exported.
export LOG_PATH=/c/Windows; MSYS_NO_PATHCONV=1 docker-compose up;
(note I removed one semicolon intentionally) This sets MSYS_NO_PATHCONV only in the context of that one command.
Test it with:
$ export LOG_PATH=/c/Windows ; cmd "/c echo %LOG_PATH%";
C:/Windows --> Fails
$ export LOG_PATH=/c/Windows ; MSYS_NO_PATHCONV=1 cmd "/c echo %LOG_PATH%"
/c/Windows --> Success
$ export LOG_PATH=/c/Windows ; export MSYS_NO_PATHCONV=1; cmd "/c echo %LOG_PATH%";
/c/Windows --> Success but MSYS_NO_PATHCONV is now "permanently" set
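Applied to the MY_VAR variable from the question, the second option would look like this (just a usage sketch):
export MY_VAR=/my/path
MSYS_NO_PATHCONV=1 docker-compose up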
It seems a workaround is to remove the leading / from the string and add it in the docker-compose.yml instead.
new docker-compose.yml:
version: '2'
services:
  test:
    build:
      context: .
    image: test
    environment:
      - MY_VAR
    volumes:
      - /${MY_VAR}:/test # added '/' to the beginning of the line
    command: -c 'sleep 100000'
and then starting the container with:
export MY_VAR=my/path; docker-compose up # removed the '/' from the beginning of the path.
This seems more like a "lucky" workaround than a proper solution, as when I build this on other systems I'll have to remind myself to remove the /. Doable, but a bit annoying. Maybe someone has a better idea.

Send variables into docker container to use in a script

I am running a script in the CI/CD part of the pipeline. The goal is to get a string to work with.
When I get that result, I save it into a variable and save the result in the yaml file for the dockerfile.
I want to pass that variable from the CI environment into the docker-compose container. So I am trying to export it the way other things are exported; however, it doesn't work:
ci/pdf/jenkins-changes.sh
LOG="$(cat ".log")"
export LOG
I have added a variables.env file that looks like this:
LOG=LOG
And then modified the docker-compose.yaml to read the var:
pdf:
  image: thisimage/this
  build:
    context: ../
    dockerfile: ./docker/Dockerfile.name
    args:
      git_branch: ${GIT_BRANCH}
  env_file:
    - variables.env
  environment:
    - LOG=${LOG}
  volumes:
    - do-build:/src/do-build
And in the Dockerfile that the container is finally built from, I have also declared it:
FROM ubuntu:16.04 as pdf-builder
ARG log
ENV log=${log}
RUN LOG=${log}
RUN export $LOG
And right after, I run the script.sh that requires the variable; however, it returns "unbound variable" and breaks.
LOG=${log}
echo ${LOG}
The answer to this question was:
ci/pdf/jenkins-changes.sh
LOG="$(cat ".log")"
export LOG
Then pass it as an argument, instead of a variable:
pdf:
  image: thisimage/this
  build:
    context: ../
    dockerfile: ./docker/Dockerfile.name
    args:
      git_branch: ${GIT_BRANCH}
  env_file:
    - variables.env
  environment:
    - LOG=${LOG}
  volumes:
    - do-build:/src/do-build
And then, in the Dockerfile, call it and define it:
ARG log
This should leave it available globally for any script to use.
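For the value to also survive into the running container (not just the build), a common pattern, sketched here rather than taken from the original answer, is to promote the build arg to an ENV in the Dockerfile and pass it through the compose args block (e.g. log: ${LOG} under build.args):
ARG log
ENV LOG=${log}
Unlike a plain ARG or a RUN export, ENV persists into the image, so any script run in the container can read $LOG.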

Interactive shell using Docker Compose

Is there any way to start an interactive shell in a container using Docker Compose only? I've tried something like this, in my docker-compose.yml:
myapp:
  image: alpine:latest
  entrypoint: /bin/sh
When I start this container using docker-compose up it exits immediately. Are there any flags I can add to the entrypoint command, or as an additional option to myapp, to start an interactive shell?
I know there are native docker command options to achieve this, just curious if it's possible using only Docker Compose, too.
You need to include the following lines in your docker-compose.yml:
version: "3"
services:
app:
image: app:1.2.3
stdin_open: true # docker run -i
tty: true # docker run -t
The first corresponds to -i in docker run and the second to -t.
The canonical way to get an interactive shell with docker-compose is to use:
docker-compose run --rm myapp
(With the service name myapp taken from your example. More general: it must be an existing service name in your docker-compose file, myapp is not just a command of your choice. Example: bash instead of myapp would not work here.)
You can set stdin_open: true and tty: true; however, that won't actually give you a proper shell with up, because logs are being streamed from all the containers.
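One way around that (a sketch, assuming the stdin_open/tty settings above) is to start the stack detached and then attach to the one container you care about:
docker-compose up -d
docker attach <container name>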
You can also use
docker exec -ti <container name> /bin/bash
to get a shell on a running container.
The official getting started example (https://docs.docker.com/compose/gettingstarted/) uses the following docker-compose.yml:
version: "3.9"
services:
web:
build: .
ports:
- "8000:5000"
redis:
image: "redis:alpine"
After you start this with docker-compose up, you can shell into either your redis container or your web container with:
docker-compose exec redis sh
docker-compose exec web sh
docker-compose run myapp sh should do the trick.
There is some confusion with up/run, but the docker-compose run docs have a great explanation: https://docs.docker.com/compose/reference/run
If anyone from the future also wanders up here:
docker-compose exec service_name sh
or
docker-compose exec service_name bash
or you can run single lines like
docker-compose exec service_name php -v
That is after you already have your containers up and running.
The service_name is defined in your docker-compose.yml file
Using docker-compose, I found the easiest way to do this is to do a docker ps -a (after starting my containers with docker-compose up) and get the ID of the container I want to have an interactive shell in (let's call it xyz123).
Then it's a simple matter to execute
docker exec -ti xyz123 /bin/bash
and voila, an interactive shell.
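If you'd rather not look up the ID by hand, the same thing works as a one-liner (myapp stands for whatever your service is called):
docker exec -ti "$(docker-compose ps -q myapp)" /bin/bash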
This question is very interesting for me because I had the problem that the container exited immediately after execution finished when I ran it, and I fixed it with -it:
docker run -it -p 3000:3000 -v /app/node_modules -v $(pwd):/app <your_container_id>
And then when I had to automate it with docker compose:
version: '3'
services:
  frontend:
    stdin_open: true
    tty: true
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
What does the trick here is stdin_open: true and tty: true.
This is a project generated with create-react-app
Dockerfile.dev looks like this:
FROM node:alpine
WORKDIR '/app'
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "start"]
Hope this example will help others run a frontend (React in this example) in a docker container.
I prefer
docker-compose exec my_container_name bash
If the yml is called docker-compose.yml it can be launched with a simple $ docker-compose up. A terminal can then be attached simply with (assuming the yml specifies a service called myservice):
$ docker-compose exec myservice sh
However, if you are using a different yml file name, such as docker-compose-mycompose.yml, it should be launched using $ docker-compose -f docker-compose-mycompose.yml up. To attach an interactive terminal you have to specify the yml file too, just like:
$ docker-compose -f docker-compose-mycompose.yml exec myservice sh
An addition to this old question, since I only ran into the case recently: the difference between sh and bash. It can happen that bash doesn't work for some containers and only sh does.
So you can use:
docker-compose exec CONTAINER_NAME sh
and in most cases:
docker-compose exec CONTAINER_NAME bash
If you have time, the difference between sh and bash is well explained here:
https://www.baeldung.com/linux/sh-vs-bash
You can do docker-compose exec SERVICE_NAME sh on the command line. The SERVICE_NAME is defined in your docker-compose.yml. For example,
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
The SERVICE_NAME would be "zookeeper".
According to the documentation (https://docs.docker.com/compose/reference/run/), you can use:
docker-compose run --rm app bash
app is the name of your service in docker-compose.yml

Docker and .bash_history

Is there any way to share a .bash_history volume with a docker container so that every time I go into a shell I have my bash history available for scrolling through?
Would be awesome to be able to do the same thing with IPython too.
This is the example from the documentation on volumes, "Mount a host file as a data volume":
docker run --rm -it -v ~/.bash_history:/root/.bash_history ubuntu /bin/bash
This will drop you into a bash shell in a new container, you will have your bash history from the host and when you exit the container, the host will have the history of the commands typed while in the container.
In your docker-compose.override.yml:
version: '2'
services:
  whatever:
    …
    volumes:
      - …
      - ~/.bash_history:/root/.bash_history
To keep IPython history, you can set the IPYTHONDIR environment variable to somewhere within your mapped volume.
The docker-compose.override.yml would look like this:
version: '2'
services:
  some-service:
    environment:
      - IPYTHONDIR=/app/.ipython
    volumes:
      - .:/app
My solution is useful when:
- you don't want to share your local .bash_history with the .bash_history in your container
- you use another shell (like fish) but still want to save .bash_history between your builds
- you don't want to commit .bash_history to the git repo but want it created automatically inside the same directory when a container starts
I assume file structure to be:
docker-compose.yml
docker/
\--> bash/
\--> .bashrc
\--> .bash_history
docker-compose.yml
web-service:
  build: .
  volumes:
    - ./docker/bash/.bashrc:/home/YOUR_USER_NAME/.bashrc
    - ./docker/bash:/home/YOUR_USER_NAME/bash
./docker/bash/.bashrc, which will automatically create .bash_history:
export HISTFILE=~/bash/.bash_history
touch $HISTFILE
Optionally, you can add to .gitignore:
docker/bash/.bash_history
You can also achieve this with a named volume, telling bash where it can find the history file by defining the HISTFILE environment variable. I explained a bit more here:
https://antistatique.net/en/we/blog/2019/11/12/tips-docker-keep-your-bash-history
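A minimal sketch of that approach (the service, volume, and path names here are my own placeholders):
version: '2'
services:
  app:
    environment:
      - HISTFILE=/shell_history/.bash_history
    volumes:
      - shell_history:/shell_history
volumes:
  shell_history: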
For bash
volumes:
  - ./.data/shell_history/php_bash_history.txt:/home/www-data/.bash_history #bash
For sh
volumes:
  - ./.data/shell_history/nginx_bash_history.txt:/root/.ash_history #sh
