How to pass .env variables from docker-compose to an entrypoint script? - bash

How can I make the bash script see environment variables set in .env and/or inside the docker-compose YAML?
.env
VARIABLE_ENV=VALUE_ENV
docker-compose.yml
version: '3'
services:
  service:
    volumes:
      - ./entrypoint.sh:/entrypoint.sh
    environment:
      - VARIABLE_YML=VALUE_YML
    entrypoint: /entrypoint.sh
entrypoint.sh
#!/bin/bash
echo -e "${VARIABLE_ENV}"
echo -e "${VARIABLE_YML}"

The .env file sets variables that are visible to Compose, in the same way as if you had exported them in your containing shell, but it doesn't automatically set those in containers. You can either tell Compose to pass through specific variables:
environment:
  - VARIABLE_YML=VALUE_YML  # explicitly set
  - VARIABLE_ENV            # from host environment or .env file
Or to pass through everything that is set in the .env file:
environment:
  - VARIABLE_YML=VALUE_YML  # as above
env_file: .env
The other important difference between these two forms is whether the host environment or the .env file takes precedence. If you pass a variable through in environment:, a host environment variable wins over the .env file; if it comes in via env_file:, the value from .env is always used. So with
VARIABLE_ENV=foo docker-compose up
the first form gives the container VARIABLE_ENV=foo, while the second still gives it VALUE_ENV.
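The passthrough precedence rule can be sketched in plain shell. This only mimics what Compose does internally; `resolve` and `/tmp/demo.env` are illustrative names, not part of Compose:

```shell
#!/bin/sh
# Mimic Compose's lookup for an `environment: - VAR` passthrough entry:
# the host environment wins; the .env file is only a fallback.
printf 'VARIABLE_ENV=VALUE_ENV\n' > /tmp/demo.env

resolve() {  # $1 = variable name, $2 = host value ('' if unset)
  if [ -n "$2" ]; then
    printf '%s\n' "$2"                          # host environment wins
  else
    grep "^$1=" /tmp/demo.env | cut -d= -f2-    # fall back to .env
  fi
}

resolve VARIABLE_ENV ''     # prints VALUE_ENV (nothing set on the host)
resolve VARIABLE_ENV foo    # prints foo       (host value overrides .env)
```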

Related

use local env to replace variables in cloudformation template

I have some variables I would like to replace in the UserData of a CloudFormation template, and I do not want to expose them as parameters in CloudFormation. How can I do this?
CloudFormation seems to require that any variable to be replaced is declared as a parameter, but I don't find this flexible enough, so I'm not sure if someone else has figured out a way around it.
Certain variables don't really need to be tied to the infrastructure, but there is still a need to replace them dynamically.
For example, if I have this UserData:
UserData:
  "Fn::Base64":
    !Sub |
      #!/bin/bash -xe
      cat >> /tmp/docker_compose.yaml << EOF
      version: '3.5'
      services:
        nginx:
          container_name: nginx
          image: nginx:$TAG
          restart: always
          ports:
            - 80:80
          environment:
            SERVER_ID: $SERVER_ID
            AWS_REGION: $AWS_REGION
      EOF
and I want to set the environment variable values on the machine from which the CloudFormation command will be run:
export TAG=1.9.9
export SERVER_ID=12
export AWS_REGION=us-east-1
How can I use these local environment values in the UserData without declaring them as parameters? I've already tried everything I can think of and could not do it, so I wanted to tap into the power of the internet in case someone has found a way, or a hack.
Thanks
Here is one way of doing it via a script; there may be situations in which this script runs into issues, but you'll have to test and see.
I don't want the environment variables to be available outside of preparing my CloudFormation script, so I've done everything inside one script file: loading the environment variables and the substitution.
Note: you will need envsubst installed on your machine (it ships with the gettext package).
I have 3 files to start off with:
File one is my cloudformation script, in which I have a default value for each one of my parameters expressed as a bash variable:
cloudformation.yaml
Region:
  Default: $Region
InstanceType:
  Default: $InstanceType
Colour:
  Default: $Colour
Then I have my variables file:
variables.txt
InstanceType=t2.micro
Colour=Blue
Region=eu-west-1
Then I have my script that does the substitution:
script.sh
#!/bin/bash
# Load the variables into this shell, then mark them for export so
# envsubst (a child process) can see them.
source variables.txt
export $(cut -d= -f1 variables.txt)
envsubst < cloudformation.yaml > subs_cloudformation.yaml
This is the contents of my folder:
cloudformation.yaml script.sh variables.txt
I make sure my script.sh has the correct permissions:
chmod +x script.sh
And run my script:
./script.sh
The contents of my folder are now:
cloudformation.yaml script.sh variables.txt subs_cloudformation.yaml
And if I view the contents of my subs_cloudformation.yaml file:
Region:
  Default: eu-west-1
InstanceType:
  Default: t2.micro
Colour:
  Default: Blue
I can now run that cloudformation script, and cloudformation will do the job of substituting those defaults into my template - so all we're doing with the above script is giving cloudformation the defaults.
I've of course just given a snippet of the cloudformation template, you can further improve this by having a dev.txt, qa.txt, production.txt file of variables and substitute whichever one in.
Edit: It doesn't matter where in the file your variable is, so it can be in UserData or in a parameter default. You will also need to be careful: this won't check that you have a matching environment variable for every variable in your CloudFormation file. If one isn't in your variables file, the substituted value will simply be blank.
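The same variables.txt-to-template flow can be sketched without envsubst, using a sed loop as a stand-in (the /tmp paths are illustrative, and the substitution is naive: values containing | or & would need escaping):

```shell
#!/bin/sh
# Recreate the answer's inputs under /tmp.
cat > /tmp/variables.txt <<'EOF'
InstanceType=t2.micro
Colour=Blue
Region=eu-west-1
EOF
cat > /tmp/cloudformation.yaml <<'EOF'
Region:
  Default: $Region
InstanceType:
  Default: $InstanceType
EOF
# Substitute each $Name in the template with its value from variables.txt.
cp /tmp/cloudformation.yaml /tmp/subs_cloudformation.yaml
while IFS='=' read -r name value; do
  sed -i "s|\$$name|$value|g" /tmp/subs_cloudformation.yaml
done < /tmp/variables.txt
cat /tmp/subs_cloudformation.yaml
```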

How to force Git for Windows' bash-shell to not convert path-string to windows path?

I'm using the bash shell provided by Git for Windows, via Docker Toolbox for Windows. I want to export a string representing a Unix path to an environment variable and then use it in a Docker container. Something like:
export MY_VAR=/my/path; docker-compose up
The problem is that in my container the variable will be something like:
echo $MY_VAR # prints c:/Program Files/Git/my/path
So it seems the shell (my guess) recognizes the string as a path and converts it to Windows format. Is there a way to stop this?
I've attempted to use MSYS_NO_PATHCONV=1:
MSYS_NO_PATHCONV=1; export LOG_PATH=/my/path; docker-compose up
But it did not have any effect.
I don't think it's an issue with my docker-compose file or Dockerfile, but I'll attach them in case someone is interested.
My Dockerfile:
FROM node:8-slim
RUN mkdir /test \
 && chown node:node /test
USER node
ENTRYPOINT [ "/bin/bash" ]
My docker-compose.yml:
version: '2'
services:
  test:
    build:
      context: .
    image: test
    environment:
      - MY_VAR
    volumes:
      - ${MY_VAR}:/test
    command: -c 'sleep 100000'
The Final goal here is to make a directory on the host machine accessible from the docker container (for logs and such). The directory should be set by an environment variable. Setting the directory in the docker-compose.yml does work, just not for my use case.
If you want your docker-compose up command to be run with MSYS_NO_PATHCONV=1, you have two options:
export LOG_PATH=/c/Windows; export MSYS_NO_PATHCONV=1; docker-compose up
This will affect your whole bash session, as the variable is exported.
export LOG_PATH=/c/Windows; MSYS_NO_PATHCONV=1 docker-compose up
(Note I removed one semicolon intentionally.) This sets MSYS_NO_PATHCONV only in the context of the command being run.
Test it with:
$ export LOG_PATH=/c/Windows ; cmd "/c echo %LOG_PATH%";
C:/Windows --> Fails
$ export LOG_PATH=/c/Windows ; MSYS_NO_PATHCONV=1 cmd "/c echo %LOG_PATH%"
/c/Windows --> Success
$ export LOG_PATH=/c/Windows ; export MSYS_NO_PATHCONV=1; cmd "/c echo %LOG_PATH%";
/c/Windows --> Success but MSYS_NO_PATHCONV is now "permanently" set
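The scoping difference works the same way in any POSIX shell, so it can be demonstrated without Windows at all; MY_FLAG here is a stand-in for MSYS_NO_PATHCONV:

```shell
#!/bin/sh
# A prefix assignment (VAR=1 cmd) passes the variable only to that one
# command; `export` makes it visible to every later command in the session.
sh -c 'echo "before: ${MY_FLAG:-unset}"'            # unset
MY_FLAG=1 sh -c 'echo "prefix: ${MY_FLAG:-unset}"'  # 1, only for this command
sh -c 'echo "after:  ${MY_FLAG:-unset}"'            # still unset
export MY_FLAG=2
sh -c 'echo "export: ${MY_FLAG:-unset}"'            # 2, for the whole session
```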
It seems a workaround is to remove the leading / from the string and add it in the docker-compose.yml instead.
new docker-compose.yml:
version: '2'
services:
  test:
    build:
      context: .
    image: test
    environment:
      - MY_VAR
    volumes:
      - /${MY_VAR}:/test  # added '/' to the beginning of the line
    command: -c 'sleep 100000'
and then starting the container with:
export MY_VAR=my/path; docker-compose up # removed the '/' from the beginning of the path.
This does seem more like a "lucky" workaround than a proper solution, as when I build this on other systems I'll have to remember to remove the leading /. Doable, but a bit annoying. Maybe someone has a better idea.

Send variables into docker container to use in a script

I am running a script in the CI/CD pipeline. The goal is to get a string to work with; when I get that result, I save it into a variable, and the result is written into the YAML file used with the Dockerfile.
I want to pass that variable from the CI environment into the docker-compose container, so I am trying to export it the way other things are exported. However, it doesn't work:
ci/pdf/jenkins-changes.sh
LOG="$(cat ".log")"
export LOG
I have added a variables.env file that looks like this:
LOG=LOG
And then modified the docker-compose.yaml to read the var :
pdf:
  image: thisimage/this
  build:
    context: ../
    dockerfile: ./docker/Dockerfile.name
    args:
      git_branch: ${GIT_BRANCH}
  env_file:
    - variables.env
  environment:
    - LOG=${LOG}
  volumes:
    - do-build:/src/do-build
And I have also declared it in the Dockerfile that finally builds the container:
FROM ubuntu:16.04 as pdf-builder
ARG log
ENV log=${log}
RUN LOG=${log}
RUN export $LOG
And right after, I run the script.sh that requires the variable; however, it returns "unbound variable" and breaks.
LOG=${log}
echo ${LOG}
The answer to this question was:
ci/pdf/jenkins-changes.sh
LOG="$(cat ".log")"
export LOG
Then pass it as an argument, instead of a variable:
pdf:
  image: thisimage/this
  build:
    context: ../
    dockerfile: ./docker/Dockerfile.name
    args:
      git_branch: ${GIT_BRANCH}
  env_file:
    - variables.env
  environment:
    - LOG=${LOG}
  volumes:
    - do-build:/src/do-build
And then, in the Dockerfile, declare and define it:
ARG log
This should leave it available for any script to use.
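Note that an ARG on its own is only visible while the image is being built; to make the value visible to scripts at run time, it has to be copied into an ENV. A minimal sketch (the image, variable names, and command are assumptions, not from the answer above):

```dockerfile
FROM ubuntu:16.04
# Build-time input: docker-compose passes this via build.args
ARG log
# Persist the build arg as a runtime environment variable
ENV LOG=${log}
# Any script run in the container can now read $LOG
CMD ["/bin/bash", "-c", "echo \"$LOG\""]
```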

Docker compose won't find $PWD environment variable

Here's my docker-compose:
version: '2'
services:
  couchpotato:
    build:
      context: ./couchpotato
      dockerfile: Dockerfile
    ports:
      - 5050:5050
    volumes:
      - "${PWD}/couchpotato/data:/home/CouchPotato/data/"
      - "${PWD}/couchpotato/config:/home/CouchPotato/config/"
When I run it inside the shell, in the directory of the docker-compose.yml, I get:
WARNING: The PWD variable is not set. Defaulting to a blank string.
and the compose starts with PWD being empty.
I don't see any error in the file, as seen here: https://docs.docker.com/compose/environment-variables/
You don't need ${PWD} for this, you can just make the path relative and compose will expand it (one major difference between compose paths and those processed by docker run).
version: '2'
services:
  couchpotato:
    build:
      context: ./couchpotato
      dockerfile: Dockerfile
    ports:
      - 5050:5050
    volumes:
      - "./couchpotato/data:/home/CouchPotato/data/"
      - "./couchpotato/config:/home/CouchPotato/config/"
As for why compose doesn't see this variable, that depends on your shell. Compose looks for an exported environment variable, contents of the .env file, and command line flags to the docker-compose command. If each of those comes up empty for the variable, you'll get that warning.
My advice: change all $PWD to .
$PWD will not work if you are running with sudo: sudo runs as a different user, with a different environment. See the recommended post-install settings for Docker on Linux: https://docs.docker.com/engine/install/linux-postinstall/.
$ sudo env | grep -i pwd
$ env | grep -i pwd
PWD=/home/user
OLDPWD=/
If you really need absolute paths, then call this (Windows cmd syntax) before calling docker-compose up:
set PWD=%CD%
I had the same issue with one of my env vars. On looking at my bashrc file more closely, I found out that I hadn't exported that variable.
Before:
VAR=<value>
After:
export VAR=<value>
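The sudo behaviour can be reproduced without sudo: env -i starts a child process with a scrubbed environment, much as sudo does by default (MY_PATH is an illustrative variable, not anything Docker reads):

```shell
#!/bin/sh
# A normal child process inherits exported variables; a scrubbed one does not.
export MY_PATH="$PWD"
/bin/sh -c 'echo "normal child:   ${MY_PATH:-unset}"'
env -i /bin/sh -c 'echo "scrubbed child: ${MY_PATH:-unset}"'   # prints unset
```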

Docker and .bash_history

Is there any way to share a .bash_history volume with a docker container, so that every time I go into a shell I have my bash history available for scrolling through?
Would be awesome to be able to do the same thing with IPython, too.
This is the example from the documentation about volumes, "Mount a host file as a data volume":
docker run --rm -it -v ~/.bash_history:/root/.bash_history ubuntu /bin/bash
This will drop you into a bash shell in a new container, you will have your bash history from the host and when you exit the container, the host will have the history of the commands typed while in the container.
In your docker-compose.override.yml:
version: '2'
services:
  whatever:
    …
    volumes:
      - …
      - ~/.bash_history:/root/.bash_history
To keep IPython history, you can set the IPYTHONDIR environment variable to somewhere within your mapped volume.
The docker-compose.override.yml would look like this:
version: '2'
services:
  some-service:
    environment:
      - IPYTHONDIR=/app/.ipython
    volumes:
      - .:/app
My solution is useful when:
you don't want to share your local .bash_history with the .bash_history in your container
you use another shell (like fish) but still want to save a .bash_history between builds
you don't want to commit .bash_history to the git repo, but want it created automatically in the same directory when a container starts
I assume file structure to be:
docker-compose.yml
docker/
\--> bash/
     \--> .bashrc
     \--> .bash_history
docker-compose.yml
web-service:
  build: .
  volumes:
    - ./docker/bash/.bashrc:/home/YOUR_USER_NAME/.bashrc
    - ./docker/bash:/home/YOUR_USER_NAME/bash
./docker/bash/.bashrc - it will automatically create .bash_history:
export HISTFILE=~/bash/.bash_history
touch $HISTFILE
Optionally, you can add to .gitignore:
docker/bash/.bash_history
You can also achieve this with a named volume, telling bash where it can find the history file by defining the HISTFILE environment variable. I explained this in a bit more detail here:
https://antistatique.net/en/we/blog/2019/11/12/tips-docker-keep-your-bash-history
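A hypothetical sketch of that named-volume approach (the service name, mount path, and volume name are assumptions, not taken from the linked post):

```yaml
services:
  app:
    environment:
      - HISTFILE=/commandhistory/.bash_history  # tell bash where history lives
    volumes:
      - bash_history:/commandhistory            # named volume survives rebuilds
volumes:
  bash_history:
```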
For bash:
volumes:
  - ./.data/shell_history/php_bash_history.txt:/home/www-data/.bash_history  # bash
For sh:
volumes:
  - ./.data/shell_history/nginx_bash_history.txt:/root/.ash_history  # sh
