Setting environment variables in a makefile - makefile

I am trying to export an env variable in make so I can use it on the following lines. I am doing as suggested here: Setting environment variables in a makefile.
eks-apps:
    export KUBECONFIG=$(CURDIR)/terraform/kubernetes/cluster/$(shell ls terraform/kubernetes/cluster/ | grep kubeconfig)
    kubectl get all
But it's not using that kubeconfig in the kubectl command. What am I missing?

Every line in a recipe is executed in a new shell. Because of this, you have to run all your commands in a single shell:
eks-apps:
    ( \
    export KUBECONFIG=$(CURDIR)/terraform/kubernetes/cluster/$(shell ls terraform/kubernetes/cluster/ | grep kubeconfig); \
    kubectl get all \
    )
From the answer you linked to:
Please note: this implies that setting shell variables and invoking shell commands such as cd that set a context local to each process will not affect the following lines in the recipe. If you want to use cd to affect the next statement, put both statements in a single recipe line.
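An alternative worth knowing about, assuming GNU make 3.82 or newer: the .ONESHELL directive tells make to pass the entire recipe to a single shell invocation, so an export on one line survives to the next (a sketch, reusing the recipe from the question):

```make
.ONESHELL:
eks-apps:
	export KUBECONFIG=$(CURDIR)/terraform/kubernetes/cluster/$(shell ls terraform/kubernetes/cluster/ | grep kubeconfig)
	kubectl get all
```

Note that .ONESHELL applies to every target in the makefile, so check that your other recipes don't rely on the one-shell-per-line behavior before enabling it.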

Related

How do we access the variables set inside the tox environment in another block of tox?

I am using tox-docker, and it sets POSTGRES_5432_TCP_PORT as an environment variable. How do I access this env variable again? I need it because I have to provide it to the pytest command.
[tox]
skipsdist = True
envlist = py37-django22

[testenv]
docker = postgres:9
dockerenv =
    POSTGRES_USER=asd
    POSTGRES_DB=asd
    POSTGRES_PASSWORD=asd
setenv =
    PYTHONDONTWRITEBYTECODE=1
    DJANGO_SETTINGS_MODULE=app.settings.base
deps =
    -rrequirements.txt
    -rrequirements_dev.txt
commands =
    env
    python -c "print('qweqwe', {env:POSTGRES_5432_TCP_PORT:'default_port'})"
    pytest -sv --postgresql-port={env:POSTGRES_5432_TCP_PORT:} --cov-report html --cov-report term --cov=app -l --tb=long {posargs} --junitxml=junit/test-results.xml
Here, POSTGRES_5432_TCP_PORT is set by tox-docker, but when I try to access it inside tox via {env:...} it is not available. Yet when I execute the env command inside tox it prints the variable:
py37-django22 docker: run 'postgres:9'
py37-django22 run-test-pre: PYTHONHASHSEED='480168593'
py37-django22 run-test: commands[0] | env
PATH=
TOX_WORK_DIR=src/.tox
HTTPS_PROXY=http://0000:8000
LANG=C
HTTP_PROXY=http://0000:8000
PYTHONDONTWRITEBYTECODE=1
DJANGO_SETTINGS_MODULE=app.settings.base
PYTHONHASHSEED=480168593
TOX_ENV_NAME=py37-django22
TOX_ENV_DIR=/.tox/py37-django22
POSTGRES_USER=swordfish
POSTGRES_DB=swordfish
POSTGRES_PASSWORD=swordfish
POSTGRES_HOST=172.17.0.1
POSTGRES_5432_TCP_PORT=32822
POSTGRES_5432_TCP=32822
VIRTUAL_ENV=.tox/py37-django22
py37-django22 run-test: commands[1] | python -c 'print('"'"'qweqwe'"'"', '"'"'default_port'"'"')'
qweqwe default_port
py37-django22 run-test: commands[2] | pytest -sv --postgresql-port= --cov-report html --cov-report term --cov=app -l --tb=long --junitxml=junit/test-results.xml
If a script sets an environment variable, that envvar is visible to that process only. If it exports the variable, it will be visible to whatever sub-shells that script may spawn. Once the script exits, all envvars set by the shell process and any child processes are gone, since they existed only in that memory space.
Not sure what you're trying to do, Docker is not my speciality, but 5432 is the common Postgres port. If you're trying to supply it to pytest, you could say
POSTGRES_5432_TCP_PORT=5432 pytest <test_name>
Or something to that effect.
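If you just need a fallback for when the variable is absent in the shell that runs pytest, POSIX parameter expansion gives you one without hard-coding the port in the command itself (a sketch; the port value and the pytest flag mirror the question and are illustrative):

```shell
# Fall back to the standard Postgres port when tox-docker has not
# exported POSTGRES_5432_TCP_PORT into this shell.
unset POSTGRES_5432_TCP_PORT        # simulate the variable being absent
PORT="${POSTGRES_5432_TCP_PORT:-5432}"
echo "pytest --postgresql-port=$PORT"
```

When tox-docker has exported the variable, `${POSTGRES_5432_TCP_PORT:-5432}` expands to the real mapped port instead of the default.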

Dockerfile: how to set env variable from file contents

I want to set an environment variable in my Dockerfile.
I've got a .env file that looks like this:
FOO=bar
Inside my Dockerfile, I've got a command that parses the contents of that file and assigns it to FOO.
RUN 'export FOO=$(echo "$(cut -d'=' -f2 <<< $(grep FOO .env))")'
The problem I'm running into is that the script above doesn't return what I need it to. In fact, it doesn't return anything.
When I run docker-compose up --build, it fails with this error.
The command '/bin/sh -c 'export FOO=$(echo "$(cut -d'=' -f2 <<< $(grep FOO .env))")'' returned a non-zero code: 127
I know that the command /bin/sh -c 'echo "$(cut -d'=' -f2 <<< $(grep FOO .env))"' will generate the correct output, but I can't figure out how to assign that output to an environment variable.
Any suggestions on what I'm doing wrong?
Environment Variables
If you want to set a number of environment variables in your docker image (to be used within the containers), you can simply use the env_file configuration option in your docker-compose.yml file. With that option, all the entries in the .env file will be set as environment variables in the image and hence in the containers.
More Info about env_file
Build ARGS
If your requirement is to use some variables only within your Dockerfile, then you can specify them as below:
ARG FOO
ARG FOO1
ARG FOO2
etc...
And you have to specify these arguments under the build key in your docker-compose.yml:
build:
  context: .
  args:
    FOO: BAR
    FOO1: BAR1
    FOO2: BAR2
More info about args
Accessing .env values within the docker-compose.yml file
If you are looking to pass some values into your docker-compose file from the .env, then you can simply put your .env file in the same location as the docker-compose.yml file and set the configuration values as below:
ports:
  - "${HOST_PORT}:80"
So, as an example, you can set the host port for the service by setting HOST_PORT in your .env file.
Please check this
First, the error you're seeing: I suspect there's a "not found" error message not included in the question. If that's the case, the first issue is that, because you enclosed the command in quotes, the shell tried to run the entire string as a single executable. Rather than running the shell builtin export, it is looking for a binary whose name is the full string, spaces and all. To get past that error, you'd need to unquote your RUN string:
RUN export FOO=$(echo "$(cut -d'=' -f2 <<< $(grep FOO .env))")
However, that won't solve your underlying problem. The result of a RUN command is that docker saves the changes to the filesystem as a new layer of the image. Only changes to the filesystem are saved. The shell command you are running changes the shell state, but then that shell exits, the RUN command returns, and the state of that shell, including environment variables, is gone.
To solve this for your application, there are two options I can think of:
Option A: inject build args into your build for all the .env values, and write a script that calls build with the proper --build-arg flag for each variable. Inside the Dockerfile, you'll have two lines for each variable:
ARG FOO1="default value1"
ARG FOO2="default value2"
ENV FOO1=${FOO1} \
    FOO2=${FOO2}
Option B: inject your .env file and process it with an entrypoint in your container. This entrypoint could run your export command before kicking off the actual application. You'll also need to do this for each RUN command during the build where you need these variables. One shorthand I use for pulling in the file contents to environment variables is:
set -a && . .env && set +a
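That one-liner can be sketched end-to-end like this (the file path /tmp/demo.env and its contents are illustrative):

```shell
# Write a sample .env file, then source it with auto-export enabled.
printf 'FOO=bar\nBAZ=qux\n' > /tmp/demo.env
set -a            # every assignment from here on is exported
. /tmp/demo.env   # run the assignments in the current shell
set +a            # back to normal assignment behaviour
echo "$FOO $BAZ"
```

Without set -a, the sourced assignments would set plain shell variables, which would not be inherited by the application process the entrypoint execs.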

Makefile: set the environment variables output by a command

I am trying to write a make target that sets the environment variables output by a shell command.
The gcloud beta emulators datastore env-init command prints a few export statements to the terminal (it doesn't set the variables, it just echoes them):
export DATASTORE_EMULATOR_HOST=localhost:8432
export DATASTORE_PROJECT_ID=my-project-id
Normally, I have to copy and paste these lines into the terminal to set the variables.
Is it possible to execute these printed export statements so the variables are set in the shell? I tried this, but it didn't work:
target:
    @export JAVA_HOME=$(JAVA_HOME); \
    $(shell gcloud beta emulators datastore env-init); \
    go run src/main.go
It prints out the export statements if I do this:
target:
    export JAVA_HOME=$(JAVA_HOME); \
    gcloud beta emulators datastore env-init \
    go run src/main.go
Similar to JAVA_HOME, how can I also source the output of the gcloud beta emulators datastore env-init command (which is 4 lines of export commands) so the variables are set in the environment?
So I want something like this, in effect:
target:
    export JAVA_HOME=$(JAVA_HOME); \
    export DATASTORE_EMULATOR_HOST=localhost:8432; \
    export DATASTORE_PROJECT_ID=my-project-id; \
    go run src/main.go
thanks.
bsr
Merely printing export statements does not affect the environment of any process. And if the variable assignments do not appear to have any effect, that's because make runs each recipe line in a separate subshell, each with a fresh environment. You need to join the commands together like this:
target:
    export JAVA_HOME=$(JAVA_HOME); \
    eval "$$(gcloud beta emulators datastore env-init)"; \
    go run src/main.go
The eval is necessary so that the output of the gcloud command is evaluated in the current shell, and not a subshell. Also note the semicolon at the end of the second line of the recipe.
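The mechanism can be seen in isolation with a shell function standing in for the gcloud command (emit_exports is a stand-in; the variable values are the ones from the question):

```shell
# emit_exports mimics `gcloud beta emulators datastore env-init`:
# it only *prints* export statements, it does not execute them.
emit_exports() {
  echo 'export DATASTORE_EMULATOR_HOST=localhost:8432'
  echo 'export DATASTORE_PROJECT_ID=my-project-id'
}
eval "$(emit_exports)"   # run the printed statements in the current shell
echo "$DATASTORE_EMULATOR_HOST"
```

Without the eval, the command substitution would merely paste the text back onto the command line, and nothing would be exported.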

Append to a remote environment variable for a command started via ssh on RO filesystem

I can run a Python script on a remote machine like this:
ssh -t <machine> python <script>
And I can also set environment variables this way:
ssh -t <machine> "PYTHONPATH=/my/special/folder python <script>"
I now want to append to the remote PYTHONPATH and tried
ssh -t <machine> 'PYTHONPATH=$PYTHONPATH:/my/special/folder python <script>'
But that doesn't work because $PYTHONPATH won't get evaluated on the remote machine.
There is a quite similar question on Super User; the accepted answer there wants me to create an environment file which gets interpreted by ssh, and another question can be solved by creating and copying a script file which gets executed instead of python.
This is both awful and requires the target file system to be writable (which is not the case for me)!
Isn't there an elegant way to either pass environment variables via ssh or provide additional module paths to Python?
How about using /bin/sh -c '/usr/bin/env PYTHONPATH=$PYTHONPATH:/.../ python ...' as the remote command?
EDIT (re comments to prove this should do what it's supposed to given correct quoting):
bash-3.2$ export FOO=bar
bash-3.2$ /usr/bin/env FOO=$FOO:quux python -c 'import os;print(os.environ["FOO"])'
bar:quux
WFM here like this:
$ ssh host 'grep ~/.bashrc -e TEST'
export TEST="foo"
$ ssh host 'python -c '\''import os; print os.environ["TEST"]'\'
foo
$ ssh host 'TEST="$TEST:bar" python -c '\''import os; print os.environ["TEST"]'\'
foo:bar
Note the:
- single quotes around the entire command, to avoid expanding it locally
- embedded single quotes, which are thus escaped with the signature '\'' pattern (another way is '"'"')
- double quotes in the assignment (only required if the value has whitespace, but it's good practice not to depend on that, especially if the value is outside your control)
- avoidance of $VAR in the command: if I typed e.g. echo "$TEST", it would be expanded by the shell before the remote assignment takes place
A convenient way around this is to make the variable replacement a separate command:
$ ssh host 'export TEST="$TEST:bar"; echo "$TEST"'
foo:bar
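The same pattern can be checked without ssh at all; a plain child shell stands in for the remote side (a local sketch of the last command above):

```shell
# TEST is placed in the child's environment, then re-exported with a
# suffix *inside* that child shell, mirroring the ssh command.
TEST=foo sh -c 'export TEST="$TEST:bar"; echo "$TEST"'
```

This prints foo:bar, because the single quotes defer the expansion of $TEST to the child shell, where TEST is already set to foo.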

Terraform `local-exec` to set a local alias

I'm trying to set up an alias to quickly ssh into the newly created host when I create an AWS instance in terraform. I do this by running
# Handy alias to quickly ssh into newly created host
provisioner "local-exec" {
  command = "alias sshopenldap='ssh -i ${var.key_path} ubuntu@${aws_instance.ldap_instance.public_dns}'"
}
When I see the output of this execution:
aws_instance.ldap_instance (local-exec): Executing: /bin/sh -c "alias sshopenldap='ssh -i ~/.ssh/mykey.pem ubuntu@ec2-IP.compute-1.amazonaws.com'"
It seems to be OK, but the alias is not set. Could it be that the command runs in its own scope rather than in the current shell? If I copy and paste the command as-is into the console, the alias is set fine.
Is there a workaround for this?
I'm running terraform in a Mac OS X Mountain Lion terminal.
You could try something like:
# Handy alias to quickly ssh into newly created host
provisioner "local-exec" {
  command = "echo \"alias sshopenldap='ssh -i ${var.key_path} ubuntu@${aws_instance.ldap_instance.public_dns}'\" > script.sh && source script.sh && rm -f script.sh"
}
Not sure how the quote escaping will go...
It is indeed not possible to set an alias for the current shell from a script file, which is what you are trying to do. The only way out of this is to not run the script, but instead source it. So:
source somefile.sh
instead of executing it should do the trick.
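The underlying rule can be demonstrated with a plain variable (aliases behave the same way): state set in a child process dies with it, while a sourced file runs in the current shell (the file path /tmp/marker.sh is illustrative):

```shell
# An assignment in a child process does not survive it:
sh -c 'MARKER=set-in-child'
echo "after child: ${MARKER:-unset}"
# Sourcing (.) runs the commands in the *current* shell instead:
printf 'MARKER=set-by-source\n' > /tmp/marker.sh
. /tmp/marker.sh
echo "after source: $MARKER"
```

This is also why terraform's local-exec can never set an alias in your terminal: it always spawns its own /bin/sh, whose state vanishes when the provisioner finishes.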
