Use local env to replace variables in CloudFormation template - amazon-ec2

I have some variables I would like to replace in the UserData of a CloudFormation template, and I do not want to expose these variables as CloudFormation parameters.
How can I do this?
It seems CloudFormation wants you to declare any variable that needs to be replaced as a parameter, but I feel this is not flexible enough, so I am not sure if someone else has figured out a way around it.
Certain variables do not really need to be tied to the infrastructure, but there is still a need to replace them dynamically.
For example, if I have this UserData:
UserData:
  "Fn::Base64":
    !Sub |
      #!/bin/bash -xe
      cat >> /tmp/docker_compose.yaml << EOF
      version: '3.5'
      services:
        nginx:
          container_name: nginx
          image: nginx:$TAG
          restart: always
          ports:
            - 80:80
          environment:
            SERVER_ID: $SERVER_ID
            AWS_REGION: $AWS_REGION
      EOF
and I want to set the env variable values on the machine from which the CloudFormation command will be run:
export TAG=1.9.9
export SERVER_ID=12
export AWS_REGION=us-east-1
How can I use these local env values in the UserData without having those variables as parameters? I have already tried everything I can think of and could not get it to work.
So I wanted to tap into the power of the internet in case someone has already thought of a way or a hack.
Thanks

Here is one way of doing it via a script. There may be situations in which this script gives you issues, but you'll have to test and see.
I don't want the environment variables being available outside of preparing my cloudformation script - so I've done everything inside one script file; loading the environment variables and substitution.
Note: You will need envsubst installed on your machine (it ships with the GNU gettext package).
I have 3 files to start off with:
The first file is my CloudFormation template, in which I have a default value for each of my parameters expressed as a bash variable:
cloudformation.yaml
Region:
  Default: $Region
InstanceType:
  Default: $InstanceType
Colour:
  Default: $Colour
Then I have my variables file:
variables.txt
InstanceType=t2.micro
Colour=Blue
Region=eu-west-1
Then I have my script that does the substitution:
script.sh
#!/bin/bash
# Load the variables into this shell, then export them so envsubst can see them
source variables.txt
export $(cut -d= -f1 variables.txt)
envsubst < cloudformation.yaml > subs_cloudformation.yaml
These are the contents of my folder:
cloudformation.yaml script.sh variables.txt
I make sure my script.sh has the correct permissions:
chmod +x script.sh
And run my script:
./script.sh
The contents of my folder are now:
cloudformation.yaml script.sh variables.txt subs_cloudformation.yaml
And if I view the contents of my subs_cloudformation.yaml file:
Region:
  Default: eu-west-1
InstanceType:
  Default: t2.micro
Colour:
  Default: Blue
I can now run that CloudFormation template, and CloudFormation will do the job of substituting those defaults into it; all the script above does is hand CloudFormation the defaults.
Of course I've only shown a snippet of the CloudFormation template. You can improve this further by keeping a dev.txt, qa.txt, and production.txt file of variables and substituting in whichever one you need, as in the sketch below.
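For example, the script could take the variables file as an argument; a minimal sketch, assuming the same file layout as above:
#!/bin/bash
# Usage: ./script.sh dev.txt   (or qa.txt, production.txt)
VARS_FILE="${1:-variables.txt}"
source "$VARS_FILE"
export $(cut -d= -f1 "$VARS_FILE")
envsubst < cloudformation.yaml > subs_cloudformation.yaml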
Edit: It doesn't matter where in the file your variable is; it can be in UserData or in a parameter default. You will also need to be careful: this won't check that you have a matching environment variable for every variable in your CloudFormation file. If a variable isn't in your variables file, the substituted value will simply be blank.
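One way to guard against that: envsubst optionally takes an explicit list of variables to substitute, and leaves every other $... in the file untouched (which also protects shell variables inside UserData). A sketch, assuming the three variables above:
#!/bin/bash
source variables.txt
export $(cut -d= -f1 variables.txt)
# Only the listed variables are substituted; anything else stays literal
envsubst '$Region $InstanceType $Colour' < cloudformation.yaml > subs_cloudformation.yaml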

Related

How to export hostname in a Makefile and use it in a compose file

I am working on a Dockerfile and docker-compose file, where I need the hostname. I am also using a Makefile to start the container, but the container needs the hostname.
The following is my Makefile, with the start command and the subcommands it executes.
This command does not export the MY_HOST var value from hostname -i.
start:
	export MY_HOST=`hostname -i`
	echo ${MY_HOST}
	docker -f test.yml up -d
The following is my docker-compose yml file, where I want to use the exported variable.
MyImage:
  image: registry.test:latest
  restart: always
  environment:
    - MY_HOST=${MY_HOST}
What's wrong with this code? Can someone help with this?
Unfortunately, it is impossible to pass env variables from one Makefile command to another, because each recipe line executes in a separate shell. But you can define a variable and reuse it later this way:
MY_HOST := $(shell hostname)

start:
	MY_HOST=${MY_HOST} \
	docker-compose run --rm shell env
docker-compose.yml
MyImage:
  image: registry.test:latest
  restart: always
  environment:
    - MY_HOST
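As an alternative (not part of the answer above), GNU make's export directive pushes a Make variable into the environment of every recipe's sub-shell, so docker-compose can interpolate it without the line-continuation trick:
# Recipe lines must be indented with a tab
MY_HOST := $(shell hostname -i)
export MY_HOST

start:
	docker-compose -f test.yml up -d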
https://www.gnu.org/software/make/manual/make.html#Values
also
https://makefiletutorial.com/#variables
and
pass env variables in make command in makefile

Scriptable args in docker-compose file

In my docker-compose file (docker-compose.yaml), I would like to set an argument based on a small shell script like this:
services:
  backend:
    [...]
    build:
      [...]
      args:
        PKG_NAME: $(dpkg -l <my_package>)
In the Dockerfile, I read this argument like this:
ARG PKG_NAME
First of all: I know that this approach is OS-dependent (it requires dpkg), but for starters I would be happy to make it run on Debian. Also, it's fine if the value is an empty string.
However, docker-compose up throws this error:
ERROR: Invalid interpolation format for "build" option in service "backend": "$(dpkg -l <my_package>)"
Is there a way to dynamically specify an argument in the docker-compose file through a shell script (or another way)?
You can only use variable substitution as described in the Compose file documentation.
You are trying to inject a shell construct, and this is not supported.
The documentation has several examples of how to pass vars to a Compose file. In your case, you could:
export the var in your environment:
export MY_PACKAGE=$(dpkg -l <my_package>)
use that var in your compose file with a default (a combined sketch follows):
args:
  PKG_NAME: "${MY_PACKAGE:-some_default_pkg}"
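Putting the two steps together, a small wrapper script might look like this. A sketch: my_package is a placeholder, and dpkg-query is used here to get just the version string instead of the table that dpkg -l prints:
#!/bin/sh
# Resolve the package version on the host, then let docker-compose
# interpolate it into the build args
export MY_PACKAGE=$(dpkg-query -W -f='${Version}' my_package)
docker-compose up --build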

Dockerfile: how to set env variable from file contents

I want to set an environment variable in my Dockerfile.
I've got a .env file that looks like this:
FOO=bar
Inside my Dockerfile, I've got a command that parses the contents of that file and assigns it to FOO.
RUN 'export FOO=$(echo "$(cut -d'=' -f2 <<< $(grep FOO .env))")'
The problem I'm running into is that the script above doesn't return what I need it to. In fact, it doesn't return anything.
When I run docker-compose up --build, it fails with this error.
The command '/bin/sh -c 'export FOO=$(echo "$(cut -d'=' -f2 <<< $(grep FOO .env))")'' returned a non-zero code: 127
I know that the command /bin/sh -c 'echo "$(cut -d'=' -f2 <<< $(grep FOO .env))"' will generate the correct output, but I can't figure out how to assign that output to an environment variable.
Any suggestions on what I'm doing wrong?
Environment Variables
If you want to set a number of environment variables in your Docker image (to be used within the containers), you can simply use the env_file configuration option in your docker-compose.yml file. With that option, all the entries in the .env file will be set as environment variables in the image and hence in the containers.
More Info about env_file
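A minimal sketch of what that looks like (the service name and image are placeholders):
services:
  web:
    image: nginx:latest
    env_file:
      - .env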
Build ARGS
If your requirement is to use some variables only within your Dockerfile, then you can specify them as below:
ARG FOO
ARG FOO1
ARG FOO2
etc...
And you have to specify these arguments under the build key in your docker-compose.yml
build:
  context: .
  args:
    FOO: BAR
    FOO1: BAR1
    FOO2: BAR2
More info about args
Accessing .env values within the docker-compose.yml file
If you are looking to pass some values into your docker-compose file from the .env file, you can simply put your .env file in the same location as the docker-compose.yml file and set the configuration values as below:
ports:
  - "${HOST_PORT}:80"
So, as an example, you can set the host port for the service by setting it in your .env file.
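For instance, a .env file like this (the port value is just an example):
HOST_PORT=8080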
Please check this
First, the error you're seeing: I suspect there's a "not found" error message not included in the question. If that's the case, the first issue is that you tried to run the full string as an executable, because you enclosed it in quotes. Rather than running the shell builtin export, Docker is trying to find a binary whose name is the entire string, spaces and all. To get past that error, unquote your RUN string:
RUN export FOO=$(echo "$(cut -d'=' -f2 <<< $(grep FOO .env))")
However, that won't solve your underlying problem. The result of a RUN command is that Docker saves the changes to the filesystem as a new layer in the image; only changes to the filesystem are saved. The shell command you run changes the shell's state, but then that shell exits, the RUN command returns, and the state of that shell, including its environment variables, is gone.
To solve this for your application, there are two options I can think of:
Option A: inject build args into your build for all the .env values, and write a script that calls build with the proper --build-arg flag for each variable (see the sketch after these lines). Inside the Dockerfile, you'll have two lines for each variable:
ARG FOO1="default value1"
ARG FOO2="default value2"
ENV FOO1=${FOO1} \
    FOO2=${FOO2}
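The wrapper script would then pass each value on the command line; a hypothetical invocation (the image name and values are placeholders):
docker build \
  --build-arg FOO1="value one" \
  --build-arg FOO2="value two" \
  -t myimage .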
Option B: inject your .env file and process it with an entrypoint in your container. This entrypoint could run your export command before kicking off the actual application. You'll also need to do this for each RUN command during the build where you need these variables. One shorthand I use for pulling the file contents into environment variables is:
set -a && . .env && set +a
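A minimal entrypoint sketch using that shorthand (the /app/.env path is an assumption):
#!/bin/sh
# Export everything defined in .env, then hand off to the real command
set -a && . /app/.env && set +a
exec "$@"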

Cloudformation Doesn't Recognize Shell Variables in UserData Scripts

I've noticed that scripts run with CloudFormation's UserData attribute don't recognize the EC2 instance's shell variables. For example, the template section below doesn't print any values when provisioning. Is there any way to get around this?
UserData:
  Fn::Base64: !Sub |
    #!/bin/bash
    echo HOME: $HOME
    echo USER: $USER
    echo PATH: $PATH
Note that the environment where cloud-init's User-Data Script gets executed doesn't usually contain HOME and USER variables, since the script is executed as root in a non-login shell.
Try the env command in your UserData to see the full list of environment variables available:
Description: Output shell variables.
Resources:
  Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-9be6f38c # amzn-ami-hvm-2016.09.1.20161221-x86_64-gp2
      InstanceType: m3.medium
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          env
On an Amazon Linux AMI (note that the result will depend on the AMI you're running!), I get the following output in the Console Output:
TERM=linux
PATH=/sbin:/usr/sbin:/bin:/usr/bin
RUNLEVEL=3
runlevel=3
PWD=/
LANGSH_SOURCED=1
LANG=en_US.UTF-8
PREVLEVEL=N
previous=N
CONSOLETYPE=serial
SHLVL=4
UPSTART_INSTANCE=
UPSTART_EVENTS=runlevel
UPSTART_JOB=rc
_=/bin/env
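If your script depends on those variables, one workaround (a sketch, not the only option) is simply to set them yourself at the top of the UserData script:
#!/bin/bash
# UserData runs as root in a non-login shell, so define what you need explicitly
export HOME=/root
export USER=root
echo "HOME: $HOME"
echo "USER: $USER"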

How to get environment variables into Ansible script run by cloud-init?

I have a cloud-init bash script that pulls a zip from S3, unzips it, and then runs an Ansible playbook contained in it. Now, I also have environment variables baked into /etc/environment, and the Ansible playbook uses lookup('env') to grab those values. It works fine when run through bash, either as the main user or as root. But when it fires via cloud-init, the variables are not transferred through.
In my bash script, the first line is source /etc/environment, and I can echo them out just fine. It's only when the Ansible playbook does the lookup that it fails. Interestingly, I can force the variables as so:
FOO=$FOO BAR=$BAR ansible-playbook -c local ...
and that works. Does anyone have any idea how I can get around having to hardcode the variables into the playbook line, and just have them work as expected, i.e. pull from /etc/environment?
Edit: here's the cloud-init:
#!/bin/bash
source /etc/environment
doit() {
  aws s3 cp s3://my/scripts/dev-s3-push.tar.gz /tmp/my.tar.gz
  mkdir -p /app/deploy
  tar -C /app/deploy -zxvf /tmp/my.tar.gz
  cd /app/deploy
  FOO=$FOO BAR=$BAR ansible-playbook -i "localhost," -c local run.yml
}
doit
This is added into the User Data section in AWS.
Okay, so I figured it out. lookup() uses os.getenv underneath, and I found a few other questions related to os.getenv not returning values properly.
The issue is that in my /etc/environment, I had entries like FOO=bar, where they should have been export FOO=bar. Changing all the values over to that form makes it work. I still have the source line in the cloud-init function, but I think this is solved now.
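For reference, the kind of lookup the playbook performs looks like this (a hypothetical task, not taken from the question):
- name: Show FOO from the environment
  debug:
    msg: "FOO is {{ lookup('env', 'FOO') }}"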
