CloudFormation Doesn't Recognize Shell Variables in UserData Scripts

I've noticed that scripts run with CloudFormation's UserData attribute don't recognize the EC2 instance's shell variables. For example, the template section below doesn't print any values when provisioning. Is there any way to get around this?
UserData:
  Fn::Base64: !Sub |
    #!/bin/bash
    echo HOME: $HOME
    echo USER: $USER
    echo PATH: $PATH

Note that the environment where cloud-init's User-Data Script gets executed doesn't usually contain HOME and USER variables, since the script is executed as root in a non-login shell.
Try the env command in your UserData to see the full list of environment variables available:
Description: Output shell variables.
Resources:
  Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-9be6f38c # amzn-ami-hvm-2016.09.1.20161221-x86_64-gp2
      InstanceType: m3.medium
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          env
On an Amazon Linux AMI (note that the result will depend on the AMI you're running!), I get the following output in the Console Output:
TERM=linux
PATH=/sbin:/usr/sbin:/bin:/usr/bin
RUNLEVEL=3
runlevel=3
PWD=/
LANGSH_SOURCED=1
LANG=en_US.UTF-8
PREVLEVEL=N
previous=N
CONSOLETYPE=serial
SHLVL=4
UPSTART_INSTANCE=
UPSTART_EVENTS=runlevel
UPSTART_JOB=rc
_=/bin/env
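If the script genuinely needs values like HOME or USER, a minimal workaround sketch (not from the original answer; it assumes the default case where cloud-init runs the script as root) is to set them explicitly at the top of the UserData script:

UserData:
  Fn::Base64: !Sub |
    #!/bin/bash
    # cloud-init runs this as root in a non-login shell, so define the variables ourselves
    export HOME=/root
    export USER=root
    echo HOME: $HOME
    echo USER: $USER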

Related

Kubernetes: using bash variable expansion in container entrypoint

According to the documentation, Kubernetes expands previously defined environment variables in the container using the syntax $(VAR_NAME), and the expanded value can be used in the container's entrypoint.
For example:
env:
  - name: MESSAGE
    value: "hello world"
command: ["/bin/echo"]
args: ["$(MESSAGE)"]
Is it possible, though, to use bash expansion, i.e. ${Var1:-${Var2}}, inside the container's entrypoint for the Kubernetes environment variables? E.g.
env:
  - name: Var1
    value: "hello world"
  - name: Var2
    value: "no hello"
command: ['bash', '-c', "echo ${Var1:-$Var2}"]
Is it possible to use bash expansion, i.e. ${Var1:-${Var2}}, inside the container's entrypoint?
Yes, by using
command:
  - /bin/bash
  - "-c"
  - "echo ${Var1:-${Var2}}"
but not otherwise. Kubernetes is not a wrapper for bash; it uses the Linux exec system call to launch programs inside the container, so the only way to get bash behavior is to launch bash.
That's also why they chose the $() syntax for their environment interpolation, so it would be distinct from the ${} style a shell uses. This question comes up so often, though, that one might wish they hadn't gone with any $-based syntax at all, to avoid confusing folks further.
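Putting the two together, a minimal Pod sketch (the Pod name and image are placeholders, not from the original question) could look like:

apiVersion: v1
kind: Pod
metadata:
  name: expansion-demo
spec:
  restartPolicy: Never
  containers:
    - name: demo
      image: ubuntu:22.04   # placeholder image; anything with /bin/bash works
      env:
        - name: Var1
          value: "hello world"
        - name: Var2
          value: "no hello"
      # bash itself is the entrypoint, so it performs the ${Var1:-${Var2}} expansion;
      # Kubernetes only interpolates the $(VAR) form and leaves ${...} untouched
      command: ["/bin/bash", "-c", "echo ${Var1:-${Var2}}"]

Because Var1 is set, this prints "hello world"; remove the Var1 entry and it falls back to "no hello".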

Use container's environment variables for a command line

I would like to use some environment variables in a bash script that contains:
#!/usr/bin/env bash

docker-compose exec apache bash -c "\
  printenv | grep BLACKFIRE \
  && blackfire-agent --register --server-id=$BLACKFIRE_SERVER_ID --server-token=$BLACKFIRE_SERVER_TOKEN \
  && /etc/init.d/blackfire-agent restart"

echo "Blackfire agent configured !"
I pass the variables from my .env:
BLACKFIRE_SERVER_ID=xxxxx
BLACKFIRE_SERVER_TOKEN=xxxxx
BLACKFIRE_CLIENT_ID=xxxxx
BLACKFIRE_CLIENT_TOKEN=xxxxx
using docker-compose.yml
environment:
  - DAMART_ENV=dev
  - BLACKFIRE_SERVER_ID=${BLACKFIRE_SERVER_ID}
  - BLACKFIRE_SERVER_TOKEN=${BLACKFIRE_SERVER_TOKEN}
  - BLACKFIRE_CLIENT_ID=${BLACKFIRE_CLIENT_ID}
  - BLACKFIRE_CLIENT_TOKEN=${BLACKFIRE_CLIENT_TOKEN}
The apache container has the environment variables (I checked with printenv inside the container).
If I replace the variables with their values it works, but I don't want to hard-code them in this script.
How should I reference the variables for them to work?
The code hosting platform usually has a facility for storing secret keys and environment tokens. For instance, GitHub secrets can be kept at https://github.com/user/repo/settings/secrets/actions for your Actions workflows or https://github.com/user/repo/settings/secrets/codespaces/new for the repo's Codespaces.
The problem comes from the use of " (double quotes): they make the string be interpreted not by the container but by the machine running the docker-compose exec, and that machine does not have the env variables, which are only set in the container.
Replacing them with ' (single quotes) makes the script use the environment variables set in the container:
docker-compose exec apache bash -c '\
  blackfire-agent --register --server-id=$BLACKFIRE_SERVER_ID --server-token=$BLACKFIRE_SERVER_TOKEN \
  && /etc/init.d/blackfire-agent restart'
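If you ever need a mix of both behaviours (some variables expanded on the host, others inside the container), a hedged alternative is to keep the double quotes and escape the dollar signs of the variables that should survive until the container's shell:

# \$VAR is passed through literally by the host shell and expanded by bash inside the container
docker-compose exec apache bash -c "\
  blackfire-agent --register --server-id=\$BLACKFIRE_SERVER_ID --server-token=\$BLACKFIRE_SERVER_TOKEN \
  && /etc/init.d/blackfire-agent restart"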

ansible: get environment variables to use as vars

I want to use some environment variables that are created in the shell by export commands like this:
export TOKEN=xxxxx
export USER=xxxx
I then want to use these environment variables in my ansible playbook, like:
- name: login
  command: USER="variable I created" TOKEN="same as USER" python xxx.py
I checked the documentation for the env lookup, but I couldn't follow its single example, so I'm asking for help.
What you want is to use environment:
- name: login
  command: python xxx.py
  environment:
    USER: "variable I created"
    TOKEN: "same as USER"
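If the values really should come from the variables you exported in your shell, one possible sketch is to read them with the env lookup (lookups are evaluated on the controller, so this assumes you run ansible-playbook from the same shell where you did the export):

- name: login
  command: python xxx.py
  environment:
    USER: "{{ lookup('env', 'USER') }}"
    TOKEN: "{{ lookup('env', 'TOKEN') }}"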

use local env to replace variables in cloudformation template

I have some variables I would like to replace in the UserData of a CloudFormation template, and I do not want to add these variables as parameters in CloudFormation.
How can I do this?
It seems CloudFormation wants you to always declare any variable that needs to be replaced as a parameter, but I feel this is not flexible enough, so I'm not sure if someone else has figured out a way to do this.
Certain variables do not really need to be tied to the infrastructure, but there is still a need to replace them dynamically.
For example, if I have this UserData:
UserData:
  "Fn::Base64":
    !Sub |
      #!/bin/bash -xe
      cat >> /tmp/docker_compose.yaml << EOF
      version: '3.5'
      services:
        ngnix:
          container_name: nginx
          image: nginx:$TAG
          restart: always
          ports:
            - 80:80
          environment:
            SERVER_ID: $SERVER_ID
            AWS_REGION: $AWS_REGION
      EOF
and I want to set the env variable values on the machine from which the CloudFormation command will be run:
export TAG=1.9.9
export SERVER_ID=12
export AWS_REGION=us-east-1
How can I use these local env values in the UserData without declaring those variables as parameters? I already tried everything I could think of and could not do it.
So I wanted to tap into the power of the internet in case someone has thought of a way or a hack.
Thanks
Here is one way of doing it via a script. There may be situations in which this script gives issues, but you'll have to test and see.
I don't want the environment variables to be available outside of preparing my CloudFormation template, so I've done everything inside one script file: loading the environment variables and the substitution.
Note: You will need envsubst (it ships with GNU gettext) installed on your machine.
I have 3 files to start off with:
File one is my cloudformation script, in which I have a default value for each one of my parameters expressed as a bash variable:
cloudformation.yaml
Region:
  Default: $Region
InstanceType:
  Default: $InstanceType
Colour:
  Default: $Colour
Then I have my variables file:
variables.txt
InstanceType=t2.micro
Colour=Blue
Region=eu-west-1
Then I have my script that does the substitution:
script.sh
#!/bin/bash
source variables.txt
export $(cut -d= -f1 variables.txt)
cat cloudformation.yaml | envsubst > subs_cloudformation.yaml
This is the contents of my folder:
cloudformation.yaml script.sh variables.txt
I make sure my script.sh has the correct permissions:
chmod +x script.sh
And run my script:
./script.sh
The contents of my folder is now:
cloudformation.yaml script.sh variables.txt subs_cloudformation.yaml
And if I view the contents of my subs_cloudformation.yaml file:
Region:
  Default: eu-west-1
InstanceType:
  Default: t2.micro
Colour:
  Default: Blue
I can now run that CloudFormation template, and CloudFormation will do the job of substituting those defaults into the stack; all the script above does is hand CloudFormation the defaults.
I've of course only given a snippet of the CloudFormation template. You can improve this further by having dev.txt, qa.txt, and production.txt variable files and substituting whichever one you need.
Edit: It doesn't matter where in the file your variable is, so it can be in UserData or in a parameter default. You will also need to be careful: this won't check that you have a matching environment variable for every variable in your CloudFormation file. If a variable isn't in your variables file, the substituted value will simply be blank.
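A hedged refinement (not part of the original answer) is to give envsubst an explicit list of variables; only those get replaced, so any other $ or ${} occurrences in the template, for example references inside an Fn::Sub block, pass through untouched:

#!/bin/bash
set -a                 # auto-export everything defined while sourcing
source variables.txt
set +a
# Only the listed variables are substituted; unlisted ones are left as-is
envsubst '$Region $InstanceType $Colour' < cloudformation.yaml > subs_cloudformation.yaml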

Setting an environment variable in Ansible from the output of a bash command

I would like to set the output of a shell command as an environment variable in Ansible.
I did the following to achieve it:
- name: Copy content of config.json into variable
  shell: /bin/bash -l -c "cat /storage/config.json"
  register: copy_config
  tags: something

- name: set config
  shell: "echo $TEMP_CONFIG"
  environment:
    TEMP_CONFIG: "{{ copy_config }}"
  tags: something
But somehow, after the Ansible run, when I run the following command in my terminal:
echo ${TEMP_CONFIG}
it gives an empty result.
Any help would be appreciated.
There are at least two problems:
You should pass copy_config.stdout as the value:
- name: set config
  shell: "echo $TEMP_CONFIG"
  environment:
    TEMP_CONFIG: "{{ copy_config.stdout }}"
  tags: something
You need to register the result of the above task and then print its stdout:
- name: set config
  shell: "echo $TEMP_CONFIG"
  environment:
    TEMP_CONFIG: "{{ copy_config.stdout }}"
  tags: something
  register: shell_echo

- debug:
    var: shell_echo.stdout
You will never be able to pass the variable to an unrelated process this way. Unless you write the value into an rc file (like ~/.bash_profile, which is sourced on interactive login if you use Bash), no other shell process will be able to see the value of TEMP_CONFIG. That is just how the system works.
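If you do need the value to show up in later interactive shells on the managed host, one possible sketch (assuming a Bash login shell; the file path and the simple quoting are assumptions to adjust for your setup) is to persist it into a profile file with lineinfile:

- name: Persist TEMP_CONFIG for future login shells
  ansible.builtin.lineinfile:
    path: ~/.bash_profile
    regexp: '^export TEMP_CONFIG='
    line: "export TEMP_CONFIG='{{ copy_config.stdout }}'"

Shells opened after this (or after sourcing ~/.bash_profile) will see TEMP_CONFIG; an already-open shell still won't.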
