nginx: use environment variables - shell

I have the following scenario: I have an env variable $SOME_IP defined and want to use it in an nginx block. Referring to the nginx documentation, I use the env directive in the nginx.conf file like the following:
user www-data;
worker_processes 4;
pid /run/nginx.pid;
env SOME_IP;
Now I want to use the variable for a proxy_pass. I tried it like the following:
location / {
    proxy_pass http://$SOME_IP:8000;
}
But I end up with this error message: nginx: [emerg] unknown "some_ip" variable

With NGINX Docker image
Apply envsubst to a template of the configuration file at container start. envsubst is included in the official NGINX Docker images.
Environment variables are referenced in the form $VARIABLE or ${VARIABLE}.
nginx.conf.template:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    server {
        listen 80;
        location / {
            access_log off;
            return 200 '${MESSAGE}';
            add_header Content-Type text/plain;
        }
    }
}
Dockerfile:
FROM nginx:1.17.8-alpine
COPY ./nginx.conf.template /nginx.conf.template
CMD ["/bin/sh" , "-c" , "envsubst < /nginx.conf.template > /etc/nginx/nginx.conf && exec nginx -g 'daemon off;'"]
Build and run docker:
docker build -t foo .
docker run --rm -it --name foo -p 8080:80 -e MESSAGE="Hello World" foo
NOTE: If the config template contains dollar signs ($) that should not be substituted, list all used variables as a parameter to envsubst so that only those are replaced; otherwise envsubst would also rewrite nginx's own runtime variables such as $host. E.g.:
CMD ["/bin/sh" , "-c" , "envsubst '$USER_NAME $PASSWORD $KEY' < /nginx.conf.template > /etc/nginx/nginx.conf && exec nginx -g 'daemon off;'"]
See the Nginx Docker documentation for reference; look for "Using environment variables in nginx configuration":
Using environment variables in nginx configuration
Out-of-the-box, nginx doesn’t support environment variables inside
most configuration blocks. But envsubst may be used as a workaround if
you need to generate your nginx configuration dynamically before nginx
starts.
Here is an example using docker-compose.yml:
web:
  image: nginx
  volumes:
    - ./mysite.template:/etc/nginx/conf.d/mysite.template
  ports:
    - "8080:80"
  environment:
    - NGINX_HOST=foobar.com
    - NGINX_PORT=80
  command: /bin/bash -c "envsubst < /etc/nginx/conf.d/mysite.template > /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"
The mysite.template file may then contain variable references like
this:
listen ${NGINX_PORT};
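For illustration, a minimal mysite.template using both variables from the compose file above might look like this (a sketch; the proxy destination http://app:8000 is a hypothetical backend):
server {
    listen      ${NGINX_PORT};
    server_name ${NGINX_HOST};
    location / {
        # hypothetical backend; replace with your own
        proxy_pass http://app:8000;
    }
}
Since this template goes through a bare envsubst call, it deliberately avoids nginx runtime variables such as $host, which would otherwise be replaced too.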

You can access the variables via modules - I found options for doing it with Lua and Perl.
Wrote about it on my company's blog:
https://web.archive.org/web/20170712003702/https://docs.apitools.com/blog/2014/07/02/using-environment-variables-in-nginx-conf.html
The TL;DR:
env API_KEY;
And then:
http {
    ...
    server {
        location / {
            # set var using Lua
            set_by_lua $api_key 'return os.getenv("API_KEY")';
            # set var using Perl
            perl_set $api_key 'sub { return $ENV{"API_KEY"}; }';
            ...
        }
    }
}
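Note that these directives require the corresponding modules to be available. A sketch of what loading them can look like (module paths vary by distribution and build):
# needed for perl_set
load_module modules/ngx_http_perl_module.so;
# set_by_lua needs the lua-nginx-module, which typically ships with OpenResty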
EDIT: original blog is dead, changed link to wayback machine cache

The variable syntax itself is correct, but environment variables made available with the env directive in nginx.conf are not nginx configuration variables and cannot be used in http, server, or location blocks; you would first have to turn one into an nginx variable (e.g. a $some_ip_from_env variable set via perl_set or set_by_lua, as shown above).
You can use environment variables if you use the OpenResty bundle, which includes Lua.

Since nginx 1.19 you can now use environment variables in your configuration with docker-compose. I used the following setup:
# file: docker/nginx/templates/default.conf.conf
upstream api-upstream {
    server ${API_HOST};
}

# file: docker-compose.yml
services:
  nginx:
    image: nginx:1.19-alpine
    volumes:
      - "./docker/nginx/templates:/etc/nginx/templates/"
    environment:
      NGINX_ENVSUBST_TEMPLATE_SUFFIX: ".conf"
      API_HOST: api.example.com
I found this answer in another thread: https://stackoverflow.com/a/62844707/4479861

For simple environment variable substitution, you can use the envsubst command and the template feature available since the nginx 1.19 Docker image. Note: envsubst does not support fallback defaults, e.g. ${MY_ENV:-DefaultValue}.
For more advanced usage, consider https://github.com/guyskk/envsub-njs. It is implemented via nginx NJS and uses JavaScript template literals; it is powerful and works well cross-platform, e.g. ${Env('MY_ENV', 'DefaultValue')}.
You can also consider https://github.com/kreuzwerker/envplate, which supports a syntax just like shell variable substitution.

If you're not tied to a bare installation of nginx, you could use Docker for the job.
For example, nginx4docker implements a bunch of basic env variables that can be set through Docker, so you don't have to fiddle around with nginx's basic templating and all its drawbacks.
nginx4docker can also be extended with your custom env variables; just mount a file that lists all your env variables to Docker: ... --mount $(pwd)/CUSTOM_ENV:/ENV ...
If the worst case happens and you can't switch to or use Docker, a workaround may be to set every nginx variable to its own name (e.g. host="$host"); in that case envsubst replaces $host with the literal string $host.
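A minimal sketch of that trick, assuming a plain envsubst call over the whole template:
# Give each nginx runtime variable its own literal value so that
# envsubst rewrites $host to $host, i.e. leaves it intact:
export host='$host'
export remote_addr='$remote_addr'
envsubst < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf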

Related

Changing the nginx fcgiwrap user

I'm trying to use FastCGI with a shell script and make it callable via nginx.
No matter what I do, though, the script is run as the www-data user. I need it to run as the nginx user that nginx itself uses.
nginx 1.15.1
Installed fcgiwrap:
apt-get install fcgiwrap
Config is following:
nginx.conf:
user nginx nginx;
worker_processes 1;
worker_rlimit_nofile 100000;
http {
    location ~ (\.sh)$ {
        gzip off;
        root /home/nginx/www;
        autoindex on;
        fastcgi_pass unix:/var/run/fcgiwrap.socket;
        include /usr/local/nginx/conf/fastcgi_params;
        fastcgi_param DOCUMENT_ROOT /home/nginx/www;
        fastcgi_param SCRIPT_FILENAME /home/nginx/www$fastcgi_script_name;
    }
}
The problem I need to fix is access to some of the files created by other scripts that run under the nginx user.
I also tried editing /etc/init.d/fcgiwrap to change
FCGI_USER="nginx"
FCGI_GROUP="nginx"
# Socket owner/group (will default to FCGI_USER/FCGI_GROUP if not defined)
FCGI_SOCKET_OWNER="nginx"
FCGI_SOCKET_GROUP="nginx"
But it had no effect. The script:
#!/bin/bash -e
echo 'Content-Type: text/plain'
echo ''
echo $(whoami)
echo $(groups)
The output is:
www-data
www-data
Find the path to the fcgiwrap.service file:
systemctl status fcgiwrap.service
Change the User and the Group in the fcgiwrap.service file:
nano /lib/systemd/system/fcgiwrap.service
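After editing, the [Service] section should contain entries like these (a sketch; the rest of the unit file stays unchanged):
[Service]
User=nginx
Group=nginx
Then reload systemd and restart the service so the change takes effect:
systemctl daemon-reload
systemctl restart fcgiwrap.service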

Include docker compose startup command in Dockerfile or docker run [duplicate]

I am trying to start an NGINX server within a docker container configured through docker-compose. The catch is, however, that I would like to substitute an environment variable inside of the http section, specifically within the "upstream" block.
It would be awesome to have this working, because I have several other containers that are all configured through environment variables, and I have about 5 environments that need to be running at any given time. I have tried using "envsubst" (as suggested by the official NGINX docs), perl_set, and set_by_lua, but none of them appears to work.
Below is the NGINX config as it stands after my most recent attempt:
user nginx;
worker_processes 1;
env NGINXPROXY;
load_module modules/ngx_http_perl_module.so;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    perl_set $nginxproxy 'sub { return $ENV{"NGINXPROXY"}; }';
    upstream api-upstream {
        server ${nginxproxy};
    }
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile off;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    include /etc/nginx/conf.d/*.conf;
}
Below is the NGINX dockerfile
# build stage
FROM node:latest
WORKDIR /app
COPY ./ /app
RUN npm install
RUN npm run build
# production stage
FROM nginx:1.17.0-perl
COPY --from=0 /app/dist /usr/share/nginx/html
RUN apt-get update && apt-get install -y gettext-base
RUN rm /etc/nginx/conf.d/default.conf
RUN rm /etc/nginx/nginx.conf
COPY default.conf /etc/nginx/conf.d
COPY nginx.conf /etc/nginx
RUN mkdir /certs
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
Below is the section of the docker-compose.yml for the NGINX server (with names and IPs changed). The envsubst command is intentionally commented out at this point in my troubleshooting.
front-end:
  environment:
    - NGINXPROXY=172.31.67.100:9300
  build: http://gitaccount:password@gitserver.com/group/front-end.git#develop
  container_name: qa_front_end
  image: qa-front-end
  restart: always
  networks:
    qa_network:
      ipv4_address: 172.28.0.215
  ports:
    - "9080:80"
  # command: /bin/bash -c "envsubst '$$NGINXPROXY' < /etc/nginx/nginx.conf > /etc/nginx/nginx.conf && nginx -g 'daemon off;'"
What appears to be happening is that when I reference the $nginxproxy variable in the upstream block (right after "server"), the output makes it look like nginx is reading the string literal "$nginxproxy" rather than substituting the value of the variable.
qa3_front_end | 2019/06/18 12:35:36 [emerg] 1#1: host not found in upstream "${nginx_upstream}" in /etc/nginx/nginx.conf:19
qa3_front_end | nginx: [emerg] host not found in upstream "${nginx_upstream}" in /etc/nginx/nginx.conf:19
qa3_front_end exited with code 1
When I attempt to use envsubst, I get an error that makes it sound like the command messed up the format of the nginx.conf file:
qa3_front_end | 2019/06/18 12:49:02 [emerg] 1#1: no "events" section in configuration
qa3_front_end | nginx: [emerg] no "events" section in configuration
qa3_front_end exited with code 1
I'm pretty stuck, so thanks in advance for your help.
Since nginx 1.19 you can now use environment variables in your configuration with docker-compose. I used the following setup:
# file: docker/nginx/templates/default.conf.conf
upstream api-upstream {
    server ${API_HOST};
}

# file: docker-compose.yml
services:
  nginx:
    image: nginx:1.19-alpine
    volumes:
      - "./docker/nginx/templates:/etc/nginx/templates/"
    environment:
      NGINX_ENVSUBST_TEMPLATE_SUFFIX: ".conf"
      API_HOST: api.example.com
I'm going off script a little from the example in the documentation. Note the extra .conf extension on the template file; this is not a typo. The docs for the nginx image suggest naming the file, for example, default.conf.template. Upon startup, a script takes that file, substitutes the environment variables, and outputs it to /etc/nginx/conf.d/ under the original file name, dropping the .template suffix.
By default that suffix is .template, but it breaks syntax highlighting unless you configure your editor. Instead, I specified .conf as the template suffix. If you name your file just default.conf, the result is a file named /etc/nginx/conf.d/default and your site won't be served as expected.
You can avoid some of the hassles with Compose interpreting environment variables by defining your own entrypoint. See this simple example:
entrypoint.sh (make sure this file is executable)
#!/bin/sh
export NGINXPROXY
envsubst '${NGINXPROXY}' < /config.template > /etc/nginx/nginx.conf
exec "$#"
docker-compose.yml
version: "3.7"
services:
front-end:
image: nginx
environment:
- NGINXPROXY=172.31.67.100:9300
ports:
- 80:80
volumes:
- ./config:/config.template
- ./entrypoint.sh:/entrypoint.sh
entrypoint: ["/entrypoint.sh"]
command: ["nginx", "-g", "daemon off;"]
My config file has the same content as your nginx.conf, apart from the fact that I had to comment out the lines using the Perl module.
Note that I had to mount my config file to another location before I could envsubst it. I encountered some strange behaviour where the file ended up empty after the substitution; that happens because the shell truncates the output file before envsubst reads it when input and output are the same file, and mounting the template elsewhere avoids it. It shouldn't be a problem in your specific case, because you already embed the file in your image at build time.
EDIT
For completeness, to change your setup as little as possible, you just have to make sure that you export your environment variable. Adapt your command like this:
command: ["/bin/bash", "-c", "export NGINXPROXY && envsubst '$$NGINXPROXY' < /etc/nginx/nginx.conf > /etc/nginx/nginx.conf && nginx -g 'daemon off;'"]
...and you should be good to go. I would always recommend the "cleaner" way with defining your own entrypoint, though.
So after some wrestling with this issue, I managed to get it working similarly to the answer provided by bellackn. I am going to post my exact solution here, in case anybody else needs to reference a complete solution.
Step 1: Write your nginx.conf or default.conf as you normally would. Save the file as "nginx.conf.template" or "default.conf.template", depending on which one you are substituting variables into.
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    upstream api-upstream {
        server 192.168.25.254;
    }
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile off;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    include /etc/nginx/conf.d/*.conf;
}
Step 2: Substitute a variable in the format ${VARNAME} for whatever value(s) you want to replace with an environment variable:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    upstream api-upstream {
        server ${SERVER_NAME};
    }
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile off;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    include /etc/nginx/conf.d/*.conf;
}
Step 3: In your Dockerfile, copy your nginx configuration files (your nginx.conf.template or default.conf.template) into your container at the appropriate location:
# build stage
FROM node:latest
WORKDIR /app
COPY ./ /app
RUN npm install
RUN npm run build
# production stage
FROM nginx:1.17.0-perl
COPY --from=0 /app/dist /usr/share/nginx/html
RUN apt-get update && apt-get install -y gettext-base
RUN rm /etc/nginx/conf.d/default.conf
RUN rm /etc/nginx/nginx.conf
# These are the two lines changed from the original Dockerfile:
COPY default.conf /etc/nginx/conf.d
COPY nginx.conf.template /etc/nginx
RUN mkdir /certs
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
Step 4: Set your environment variable in your docker-compose.yml file using the "environment" section. Make sure the environment variable name matches whatever variable name you chose in your nginx config file. Then use the "envsubst" command within your Docker container to substitute the values into your nginx.conf.template and write the output to a file named nginx.conf in the correct location. This can be done within the docker-compose.yml file by using the "command" section:
version: '2.0'
services:
  front-end:
    environment:
      - SERVER_NAME=172.31.67.100:9100
    build: http://git-account:git-password@git-server.com/project-group/repository-name.git#branch-name
    container_name: qa_front_end
    image: qa-front-end-vue
    restart: always
    networks:
      qa_network:
        ipv4_address: 172.28.0.215
    ports:
      - "9080:80"
    command: >
      /bin/sh -c
      "envsubst '
      $${SERVER_NAME}
      ' < /etc/nginx/nginx.conf.template
      > /etc/nginx/nginx.conf
      && nginx -g 'daemon off;'"
Step 5: Run your stack with docker-compose up (with whatever additional switches you need) and your nginx server should now start with whatever value you supplied in the "environment" section of your docker-compose.yml.
As mentioned in the solution above, you can also define your own entrypoint; however, this solution has also proven to work pretty well, and it keeps everything contained in a single configuration file, giving me the ability to run a stack of services directly from Git with nothing but a docker-compose.yml file.
A big thank you to everybody who took the time to read through this, and to bellackn for taking the time to help me solve the issue.
As already explained in Jody's answer, nowadays the official nginx Docker image supports parsing templates. This uses envsubst, and its handling ensures it does not mess with nginx variables such as $host. Nice. However, envsubst does not support default values the way a regular shell and Docker Compose do with ${MY_VAR:-My Default}. So this built-in templating always needs a full setup of all variables, even when using the defaults.
To define defaults in the image itself, one can use a custom entry point to first set the defaults and then simply delegate to the original entrypoint. Like a docker-defaults.sh:
#!/usr/bin/env sh
set -eu
# As of version 1.19, the official Nginx Docker image supports templates with
# variable substitution. But that uses `envsubst`, which does not allow for
# defaults for missing variables. Here, first use the regular command shell
# to set the defaults:
export PROXY_API_DEST=${PROXY_API_DEST:-http://host.docker.internal:8000/api/}
# Due to `set -u` this would fail if not defined and no default was set above
echo "Will proxy requests for /api/* to ${PROXY_API_DEST}*"
# Finally, let the original Nginx entry point do its work, passing whatever is
# set for CMD. Use `exec` to replace the current process, to trap any signals
# (like Ctrl+C) that Docker may send it:
exec /docker-entrypoint.sh "$@"
Along with, say, some docker-nginx-default.conf:
# After variable substitution, this will replace /etc/nginx/conf.d/default.conf
server {
    listen 80;
    listen [::]:80;
    server_name localhost;
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
    location /api/ {
        # Proxy API calls to another destination; the default for the variable is
        # set in docker-defaults.sh
        proxy_pass $PROXY_API_DEST;
    }
}
In the Dockerfile copy the template into /etc/nginx/templates/default.conf.template and set the custom entry point:
FROM nginx:stable-alpine
...
# Each time Nginx is started it will perform variable substitution in all template
# files found in `/etc/nginx/templates/*.template`, and copy the results (without
# the `.template` suffix) into `/etc/nginx/conf.d/`. Below, this will replace the
# original `/etc/nginx/conf.d/default.conf`; see https://hub.docker.com/_/nginx
COPY docker-nginx-default.conf /etc/nginx/templates/default.conf.template
COPY docker-defaults.sh /
# Just in case the file mode was not properly set in Git
RUN chmod +x /docker-defaults.sh
# This will delegate to the original Nginx `docker-entrypoint.sh`
ENTRYPOINT ["/docker-defaults.sh"]
# The default parameters to ENTRYPOINT (unless overruled on the command line)
CMD ["nginx", "-g", "daemon off;"]
Now using, e.g., docker run --env PROXY_API_DEST=https://example.com/api/ ... will set a value, which in this example defaults to http://host.docker.internal:8000/api/ when not set (which is effectively http://localhost:8000/api/ on the local machine).
According to the official documentation https://hub.docker.com/_/nginx,
section "Using environment variables in nginx configuration (new in 1.19)",
you can use environment variables.
But it does not work, due to a bug in the script inside the Docker container:
https://github.com/nginxinc/docker-nginx/blob/master/entrypoint/20-envsubst-on-templates.sh#L25
Running this script always fails with the error:
/docker-entrypoint.d/20-envsubst-on-templates.sh: line 25: 3: Bad file descriptor
I created issue https://github.com/nginxinc/docker-nginx/issues/645
and pull request https://github.com/nginxinc/docker-nginx/pull/646.
As a workaround for now, I copied the script and changed it locally.
You could switch to a more advanced nginx Docker image. For example, nginx4docker implements a bunch of basic env variables that can be set through Docker, so you don't have to fiddle around with nginx's basic templating and all its drawbacks.
nginx4docker can also be extended with your custom env variables; just mount a file that lists all your env variables to Docker: ... --mount $(pwd)/CUSTOM_ENV:/ENV ...
My solution is copying an entrypoint sh file into the /docker-entrypoint.d directory of the nginx container. As mentioned above, you need to copy a .template file, but you don't need to create two separate files.
Copy the config file under a temporary name in the Dockerfile. It's important not to use the ENTRYPOINT command in the Dockerfile:
FROM nginx
...
COPY ./default.conf /etc/nginx/conf.d/default.conf.temp
Create an sh file named 05-docker-entrypoint.sh in your project directory (on the host) and put the following code into it, as mentioned above:
#!/usr/bin/env sh
set -eu
envsubst '${MY_VARIABLE}' < /etc/nginx/conf.d/default.conf.temp > /etc/nginx/conf.d/default.conf
exec "$#"
Mount 05-docker-entrypoint.sh into the /docker-entrypoint.d directory of the nginx container using the docker-compose.yml file, or copy it in using the Dockerfile. The two options look like this:
Option 1 (I prefer this): mount the file using the compose file:
web:
  expose:
    - 80
  environment:
    - MY_VARIABLE=blabala
  volumes:
    - ./05-docker-entrypoint.sh:/docker-entrypoint.d/05-docker-entrypoint.sh
  ....
Option 2: use the Dockerfile to copy the files into the container.
The final Dockerfile with Option 2 looks like this:
FROM nginx
COPY ./default.conf /etc/nginx/conf.d/default.conf.temp
COPY ./05-docker-entrypoint.sh /docker-entrypoint.d/05-docker-entrypoint.sh

How to pass arguments from docker-compose to Dockerfile?

I have a small Dockerfile in the folder backend:
FROM alpine:latest
ARG FTP_IP
ARG MONGO_IP
ARG QUORUM_IP
RUN apk add --update openjdk8 && mkdir /var/backend/
RUN apk update
COPY license-system-0.0.1-SNAPSHOT.jar /var/backend/
EXPOSE 8080
ENTRYPOINT [ "java", "-jar", "-Dspring.quorum.host=${QUORUM_IP}", "-Dspring.ftp.server=${FTP_IP}", "-Dspring.data.mongodb.host=${MONGO_IP}","/var/backend/license-system-0.0.1-SNAPSHOT.jar" ]
And even smaller docker-compose.yml:
version: "3"
services:
generator:
build: backend
ports:
- "8080:8080"
I am starting this with a bash script:
#!/usr/bin/env bash
FTP_IP=$1 MONGO_IP=$2 QUORUM_IP=$3 docker-compose up -d
Like this:
start-backend.sh 127.0.0.1 127.0.0.1 http://localhost:22000
But it is not working at all... when I call docker inspect on the created container I get:
"Id": "bd3e05a8fffba6bb7b5c650d1f48c0ed13dca9108e01e1a82ec534a5f19d4393",
"Created": "2019-05-29T09:38:32.723414205Z",
"Path": "java",
"Args": [
"-jar",
"-Dspring.quorum.host=${QUORUM_IP}",
"-Dspring.ftp.server=${FTP_IP}",
"-Dspring.data.mongodb.host=${MONGO_IP}",
"/var/backend/license-system-0.0.1-SNAPSHOT.jar"
]
What am I doing wrong?
In your script start-backend.sh you have used the variables FTP_IP, MONGO_IP and QUORUM_IP, which are local to the script; export them as env variables and it will work.
Keep in mind that values in the shell take precedence over those specified in the .env file and the Dockerfile, so you might be overwriting the values defined there.
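Note also that build-time values only reach the Dockerfile's ARG instructions when they are mapped under build.args in the compose file; a minimal sketch of that mapping for the service above:
version: "3"
services:
  generator:
    build:
      context: backend
      args:
        # map the exported shell variables to the Dockerfile ARGs
        FTP_IP: ${FTP_IP}
        MONGO_IP: ${MONGO_IP}
        QUORUM_IP: ${QUORUM_IP}
    ports:
      - "8080:80"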

Passing Laravel .env variable to Dockerfile

I have code in my Dockerfile that install NewRelic php client
RUN \
curl -L https://download.newrelic.com/php_agent/release/newrelic-php5-8.3.0.226-linux.tar.gz | tar -C /tmp -zx && \
NR_INSTALL_USE_CP_NOT_LN=1 NR_INSTALL_SILENT=1 /tmp/newrelic-php5-*/newrelic-install install && \
rm -rf /tmp/newrelic-php5-* /tmp/nrinstall* && \
sed -i -e 's/"REPLACE_WITH_REAL_KEY"/"${MY_NEWRELIC_KEY}"/' \
-e 's/newrelic.appname = "PHP Application"/newrelic.appname = "MyApp"/' \
/usr/local/etc/php/conf.d/newrelic.ini
How do I pass the variable MY_NEWRELIC_KEY, defined in Laravel's .env file, to the Dockerfile?
You need to define ARG and ENV values.
ARG are also known as build-time variables. They are only available from the moment they are 'announced' in the Dockerfile with an ARG instruction up to the moment when the image is built.
ENV variables are also available during the build, as soon as you introduce them with an ENV instruction.
Here is a Dockerfile example, both for default values and without them:
ARG some_variable
# or with a hard-coded default:
#ARG some_variable=default_value
RUN echo "Oh dang look at that $some_variable"
When building a Docker image from the command line, you can set ARG values using --build-arg:
$ docker build --build-arg some_variable=a_value .
Running that command, with the above Dockerfile, will result in the following line being printed (among others):
Oh dang look at that a_value
Here is a basic Dockerfile, using hard-coded ENV default values:
# no default value (ENV requires a value, so set an empty one)
ENV blablabla=
# a default value
ENV foo /bar
# or ENV foo=/bar
# ENV values can be used during the build
ADD . $foo
# or ADD . ${foo}
# translates to: ADD . /bar
And here is an example of a Dockerfile, using dynamic on-build env values:
# expect a build-time variable
ARG A_VARIABLE
# use the value to set the ENV var default
ENV an_env_var=$A_VARIABLE
# if not overridden, that value of an_env_var will be available to your containers!
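To tie this back to the question: one way to feed MY_NEWRELIC_KEY from Laravel's .env into the build is to read it in the shell at build time (a sketch; it assumes a plain MY_NEWRELIC_KEY=value line without quotes, and an ARG MY_NEWRELIC_KEY declared in the Dockerfile):
# read the key out of Laravel's .env and pass it as a build-time variable
export MY_NEWRELIC_KEY=$(grep '^MY_NEWRELIC_KEY=' .env | cut -d '=' -f2-)
docker build --build-arg MY_NEWRELIC_KEY="$MY_NEWRELIC_KEY" -t my_php .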
If you use docker-compose you may set it in the file (link):
version: '3'
services:
  php:
    image: my_php
    environment:
      - MY_NEWRELIC_KEY=keykey
EDIT:
You can specify a file to read values from.
The file is called env_file (the name is arbitrary) and is located in the current directory. You reference the file name, and it is parsed to extract the environment variables to set:
$ docker run --env-file=env_file php env
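For illustration, env_file might contain a line like this (assumed content, matching the key used above):
MY_NEWRELIC_KEY=keykey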
With docker-compose.yml files, we just reference an env_file, and Docker parses it for the variables to set.
version: '3'
services:
  php:
    image: php
    env_file: env_file

How do I make cloud-init startup scripts run every time my EC2 instance boots?

I have an EC2 instance running an AMI based on the Amazon Linux AMI. Like all such AMIs, it supports the cloud-init system for running startup scripts based on the User Data passed into every instance. In this particular case, my User Data input happens to be an Include file that sources several other startup scripts:
#include
http://s3.amazonaws.com/path/to/script/1
http://s3.amazonaws.com/path/to/script/2
The first time I boot my instance, the cloud-init startup script runs correctly. However, if I do a soft reboot of the instance (by running sudo shutdown -r now, for instance), the instance comes back up without running the startup script the second time around. If I go into the system logs, I can see:
Running cloud-init user-scripts
user-scripts already ran once-per-instance
[ OK ]
This is not what I want -- I can see the utility of having startup scripts that only run once per instance lifetime, but in my case these should run every time the instance starts up, like normal startup scripts.
I realize that one possible solution is to manually have my scripts insert themselves into rc.local after running the first time. This seems burdensome, however, since the cloud-init and rc.d environments are subtly different and I would now have to debug scripts on first launch and all subsequent launches separately.
Does anyone know how I can tell cloud-init to always run my scripts? This certainly sounds like something the designers of cloud-init would have considered.
In 11.10, 12.04 and later, you can achieve this by making the 'scripts-user' run 'always'.
In /etc/cloud/cloud.cfg you'll see something like:
cloud_final_modules:
- rightscale_userdata
- scripts-per-once
- scripts-per-boot
- scripts-per-instance
- scripts-user
- keys-to-console
- phone-home
- final-message
This can be modified after boot, or cloud-config data overriding this stanza can be inserted via user-data. I.e., in user-data you can provide:
#cloud-config
cloud_final_modules:
- rightscale_userdata
- scripts-per-once
- scripts-per-boot
- scripts-per-instance
- [scripts-user, always]
- keys-to-console
- phone-home
- final-message
That can also be '#included' as you've done in your description.
Unfortunately, right now, you cannot modify the 'cloud_final_modules', but only override it. I hope to add the ability to modify config sections at some point.
There is a bit more information on this in the cloud-config doc at
https://github.com/canonical/cloud-init/tree/master/doc/examples
Alternatively, you can put files in /var/lib/cloud/scripts/per-boot, and they'll be run via the 'scripts-per-boot' path.
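For example, a minimal per-boot script (hypothetical name and content) just has to be executable:
#!/bin/sh
# saved as /var/lib/cloud/scripts/per-boot/00-hello.sh and made executable (chmod +x)
echo "booted at $(date)" >> /var/log/per-boot.log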
In /etc/init.d/cloud-init-user-scripts, edit this line:
/usr/bin/cloud-init-run-module once-per-instance user-scripts execute run-parts ${SCRIPT_DIR} >/dev/null && success || failure
to
/usr/bin/cloud-init-run-module always user-scripts execute run-parts ${SCRIPT_DIR} >/dev/null && success || failure
Good luck!
cloud-init supports this natively now; see the runcmd vs bootcmd command descriptions in the documentation (http://cloudinit.readthedocs.io/en/latest/topics/examples.html#run-commands-on-first-boot):
"runcmd":
#cloud-config
# run commands
# default: none
# runcmd contains a list of either lists or a string
# each item will be executed in order at rc.local like level with
# output to the console
# - runcmd only runs during the first boot
# - if the item is a list, the items will be properly executed as if
# passed to execve(3) (with the first arg as the command).
# - if the item is a string, it will be simply written to the file and
# will be interpreted by 'sh'
#
# Note, that the list has to be proper yaml, so you have to quote
# any characters yaml would eat (':' can be problematic)
runcmd:
- [ ls, -l, / ]
- [ sh, -xc, "echo $(date) ': hello world!'" ]
- [ sh, -c, echo "=========hello world'=========" ]
- ls -l /root
- [ wget, "http://slashdot.org", -O, /tmp/index.html ]
"bootcmd":
#cloud-config
# boot commands
# default: none
# this is very similar to runcmd, but commands run very early
# in the boot process, only slightly after a 'boothook' would run.
# bootcmd should really only be used for things that could not be
# done later in the boot process. bootcmd is very much like
# boothook, but possibly more friendly.
# - bootcmd will run on every boot
# - the INSTANCE_ID variable will be set to the current instance id.
# - you can use 'cloud-init-per' command to help only run once
bootcmd:
- echo 192.168.1.130 us.archive.ubuntu.com >> /etc/hosts
- [ cloud-init-per, once, mymkfs, mkfs, /dev/vdb ]
Also note the "cloud-init-per" command example in bootcmd. From its help:
Usage: cloud-init-per frequency name cmd [ arg1 [ arg2 [ ... ] ]
run cmd with arguments provided.
This utility can make it easier to use boothooks or bootcmd
on a per "once" or "always" basis.
If frequency is:
* once: run only once (do not re-run for new instance-id)
* instance: run only the first boot for a given instance-id
* always: run every boot
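So a bootcmd entry that runs a command on every boot could look like this (the name myecho is illustrative, following the mymkfs example above):
bootcmd:
- [ cloud-init-per, always, myecho, echo, "runs every boot" ]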
One possibility, although somewhat hackish, is to delete the lock file that cloud-init uses to determine whether or not the user-script has already run. In my case (Amazon Linux AMI), this lock file is located in /var/lib/cloud/sem/ and is named user-scripts.i-7f3f1d11 (the hash part at the end changes every boot). Therefore, the following user-data script added to the end of the Include file will do the trick:
#!/bin/sh
rm /var/lib/cloud/sem/user-scripts.*
I'm not sure if this will have any adverse effects on anything else, but it has worked in my experiments.
Please use the script below, placed above your bash script in the user data. Example: here I'm printing "Hello World." to a file.
Stop the instance before adding this to the user data.
The script:
Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0
--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"
#cloud-config
cloud_final_modules:
- [scripts-user, always]
--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"
#!/bin/bash
/bin/echo "Hello World." >> /var/tmp/sdksdfjsdlf
--//
I struggled with this issue for almost two days, tried all of the solutions I could find and finally, combining several approaches, came up with the following:
MyResource:
  Type: AWS::EC2::Instance
  Metadata:
    AWS::CloudFormation::Init:
      configSets:
        setup_process:
          - "prepare"
          - "run_for_instance"
      prepare:
        commands:
          01_apt_update:
            command: "apt-get update"
          02_clone_project:
            command: "mkdir -p /replication && rm -rf /replication/* && git clone https://github.com/awslabs/dynamodb-cross-region-library.git /replication/dynamodb-cross-region-library/"
          03_build_project:
            command: "mvn install -DskipTests=true"
            cwd: "/replication/dynamodb-cross-region-library"
          04_prepare_for_apac:
            command: "mkdir -p /replication/replication-west && rm -rf /replication/replication-west/* && cp /replication/dynamodb-cross-region-library/target/dynamodb-cross-region-replication-1.2.1.jar /replication/replication-west/replication-runner.jar"
      run_for_instance:
        commands:
          01_run:
            command: !Sub "java -jar replication-runner.jar --sourceRegion us-east-1 --sourceTable ${TableName} --destinationRegion ap-southeast-1 --destinationTable ${TableName} --taskName -us-ap >/dev/null 2>&1 &"
            cwd: "/replication/replication-west"
  Properties:
    UserData:
      Fn::Base64:
        !Sub |
          #cloud-config
          cloud_final_modules:
          - [scripts-user, always]
          runcmd:
          - /usr/local/bin/cfn-init -v -c setup_process --stack ${AWS::StackName} --resource MyResource --region ${AWS::Region}
          - /usr/local/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource MyResource --region ${AWS::Region}
This is the setup for DynamoDb cross-region replication process.
If someone wants to do this with CDK, here's a Python example.
For Windows, user data has a special persist tag, but for Linux you need to use multipart user data to set up cloud-init first. This Linux example worked with the cloud-config part type (see ref blog) instead of cloud-boothook, which requires a cloud-init-per (see also bootcmd) call I couldn't test out (e.g. cloud-init-per always).
Linux example:
# Create some userdata commands
instance_userdata = ec2.UserData.for_linux()
instance_userdata.add_commands("apt update")
# ...
# Now create the first part to make cloud-init run it always
cinit_conf = ec2.UserData.for_linux()
cinit_conf.add_commands('#cloud-config')
cinit_conf.add_commands('cloud_final_modules:')
cinit_conf.add_commands('- [scripts-user, always]')

multipart_ud = ec2.MultipartUserData()
# Set up to run every time the instance starts
multipart_ud.add_part(ec2.MultipartBody.from_user_data(cinit_conf, content_type='text/cloud-config'))
# Add the commands desired to run every time
multipart_ud.add_part(ec2.MultipartBody.from_user_data(instance_userdata))

ec2.Instance(
    self, "myec2",
    user_data=multipart_ud,
    # other required config...
)
Windows example:
instance_userdata = ec2.UserData.for_windows()
# Bootstrap
instance_userdata.add_commands("Write-Output 'Run some commands'")
# ...
# Make all the commands persistent, i.e. run on each instance start
data_script = instance_userdata.render()
data_script += "<persist>true</persist>"
ud = ec2.UserData.custom(data_script)

ec2.Instance(
    self, "myWinec2",
    user_data=ud,
    # other required config...
)
Another approach is to use #cloud-boothook in your user data script. From the docs:
Cloud Boothook
Begins with #cloud-boothook or Content-Type: text/cloud-boothook.
This content is boothook data. It is stored in a file under /var/lib/cloud and then executed immediately.
This is the earliest "hook" available. There is no mechanism provided for running it only one time. The boothook must take care
of this itself. It is provided with the instance ID in the environment
variable INSTANCE_ID. Use this variable to provide a once-per-instance
set of boothook data.
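A minimal sketch of such a boothook, guarding itself to once-per-instance via INSTANCE_ID (file paths are illustrative):
#cloud-boothook
#!/bin/sh
# Boothooks run on every boot; use INSTANCE_ID to self-limit to once per instance.
marker="/var/lib/cloud/instances/${INSTANCE_ID}/my-boothook-done"
[ -e "$marker" ] && exit 0
echo "first boot of ${INSTANCE_ID}" >> /var/log/my-boothook.log
touch "$marker"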
