Passing Laravel .env variable to Dockerfile - laravel

I have code in my Dockerfile that installs the New Relic PHP agent:
RUN \
    curl -L https://download.newrelic.com/php_agent/release/newrelic-php5-8.3.0.226-linux.tar.gz | tar -C /tmp -zx && \
    NR_INSTALL_USE_CP_NOT_LN=1 NR_INSTALL_SILENT=1 /tmp/newrelic-php5-*/newrelic-install install && \
    rm -rf /tmp/newrelic-php5-* /tmp/nrinstall* && \
    sed -i -e 's/"REPLACE_WITH_REAL_KEY"/"${MY_NEWRELIC_KEY}"/' \
           -e 's/newrelic.appname = "PHP Application"/newrelic.appname = "MyApp"/' \
           /usr/local/etc/php/conf.d/newrelic.ini
How can I pass the variable MY_NEWRELIC_KEY, which is defined in the Laravel .env file, to the Dockerfile?

You need to define ARG and ENV values.
ARGs are also known as build-time variables. They are only available from the moment they are 'announced' in the Dockerfile with an ARG instruction up to the moment the image is built.
ENV variables are also available during the build, as soon as you introduce them with an ENV instruction.
Here is a Dockerfile example, both with and without a default value:
ARG some_variable
# or with a hard-coded default:
#ARG some_variable=default_value
RUN echo "Oh dang look at that $some_variable"
When building a Docker image from the command line, you can set ARG values using --build-arg:
$ docker build --build-arg some_variable=a_value .
Running that command, with the above Dockerfile, will result in the following line being printed (among others):
Oh dang look at that a_value
Here is a basic Dockerfile, using hard-coded ENV default values:
# ENV needs a value; use an empty default if there is nothing to set yet
ENV blablabla=""
# a default value
ENV foo /bar
# or ENV foo=/bar
# ENV values can be used during the build
ADD . $foo
# or ADD . ${foo}
# translates to: ADD . /bar
And here is an example of a Dockerfile, using dynamic on-build env values:
# expect a build-time variable
ARG A_VARIABLE
# use the value to set the ENV var default
ENV an_env_var=$A_VARIABLE
# if not overridden, that value of an_env_var will be available to your containers!
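Applied to the original question, a minimal sketch (assuming the Laravel .env sits next to the Dockerfile and contains a plain MY_NEWRELIC_KEY=... line) declares the key as a build arg and reads it from .env at build time:
# Dockerfile
ARG MY_NEWRELIC_KEY
# double quotes around the sed script so the shell expands ${MY_NEWRELIC_KEY} during the build
RUN sed -i -e "s/\"REPLACE_WITH_REAL_KEY\"/\"${MY_NEWRELIC_KEY}\"/" \
    /usr/local/etc/php/conf.d/newrelic.ini
# read the value out of Laravel's .env and pass it as a build arg
docker build --build-arg MY_NEWRELIC_KEY="$(grep '^MY_NEWRELIC_KEY=' .env | cut -d= -f2)" .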
If you use docker-compose, you can set it in the compose file:
version: '3'
services:
  php:
    image: my_php
    environment:
      - MY_NEWRELIC_KEY=keykey
EDIT:
You can specify a file to read values from.
The file is called env_file (the name is arbitrary) and is located in the current directory. You reference it by name, and it is parsed to extract the environment variables to set:
$ docker run --env-file=env_file php env
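For reference, such an env_file just contains plain KEY=value lines, for example (hypothetical value):
MY_NEWRELIC_KEY=keykey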
With docker-compose.yml files, we just reference an env_file, and Docker parses it for the variables to set.
version: '3'
services:
  php:
    image: php
    env_file: env_file
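Since Laravel's .env is itself a plain KEY=value file, a sketch of wiring it straight in (assuming none of its values need shell-style quoting) would be:
version: '3'
services:
  php:
    image: my_php
    env_file: .env
Note that this injects MY_NEWRELIC_KEY into the running container's environment; to use it during the image build you still need the ARG/--build-arg route shown above.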

Related

Helm: Expanding environment variables in values.yaml with helm install or upgrade

In our Jenkins pipeline, I'm using a bash script to call the helm install command. We have a values.yaml containing most of the values to be passed to helm. However, a few values are based on environment variables and have to be passed using the --set argument. Here is the snippet:
helm install $RELEASE_NAME shared/phoenixmsp-app -f value.yaml \
  --set global.env.production=$production \
  --set global.cluster.hosts=${CONFIG[${CLUSTER_NAME}]} \
  --set nameOverride=$RELEASE_NAME \
  --set fullnameOverride=$RELEASE_NAME \
  --set image.repository=myhelm.hub.mycloud.io/myrepo/mainservice \
  --set-string image.tag=$DOCKER_TAG \
  --wait --timeout 180s --namespace $APP_NAMESPACE
We want to move these --set parameters to values.yaml. The goal is to get rid of --set and simply pass the values.yaml.
Question: Is it possible to expand Environment Variables in values.yaml while calling with helm install or helm upgrade?
The only way I think you can do that, if you really want to use a single YAML file, is to have a template values.yaml and either sed the values into it or use a templating language like Jinja or Mustache, then feed the resulting output into helm.
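For example, a quick sketch of that template approach using envsubst (values.tpl.yaml is a hypothetical template that references the shell variables):
# values.tpl.yaml contains placeholders such as ${production}, ${RELEASE_NAME}, ${DOCKER_TAG}
envsubst < values.tpl.yaml > values.generated.yaml
helm install "$RELEASE_NAME" shared/phoenixmsp-app -f values.generated.yaml \
  --wait --timeout 180s --namespace "$APP_NAMESPACE"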
--set is a good solution here, but if you really don't want that, dynamically write a second values file for the run-time values.
echo "
global:
env:
production: $production
cluster:
hosts: ${CONFIG[${CLUSTER_NAME}]}
nameOverride: $RELEASE_NAME
fullnameOverride: $RELEASE_NAME
image:
repository: myhelm.hub.mycloud.io/myrepo/mainservice
tag: $DOCKER_TAG
" > runtime.yaml
helm install $RELEASE_NAME shared/phoenixmsp-app -f value.yaml -f runtime.yaml \
--wait --timeout 180s --namespace $APP_NAMESPACE
This really does nothing but slightly reduce the precedence (a values file is overridden by --set), though.
If all these values are known ahead of time, maybe build your runtime.yaml in advance and throw it into a git repo people can peer-review before deployment time, and just use the variables to select the right file from the repo.
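For instance (hypothetical layout with one reviewed values file per cluster in the repo):
helm install "$RELEASE_NAME" shared/phoenixmsp-app \
  -f value.yaml -f "values/${CLUSTER_NAME}.yaml" \
  --wait --timeout 180s --namespace "$APP_NAMESPACE"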

Kong custom golang plugin not working in kubernetes/helm setup

I have written a custom Golang Kong plugin called go-wait, following the example from the GitHub repo https://github.com/redhwannacef/youtube-tutorials/tree/main/kong-gateway-custom-plugin
The only difference is that I created a custom Docker image so Kong would have the mentioned plugin by default in its /usr/local/bin directory.
Here's the Dockerfile:
FROM golang:1.18.3-alpine as pluginbuild
COPY ./charts/custom-plugins/ /app/custom-plugins
RUN cd /app/custom-plugins && \
for d in ./*/ ; do (cd "$d" && go mod tidy && GOOS=linux GOARCH=amd64 go build .); done
RUN mkdir /app/all-plugin-execs && cd /app/custom-plugins && \
find . -type f -not -name "*.*" | xargs -i cp {} /app/all-plugin-execs/
FROM kong:2.8
COPY --from=pluginbuild /app/all-plugin-execs/ /usr/local/bin/
COPY --from=pluginbuild /app/all-plugin-execs/ /usr/local/bin/plugin-ref/
# Loop through the plugin-ref directory and create an entry for all of them in
# both KONG_PLUGINS and KONG_PLUGINSERVER_NAMES env vars respectively
# Additionally append `bundled` to the KONG_PLUGINS list, as without it any unused plugin will cause Kong to error out
#### Example Env vars for a plugin named `go-wait`
# ENV KONG_PLUGINS=go-wait
# ENV KONG_PLUGINSERVER_NAMES=go-wait
# ENV KONG_PLUGINSERVER_GO_WAIT_QUERY_CMD="/usr/local/bin/go-wait -dump"
####
RUN cd /usr/local/bin/plugin-ref/ && \
PLUGINS=$(ls | tr '\n' ',') && PLUGINS=${PLUGINS::-1} && \
echo -e "KONG_PLUGINS=bundled,$PLUGINS\nKONG_PLUGINSERVER_NAMES=$PLUGINS" >> ~/.bashrc
# Loop through the plugin-ref directory and create an entry for QUERY_CMD entries needed to load the plugin
# format KONG_PLUGINSERVER_EG_PLUGIN_QUERY_CMD if the plugin name is `eg-plugin` and it should point to the
# plugin followed by `-dump` argument
RUN cd /usr/local/bin/plugin-ref/ && \
for f in *; do echo "$f" | tr "[:lower:]" "[:upper:]" | tr '-' '_' | \
xargs -I {} sh -c "echo 'KONG_PLUGINSERVER_{}_QUERY_CMD=' && echo '\"/usr/local/bin/{} -dump\"' | tr [:upper:] [:lower:] | tr '_' '-'" | \
sed -e '$!N;s/\n//' | xargs -i echo "{}" >> ~/.bashrc; done
This works fine in the docker-compose file and docker container. But when I tried to use the same image in the kubernetes environment along with kong-ingress-controller, I started running into errors "failed to fill-in defaults for plugin: go-wait" and/or a bunch of other errors including "plugin 'go-wait' enabled but not installed" in the logs and I ended up not being able to enable it.
Has anyone tried including Go plugins in their Kubernetes/Helm Kong setup? If so, please shed some light on this.
Update: I found the answer I was looking for. Along with setting the environment variables generated by the image, modifications are needed in the _helpers.tpl file of the Kong Helm chart itself.
The reason is that in the deployment charts, the configuration expects plugins to be configured in a values-custom.yml used to override the default settings.
But the Helm chart expects values and plugins to be loaded via ConfigMaps, which turned out to be a huge bottleneck: any binary plugin you generate in Golang for Kong is going to exceed the maximum allowed size of a ConfigMap in Kubernetes (roughly 1 MiB).
That's the whole reason I had set out on this endeavor to make the plugins part of my image.
TL;DR
I was able to clone the repo to my local system and make the changes in the following patch to load the plugins from values, without having to club them together with the Lua plugins. (Credits: answer by thatbenguy from the discussion https://discuss.konghq.com/t/how-to-load-go-plugins-using-kong-helm-chart/5717/10)
--- a/charts/kong/templates/_helpers.tpl
+++ b/charts/kong/templates/_helpers.tpl
@@ -530,6 +530,9 @@ The name of the service used for the ingress controller's validation webhook
 {{- define "kong.plugins" -}}
 {{ $myList := list "bundled" }}
+{{- range .Values.plugins.goPlugins -}}
+{{- $myList = append $myList .pluginName -}}
+{{- end -}}
 {{- range .Values.plugins.configMaps -}}
 {{- $myList = append $myList .pluginName -}}
 {{- end -}}
I added the following block to my values-custom.yml and I was good to go.
Hopefully this helps anyone else also trying to write custom plugins for kong in golang for use in helm charts.
env:
  database: "off"
  plugins: bundled,go-wait
  pluginserver_names: go-wait
  pluginserver_go_wait_query_cmd: "/usr/local/bin/go-wait -dump"
plugins:
  goPlugins:
    - pluginName: "go-wait"
NOTE: Please remember that all of this still depends on having the prebuilt custom Kong plugins in your image. In my case I built an image from the above Dockerfile contents (in the question), pushed it to my own Docker Hub repo, and replaced the image in values-custom.yml using the following block:
image:
  repository: chalukyaj/kong-custom-image
  tag: "1.0.1"
PS: As you might have noticed, the only disappointment I have with this is that the environment variables couldn't just be picked up from the Docker image's ~/.bashrc, which would have made this awesome. But nonetheless, this works, and I couldn't find a single post showing how to use the new go-pdk (instead of the older go-pluginserver) to build Go plugins and use them in Helm.

How to pass arguments from docker-compose to Dockerfile?

I have a small Dockerfile in the folder backend:
FROM alpine:latest
ARG FTP_IP
ARG MONGO_IP
ARG QUORUM_IP
RUN apk add --update openjdk8 && mkdir /var/backend/
RUN apk update
COPY license-system-0.0.1-SNAPSHOT.jar /var/backend/
EXPOSE 8080
ENTRYPOINT [ "java", "-jar", "-Dspring.quorum.host=${QUORUM_IP}", "-Dspring.ftp.server=${FTP_IP}", "-Dspring.data.mongodb.host=${MONGO_IP}","/var/backend/license-system-0.0.1-SNAPSHOT.jar" ]
And an even smaller docker-compose.yml:
version: "3"
services:
generator:
build: backend
ports:
- "8080:8080"
I am starting this with a bash script:
#!/usr/bin/env bash
FTP_IP=$1 MONGO_IP=$2 QUORUM_IP=$3 docker-compose up -d
Like this:
start-backend.sh 127.0.0.1 127.0.0.1 http://localhost:22000
But it is not working at all... when I call docker inspect on the created container I get:
"Id": "bd3e05a8fffba6bb7b5c650d1f48c0ed13dca9108e01e1a82ec534a5f19d4393",
"Created": "2019-05-29T09:38:32.723414205Z",
"Path": "java",
"Args": [
"-jar",
"-Dspring.quorum.host=${QUORUM_IP}",
"-Dspring.ftp.server=${FTP_IP}",
"-Dspring.data.mongodb.host=${MONGO_IP}",
"/var/backend/license-system-0.0.1-SNAPSHOT.jar"
]
What am I doing wrong?
In your script start-backend.sh you have used the variables FTP_IP, MONGO_IP and QUORUM_IP, which are local to the script; export them as environment variables and it will work.
Keep in mind that values in the shell take precedence over those specified in the .env file and the Dockerfile, so you might be overwriting the values defined there ...
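A sketch of the full chain under that assumption: export the variables, and also forward them to the build as args, since the Dockerfile declares them with ARG (names as in the question):
#!/usr/bin/env bash
# start-backend.sh - export so docker-compose can see the values
export FTP_IP=$1 MONGO_IP=$2 QUORUM_IP=$3
docker-compose up -d --build
# docker-compose.yml - forward the shell variables as build args
version: "3"
services:
  generator:
    build:
      context: backend
      args:
        FTP_IP: ${FTP_IP}
        MONGO_IP: ${MONGO_IP}
        QUORUM_IP: ${QUORUM_IP}
    ports:
      - "8080:8080"
Also note that the exec-form ENTRYPOINT in the question never goes through a shell, so ${QUORUM_IP} and friends are not expanded at run time, which is exactly what the docker inspect output shows. One option (a sketch, not the only fix) is to copy the ARGs into ENVs and use the shell form, e.g. ENTRYPOINT java -jar -Dspring.quorum.host=$QUORUM_IP ... /var/backend/license-system-0.0.1-SNAPSHOT.jar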

Snakemake conda env parameter is not taken from config.yaml file

I use a conda env that I create manually, not automatically using Snakemake. I do this to keep tighter version control.
Anyway, in my config.yaml I have the following line:
conda_env: '/rst1/2017-0205_illuminaseq/scratch/swo-406/snakemake'
Then, at the start of my Snakefile I read that variable (reading variables from config in your shell part does not seem to work, am I right?):
conda_env = config['conda_env']
Then in a shell part I hail said parameter like this:
rule rsem_quantify:
    input:
        os.path.join(fastq_dir, '{sample}_R1_001.fastq.gz'),
        os.path.join(fastq_dir, '{sample}_R2_001.fastq.gz')
    output:
        os.path.join(analyzed_dir, '{sample}.genes.results'),
        os.path.join(analyzed_dir, '{sample}.STAR.genome.bam')
    threads: 8
    shell:
        '''
        #!/bin/bash
        source activate {conda_env}
        rsem-calculate-expression \
            --paired-end \
            {input} \
            {rsem_ref_base} \
            {analyzed_dir}/{wildcards.sample} \
            --strandedness reverse \
            --num-threads {threads} \
            --star \
            --star-gzipped-read-file \
            --star-output-genome-bam
        '''
Notice the {conda_env}. Now this gives me the following error:
Could not find conda environment: None
You can list all discoverable environments with `conda info --envs`.
Now, if I replace {conda_env} with its value directly, /rst1/2017-0205_illuminaseq/scratch/swo-406/snakemake, it does work! I don't have any trouble reading other parameters using this method (like rsem_ref_base and analyzed_dir in the example rule above).
What could be wrong here?
Highest regards,
Freek.
The pattern I use is to load variables into params, so something along the lines of
rule rsem_quantify:
    input:
        os.path.join(fastq_dir, '{sample}_R1_001.fastq.gz'),
        os.path.join(fastq_dir, '{sample}_R2_001.fastq.gz')
    output:
        os.path.join(analyzed_dir, '{sample}.genes.results'),
        os.path.join(analyzed_dir, '{sample}.STAR.genome.bam')
    params:
        conda_env=config['conda_env']
    threads: 8
    shell:
        '''
        #!/bin/bash
        source activate {params.conda_env}
        rsem-calculate-expression \
            ...
        '''
Although, I'd also never do this with a conda environment, because Snakemake has conda environment management built-in. See this section in the docs on Integrated Package Management for details. This makes reproducibility much more manageable.
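For reference, a minimal sketch of that built-in approach (the envs/rsem.yaml file and its contents are hypothetical):
rule rsem_quantify:
    input:
        ...
    output:
        ...
    conda:
        "envs/rsem.yaml"   # conda environment definition listing e.g. rsem and star
    threads: 8
    shell:
        "rsem-calculate-expression --num-threads {threads} ..."
Run with snakemake --use-conda and Snakemake creates and activates the environment for the rule automatically.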

nginx: use environment variables

I have the following scenario: I have an env variable $SOME_IP defined and want to use it in an nginx block. Referring to the nginx documentation, I use the env directive in the nginx.conf file like the following:
user www-data;
worker_processes 4;
pid /run/nginx.pid;
env SOME_IP;
Now I want to use the variable for a proxy_pass. I tried it like the following:
location / {
    proxy_pass http://$SOME_IP:8000;
}
But I end up with this error message: nginx: [emerg] unknown "some_ip" variable
With the NGINX Docker image
Apply envsubst to a template of the configuration file at container start. envsubst is included in the official NGINX Docker images.
Environment variables are referenced in the form $VARIABLE or ${VARIABLE}.
nginx.conf.template:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    server {
        listen 80;
        location / {
            access_log off;
            return 200 '${MESSAGE}';
            add_header Content-Type text/plain;
        }
    }
}
Dockerfile:
FROM nginx:1.17.8-alpine
COPY ./nginx.conf.template /nginx.conf.template
CMD ["/bin/sh" , "-c" , "envsubst < /nginx.conf.template > /etc/nginx/nginx.conf && exec nginx -g 'daemon off;'"]
Build and run docker:
docker build -t foo .
docker run --rm -it --name foo -p 8080:80 -e MESSAGE="Hello World" foo
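A quick check (assuming the container is up as above):
curl http://localhost:8080/
# prints the substituted MESSAGE value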
NOTE: If the config template contains dollar signs $ that should not be substituted, list all the variables to be substituted as a parameter to envsubst so that only those are replaced. E.g.:
CMD ["/bin/sh" , "-c" , "envsubst '$USER_NAME $PASSWORD $KEY' < /nginx.conf.template > /etc/nginx/nginx.conf && exec nginx -g 'daemon off;'"]
See the Nginx Docker documentation for reference; look for "Using environment variables in nginx configuration":
Using environment variables in nginx configuration
Out-of-the-box, nginx doesn't support environment variables inside most configuration blocks. But envsubst may be used as a workaround if you need to generate your nginx configuration dynamically before nginx starts.
Here is an example using docker-compose.yml:
web:
  image: nginx
  volumes:
    - ./mysite.template:/etc/nginx/conf.d/mysite.template
  ports:
    - "8080:80"
  environment:
    - NGINX_HOST=foobar.com
    - NGINX_PORT=80
  command: /bin/bash -c "envsubst < /etc/nginx/conf.d/mysite.template > /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"
The mysite.template file may then contain variable references like this:
listen ${NGINX_PORT};
You can access the variables via modules - I found options for doing it with Lua and Perl.
Wrote about it on my company's blog:
https://web.archive.org/web/20170712003702/https://docs.apitools.com/blog/2014/07/02/using-environment-variables-in-nginx-conf.html
The TL;DR:
env API_KEY;
And then:
http {
    ...
    server {
        location / {
            # set var using Lua
            set_by_lua $api_key 'return os.getenv("API_KEY")';
            # set var using perl
            perl_set $api_key 'sub { return $ENV{"API_KEY"}; }';
            ...
        }
    }
}
EDIT: original blog is dead, changed link to wayback machine cache
The correct usage would be $SOME_IP_from_env, but environment variables set from nginx.conf cannot be used in server, location or http blocks.
You can use environment variables if you use the openresty bundle, which includes Lua.
Since nginx 1.19 (in the official Docker image) you can use environment variables in your configuration with docker-compose. I used the following setup:
# file: docker/nginx/templates/default.conf.conf
upstream api-upstream {
    server ${API_HOST};
}
# file: docker-compose.yml
services:
  nginx:
    image: nginx:1.19-alpine
    environment:
      NGINX_ENVSUBST_TEMPLATE_SUFFIX: ".conf"
      API_HOST: api.example.com
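For the image to pick the template up, it has to land in the template directory the entrypoint scans (/etc/nginx/templates by default; the rendered file is written to /etc/nginx/conf.d). A sketch of the volume mount, assuming the file layout above:
    volumes:
      - ./docker/nginx/templates:/etc/nginx/templates:ro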
I found this answer on other thread: https://stackoverflow.com/a/62844707/4479861
For simple environment variable substitution, you can use the envsubst command and the template feature available since the Docker Nginx 1.19 image. Note: envsubst does not support fallback defaults, e.g. ${MY_ENV:-DefaultValue}.
For more advanced usage, consider https://github.com/guyskk/envsub-njs, which is implemented via Nginx NJS and uses JavaScript template literals; it is powerful and works well cross-platform, e.g. ${Env('MY_ENV', 'DefaultValue')}.
You can also consider https://github.com/kreuzwerker/envplate, which supports a syntax just like shell variable substitution.
If you're not tied to a bare installation of nginx, you could use Docker for the job.
For example, nginx4docker implements a bunch of basic env variables that can be set through Docker, and you don't have to fiddle around with nginx's basic templating and all its drawbacks.
nginx4docker could also be extended with your custom env variables: just mount a file that lists all your env variables to docker ... --mount $(pwd)/CUSTOM_ENV:/ENV ...
When the worst case happens and you can't switch to or use Docker, a workaround may be to set all nginx variables to their own names (e.g. host="$host"); in that case envsubst replaces $host with $host.
