flask/gunicorn: setting environment variable from environment variable - heroku

On python/flask/gunicorn/heroku stack, I need to set an environment variable based on the content of another env variable.
For background, I run a python/Flask app on heroku.
I communicate with an add-on via an environment variable that contains credentials and a URL.
The library I use to communicate with the addon needs that data, but needs it in a different format.
Also, it needs it as an environment variable.
So far, I had cloned and reformatted the environment variable manually, but that just brought disaster because the add-on provider kept changing the password.
OK, so I need to automate reading one environment variable and setting another, before the library starts looking for it.
The naive approach I tried was (file app.py):
import os
from flask import Flask

app = Flask(__name__, ...)
env_in = os.environ['ADDON_ENV_VAR']
os.environ['LIB_ENV_VAR'] = some_processing(env_in)
...

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
That works fine when doing python app.py for debugging, but it fails when running via gunicorn app:app -b '0.0.0.0:5000' (from a Procfile, for foreman) to deploy a real webserver. In the second case, the env var doesn't seem to make it to the OS level. I'm not sure how WSGI works, but maybe the environment changes once gunicorn starts running the app.
What can I do to have the environment variable set at the place it's needed?

You could also set the environment variables at run time, as such:
gunicorn -b 0.0.0.0:5000 -e env_var1=environment1 -e env_var2=environment2

OK, so the answer (via Kenneth R., Heroku) is to set the environment before running gunicorn, i.e. write a Procfile like
web: sh appstarter.sh
which calls a wrapper (shell, Python, ...) that sets up the environment variable and then runs the gunicorn command, for example:
appstarter.sh:
export LIB_ENV_VAR=${ADDON_ENV_VAR}/some/additional_string
gunicorn app:app -b '0.0.0.0:5000'
Just in case it helps anyone else out there.

Set environment variable (key=value).
Pass variables to the execution environment. Ex.:
$ gunicorn -b 127.0.0.1:8000 --env FOO=1 test:app
and test for the FOO environment variable in your application.
from: http://docs.gunicorn.org/en/stable/settings.html

How to retain existing env variables in a new shell

I know I must be doing something silly here, but I'm trying to pass environment variables to a command being run under /bin/sh -c in Cloud Build.
My Cloud Build file looks like this:
- id: db-migrate
  name: node:16.13.0
  dir: 'packages/backend'
  entrypoint: '/bin/sh'
  args:
    - '-c'
    - '(/workspace/cloud_sql_proxy -dir=/workspace -instances=$_DB_CONNECTION=tcp:127.0.0.1:5432 & sleep 2) && yarn db:migrate'
  env:
    - 'DB_HOST=127.0.0.1'
    - 'DB_USER=$_DB_USER'
    - 'DB_PASSWORD=$_DB_PASSWORD'
    - 'DB_NAME=$_DATABASE'
My Cloud Build Trigger has the substitutions set, and when I look at the build details it shows the environment variables as set.
However, the yarn db:migrate command acts as if no env variables are set. I believe this is because they aren't being passed from the machine to the command.
Any idea what I'm doing wrong?
The problem here is that when we call /bin/sh it creates a new shell with its own environment variables. While I read through the manual on sh/dash, I will leave this question here:
How do I retain existing env variables in a new shell?
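For what it's worth, a shell started with /bin/sh -c does inherit exported variables from its parent process, which is easy to check outside Cloud Build (the variable name is the question's example):

```shell
export DB_HOST=127.0.0.1
# the child shell sees the parent's exported variable
/bin/sh -c 'echo "DB_HOST is $DB_HOST"'
```

So plain inheritance was never the problem here, which fits the resolution below.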
Alright, I figured this out.
We were using TypeORM and originally used an ormconfig.json file. It turns out this file was still being picked up somewhere on the system and was overriding all env variables.
Posting this response to help others in case they make this same mistake.

Pass environment variable from command line to yarn

I have code that reads the port number from an environment variable or from config. The code looks like this:
const port = process.env.PORT || serverConfig.port;
await app.listen(port);
To run the app without defining the environment variable, I run the following yarn command:
yarn start:dev
This command works successfully in Linux shell and Windows command line.
Now, I want to pass an environment variable. I tried the following:
PORT=2344 yarn start:dev
This command works successfully in a Linux shell but fails in the Windows command line. I tried the following ways but couldn't get it to work.
Tried: PORT=2344 yarn start:dev
I got error: 'PORT' is not recognized as an internal or external command,
operable program or batch file.
Tried: yarn PORT=2344 start:dev
I got error: yarn run v1.17.3
error Command "PORT=2344" not found.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
Any ideas, please? I know I can define environment variables from System Properties in Windows, but is there any way I can do it from the command line?
I'd suggest you use the npm module cross-env. It allows setting particular env variables on the command line regardless of platform. With that said, you may try:
$ cross-env PORT=2344 yarn start:dev
You can chain commands on the Windows command prompt with & (or &&). To set an environment variable you need to use the set command.
The result should look like this: set PORT=1234&& yarn start:dev. (Careful: in cmd.exe a space before && becomes part of the value, so PORT would end up as "1234 " with a trailing space.)
Found a solution for this problem in the Windows command prompt.
Create a .env file in the project root folder (outside the src folder).
Define PORT in it. In my case, the contents of the .env file are:
PORT=2344
Run yarn start:dev.
The application will use the port number you specified in the .env file.
Put the .env file at the root. Then the following command will expose the contents of the .env file and run the yarn start command:
$ source .env && yarn start
or this command
$ export $(cat .env) && yarn start
If you update any variable in .env, close the terminal and open a new terminal window, then run the command above again. Alternatively, run the unset command to remove an existing variable:
unset VAR_NAME
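A minimal end-to-end sketch of that flow (the .env contents are the question's example; a temp file stands in for the project's .env):

```shell
# write a throwaway .env (normally it already exists at the project root)
envfile=$(mktemp)
printf 'PORT=2344\n' > "$envfile"
# expose its contents to the current shell; the app started next sees PORT
export $(cat "$envfile")
echo "$PORT"
```

Note that this simple export $(cat ...) expansion assumes values without spaces, quotes, or comment lines.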
You can use the popular package dotenv:
create a .env file in the root directory
put all your env vars in it,
e.g.:
ENV=DEVELOPMENT
run your code like this:
$ node -r dotenv/config your_script.js
Here is the explanation:
https://github.com/motdotla/dotenv#preload
To define environment variables in the Windows command prompt we can use the set command, you can then split your call into two lines.
set PORT=2344
yarn start:dev
The set command persists within the current command prompt, so you only need to run it once.
The equivalent command in bash is 'export'.
FYI (not a direct answer): I was attempting this in VS Code, passing .env variables through yarn to a JavaScript app. Google had very few examples, so I'm sharing this for posterity as it's somewhat related.
The following simply substitutes text normally placed directly into the package.json or script file. Use this to quickly obfuscate or externalize your delivery configurations.
In Environment Variable File (.env)
PORT=2344
In Yarn File (package.json)
source .env; yarn ./start.sh --port $PORT
In Yarn Script (start.sh)
#!/bin/bash
while [ $? != 0 ]; do
  node dist/src/index.js $1; # replace with your app call
done
The app then accepts port as a variable. Great for multi-tenant deployments.

Copy Current Env Vars into `docker run`'s Scope

If I'm using a docker container with an entry point set, I can run that container via the following command
docker run -it my-container-tag
If the program in my container requires an environment variable, I can pass that var via the -e flag:
docker run -it -e FOO=bar my-container-tag
If I have a program that uses many environment variables, I get an unwieldy mess that becomes hard to type:
docker run -it -e FOO=bar -e BAZ=zip -e ZAP=zing -e ETC=omg-stop my-container-tag
Is there a way to tell docker run to inherit all the env variables currently set in my shell's scope? If not, are there common practices for working around needing to type in these variables again and again?
You can't inherit the envs. I usually use docker-compose to set my envs when there are too many, or build the container with the environment variables inside it if you don't need to change them frequently.
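That said, the docker run reference does document two shortcuts that come close (variable names here are the question's examples): passing -e with a name and no value forwards that variable's current value from your shell, and --env-file reads name=value pairs from a file.

```shell
export FOO=bar BAZ=zip
# name-only -e forwards each value from the invoking shell:
docker run --rm -e FOO -e BAZ my-container-tag env
# or collect the variables you want into a file once:
env | grep -E '^(FOO|BAZ)=' > app.env
docker run --rm --env-file app.env my-container-tag env
```

Neither inherits the whole shell environment automatically, but both avoid retyping values.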

Docker Ubuntu environment variables

During the build stage of my docker images, I would like to set some environment variables automatically for every subsequent "RUN" command.
However, I would like to set these variables from within the docker container, because setting them depends on some internal logic.
Using the Dockerfile "ENV" instruction is no good, because it cannot rely on internal logic (it cannot rely on a command run inside the docker container).
Normally (if this were not docker) I would set my ~/.profile file. However, docker does not load this file in non-interactive shells.
So at the moment I have to run each docker RUN command with:
RUN bash -c "source ~/.profile && do_something_here"
However, this is very tedious (and unclean) when I have to repeat it every time I want to run a bash command. Is there some other "profile" file I can use instead?
You can try setting the arg as an env, like this,
ARG my_env
ENV my_env=${my_env}
in the Dockerfile,
and pass my_env=prod in the build args so that you can use the env for subsequent RUN commands.
You can also use the env_file: option in a docker compose yml file in the case of a stack deploy.
I had a similar problem and couldn't find a satisfactory solution. What I did was create a script that would source the variables and then do the operation. I would then rewrite the RUN commands in the Dockerfile to use that script instead.
In your case, if you need to run multiple commands, you could create a wrapper that loads the variables, runs the command given as argument, and include that script in the docker image.
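Such a wrapper might look like this (a sketch; with-profile.sh is a hypothetical name and ~/.profile is the file from the question):

```shell
#!/bin/sh
# with-profile.sh: load the profile, then run whatever was passed in
. ~/.profile
exec "$@"
```

In the Dockerfile each command then becomes RUN /with-profile.sh do_something_here, so the sourcing is written only once.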

running a bash client at docker creation to set an environment variable

From examples I've seen one can set environment variables in docker-compose.yml like so:
services:
  postgres:
    image: my_node_app
    ports:
      - 8080:8080
    environment:
      APP_PASSWORD: mypassword
    ...
For security reasons, my use case requires me to fetch the password from a server that we have a bash client for:
#!/bin/bash
get_credential <server> <dev-environment> <role> <key>
In the docker documentation I found this, which says that I can pass shell environment variable values to docker compose. So I could run the bash client to grab the passwords in the starting shell that creates the docker instances. However, that requires me to have my bash client outside docker, inside my maven project.
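For reference, that docs-based route would look something like this (a sketch, not the asker's final setup): the invoking shell exports APP_PASSWORD, and compose substitutes it into the service definition.

```yaml
services:
  postgres:
    image: my_node_app
    environment:
      APP_PASSWORD: ${APP_PASSWORD}   # taken from the shell that runs docker-compose
```

The drawback, as noted above, is that the credential-fetching client then has to live on the host rather than in the image.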
Another way to do this would be to run/cmd/entrypoint a bash script that sets the environment variable for the docker instance. Since my docker image runs node.js, my Dockerfile currently looks like this:
FROM node:4-slim
MAINTAINER myself
# ... do Dockerfile stuff
# TRIAL #1: run a bash script to set the environment variable --- UNSUCCESSFUL!
COPY set_en_var.sh /
RUN chmod +x /set_en_var.sh
RUN /bin/bash /set_en_var.sh
# original entry point
#ENTRYPOINT ["node", "mynodeapp.js", "configuration.js"]
# TRIAL #2: use a bash script as entrypoint that sets
# the environment variable and runs my node app . --- UNSUCCESSFUL TOO!
ENTRYPOINT ["/entrypoint.sh"]
Here is the code for entrypoint.sh:
. mybashclient.sh
cred_str=$(get_credential <server> <dev-environment> <role> <key>)
export APP_PASSWORD=( $cred_str )
# run the original entrypoint command
node mynodeapp.js configuration.js
And here is code for my set_en_var.sh:
. mybashclient.sh
cred_str=$(get_credential <server> <dev-environment> <role> <key>)
export APP_PASSWORD=( $cred_str )
So 2 questions:
Which is a better choice, having my bash client for password live inside docker or outside docker?
If I were to have it inside docker, how can I use cmd/run/entrypoint to achieve this?
Which is a better choice, having my bash client for password live inside docker or outside docker?
Always have it inside. You don't want dependencies on the host OS; you want to avoid that situation as much as possible.
If I were to have it inside docker, how can I use cmd/run/entrypoint to achieve this?
Consider the below line of code you used
RUN /bin/bash /set_en_var.sh
This won't work at all, because it doesn't make any change to the docker image as such. You just run a bash which sets some environment variables, and then that bash exits and nothing in the image has changed. A Dockerfile build only keeps the changes a command makes to the filesystem, and in your case, apart from that one bash session, nothing changes.
Next, your approach of doing this at build time is also not justified. If you build the image with the credentials inside it, then you are defeating the purpose of having a command to fetch the latest credentials. Suppose you change the password: this would require you to rebuild the image (had it worked in the first place).
Now, your entrypoint.sh approach is the right one, and it should work. You should just check what is going wrong with it. Also echo cred_str while testing, to make sure you are getting the right credential details back from the command.
Last, you should change the line
node mynodeapp.js configuration.js
to
exec node mynodeapp.js configuration.js
This makes sure that your node process becomes PID 1, so it receives container signals (such as the SIGTERM from docker stop) directly.
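The effect of exec is easy to see even outside docker: the exec'd command keeps the shell's process ID instead of running as a child (here via $$, which expands to the current shell's PID):

```shell
# both lines print the same PID, because exec replaces the shell in place
sh -c 'echo $$; exec sh -c "echo \$\$"'
```

Inside a container the same mechanism is what hands PID 1 from the entrypoint shell to node.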