How to retain existing env variables in a new shell - bash

I know I must be doing something silly here, but I'm trying to pass environment variables to a command being run under /bin/sh -c in Cloud Build.
My Cloud Build file looks like this:
- id: db-migrate
  name: node:16.13.0
  dir: 'packages/backend'
  entrypoint: '/bin/sh'
  args:
    - '-c'
    - '(/workspace/cloud_sql_proxy -dir=/workspace -instances=$_DB_CONNECTION=tcp:127.0.0.1:5432 & sleep 2) && yarn db:migrate'
  env:
    - 'DB_HOST=127.0.0.1'
    - 'DB_USER=$_DB_USER'
    - 'DB_PASSWORD=$_DB_PASSWORD'
    - 'DB_NAME=$_DATABASE'
My Cloud Build Trigger has the substitutions set, and when I look at the build details it shows the environment variables as set.
However the command yarn db:migrate acts as if there are no env variables set. I believe this is because they aren't being passed from the machine to the command.
Any idea what I'm doing wrong?
The problem here is that when we call /bin/sh it creates a new shell with its own environment variables. While I read through the sh/dash manual, I will leave this question here:
How do I retain existing env variables in a new shell?

Alright, I figured this out.
We were using TypeORM and originally used an ormconfig.json file. It turns out this file was still being picked up somehow and was overriding all env variables.
Posting this response to help others in case they make this same mistake.
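For anyone debugging something similar: variables from the step's env: list are exported to /bin/sh and inherited by its children, so you can confirm they arrive from inside the same -c command before blaming the shell. A minimal check (a sketch; adjust the grep prefix to your variable names):

# Hypothetical check: list the DB_* variables visible inside the child shell,
# then run the migration in that same environment.
printenv | grep '^DB_' && yarn db:migrate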

Related

How to replace hardcoded value in pipeline.yaml which is running under buildkite tool

I have a use case where I need to run the same pipeline but with a different environment variable for each environment, like dev/qa/stage/prod.
I tried to use an environment variable to make the change at run time, but did not succeed.
pipeline.yaml
steps:
  - label: ":wrench: Run ui tests on dev"
    command: "docker run --rm -e INSTANCE=${dev} gcr.io/xyz/tests:${BUILDKITE_COMMIT:0:7}"
  - wait
That is how I am passing the env value.
Please help me solve this.
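One way to approach this (a sketch, not verified against this pipeline) is to reference a single runtime variable such as INSTANCE in the step's command and set it per environment when the build is created, rather than hard-coding ${dev}:

# Hypothetical step command: INSTANCE is assumed to be provided by the build
# (for example via the pipeline's environment settings or when the build is
# triggered), so the same step can run against dev/qa/stage/prod.
docker run --rm -e INSTANCE="${INSTANCE}" "gcr.io/xyz/tests:${BUILDKITE_COMMIT:0:7}"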

GitLab CI Script variables

I have a GitLab deployment active and I want the deploy script to receive some custom information about the deployment process (like $CI_PIPELINE_ID).
However, the script doesn't get the variables; instead it gets the "raw text".
The call performed by the script is: $ python deploy/deploy.py $CI_COMMIT_TAG $CI_ENVIRONMENT_URL $CI_PIPELINE_ID
How can I get it to use the variables?
My .gitlab-ci.yml:
image: python:2.7

before_script:
  - whoami
  - sudo apt-get --quiet update --yes
  - sudo chmod +x deploy/deploy.py

deploy_production:
  stage: deploy
  environment: Production
  only:
    - tags
    - trigger
  except:
    # - develop
    - /^feature\/.*$/
    - /^hotfix\/.*$/
    - /^release\/.*$/
  script:
    - python deploy/deploy.py $CI_COMMIT_TAG $CI_ENVIRONMENT_URL $CI_PIPELINE_ID
It looks like you might be using a different environment variable syntax than the one your job's shell expects:
bash/sh: $variable
Windows batch: %variable%
PowerShell: $env:variable
See using CI variables in your job script.
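As a quick sanity check (a sketch, assuming a shell-based executor), you can echo the variables in the job's script section right before the deploy call; if they print empty, the runner is not exporting them:

# Hypothetical debugging lines for the job's script: section.
echo "tag=$CI_COMMIT_TAG url=$CI_ENVIRONMENT_URL pipeline=$CI_PIPELINE_ID"
python deploy/deploy.py "$CI_COMMIT_TAG" "$CI_ENVIRONMENT_URL" "$CI_PIPELINE_ID"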
I don't get what you mean by "raw text", but you can declare the variables in your project settings. Also, have you configured your runner?
Go to Settings->CI/CD->Secret Variables and just put them right there.
You can also find valuable information in the documentation.

Docker Ubuntu environment variables

During the build stage of my Docker images, I would like to set some environment variables automatically for every subsequent RUN command.
However, I would like to set these variables from within the Docker container, because setting them depends on some internal logic.
Using the Dockerfile ENV instruction is no good, because it cannot rely on internal logic (it cannot rely on a command run inside the container).
Normally (if this were not Docker) I would set up my ~/.profile file. However, Docker does not load this file in non-interactive shells.
So at the moment I have to run each Docker RUN command with:
RUN bash -c "source ~/.profile && do_something_here"
However, this is very tedious (and unclean) when I have to repeat it every time I want to run a bash command. Is there some other "profile" file I can use instead?
You can try setting the ARG as an ENV in your Dockerfile like this:
ARG my_env
ENV my_env=${my_env}
and pass my_env=prod in the build args so that the variable is set for all subsequent RUN commands (a usage sketch follows below).
You can also use the env_file: option in a Docker Compose YAML file in the case of a stack deploy.
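For the build-arg route, a minimal usage sketch (the image tag is hypothetical):

# Pass the build argument on the command line; the ARG/ENV pair above then
# makes my_env available to every subsequent RUN instruction in this build.
docker build --build-arg my_env=prod -t myimage:latest .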
I had a similar problem and couldn't find a satisfactory solution. What I did was create a script that sources the variables and then performs the operation, and I rewrote the RUN commands in the Dockerfile to use that script instead.
In your case, if you need to run multiple commands, you could create a wrapper that loads the variables and runs the command given as an argument, and include that script in the Docker image.
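A minimal sketch of that wrapper approach (the script name with_env.sh and its location are hypothetical):

#!/bin/sh
# with_env.sh - load the variables, then run whatever command was passed in.
# In the Dockerfile, copy this script in and rewrite the RUN lines as:
#   RUN /with_env.sh do_something_here
. ~/.profile
exec "$@"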

running a bash client at docker creation to set an environment variable

From examples I've seen one can set environment variables in docker-compose.yml like so:
services:
  postgres:
    image: my_node_app
    ports:
      - 8080:8080
    environment:
      APP_PASSWORD: mypassword
  ...
For security reasons, my use case requires me to fetch the password from a server that we have a bash client for:
#!/bin/bash
get_credential <server> <dev-environment> <role> <key>
In the Docker documentation I found this, which says that I can pass shell environment variable values to Docker Compose. So I could run the bash client to grab the passwords in the starting shell that creates the containers. However, that requires me to keep my bash client outside Docker, inside my Maven project.
Another way to do this would be to have RUN/CMD/ENTRYPOINT execute a bash script that sets the environment variable for the container. Since my Docker image runs Node.js, my Dockerfile currently looks like this:
FROM node:4-slim
MAINTAINER myself
# ... do Dockerfile stuff
# TRIAL #1: run a bash script to set the environment variable --- UNSUCCESSFUL!
COPY set_en_var.sh /
RUN chmod +x /set_en_var.sh
RUN /bin/bash /set_en_var.sh
# original entry point
#ENTRYPOINT ["node", "mynodeapp.js", "configuration.js"]
# TRIAL #2: use a bash script as entrypoint that sets
# the environment variable and runs my node app . --- UNSUCCESSFUL TOO!
ENTRYPOINT ["/entrypoint.sh"]
Here is the code for entrypoint.sh:
. mybashclient.sh
cred_str=$(get_credential <server> <dev-environment> <role> <key>)
export APP_PASSWORD=( $cred_str )
# run the original entrypoint command
node mynodeapp.js configuration.js
And here is code for my set_en_var.sh:
. mybashclient.sh
cred_str=$(get_credential <server> <dev-environment> <role> <key>)
export APP_PASSWORD=( $cred_str )
So 2 questions:
Which is a better choice, having my bash client for password live inside docker or outside docker?
If I were to have it inside docker, how can I use cmd/run/entrypoint to achieve this?
Which is a better choice, having my bash client for password live inside docker or outside docker?
Always have it inside. You don't want dependencies on the host OS; you want to avoid that situation as much as possible.
If I were to have it inside docker, how can I use cmd/run/entrypoint to achieve this?
Consider the line you used:
RUN /bin/bash /set_en_var.sh
This won't work at all, because it makes no lasting change to the image. It just runs a bash process that sets some environment variables; when that bash exits, nothing in the image has changed. A Dockerfile build only keeps the filesystem changes each command makes, and here nothing outside that bash session changes.
Your approach of doing this at build time is also not justified. If you bake the environment variables into the image, you defeat the purpose of having a command that fetches the latest credentials: if the password changes, you would have to rebuild the image (had it worked in the first place).
Your entrypoint.sh approach is the right one, and it should work; you just need to find out what is going wrong with it. Also echo cred_str while testing, to make sure you are getting the right credential details back from the command.
Last you should change the line
node mynodeapp.js configuration.js
to
exec node mynodeapp.js configuration.js
This makes sure that your node process becomes PID 1.
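Putting those points together, a corrected entrypoint.sh might look like this (a sketch; mybashclient.sh and the <...> arguments are placeholders from the question):

#!/bin/sh
# Load the credential client (placeholder name from the question).
. mybashclient.sh
# Fetch the credential and export it as a plain string, not an array.
cred_str=$(get_credential <server> <dev-environment> <role> <key>)
export APP_PASSWORD="$cred_str"
# Temporary debugging suggested above - remove once it works.
echo "APP_PASSWORD is ${#APP_PASSWORD} characters long"
# exec so the node process becomes PID 1 and receives signals directly.
exec node mynodeapp.js configuration.js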

flask/gunicorn: setting environment variable from environment variable

On a Python/Flask/gunicorn/Heroku stack, I need to set an environment variable based on the content of another env variable.
For background, I run a Python/Flask app on Heroku.
I communicate with an add-on via an environment variable that contains credentials and a URL.
The library I use to communicate with the add-on needs that data, but needs it in a different format.
Also, it needs it as an environment variable.
So far, I had cloned and reformatted the environment variable manually, but that turned into a disaster because the add-on provider kept changing passwords.
OK, so I need to automate reading one environment variable and setting another, before the library starts looking for it.
The naive approach I tried was (file app.py):
app = Flask(__name__, ...)
env_in = os.environ['ADDON_ENV_VAR']
os.environ['LIB_ENV_VAR'] = some_processing(env_in)
...
if __name__ == '__main__':
    app.run(host='0.0.0.0', port='5000')
That works fine when running python app.py for debugging, but it fails when running via gunicorn app:app -b '0.0.0.0:5000' (as a Procfile entry for foreman) to deploy a real web server. In the second case, the env var doesn't seem to make it to the OS level. I'm not sure how WSGI works, but maybe the environment changes once gunicorn starts running the app.
What can I do to have the environment variable set at the place it's needed?
You could also set the environment variables at run time, like so:
gunicorn -b 0.0.0.0:5000 -e env_var1=enviroment1 -e env_var2=environment2
OK, so the answer (via Kenneth R, Heroku) is to set the environment before running gunicorn, i.e. write a Procfile like
web: sh appstarter.sh
which calls a wrapper (shell, python, ..) that sets up the environment variable and then runs the gunicorn command, like for example
appstarter.sh:
export LIB_ENV_VAR=${ADDON_ENV_VAR}/some/additional_string
gunicorn app:app -b '0.0.0.0:5000'
Just in case it helps anyone else out there.
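If you would rather not keep a separate script, the same idea can be inlined in the Procfile (a sketch, untested; Heroku runs Procfile commands through a shell, so the export happens before gunicorn starts):

web: export LIB_ENV_VAR="${ADDON_ENV_VAR}/some/additional_string" && exec gunicorn app:app -b '0.0.0.0:5000'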
Set environment variable (key=value).
Pass variables to the execution environment. Ex.:
$ gunicorn -b 127.0.0.1:8000 --env FOO=1 test:app
and then test for the FOO environment variable in your application.
from: http://docs.gunicorn.org/en/stable/settings.html
