Docker-compose - passing environment variables to Flask using script - bash

This is my project structure on the host:
set_env_vars.sh
dev/
  docker-compose-dev.yml
  services/
    web/
      .env-dev? <------
      project/
        config.py
        api/
          resources/
            auth.py
set_env_vars.sh
export SPOTIFY_CLIENT_ID=my_id
export SPOTIFY_CLIENT_SECRET=my_secret
export SPOTIFY_REDIRECT_URI=http://localhost
export SPOTIFY_CACHE_PATH=/project/api/auth/spotify/.cache
which I run like so:
$ source ./set_env_vars.sh
docker-compose-dev.yml
services:
  web:
    environment:
      - FLASK_ENV=development
      - APP_SETTINGS=project.config.DevelopmentConfig
      - SPOTIFY_CLIENT_ID=${SPOTIFY_CLIENT_ID}
      - SPOTIFY_CLIENT_SECRET=${SPOTIFY_CLIENT_SECRET}
      - SPOTIFY_REDIRECT_URI=${SPOTIFY_REDIRECT_URI}
      - SPOTIFY_CACHE_PATH=${SPOTIFY_CACHE_PATH}
config.py
class DevelopmentConfig(BaseConfig):
    SPOTIFY_CLIENT_ID = os.environ.get('SPOTIFY_CLIENT_ID')
    SPOTIFY_CLIENT_SECRET = os.environ.get('SPOTIFY_CLIENT_SECRET')
    SPOTIFY_REDIRECT_URI_ = os.environ.get('SPOTIFY_REDIRECT_URI')
    SPOTIFY_CACHE_PATH = os.environ.get('SPOTIFY_CACHE_PATH')
auth.py
from project.config import DevelopmentConfig
sp = spotipy.Spotify(auth_manager=spotipy.oauth2.SpotifyOAuth(
    DevelopmentConfig.SPOTIFY_CLIENT_ID,
    DevelopmentConfig.SPOTIFY_CLIENT_SECRET,
    DevelopmentConfig.SPOTIFY_REDIRECT_URI,
    scope=DevelopmentConfig.SCOPE,
    cache_path=DevelopmentConfig.SPOTIFY_CACHE_PATH))
But I'm getting the following error:
spotipy.oauth2.SpotifyOauthError: No client_id. Pass it or set a SPOTIPY_CLIENT_ID environment variable.
What am I missing?

This certainly looks like you need to pass the variables with the SPOTIPY_ spelling, as per the spotipy docs.
However, I also noted that your code repeats the same variable names several times, which makes it easy to introduce typos as you try to keep them consistent across several files.
A simpler way to approach this might be to have the variables contained in a .env-dev file:
SPOTIPY_CLIENT_ID=my_id
SPOTIPY_CLIENT_SECRET=my_secret
SPOTIPY_REDIRECT_URI=http://localhost
SPOTIPY_CACHE_PATH=/project/api/auth/spotify/.cache
Then load these in your docker-compose-dev.yml file:
services:
  web:
    env_file:
      - .env-dev
Then in your Python code you could do:
import os
import spotipy
from project.config import DevelopmentConfig

sp = spotipy.Spotify(auth_manager=spotipy.oauth2.SpotifyOAuth(
    os.environ.get('SPOTIPY_CLIENT_ID'),
    os.environ.get('SPOTIPY_CLIENT_SECRET'),
    os.environ.get('SPOTIPY_REDIRECT_URI'),
    scope=DevelopmentConfig.SCOPE,
    cache_path=os.environ.get('SPOTIPY_CACHE_PATH')))
This method has less repetition, although it bypasses your config.DevelopmentConfig object for these variables.
However, this method avoids loading the variables into the host's shell and instead sets them inside a specific service. It also keeps secrets separate, so you can commit docker-compose.yml to source control.
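If you would rather keep reading these values through your config object, a minimal sketch of config.py using the SPOTIPY_ spelling could look like this (the attribute names are my assumption, mirroring your existing class):
import os

class DevelopmentConfig(BaseConfig):
    # SPOTIPY_* names match what spotipy's error message and docs expect
    SPOTIPY_CLIENT_ID = os.environ.get('SPOTIPY_CLIENT_ID')
    SPOTIPY_CLIENT_SECRET = os.environ.get('SPOTIPY_CLIENT_SECRET')
    SPOTIPY_REDIRECT_URI = os.environ.get('SPOTIPY_REDIRECT_URI')
    SPOTIPY_CACHE_PATH = os.environ.get('SPOTIPY_CACHE_PATH')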

Related

Serverless stage environment variables using dotenv (.env)

I'm new to serverless.
So far I have been able to deploy and use .env for the app.
Then, under provider in the serverless.yml file, I changed the stage property to a different stage. I also made a new .env.{stage} file.
After re-deploying using sls deploy, it still reads the default .env file.
the documentation states:
The framework looks for .env and .env.{stage} files in service directory and then tries to load them using dotenv. If .env.{stage} is found, .env will not be loaded. If stage is not explicitly defined, it defaults to dev.
So I still don't understand "If stage is not explicitly defined, it defaults to dev". How do I explicitly define it?
The dotenv file is chosen based on your stage property configuration. You need to explicitly define the stage property in your serverless.yml or set it with your deployment command.
This will use the .env.dev file
useDotenv: true

provider:
  name: aws
  stage: dev # dev [default], stage, prod
  memorySize: 3008
  timeout: 30
Or you can set the stage property via the deploy command.
This will use the .env.prod file
sls deploy --stage prod
In your serverless.yml you need to define the stage property inside the provider object.
Example:
provider:
  name: aws
  [...]
  stage: prod
As of Feb 2023, I'm going to attempt to give my solution. I'm using the Nx tooling for a monorepo (this shouldn't matter, but just in case) and I'm using serverless.ts instead.
I see the purpose of this as enhancing the developer experience: it is nice to just run nx run users:serve --stage=test (in my case, using Nx) or sls offline --stage=test and have serverless load the appropriate variables for that specific environment.
Some people went the route of using several .env.<stage> files, one per environment. I tried to go this route, but because I'm not that good of a developer I couldn't make it work. The approach that worked for me was to concatenate variable names inside serverless.ts. Let me explain...
I'm using just one .env file instead, but changing variable names based on the --stage. The magic happens in serverless.ts:
// .env
STAGE_development=test
DB_NAME_development=mycraftypal
DB_USER_development=postgres
DB_PASSWORD_development=abcde1234
DB_PORT_development=5432
READER_development=localhost # this could be an aws rds uri per db instance
WRITER_development=localhost # this could be an aws rds uri per db instance
# TEST
STAGE_test=test
DB_NAME_test=mycraftypal
DB_USER_test=postgres
DB_PASSWORD_test=abcde1234
DB_PORT_test=5433
READER_test=localhost # this could be an aws rds uri per db instance
WRITER_test=localhost # this could be an aws rds uri per db instance
// serverless.base.ts or serverless.ts based on your configuration
...
useDotenv: true, // this property is at the root level
...
provider: {
  ...
  stage: '${opt:stage, "development"}', // get the --stage flag value or default to development
  ...,
  environment: {
    STAGE: '${env:STAGE_${self:provider.stage}}',
    DB_NAME: '${env:DB_NAME_${self:provider.stage}}',
    DB_USER: '${env:DB_USER_${self:provider.stage}}',
    DB_PASSWORD: '${env:DB_PASSWORD_${self:provider.stage}}',
    READER: '${env:READER_${self:provider.stage}}',
    WRITER: '${env:WRITER_${self:provider.stage}}',
    DB_PORT: '${env:DB_PORT_${self:provider.stage}}',
    AWS_NODEJS_CONNECTION_REUSE_ENABLED: '1',
  }
  ...
}
When you use useDotenv: true, serverless loads your variables from the .env file and exposes them through the env variable source, so you can access them as ${env:STAGE}.
Now I can access a variable with a dynamic stage like so: ${env:DB_PORT_${self:provider.stage}}. If you look at the .env file, each variable has ..._<stage> at the end, so I can retrieve each value dynamically.
I'm still figuring out one thing: I don't want the word production in my variable names but still want to get the values dynamically, and since I'm concatenating ${env:DB_PORT_${self:provider.stage}}, the variable that gets looked up becomes DB_PORT_ instead of DB_PORT.
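One possible way around that trailing suffix, sketched here as an untested idea rather than something from the original answer, is to lean on the Serverless Framework's variable fallback syntax so a stage without suffixed entries falls back to a plain name:
// hypothetical fragment for provider.environment in serverless.ts:
// if DB_PORT_<stage> is not defined in .env, fall back to a plain DB_PORT entry
environment: {
  DB_PORT: '${env:DB_PORT_${self:provider.stage}, env:DB_PORT}',
}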

How can I override ddev's php-fpm.conf or pool.d/www.conf?

There is no obvious way to override some php-fpm configuration in DDEV-Local's web container. Although it's easy to provide custom PHP configuration, it's not as obvious how one would configure the php-fpm process itself.
In my case I want to change the security.limit_extensions value in pool.d/www.conf.
There are two ways to do this. I'll create two separate answers to explain how.
The first technique is to create a custom Dockerfile (docs) which edits the www.conf (or any other file). You can also use the Dockerfile ADD command to add a complete file and override the original.
In the case of this specific problem, we'll create a .ddev/web-build/Dockerfile with these contents:
# You can copy this Dockerfile.example to Dockerfile to add configuration
# or packages or anything else to your webimage
ARG BASE_IMAGE
FROM $BASE_IMAGE
ENV PHP_VERSION=7.4
RUN echo "security.limit_extensions = .php .html" >> /etc/php/${PHP_VERSION}/fpm/pool.d/www.conf
After you ddev start you'll have the new configuration.
Instead of the RUN echo approach that just appends to the file, given here for simplicity, you could RUN a sed/awk/perl statement to change the file in place.
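For example, a hedged sketch of such an in-place edit in the same .ddev/web-build/Dockerfile (the exact sed pattern depends on how the stock www.conf spells the line):
# rewrite (or un-comment) the existing security.limit_extensions line in place
RUN sed -i 's/^;\?security\.limit_extensions.*/security.limit_extensions = .php .html/' /etc/php/${PHP_VERSION}/fpm/pool.d/www.conf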
And alternatively you could put the version of the www.conf that you want into the .ddev/web-build directory and
COPY www.conf /etc/php/${PHP_VERSION}/fpm/pool.d/www.conf
The second way to approach this is to use a custom docker-compose.*.yaml file (docs).
Here you'll copy the desired www.conf (or any other file) into your project's .ddev directory and then mount it into the web container on top of the previously provided one. For this specific example, you can copy the www.conf into the .ddev folder with cd .ddev && docker cp ddev-<projectname>-web:/etc/php/7.4/fpm/pool.d/www.conf . and then edit it as needed (here, setting security.limit_extensions = .php .html).
Then a custom .ddev/docker-compose.*.yaml file like this can mount it into the proper directory (mine is called docker-compose.wwwconf.yaml):
version: "3.6"
services:
  web:
    volumes:
      - "./www.conf:/etc/php/7.4/fpm/pool.d/www.conf"
If you are using docker-compose, mount a zz-docker.conf containing your customized configuration. A sample is below:
php:
  build: ./php
  image: ctc/php:latest
  container_name: ctc-php
  expose:
    - 9000
  volumes:
    - ./html:/var/www/html
    - ./php/log:/var/log/php-fpm
    - ./php/php-fpm.d/zz-docker.conf:/usr/local/etc/php-fpm.d/zz-docker.conf
  networks:
    - koogua
  restart: always
zz-docker.conf looks like this:
[global]
daemonize = no
[www]
listen = 9000
pm.max_children = 50
pm.start_servers = 20
pm.min_spare_servers = 10
pm.max_spare_servers = 30
pm.max_requests = 500
Note: mounting www.conf will cause an error.

google deployment manager, can you import files in jinja template that you call directly with --template?

https://cloud.google.com/deployment-manager/docs/configuration/templates/create-basic-template
I can deploy a template directly like this: gcloud deployment-manager deployments create a-single-vm --template vm_template.jinja
But what if that template depends on other files that need to be imported? If you use a --config file you can define imports in that file and call the template as a resource, but you can't pass parameters/properties to a config file. I want to call a template directly so I can pass --properties via the command line, but that template also needs to import other files.
EDIT: What I needed was a top-level jinja template instead of a config. My confusion was that you can't use imports in a jinja template without a schema file; it was failing and I thought it wasn't supported. So the solution was just to swap out the config for a jinja template (with a schema file), and then I can use --properties.
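For illustration, a minimal sketch of such a schema file placed next to the template as vm_template.jinja.schema (the imported file name here is a made-up placeholder for whatever the template depends on):
info:
  title: Single VM template
  description: Schema for vm_template.jinja

imports:
  # files the template itself depends on (hypothetical name)
  - path: helper.jinja

properties:
  zone:
    type: string
    default: us-central1-a
With the schema in place, the template can still be deployed directly and given properties on the command line, e.g. gcloud deployment-manager deployments create a-single-vm --template vm_template.jinja --properties zone:us-central1-a.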
Maybe you can try importing the dependent files into your config file as follows:
imports:
  - path: vm-template.jinja
  - path: vm-template-2.jinja

# In the resources section below, the properties of the resources are replaced
# with the names of the templates.
resources:
  - name: vm-1
    type: vm-template.jinja
  - name: vm-2
    type: vm-template-2.jinja
and set arbitrary metadata in it to create a special variable that you can pass and might use in other applications outside of Deployment Manager:
properties:
  size:
    type: integer
    default: 2
    description: Number of Mongo Slaves
    variable-x: ultra-secret-sauce
More info about the optional flags for gcloud deployment-manager deployments create, along with an example, can be found here.
More info about passing properties using a schema can be found here.
Hope it helps.

How to reuse anchored entry under already unwrapped anchor?

I am trying to write a CircleCI config that will allow me to reuse both whole list/mapping(?) entries and their properties.
Having the following:
image_definitions:
  docker:
    - &default_localstack_image
      image: localstack/localstack:0.10.3
      environment:
        KINESIS_LATENCY: 0

defaults_env: &defaults_env
  environment:
    PG_PORT: 5432
    PG_USER: root
I would like to be able to replace:
test: &test
  docker:
    - image: localstack/localstack:0.10.3
      <<: *defaults_env
with something like:
test: &test
  docker:
    - *default_localstack_image
      <<: *defaults_env
but it doesn't work this way.
I've also tried:
test: &test
  docker:
    - *default_localstack_image
      *defaults_env
but that also didn't work.
How can I do that?
According to the documentation:
test: &test
  docker:
    - <<: [*default_localstack_image, *defaults_env]
However, be aware that the merge feature is not part of the YAML spec and has only been defined for the outdated YAML 1.1. I don't know whether it is actually implemented here. Even if it is, be aware that the merge key is the odd man out: violating the rule that every tag is to be mapped to a type, it is instead interpreted as a transformation instruction, even though the loading process as defined by the spec has no place for executing transformation steps.
Similar functionality (for example, concatenating scalars) is more or less frequently requested on SO but is not available (and probably never will be). If you need to do something like this, my advice is to do what e.g. Ansible and SaltStack do and use a templating engine as a preprocessor for your YAML file.
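As a rough, made-up illustration of that preprocessor idea (the file names and values here are assumptions, and you would commit only the rendered output as your .circleci/config.yml):
# render a templated CircleCI config with Jinja2 before committing the result
from jinja2 import Template

with open("circleci-config.yml.j2") as src:  # templated source file (hypothetical name)
    rendered = Template(src.read()).render(
        localstack_image="localstack/localstack:0.10.3",
        pg_port=5432,
        pg_user="root",
    )

with open(".circleci/config.yml", "w") as dst:  # the file CircleCI actually reads
    dst.write(rendered)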

Concat variable names in GitLab

We use a GitLab project in a team. Each developer has their own Kubernetes cluster in the cloud and their own branch within GitLab. We use GitLab CI to automatically build new containers and deploy them to our Kubernetes clusters.
At the moment we have a .gitlab-ci.yml that looks something like this:
variables:
  USERNAME: USERNAME
  CI_K8S_PROJECT: ${USERNAME_CI_K8S_PROJECT}
  REGISTRY_JSON_KEY_FILE: ${USERNAME_REGISTRY_JSON_KEY_FILE}
  [...]

stages:
  - build
  - deploy
  - remove

build-zeppelin:
  stage: build
  image: docker:latest
  variables:
    image_name: "zeppelin"
  only:
    - ${USERNAME}#Gitlab-Repo
  tags:
    - cloudrunner
  script:
    - docker login -u _json_key -p "${REGISTRY_JSON_KEY_FILE?}" https://eu.gcr.io
    - image_name_fqdn="eu.gcr.io/${CI_K8S_PROJECT?}/${image_name?}:latest"
    - docker build -t ${image_name_fqdn?} .
    - docker push ${image_name_fqdn?}
    - echo "Your new image is '${image_name_fqdn?}'. Have fun!"

[...]
So at the beginning we reference the important information by using a USERNAME prefix. This works quite well, but it is problematic, since we need to correct the prefixes after every pull request from another user.
So we are searching for a way to keep the gitlab-ci file the same for every developer while still referencing GitLab variables that differ for every developer.
Things we thought about that don't seem to work:
Use multiple yml files and import them into each other => not supported.
Try to combine Gitlab Environment variables as Prefix:
CI_K8S_PROJECT: ${${GITLAB_USER_ID}_CI_K8S_PROJECT}
or
INDIVIDUAL_CI_K8S_PROJECT: ${GITLAB_USER_ID}_CI_K8S_PROJECT
CI_K8S_PROJECT: ${INDIVIDUAL_CI_K8S_PROJECT}
We found a solution using indirect expansion (bash feature):
before_script:
  - variableName=${GITLAB_USER_ID}_CI_K8S_PROJECT
  - export wantedValue=${!variableName}
But we also recognised that our setup was somewhat flawed: it does not make sense to have a separate branch and prefixed variables for each user, since this leads to problems such as the above, as well as security concerns, because all variables are accessible to all users.
It is way easier if each user forks the root project and simply creates a merge request for new features. This way there is no renaming/prefixing of variables or branches necessary at all.
The solution from #nik will only work in bash. For sh, this will work:
before_script:
  - variableName=...
  - export wantedValue=$( eval echo \$$variableName )
Something like this works (on 15.0.5-ee):
variables:
  IMAGE_NAME: "test-$CI_PROJECT_NAME"

Resources